Curiosity-Driven Reinforcement Learning from Human Feedback

Bibliographic Details
Title: Curiosity-Driven Reinforcement Learning from Human Feedback
Authors: Sun, Haoran; Chai, Yekun; Wang, Shuohuan; Sun, Yu; Wu, Hua; Wang, Haifeng
Publication Year: 2025
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Abstract: Reinforcement learning from human feedback (RLHF) has proven effective in aligning large language models (LLMs) with human preferences, but often at the cost of reduced output diversity. This trade-off between diversity and alignment quality remains a significant challenge. Drawing inspiration from curiosity-driven exploration in reinforcement learning, we introduce curiosity-driven RLHF (CD-RLHF), a framework that incorporates intrinsic rewards for novel states, alongside traditional sparse extrinsic rewards, to optimize both output diversity and alignment quality. We demonstrate the effectiveness of CD-RLHF through extensive experiments on a range of tasks, including text summarization and instruction following. Our approach achieves significant gains in diversity on multiple diversity-oriented metrics while maintaining alignment with human preferences comparable to standard RLHF. We make our code publicly available at https://github.com/ernie-research/CD-RLHF.
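The abstract does not specify how the intrinsic reward is computed; the sketch below only illustrates the general idea of augmenting a sparse extrinsic RLHF reward with a curiosity bonus, here using an ICM-style forward-dynamics prediction error as the novelty signal. All names (ForwardDynamicsModel, combined_reward, beta) and the choice of prediction error are illustrative assumptions, not details taken from the paper or its released code.

```python
import torch
import torch.nn as nn


class ForwardDynamicsModel(nn.Module):
    """Hypothetical curiosity module: predicts the next hidden state from the
    current state and the embedding of the chosen action (token)."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, action_emb: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action_emb], dim=-1))


def combined_reward(extrinsic: torch.Tensor,      # sparse reward-model score
                    state: torch.Tensor,          # hidden state before the action
                    next_state: torch.Tensor,     # hidden state after the action
                    action_emb: torch.Tensor,     # embedding of the generated token
                    dynamics: ForwardDynamicsModel,
                    beta: float = 0.1) -> torch.Tensor:
    """Augment the extrinsic reward with an intrinsic curiosity bonus: the
    forward model's prediction error is high for novel states, so the policy
    is rewarded for visiting them (beta scales the bonus)."""
    predicted = dynamics(state, action_emb)
    intrinsic = 0.5 * (predicted - next_state.detach()).pow(2).mean(dim=-1)
    return extrinsic + beta * intrinsic.detach()
```

In a PPO-style RLHF loop, the extrinsic term would typically come from the learned reward model at the end of a generated sequence, while a curiosity bonus of this kind can be computed per step; the dynamics model itself would be trained separately to minimize the same prediction error.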
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2501.11463
Accession Number: edsarx.2501.11463
Database: arXiv