Publisher: arXiv
Effective conversational agents like large language models (LLMs) must
personalize their interactions to adapt to user preferences, personalities, and
attributes across diverse domains such as education and healthcare. Current
methods, such as Reinforcement Learning from Human Feedback (RLHF), often
prioritize helpfulness and safety but fall short of fostering truly empathetic,
adaptive, and personalized dialogues. Existing personalization approaches
typically rely on extensive user history, limiting their effectiveness for new
or context-limited users. To address these limitations, we propose leveraging a
user model to incorporate a curiosity-based intrinsic reward into multi-turn
RLHF. This novel reward mechanism encourages the LLM agent to actively infer
user traits by optimizing conversations to improve its user model's accuracy.
Consequently, the agent delivers more personalized interactions by learning
more about the user. We demonstrate our method's effectiveness in two distinct
domains: significantly improving personalization performance in a
conversational recommendation task, and personalizing conversations for
different learning styles in an educational setting. We show improved
generalization capabilities compared to traditional multi-turn RLHF, all while
maintaining conversation quality. Our method offers a promising solution for
creating more personalized, adaptive, and engaging conversational agents.
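
To make the curiosity-based reward concrete, one plausible instantiation (an
illustrative sketch consistent with the abstract, not necessarily the paper's
exact objective; the coefficient \beta, the user-model posterior p_\theta, the
history h_t, and the trait variable u^\ast are our own notation) augments the
extrinsic RLHF reward at each turn t with the improvement in the user model's
log-likelihood of the user's traits given the conversation so far:

    r_t = r_t^{\mathrm{RLHF}} + \beta \left[ \log p_\theta(u^\ast \mid h_t) - \log p_\theta(u^\ast \mid h_{t-1}) \right]

Under a formulation like this, the agent earns intrinsic reward only for turns
that make the user model more accurate about the user, operationalizing
"optimizing conversations to improve its user model's accuracy," while the
extrinsic term preserves conversation quality.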
