Paper Reading AI Learner

Bayesian Optimization with LLM-Based Acquisition Functions for Natural Language Preference Elicitation

2024-05-02 03:35:21
David Eric Austin, Anton Korikov, Armin Toroghi, Scott Sanner

Abstract

Designing preference elicitation (PE) methodologies that can quickly ascertain a user's top item preferences in a cold-start setting is a key challenge for building effective and personalized conversational recommendation (ConvRec) systems. While large language models (LLMs) constitute a novel technology that enables fully natural language (NL) PE dialogues, we hypothesize that monolithic LLM NL-PE approaches lack the multi-turn, decision-theoretic reasoning required to effectively balance the NL exploration and exploitation of user preferences towards an arbitrary item set. In contrast, traditional Bayesian optimization PE methods define theoretically optimal PE strategies, but fail to use NL item descriptions or generate NL queries, unrealistically assuming users can express preferences with direct item ratings and comparisons. To overcome the limitations of both approaches, we formulate NL-PE in a Bayesian Optimization (BO) framework that seeks to generate NL queries which actively elicit natural language feedback to reduce uncertainty over item utilities and identify the best recommendation. We demonstrate our framework in a novel NL-PE algorithm, PEBOL, which uses Natural Language Inference (NLI) between user preference utterances and NL item descriptions to maintain preference beliefs, and BO strategies such as Thompson Sampling (TS) and Upper Confidence Bound (UCB) to guide LLM query generation. We numerically evaluate our methods in controlled experiments, finding that PEBOL achieves up to 131% improvement in MAP@10 after 10 turns of cold-start NL-PE dialogue compared to monolithic GPT-3.5, despite relying on a much smaller 400M parameter NLI model for preference inference.
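The abstract's core loop can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual implementation: it assumes per-item Beta beliefs over utility, soft updates from an NLI entailment probability (stubbed here with a toy word-overlap scorer in place of a real 400M NLI model), and TS/UCB selection of the item whose description would seed the next LLM-generated query.

```python
import random

class PreferenceBeliefs:
    """Hypothetical PEBOL-style belief state: one Beta(alpha, beta) per item."""

    def __init__(self, item_descriptions, seed=0):
        self.items = item_descriptions
        self.alpha = [1.0] * len(item_descriptions)  # uniform Beta(1, 1) priors
        self.beta = [1.0] * len(item_descriptions)
        self.rng = random.Random(seed)

    def thompson_select(self):
        """Thompson Sampling: sample a utility from each belief, take the argmax."""
        samples = [self.rng.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def ucb_select(self, k=1.0):
        """Upper Confidence Bound alternative: posterior mean plus k std devs."""
        def ucb(a, b):
            mean = a / (a + b)
            var = a * b / ((a + b) ** 2 * (a + b + 1))  # Beta variance
            return mean + k * var ** 0.5
        scores = [ucb(a, b) for a, b in zip(self.alpha, self.beta)]
        return max(range(len(scores)), key=scores.__getitem__)

    def update(self, item_idx, entailment_prob):
        """Treat an NLI entailment probability in [0, 1] as soft Bernoulli evidence."""
        self.alpha[item_idx] += entailment_prob
        self.beta[item_idx] += 1.0 - entailment_prob


def toy_nli(utterance, description):
    """Stand-in for a real NLI model: word-overlap score in [0, 1]."""
    u = set(utterance.lower().split())
    d = set(description.lower().split())
    return len(u & d) / max(len(d), 1)


if __name__ == "__main__":
    beliefs = PreferenceBeliefs(
        ["a scary horror movie", "a light romantic comedy"], seed=42)
    # One elicitation turn: pick an item to query about, "ask" the user,
    # then score their reply against each item description via NLI.
    focus = beliefs.thompson_select()
    user_reply = "I like horror"
    for i, desc in enumerate(beliefs.items):
        beliefs.update(i, toy_nli(user_reply, desc))
    print("next query seeds item:", beliefs.ucb_select())
```

The design choice sketched here (soft Beta updates rather than hard 0/1 feedback) reflects that NLI scores are probabilities, not binary item ratings; the real system would replace `toy_nli` with an entailment model and prompt an LLM to phrase the query about the selected item's description.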

URL

https://arxiv.org/abs/2405.00981

PDF

https://arxiv.org/pdf/2405.00981.pdf
