Ask Optimal Questions: Aligning Large Language Models with Retriever's Preference in Conversational Search
CoRR (2024)
Abstract
Conversational search, unlike single-turn retrieval tasks, requires
understanding the current question within a dialogue context. The common
approach of rewrite-then-retrieve aims to decontextualize questions to be
self-sufficient for off-the-shelf retrievers, but most existing methods produce
sub-optimal query rewrites due to the limited ability to incorporate signals
from the retrieval results. To overcome this limitation, we present a novel
framework RetPO (Retriever's Preference Optimization), which is designed to
optimize a language model (LM) for reformulating search queries in line with
the preferences of the target retrieval systems. The process begins by
prompting a large LM to produce various candidate rewrites and then
collecting the retrieval performance of each rewrite as the retrievers'
preferences. Through this process, we construct a large-scale dataset called
RF Collection, containing Retrievers' Feedback on over 410K query rewrites
across 12K conversations. We then fine-tune a smaller LM on this dataset,
using the retrievers' preferences as feedback to align it with them. The
resulting model
achieves state-of-the-art performance on two recent conversational search
benchmarks, significantly outperforming existing baselines, including GPT-3.5.