SuRe: Summarizing Retrievals Using Answer Candidates for Open-domain QA of LLMs
arXiv (Cornell University), 2024
Abstract
Large language models (LLMs) have made significant advancements in various natural language processing tasks, including question answering (QA) tasks. While incorporating new information with the retrieval of relevant passages is a promising way to improve QA with LLMs, the existing methods often require additional fine-tuning, which becomes infeasible with recent LLMs. Augmenting retrieved passages via prompting has the potential to address this limitation, but this direction has been limitedly explored. To this end, we design a simple yet effective framework to enhance open-domain QA (ODQA) with LLMs, based on the summarized retrieval (SuRe). SuRe helps LLMs predict more accurate answers for a given question, which are well-supported by the summarized retrieval that could be viewed as an explicit rationale extracted from the retrieved passages. Specifically, SuRe first constructs summaries of the retrieved passages for each of the multiple answer candidates. Then, SuRe confirms the most plausible answer from the candidate set by evaluating the validity and ranking of the generated summaries. Experimental results on diverse ODQA benchmarks demonstrate the superiority of SuRe, with improvements of up to 4.6% in exact match (EM) and 4.0% in F1 score over standard prompting approaches. SuRe also can be integrated with a broad range of retrieval methods and LLMs. Finally, the generated summaries from SuRe show additional advantages to measure the importance of retrieved passages and serve as more preferred rationales by models and humans.
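The abstract describes a candidate-conditioned summarize-then-rank procedure: generate answer candidates, build a passage summary conditioned on each candidate, then pick the candidate whose summary is valid and best ranked. Below is a minimal sketch of that flow in Python, assuming a generic `llm(prompt) -> str` completion function and pre-retrieved passages; the prompt wording, candidate count, and scoring rules are illustrative stand-ins, not the paper's exact templates.

```python
# Minimal sketch of the SuRe-style pipeline outlined in the abstract.
# ASSUMPTIONS: `llm` is any text-completion callable; passages come from
# an external retriever; prompts are illustrative, not the paper's own.
from typing import Callable, List


def sure_answer(question: str,
                passages: List[str],
                llm: Callable[[str], str],
                n_candidates: int = 2) -> str:
    context = "\n".join(f"Passage {i + 1}: {p}" for i, p in enumerate(passages))

    # Step 1: generate multiple answer candidates from the retrieved passages.
    cand_prompt = (f"{context}\n\nQuestion: {question}\n"
                   f"List {n_candidates} short candidate answers, one per line.")
    candidates = [c.strip() for c in llm(cand_prompt).splitlines() if c.strip()]
    candidates = candidates[:n_candidates]

    # Step 2: summarize the passages conditioned on each candidate,
    # yielding a candidate-specific rationale.
    summaries = {}
    for cand in candidates:
        sum_prompt = (f"{context}\n\nQuestion: {question}\n"
                      f"Summarize the evidence in the passages that supports "
                      f"the answer '{cand}'.")
        summaries[cand] = llm(sum_prompt)

    # Step 3a: validity check -- does the summary actually support
    # its own candidate? (1 if the model answers yes)
    def validity(cand: str) -> int:
        check = llm(f"Summary: {summaries[cand]}\nQuestion: {question}\n"
                    f"Does this summary support the answer '{cand}'? "
                    f"Reply Yes or No.")
        return 1 if check.strip().lower().startswith("yes") else 0

    # Step 3b: ranking -- count pairwise wins against the other
    # candidates' summaries.
    def wins(cand: str) -> int:
        score = 0
        for other in candidates:
            if other == cand:
                continue
            pick = llm(f"Question: {question}\n"
                       f"Summary A: {summaries[cand]}\n"
                       f"Summary B: {summaries[other]}\n"
                       f"Which summary answers the question more "
                       f"convincingly? Reply A or B.")
            score += 1 if pick.strip().upper().startswith("A") else 0
        return score

    # Final answer: prefer valid summaries first, then ranking wins,
    # mirroring the abstract's "validity and ranking" selection.
    return max(candidates, key=lambda c: (validity(c), wins(c)))
```

The tuple key in the final `max` encodes the two signals the abstract names: validity acts as a hard filter and the pairwise win count breaks ties among valid candidates.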
Keywords
Information Retrieval, Schema Matching, Description Logics, Knowledge Representation, Semantic Similarity