Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking via Side-by-Side Evaluation
CoRR (2024)
Abstract
We explore the use of long-context capabilities in large language models to
create synthetic reading comprehension data from entire books. Previous efforts
to construct such datasets relied on crowd-sourcing, but the emergence of
transformers with a context size of 1 million or more tokens now enables
entirely automatic approaches. Our objective is to test the capabilities of
LLMs to analyze, understand, and reason over problems that require a detailed
comprehension of long spans of text, such as questions involving character
arcs, broader themes, or the consequences of early actions later in the story.
We propose a holistic pipeline for automatic data generation, comprising question
generation, answering, and model scoring using an “Evaluator”. We find that a
relative approach, comparing answers between models in a pairwise fashion and
ranking with a Bradley-Terry model, provides a more consistent and
differentiating scoring mechanism than an absolute scorer that rates answers
individually. We also show that LLMs from different model families exhibit
moderate agreement in their ratings. We ground our approach using the manually
curated NarrativeQA dataset, where our evaluator shows excellent agreement with
human judgement and even finds errors in the dataset. With our automatic
evaluation approach, we show that providing an entire book as context yields
superior reading comprehension performance compared to baseline no-context
(parametric knowledge only) and retrieval-based approaches.
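
The pairwise ranking step mentioned in the abstract can be illustrated with a minimal Bradley-Terry fit over side-by-side preference counts. The sketch below is not the authors' implementation; the win-count matrix, the system ordering, and the `bradley_terry` helper are hypothetical, and only the general technique (fitting Bradley-Terry strengths from pairwise wins via the standard MM update) follows the abstract.

```python
# Minimal sketch: rank QA systems from pairwise "side-by-side" preference counts
# with a Bradley-Terry model. Win counts below are illustrative, not paper data.
import numpy as np

def bradley_terry(wins: np.ndarray, iters: int = 200, tol: float = 1e-8) -> np.ndarray:
    """Fit Bradley-Terry strengths from a win-count matrix.

    wins[i, j] = number of times system i's answer was preferred over system j's.
    Returns strengths normalized to sum to 1; higher means stronger.
    Uses the standard MM (minorization-maximization) update.
    """
    n = wins.shape[0]
    games = wins + wins.T              # total comparisons per pair
    total_wins = wins.sum(axis=1)      # total wins per system
    p = np.ones(n) / n
    for _ in range(iters):
        denom = np.array([
            sum(games[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            for i in range(n)
        ])
        p_new = total_wins / denom
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

# Hypothetical preferences among three configurations, in row/column order:
# full-book context, retrieval-based, no context (parametric only).
wins = np.array([
    [0, 14, 18],   # full-book preferred over retrieval 14x, over no-context 18x
    [6,  0, 12],
    [2,  8,  0],
])
strengths = bradley_terry(wins)
print("strengths:", strengths.round(3), "ranking:", np.argsort(-strengths))
```

A relative scheme of this kind only needs the evaluator to pick the better of two answers, which tends to be a more reliable judgment than assigning an absolute score to a single answer.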