Evaluating Frontier Models for Dangerous Capabilities
CoRR (2024)
Abstract
To understand the risks posed by a new AI system, we must understand what it
can and cannot do. Building on prior work, we introduce a programme of new
"dangerous capability" evaluations and pilot them on Gemini 1.0 models. Our
evaluations cover four areas: (1) persuasion and deception; (2) cyber-security;
(3) self-proliferation; and (4) self-reasoning. We do not find evidence of
strong dangerous capabilities in the models we evaluated, but we flag early
warning signs. Our goal is to help advance a rigorous science of dangerous
capability evaluation, in preparation for future models.