Granite Code Models: A Family of Open Foundation Models for Code Intelligence
CoRR (2024)
Abstract
Large Language Models (LLMs) trained on code are revolutionizing the software
development process. Increasingly, code LLMs are being integrated into software
development environments to improve the productivity of human programmers, and
LLM-based agents are beginning to show promise for handling complex tasks
autonomously. Realizing the full potential of code LLMs requires a wide range
of capabilities, including code generation, fixing bugs, explaining and
documenting code, maintaining repositories, and more. In this work, we
introduce the Granite series of decoder-only code models for code generative
tasks, trained with code written in 116 programming languages. The Granite Code
model family consists of models ranging in size from 3 to 34 billion
parameters, suitable for applications ranging from complex application
modernization tasks to on-device memory-constrained use cases. Evaluation on a
comprehensive set of tasks demonstrates that Granite Code models consistently
reach state-of-the-art performance among available open-source code LLMs. The
Granite Code model family was optimized for enterprise software development
workflows and performs well across a range of coding tasks (e.g., code
generation, fixing, and explanation), making it a versatile all-around code
model. We release all our Granite Code models under an Apache 2.0 license for
both research and commercial use.
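
Since the models are decoder-only causal language models released for open use, they can be driven with standard generation tooling. Below is a minimal sketch using the Hugging Face transformers library, assuming the checkpoints are published under the ibm-granite organization; the exact model ID is an assumption, so consult that organization page for the released 3B/8B/20B/34B base and instruct variants.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name; verify against the ibm-granite Hugging Face page.
model_id = "ibm-granite/granite-3b-code-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision for memory-constrained setups
    device_map="auto",
)

# Left-to-right code completion with a decoder-only model.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The 3B variant is loaded here to match the paper's on-device, memory-constrained use case; larger variants load the same way at a higher memory cost.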