When Scaling Meets LLM Finetuning: the Effect of Data, Model and Finetuning Method

arXiv (Cornell University), 2024

Abstract
While large language models (LLMs) often adopt finetuning to unlock their capabilities for downstream applications, our understanding of the inductive biases (especially the scaling properties) of different finetuning methods is still limited. To fill this gap, we conduct systematic experiments studying whether and how different scaling factors, including LLM model size, pretraining data size, new finetuning parameter size, and finetuning data size, affect finetuning performance. We consider two types of finetuning: full-model tuning (FMT) and parameter-efficient tuning (PET, including prompt tuning and LoRA), and explore their scaling behaviors in the data-limited regime where the LLM model size substantially outweighs the finetuning data size. Based on two sets of pretrained bilingual LLMs from 1B to 16B and experiments on bilingual machine translation and multilingual summarization benchmarks, we find that 1) LLM finetuning follows a power-based multiplicative joint scaling law between finetuning data size and each other scaling factor; 2) LLM finetuning benefits more from LLM model scaling than from pretraining data scaling, and PET parameter scaling is generally ineffective; and 3) the optimal finetuning method is highly task- and finetuning-data-dependent. We hope our findings shed light on understanding, selecting, and developing LLM finetuning methods.
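For context, a "power-based multiplicative joint scaling law" between finetuning data size and one other scaling factor can be written in the following general form. This is a minimal illustrative sketch consistent with the abstract's description, not a formula quoted from the paper; the notation is assumed here: L-hat is the finetuning loss, D_f the finetuning data size, X one other scaling factor (LLM model size, pretraining data size, or PET parameter size), and A, E, alpha, beta are fitted constants.

% Illustrative form of a multiplicative joint power law (assumed notation):
\[
  \hat{\mathcal{L}}(X, D_f) \;=\; A \cdot \frac{1}{X^{\alpha}} \cdot \frac{1}{D_f^{\beta}} \;+\; E
\]

In a multiplicative form like this, the two factors contribute separately: on a log scale the reduction in loss from scaling X does not depend on the current finetuning data size D_f, and the exponents alpha and beta quantify how strongly each factor matters.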
Keywords
LLM finetuning, Scaling Laws, Full-model finetuning, Parameter-efficient tuning, Machine Translation, Multilingual Summarization