Asymmetry in Low-Rank Adapters of Foundation Models
ICML (2024)
Abstract
Parameter-efficient fine-tuning optimizes large, pre-trained foundation models by updating a subset of parameters; in this class, Low-Rank Adaptation (LoRA) is particularly effective. Inspired by an effort to investigate the different roles of LoRA matrices during fine-tuning, this paper characterizes and leverages unexpected asymmetry in the importance of low-rank adapter matrices. Specifically, when updating the parameter matrices of a neural network by adding a product BA, we observe that the B and A matrices have distinct functions: A extracts features from the input, while B uses these features to create the desired output. Based on this observation, we demonstrate that fine-tuning B is inherently more effective than fine-tuning A, and that a random untrained A should perform nearly as well as a fine-tuned one. Using an information-theoretic lens, we also bound the generalization of low-rank adapters, showing that the parameter savings of exclusively training B improves the bound. We support our conclusions with experiments on RoBERTa, BART-Large, LLaMA-2, and ViTs.
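To make the described update concrete, here is a minimal sketch, assuming PyTorch; the class name `LoRALinear`, the initialization, and the `alpha/rank` scaling are illustrative choices, not the authors' released implementation. It adapts a linear layer as W x + B A x, where A is a frozen random projection ("feature extractor") and only B is trained, mirroring the asymmetry the abstract describes.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """LoRA-style adapter with a frozen random A and a trainable B.

    Illustrative sketch: the base weight W is frozen, A is drawn once at
    random and never updated, and only B receives gradients.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weight W

        d_out, d_in = base.out_features, base.in_features
        # A: random, frozen projection from the input to the rank-r subspace
        self.A = nn.Parameter(torch.randn(rank, d_in) / rank ** 0.5,
                              requires_grad=False)
        # B: trainable map from the extracted features to the output
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + scale * B (A x)
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T


# Usage: wrap a linear layer and check that only B is updated.
layer = LoRALinear(nn.Linear(64, 64), rank=4)
loss = layer(torch.randn(2, 64)).sum()
loss.backward()
print(layer.B.grad is not None)   # True: B is trained
print(layer.A.requires_grad)      # False: A stays random and frozen
```

Freezing A roughly halves the adapter's trainable parameters relative to standard LoRA, which is the parameter saving that the abstract's generalization-bound argument refers to.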
Keywords
Susceptibility Mapping