Subgraph-level Universal Prompt Tuning
arXiv (Cornell University), 2024
Abstract
In the evolving landscape of machine learning, the adaptation of pre-trained models through prompt tuning has become increasingly prominent. This trend is particularly observable in the graph domain, where diverse pre-training strategies present unique challenges in developing effective prompt-based tuning methods for graph neural networks. Previous approaches have been limited, focusing on specialized prompting functions tailored to models with edge prediction pre-training tasks. These methods, however, suffer from a lack of generalizability across different pre-training strategies. Recently, a simple prompt tuning method has been designed for any pre-training strategy, functioning within the input graph's feature space. This allows it to theoretically emulate any type of prompting function, thereby significantly increasing its versatility for a range of downstream applications. Nevertheless, the capacity of such simple prompts to fully grasp the complex contexts found in graphs remains an open question, necessitating further investigation. Addressing this challenge, our work introduces the Subgraph-level Universal Prompt Tuning (SUPT) approach, focusing on the detailed context within subgraphs. In SUPT, prompt features are assigned at the subgraph level, preserving the method's universal capability. This requires far fewer tuning parameters than fine-tuning-based methods, outperforming them in 42 out of 45 full-shot scenario experiments with an average improvement of over 2.5%. It also excels in few-shot scenarios, achieving an average performance increase of more than 6.6%.
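To make the mechanism described in the abstract concrete, the minimal PyTorch sketch below contrasts a single shared feature-space prompt with subgraph-level prompting: instead of adding one learnable vector to every node's input features, each node receives the prompt vector of the subgraph it is (softly) assigned to, and only these prompt parameters are tuned while the pre-trained GNN stays frozen. The class name SubgraphPrompt, the linear soft-assignment scorer, and all dimensions are illustrative assumptions for exposition, not the paper's actual implementation.

import torch
import torch.nn as nn

class SubgraphPrompt(nn.Module):
    """Hypothetical sketch of subgraph-level feature prompting.

    A universal feature-space prompt adds one shared vector p to
    every node: x_i' = x_i + p. Here, each of K subgraph-level
    prompt vectors is mixed into a node's features according to a
    learned soft assignment, so different regions of the graph
    receive different prompts.
    """

    def __init__(self, feat_dim: int, num_subgraphs: int):
        super().__init__()
        # One learnable prompt vector per subgraph (initialized at zero,
        # so prompting starts as the identity on the input features).
        self.prompts = nn.Parameter(torch.zeros(num_subgraphs, feat_dim))
        # Soft assignment of each node to the subgraph prompts.
        self.scorer = nn.Linear(feat_dim, num_subgraphs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, feat_dim] input node features.
        assign = torch.softmax(self.scorer(x), dim=-1)  # [N, K]
        prompt = assign @ self.prompts                  # [N, feat_dim]
        # Prompted features are then fed to the frozen pre-trained GNN.
        return x + prompt

# Usage: only prompt.parameters() are optimized for the downstream task.
x = torch.randn(100, 64)          # 100 nodes, 64-dim features
prompt = SubgraphPrompt(64, 4)    # 4 subgraph-level prompt vectors
x_prompted = prompt(x)            # same shape as x

Because the prompt acts purely on the input feature space, this style of tuning stays agnostic to how the underlying GNN was pre-trained, which is the universality property the abstract emphasizes; the parameter count is K * feat_dim plus the small scorer, far below full fine-tuning.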