Fight Fire with Fire: Towards Robust Graph Neural Networks on Dynamic Graphs Via Actively Defense
PROCEEDINGS OF THE VLDB ENDOWMENT(2024)
Abstract
Graph neural networks (GNNs) have achieved great success on various graph tasks. However, recent studies have revealed that GNNs are vulnerable to injection attacks. Owing to the openness of many platforms, attackers can inject malicious nodes with carefully designed edges and node features, causing GNNs to misclassify target nodes. To resist such adversarial attacks, researchers have proposed GNN defenders. These defenders assume that the attack pattern is known in advance, e.g., that attackers tend to add edges between dissimilar nodes; they then remove such edges from the attacked graph to alleviate the negative impact of the attack. On dynamic graphs, however, attackers can change their strategies over time, so existing defenders, passively designed for specific attack patterns, fail to resist them. In this paper, we propose a novel active GNN defender for dynamic graphs, namely ADGNN, which actively injects guardian nodes to protect target nodes from effective attacks. Specifically, we first formulate an active defense objective that governs the behavior of the guardian nodes. This objective aims to disrupt the attacker's predictions and protect easily attacked nodes, thereby preventing attackers from generating effective attacks. We then propose a gradient-based algorithm with two acceleration techniques to optimize this objective. Extensive experiments on four real-world graph datasets demonstrate the effectiveness of the proposed defender and its capacity to enhance existing GNN defenders.
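The abstract does not give ADGNN's actual objective or propagation model, but the core idea — injecting a guardian node and optimizing its features by gradient ascent on a defense objective — can be illustrated on a toy surrogate. The sketch below is entirely hypothetical: it uses a one-step mean-aggregation linear "GNN" and optimizes an injected guardian node's features to increase the classification margin of a target node, standing in for the paper's (unspecified) defense objective.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, c = 5, 4, 2            # nodes, feature dim, classes (toy sizes)
X = rng.normal(size=(n, d))  # original node features
W = rng.normal(size=(d, c))  # frozen surrogate GNN weights
A = np.eye(n)                # self-loops only, plus one edge below
A[0, 1] = A[1, 0] = 1.0
t, y_t = 0, 1                # target node and its true label

def target_margin(x_g):
    """Mean-aggregate features over node t's neighbourhood, including an
    injected guardian node g wired to t, then return the margin of the
    true class under the linear surrogate."""
    neigh = np.vstack([X[A[t] > 0], x_g])   # t's neighbours + guardian
    h = neigh.mean(axis=0) @ W              # one propagation step
    return h[y_t] - np.max(np.delete(h, y_t))

# Gradient ascent on the guardian's features: for a mean-aggregated
# linear layer, d(margin)/d(x_g) = (W[:, y_t] - W[:, rival]) / deg.
x_g = np.zeros(d)
deg = int((A[t] > 0).sum()) + 1             # neighbourhood size incl. guardian
for _ in range(100):
    h = np.vstack([X[A[t] > 0], x_g]).mean(axis=0) @ W
    rival = int(np.argmax(np.delete(h, y_t)))
    rival = rival if rival < y_t else rival + 1   # map back to full index
    x_g += 0.5 * (W[:, y_t] - W[:, rival]) / deg

print("margin before:", target_margin(np.zeros(d)),
      "after:", target_margin(x_g))
```

In the paper's setting the objective also accounts for the attacker's anticipated behavior and is optimized with two acceleration techniques over a full dynamic graph; this sketch only shows the gradient-ascent skeleton on a single target node.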