Enhancing Node-Level Adversarial Defenses by Lipschitz Regularization of Graph Neural Networks
Graph neural networks (GNNs) have shown considerable promise for graph-structured data. However, they are also known to be unstable and vulnerable to perturbations and attacks. Recently, the Lipschitz constant has been adopted to control the stability of Euclidean neural networks, although computing the exact constant is known to be difficult even for very shallow networks. In this paper, we extend the Lipschitz analysis to graphs by providing a systematic scheme for estimating upper bounds on the Lipschitz constants of GNNs. We also derive concrete bounds for widely used GNN architectures including GCN, GraphSAGE, and GAT. We then use these Lipschitz bounds as regularizers during GNN training to improve stability. Our numerical results on Lipschitz regularization of GNNs not only demonstrate enhanced test accuracy under random noise, but also show consistent improvements when combined with state-of-the-art defense methods against adversarial attacks.
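To make the regularization scheme concrete, the following is a minimal PyTorch sketch, not the paper's exact formulation: it penalizes the common per-layer upper bound prod_l ||A_hat||_2 * ||W_l||_2 for a two-layer GCN with 1-Lipschitz activations (for the symmetrically normalized adjacency with self-loops, ||A_hat||_2 <= 1). The penalty weight `lam`, the helper names, and the architecture are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def lipschitz_upper_bound(weights, adj_spectral_norm=1.0):
    """Upper-bound the network Lipschitz constant by the product of
    per-layer bounds ||A_hat||_2 * ||W_l||_2 (1-Lipschitz activations).
    The per-layer bound is an assumed standard estimate, not the
    paper's tightened bounds."""
    bound = torch.tensor(1.0)
    for W in weights:
        # torch.linalg.matrix_norm(..., ord=2) is the spectral norm
        # (largest singular value) and is differentiable.
        bound = bound * adj_spectral_norm * torch.linalg.matrix_norm(W, ord=2)
    return bound

class TwoLayerGCN(torch.nn.Module):
    """Minimal GCN: H1 = ReLU(A_hat X W1), logits = A_hat H1 W2."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.W1 = torch.nn.Parameter(torch.randn(in_dim, hid_dim) * 0.1)
        self.W2 = torch.nn.Parameter(torch.randn(hid_dim, out_dim) * 0.1)

    def forward(self, A_hat, X):
        H = F.relu(A_hat @ X @ self.W1)
        return A_hat @ H @ self.W2

def training_step(model, A_hat, X, y, train_mask, optimizer, lam=1e-3):
    """One training step: node-level cross-entropy plus the (assumed)
    Lipschitz penalty, weighted by the hypothetical coefficient `lam`."""
    optimizer.zero_grad()
    logits = model(A_hat, X)
    bound = lipschitz_upper_bound([model.W1, model.W2])
    loss = F.cross_entropy(logits[train_mask], y[train_mask]) + lam * bound
    loss.backward()
    optimizer.step()
    return loss.item()
```

Penalizing the bound rather than the (intractable) exact constant is the key design choice: the bound is cheap to evaluate and differentiate, so it can be folded directly into standard gradient-based training.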