
Fast Graph Neural Tangent Kernel via Kronecker Sketching

Publication, Conference
Jiang, S; Man, Y; Song, Z; Yu, Z; Zhuo, D
Published in: Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022
June 30, 2022

Many deep learning tasks involve graphs (e.g., protein structures, social networks, source code abstract syntax trees). Due to the importance of these tasks, researchers have turned to Graph Neural Networks (GNNs) as the de facto method for learning on graphs, and GNNs have been widely adopted thanks to their convincing performance. Unfortunately, one major barrier to using GNNs is that they require substantial time and resources to train. Recently, the Graph Neural Tangent Kernel (GNTK) (Du et al. 2019a) has emerged as an alternative method for learning on graph data. GNTK applies the Neural Tangent Kernel (NTK) (Jacot, Gabriel, and Hongler 2018), a kernel method, to graph data; solving NTK regression is equivalent to training an infinitely wide neural network with gradient descent. The key benefit of GNTK is that, as with any kernel method, its parameters can be solved for directly in a single step, avoiding time-consuming gradient descent. Meanwhile, sketching has become increasingly popular for speeding up various optimization problems, including kernel regression: given a kernel matrix of n graphs, sketching can reduce the running time of solving kernel regression to o(n³). Unfortunately, such methods usually require extensive knowledge about the kernel matrix beforehand, while for GNTK we find that constructing the kernel matrix already takes O(n²N⁴) time, assuming each graph has N nodes. This construction time can become the major performance bottleneck as the graph size N increases. A natural question is thus whether we can speed up the kernel matrix construction to improve GNTK regression's end-to-end running time. This paper provides the first algorithm that constructs the kernel matrix in o(n²N³) running time.
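To illustrate the "single step" the abstract refers to: once a kernel matrix K over n graphs is available, kernel ridge regression has a closed-form solution via one linear solve, with no gradient descent. The sketch below is a hypothetical minimal example; it uses a stand-in RBF kernel on random features in place of an actual GNTK matrix, and all names and sizes are illustrative, not from the paper.

```python
import numpy as np

# Stand-in setup: n "graphs", each summarized by a 3-dimensional feature
# vector.  In the paper's setting K would be the n x n GNTK matrix, whose
# construction is the expensive step this work accelerates.
rng = np.random.default_rng(0)
n = 8
X = rng.standard_normal((n, 3))

# RBF kernel as a placeholder for the GNTK kernel matrix.
K = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))

y = rng.standard_normal(n)  # regression targets, one per graph
lam = 1e-2                  # ridge regularizer

# The single-step solve: alpha = (K + lam * I)^{-1} y.
alpha = np.linalg.solve(K + lam * np.eye(n), y)

# Predictions on the training graphs are just K @ alpha.
y_hat = K @ alpha
```

Once K is built, this solve costs O(n³) dense (and o(n³) with sketching, as the abstract notes), which is why the O(n²N⁴) kernel construction dominates end-to-end time for large graphs.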


Published In

Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022

DOI

10.1609/aaai.v36i6.20662

Publication Date

June 30, 2022

Volume

36

Start / End Page

7033 / 7041

Citation

APA: Jiang, S., Man, Y., Song, Z., Yu, Z., & Zhuo, D. (2022). Fast Graph Neural Tangent Kernel via Kronecker Sketching. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 7033–7041). https://doi.org/10.1609/aaai.v36i6.20662
Chicago: Jiang, S., Y. Man, Z. Song, Z. Yu, and D. Zhuo. “Fast Graph Neural Tangent Kernel via Kronecker Sketching.” In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022, 36:7033–41, 2022. https://doi.org/10.1609/aaai.v36i6.20662.
ICMJE: Jiang S, Man Y, Song Z, Yu Z, Zhuo D. Fast Graph Neural Tangent Kernel via Kronecker Sketching. In: Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022. 2022. p. 7033–41.
MLA: Jiang, S., et al. “Fast Graph Neural Tangent Kernel via Kronecker Sketching.” Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022, vol. 36, 2022, pp. 7033–41. Scopus, doi:10.1609/aaai.v36i6.20662.
NLM: Jiang S, Man Y, Song Z, Yu Z, Zhuo D. Fast Graph Neural Tangent Kernel via Kronecker Sketching. Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022. 2022. p. 7033–7041.
