ON REPRESENTING LINEAR PROGRAMS BY GRAPH NEURAL NETWORKS
Learning to optimize is a rapidly growing area that aims to solve optimization problems or improve existing optimization algorithms using machine learning (ML). In particular, the graph neural network (GNN) is considered a suitable ML model for optimization problems that are invariant under permutations of their variables and constraints, for example, the linear program (LP). While the literature has reported encouraging numerical results, this paper establishes the theoretical foundation for applying GNNs to solving LPs. Given any size limit on LPs, we construct a GNN that maps different LPs to different outputs. We show that properly built GNNs can reliably predict feasibility, boundedness, and an optimal solution for each LP in a broad class. Our proofs are based on the recently discovered connections between the Weisfeiler-Lehman isomorphism test and the GNN. To validate our results, we train a simple GNN and present its accuracy in mapping LPs to their feasibility and solutions.
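As an illustration of the permutation invariance the abstract refers to, the sketch below encodes an LP min c^T x s.t. Ax <= b as a bipartite graph between constraint nodes and variable nodes, and performs one constraint-to-variable message-passing step. This is our own minimal sketch of the general idea, not the construction from the paper; all function names are hypothetical.

```python
import numpy as np

# Hypothetical sketch: encode the LP  min c^T x  s.t.  A x <= b
# as a bipartite graph between m constraint nodes and n variable nodes.

def lp_to_bipartite(A, b, c):
    """Return node features and weighted edges of the LP's bipartite graph."""
    m, n = A.shape
    cons_feat = b.reshape(m, 1)   # constraint node i carries its bound b_i
    var_feat = c.reshape(n, 1)    # variable node j carries its cost c_j
    edges = A                     # edge weight between constraint i and variable j is A_ij
    return cons_feat, var_feat, edges

def message_pass(cons_feat, var_feat, edges):
    """One constraint-to-variable message-passing step.

    Each variable sums its incident constraint features weighted by A_ij,
    so the update commutes with any reordering of the constraints --
    the invariance that makes GNNs a natural fit for LPs."""
    msgs = edges.T @ cons_feat                  # (n, 1) aggregated messages
    return np.concatenate([var_feat, msgs], 1) # new variable embeddings

A = np.array([[1.0, 2.0], [3.0, 0.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])
cons, var, E = lp_to_bipartite(A, b, c)
emb = message_pass(cons, var, E)
print(emb.shape)  # (2, 2): each variable keeps its cost and gains one message
```

Because the aggregation is a sum over constraints, permuting the rows of A and b leaves the variable embeddings unchanged, which is the property a GNN exploits when mapping LPs to outputs.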