On the Convergence of a Distributed Augmented Lagrangian Method for Nonconvex Optimization
In this paper, we propose a distributed algorithm for optimization problems with a separable, possibly nonconvex objective function subject to convex local constraints and linear coupling constraints. The method builds on the Accelerated Distributed Augmented Lagrangian (ADAL) algorithm, which was recently developed by the authors for convex problems. Here, we extend this line of work in two ways. First, we establish convergence of the method to a local minimum of the problem, under assumptions that are standard in the analysis of nonconvex optimization methods. To the best of our knowledge, this is the first work to show convergence to local minima for a distributed augmented Lagrangian (AL) method applied to nonconvex problems; distributed AL methods are already known to perform very well on convex problems. Second, we propose a more general, fully decentralized rule for selecting the stepsizes of the method, improving on the original ADAL algorithm, whose stepsize selection required global information at initialization. Numerical results are included to verify the correctness and efficiency of the proposed distributed method.
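To make the iteration structure concrete, the following is a minimal sketch of a distributed augmented Lagrangian scheme in the spirit of ADAL, not the authors' exact method or assumptions. The toy problem, the quadratic local costs, the penalty `rho`, and the stepsize `tau = 1/q` (with `q = 2` agents coupled in the single constraint) are all chosen here for illustration: each agent minimizes its local augmented Lagrangian with the other agent's variable held fixed, then a relaxed primal update and a dual ascent step follow.

```python
# Illustrative sketch of a distributed AL iteration (ADAL-style), NOT the
# authors' exact algorithm. Toy problem chosen for this example:
#   minimize (x1 - 1)^2 + (x2 - 3)^2   subject to  x1 + x2 = 2,
# whose KKT point is x = (0, 2) with multiplier lam = 2.

def run_adal(c=(1.0, 3.0), b=2.0, rho=1.0, tau=0.5, iters=2000):
    x = [0.0, 0.0]   # local primal variables, one per agent
    lam = 0.0        # dual variable for the coupling constraint
    for _ in range(iters):
        x_hat = [0.0, 0.0]
        for i in (0, 1):
            other = x[1 - i]  # the other agent's value from the previous step
            # Closed-form minimizer of the scalar local augmented Lagrangian
            #   (x_i - c_i)^2 + lam * x_i + (rho / 2) * (x_i + other - b)^2
            x_hat[i] = (2.0 * c[i] - lam + rho * (b - other)) / (2.0 + rho)
        # Relaxed primal update with stepsize tau, then the dual ascent step
        x = [x[i] + tau * (x_hat[i] - x[i]) for i in (0, 1)]
        lam += rho * tau * (x[0] + x[1] - b)
    return x, lam

x, lam = run_adal()
print(x, lam)  # approaches x = (0, 2), lam = 2
```

Both local minimizations use only the neighbor's value from the previous iteration, so they can run in parallel; the stepsize `tau = 1/q` mirrors the kind of coupling-degree-based rule the abstract refers to.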
Chatzipanagiotis, N; Zavlanos, MM