On the Convergence of a Distributed Augmented Lagrangian Method for Nonconvex Optimization

Journal Article

In this paper, we propose a distributed algorithm for optimization problems that involve a separable, possibly nonconvex objective function subject to convex local constraints and linear coupling constraints. The method is based on the accelerated distributed augmented Lagrangians (ADAL) algorithm that was recently developed by the authors to address convex problems. Here, we extend this line of work in two ways. First, we establish convergence of the method to a local minimum of the problem, using assumptions that are common in the analysis of nonconvex optimization methods. To the best of our knowledge, this is the first work that shows convergence to local minima specifically for a distributed augmented Lagrangian (AL) method applied to nonconvex optimization problems; distributed AL methods are known to perform very well when used to solve convex problems. Second, we propose a more general and decentralized rule to select the stepsizes of the method. This improves on the authors' original ADAL method, where the stepsize selection used global information at initialization. Numerical results are included to verify the correctness and efficiency of the proposed distributed method.
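To make the flavor of such a method concrete, the following is a minimal, illustrative sketch of a distributed augmented Lagrangian iteration on a toy two-agent problem. It is not the paper's ADAL algorithm or its nonconvex analysis: the objective here is convex quadratic so each local minimization has a closed form, and all constants (`c1`, `c2`, `b`, `rho`, `tau`) are assumptions chosen for the demo, not values from the paper.

```python
# Illustrative sketch only, NOT the paper's algorithm: an ADAL-style
# iteration on the toy convex problem
#     min (x1 - c1)^2 + (x2 - c2)^2   subject to   x1 + x2 = b.
# Constants below are demo assumptions, not taken from the paper.

def adal_demo(c1=1.0, c2=3.0, b=2.0, rho=1.0, tau=0.4, iters=2000):
    x1, x2, lam = 0.0, 0.0, 0.0  # primal iterates and dual multiplier
    for _ in range(iters):
        # Each agent minimizes its local augmented Lagrangian while
        # holding the other agent's last iterate fixed; for quadratics
        #     min_x (x - c)^2 + lam*x + (rho/2)*(x + other - b)^2
        # the minimizer is available in closed form:
        x1_hat = (2 * c1 - lam - rho * (x2 - b)) / (2 + rho)
        x2_hat = (2 * c2 - lam - rho * (x1 - b)) / (2 + rho)
        # Convex-combination primal update with stepsize tau, followed by
        # a dual ascent step on the coupling-constraint residual.
        x1 += tau * (x1_hat - x1)
        x2 += tau * (x2_hat - x2)
        lam += rho * tau * (x1 + x2 - b)
    return x1, x2, lam
```

With these demo constants the constrained optimum is `x1 = 0`, `x2 = 2`, and the iteration drives the coupling residual `x1 + x2 - b` toward zero. The stepsize `tau = 0.4` reflects the kind of restriction (tied to how many agents each constraint couples) that the paper's decentralized stepsize rule is designed to relax.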

Cited Authors

  • Chatzipanagiotis, N; Zavlanos, MM

Published Date

  • September 1, 2017

Published In

Volume / Issue

  • 62 / 9

Start / End Page

  • 4405 - 4420

International Standard Serial Number (ISSN)

  • 0018-9286

Digital Object Identifier (DOI)

  • 10.1109/TAC.2017.2658438

Citation Source

  • Scopus