Broadcast-based distributed alternating direction method of multipliers
We consider a multi-agent optimization problem in which a network of agents collectively solves a global optimization problem whose objective function is the sum of locally known convex functions. We propose a fully distributed broadcast-based Alternating Direction Method of Multipliers (ADMM), in which each agent broadcasts the outcome of its local processing to all of its neighbors. We show that both the objective function values and the feasibility violation converge at rate O(1/T), where T is the number of iterations. This improves upon the O(1/√T) convergence rate of subgradient-based methods. We also characterize the effect of the network structure and the choice of communication matrix on the convergence speed. Because of its broadcast nature, the storage requirements of our algorithm are much more modest than those of distributed algorithms that use pairwise communication between agents.
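For reference, a minimal sketch of the problem class described above, written in the consensus form commonly used by distributed ADMM methods (this is an assumption about the standard setup, not necessarily the paper's exact reformulation):

\[
\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{N} f_i(x)
\quad \Longleftrightarrow \quad
\min_{x_1,\dots,x_N,\, z} \; \sum_{i=1}^{N} f_i(x_i)
\quad \text{s.t.} \quad x_i = z, \;\; i = 1,\dots,N,
\]

where \(f_i\) is the convex function known only to agent \(i\), \(x_i\) is agent \(i\)'s local copy of the decision variable, and \(z\) is a consensus variable. In this form, ADMM alternates local \(x_i\)-updates, a coordination step on \(z\), and dual-variable updates, and the "feasibility violation" mentioned in the abstract corresponds to the size of the consensus residuals \(x_i - z\).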