
Distributed Online Convex Optimization with Improved Dynamic Regret

Publication, Journal Article
Zhang, Y; Ravier, RJ; Tarokh, V; Zavlanos, MM
November 12, 2019

In this paper, we consider the problem of distributed online convex optimization, where a group of agents collaborate to track the global minimizers of a sum of time-varying objective functions in an online manner. Specifically, we propose a novel distributed online gradient descent algorithm that relies on an online adaptation of the gradient tracking technique used in static optimization. We show that the dynamic regret bound of this algorithm has no explicit dependence on the time horizon and, therefore, can be tighter than existing bounds, especially for problems with long horizons. Our bound depends on a new regularity measure that quantifies the total change in the gradients at the optimal points at each time instant. Furthermore, when the optimizer is approximately subject to linear dynamics, we show that the dynamic regret bound can be further tightened by replacing the regularity measure that captures the path length of the optimizer with the accumulated prediction errors, which can be much lower in this special case. We present numerical experiments to corroborate our theoretical results.
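To make the high-level description above concrete, the following is a minimal sketch, in Python/NumPy, of a distributed online gradient descent update with an online gradient-tracking correction of the kind the abstract describes. The ring network, the time-varying quadratic losses, the mixing weights, the step size, and the regret proxy are all illustrative assumptions, not the paper's exact algorithm, problem data, or regret definition.

```python
# Sketch only: distributed online gradient descent with online gradient tracking,
# on assumed time-varying quadratic losses f_{i,t}(x) = 0.5 * ||x - b_{i,t}||^2.
import numpy as np

n_agents, dim, horizon, alpha = 5, 2, 200, 0.1  # illustrative sizes and step size

# Doubly stochastic mixing matrix for a ring graph (lazy Metropolis-style weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def target(i, t):
    # Assumed slowly drifting target b_{i,t} defining agent i's loss at time t.
    return np.array([np.sin(0.05 * t) + 0.1 * i, np.cos(0.05 * t)])

def grad(i, t, x):
    # Gradient of the local loss f_{i,t}(x) = 0.5 * ||x - b_{i,t}||^2.
    return x - target(i, t)

x = np.zeros((n_agents, dim))                               # local decision variables
g = np.array([grad(i, 0, x[i]) for i in range(n_agents)])   # current local gradients
y = g.copy()                                                # gradient-tracking variables

regret = 0.0
for t in range(horizon):
    b_t = np.array([target(i, t) for i in range(n_agents)])
    x_star = b_t.mean(axis=0)  # minimizer of the global loss sum_i f_{i,t} at time t
    # Simplified dynamic regret proxy: global loss at agent 0's iterate vs. the minimizer.
    regret += sum(0.5 * np.sum((x[0] - b_t[i]) ** 2) -
                  0.5 * np.sum((x_star - b_t[i]) ** 2) for i in range(n_agents))

    # Consensus on decisions plus a descent step along the tracked gradient direction.
    x_new = W @ x - alpha * y
    g_new = np.array([grad(i, t + 1, x_new[i]) for i in range(n_agents)])
    # Gradient tracking: mix neighbors' trackers and add the local gradient innovation.
    y = W @ y + g_new - g
    x, g = x_new, g_new

print(f"accumulated dynamic regret proxy over {horizon} rounds: {regret:.3f}")
```

The tracker y lets each agent follow an estimate of the average gradient across the network rather than only its own, which is the feature the abstract's dynamic regret analysis relies on; the regret computed here is only a rough proxy for the quantity bounded in the paper.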


Publication Date

November 12, 2019
 

Citation

APA: Zhang, Y., Ravier, R. J., Tarokh, V., & Zavlanos, M. M. (2019). Distributed Online Convex Optimization with Improved Dynamic Regret.
Chicago: Zhang, Yan, Robert J. Ravier, Vahid Tarokh, and Michael M. Zavlanos. “Distributed Online Convex Optimization with Improved Dynamic Regret,” November 12, 2019.
ICMJE: Zhang Y, Ravier RJ, Tarokh V, Zavlanos MM. Distributed Online Convex Optimization with Improved Dynamic Regret. 2019 Nov 12;
NLM: Zhang Y, Ravier RJ, Tarokh V, Zavlanos MM. Distributed Online Convex Optimization with Improved Dynamic Regret. 2019 Nov 12;
