Attack-Resilient State Estimation in the Presence of Noise
We consider the problem of attack-resilient state estimation in the presence of noise. We focus on the most general model of sensor attacks, in which any signal can be injected via the compromised sensors. We present an $l_0$-based state estimator that can be formulated as a mixed-integer linear program, as well as its convex relaxation based on the $l_1$ norm. For both the $l_0$- and $l_1$-based state estimators, we derive rigorous analytic bounds on the state-estimation errors. We show that the worst-case error is linear in the size of the noise, meaning that the attacker cannot exploit noise and modeling errors to introduce unbounded state-estimation errors. Finally, we show how the presented attack-resilient state estimators can be used for sound attack detection and identification, and we provide conditions on the size of the attack vectors that ensure correct identification of compromised sensors.
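To make the idea of the $l_1$-based convex relaxation concrete, the following is a minimal sketch, not the paper's exact formulation: it assumes a static linear measurement model $y_i = C_i x + a_i + w_i$ with scalar per-sensor measurements (the paper works with measurement blocks over a time window), where $a_i$ is nonzero only for compromised sensors and $w_i$ is bounded noise. Minimizing the $l_1$ norm of the per-sensor residuals relaxes the $l_0$ objective of counting attacked sensors; the matrix `C`, the sparsity pattern of `attack`, and the noise level are illustrative choices.

```python
# Hedged sketch of an l1-relaxed resilient estimator (illustrative, not the
# paper's exact program): measurements y = C @ x_true + attack + noise, where
# only a few entries of `attack` are nonzero. Minimizing the l1 norm of the
# residual y - C @ x tolerates a small number of grossly corrupted sensors.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p = 3, 8                      # state dimension, number of sensors (assumed)
C = rng.standard_normal((p, n))  # hypothetical measurement matrix
x_true = rng.standard_normal(n)

noise = 0.01 * rng.standard_normal(p)   # small bounded noise
attack = np.zeros(p)
attack[[1, 5]] = 5.0                    # two compromised sensors inject large signals
y = C @ x_true + attack + noise

x = cp.Variable(n)
residuals = y - C @ x
# l1 relaxation of the l0 objective "number of attacked sensors"
problem = cp.Problem(cp.Minimize(cp.norm1(residuals)))
problem.solve()

print("state-estimation error:", np.linalg.norm(x.value - x_true))
# Sensors with the largest residuals are candidates for identification
print("suspected compromised sensors:",
      np.argsort(-np.abs(residuals.value))[:2])
```

In this toy setting the estimation error stays on the order of the noise, and the two injected attacks show up as the largest residuals, mirroring the abstract's claims that the worst-case error scales with the noise and that the residuals support attack detection and identification.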