Embracing Privacy, Robustness, and Efficiency with Trustworthy Federated Learning on Edge Devices
While Federated Learning (FL), a promising distributed learning paradigm, offers privacy guarantees, robustness and efficiency issues hinder its deployment on heterogeneous edge devices. In this paper, we discuss several state-of-the-art methods that complement FL with robustness and efficiency. We first introduce FL-WBC and FADE, which confer robustness against training-stage and inference-stage attacks, respectively. FL-WBC proposes a client-side purification mechanism to mitigate the impact of model poisoning attacks, and FADE adopts adversarially decoupled learning to attain efficient adversarial training in FL on resource-constrained edge devices. We then explore how to improve the training efficiency of FL under statistical and systematic heterogeneity, with FedCor and FedSEA, respectively. FedCor develops a correlation-based client selection strategy that effectively accelerates the convergence of FL under statistical heterogeneity, and FedSEA introduces a semi-asynchronous framework to handle systematic heterogeneity across devices. We finally discuss potential future directions toward practical private, robust, and efficient FL on edge devices.
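To make the correlation-based selection idea concrete, here is a minimal sketch of one way correlations between client updates could guide client selection: greedily pick clients whose updates are least correlated with those already chosen, so the selected cohort covers more of the heterogeneous data distribution. The function name and the greedy diversity criterion are illustrative assumptions, not FedCor's actual Gaussian-process-based algorithm.

```python
def greedy_diverse_selection(corr, k):
    """Select k clients from a client-client correlation matrix `corr`.

    Illustrative heuristic (NOT FedCor's actual criterion): start from
    the most "representative" client, then repeatedly add the client
    least correlated with the current selection.
    """
    n = len(corr)
    # Seed with the client most correlated with all others (most representative).
    selected = [max(range(n), key=lambda i: sum(corr[i]))]
    while len(selected) < k:
        remaining = [i for i in range(n) if i not in selected]
        # Add the client whose strongest correlation to the selection is smallest.
        nxt = min(remaining, key=lambda i: max(corr[i][j] for j in selected))
        selected.append(nxt)
    return selected

# Toy correlation matrix for 4 clients: {0, 1} and {2, 3} form highly
# correlated clusters (e.g., clients holding similar local data).
corr = [
    [1.0, 0.9, 0.1, 0.1],
    [0.9, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.8],
    [0.1, 0.1, 0.8, 1.0],
]
print(greedy_diverse_selection(corr, 2))  # picks one client per cluster: [0, 2]
```

Under statistical heterogeneity, selecting one client per correlated cluster lets each round's aggregation see a more diverse slice of the global data than uniform random sampling would.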