
We investigate the concept of utility-preserving federated learning (UPFL) in the context of deep neural networks. We theoretically prove and experimentally validate that UPFL achieves the same accuracy as centralized training, independent of how the data are distributed across the clients. We demonstrate that UPFL fully benefits from the momentum and weight decay techniques, just as centralized training does, but incurs substantial communication overhead. Ordinary federated learning, on the other hand, is far more communication-efficient, but only partially benefits from these techniques. Given that, we propose a method called weighted gradient accumulation that gains more of the benefit of momentum and weight decay, akin to UPFL, while providing practical communication efficiency similar to ordinary federated learning.
Federated Learning; Utility-preserving Federated Learning; Weighted Gradient Accumulation
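The abstract does not spell out the weighted gradient accumulation algorithm. The following is a minimal, hypothetical sketch of one way such a scheme could work: clients accumulate their local gradients with decaying weights over several local steps and transmit a single weighted average per round, while the server applies momentum and weight decay once per round. The function names, the geometric weighting, and the server update rule are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of weighted gradient accumulation in federated learning.
# All design choices below (geometric per-step weights, size-weighted
# aggregation, server-side momentum/weight decay) are assumptions for
# illustration; they are not taken from the paper.
import numpy as np

def client_accumulate(w, data, lr, local_steps, grad_fn, decay=0.9):
    """Run local SGD steps, accumulating gradients with geometric weights so
    that later (staler) local gradients count less. Returns one weighted
    average gradient to send to the server for this round."""
    acc = np.zeros_like(w)
    w_local = w.copy()
    weight, total = 1.0, 0.0
    for _ in range(local_steps):
        g = grad_fn(w_local, data)   # user-supplied gradient oracle
        acc += weight * g
        total += weight
        w_local -= lr * g            # plain local SGD step
        weight *= decay              # down-weight later local gradients
    return acc / total

def server_update(w, client_grads, sizes, v, lr, momentum=0.9, weight_decay=1e-4):
    """Aggregate the clients' accumulated gradients (weighted by local dataset
    sizes) and apply a single momentum + weight-decay update, mimicking a
    centralized optimizer step per communication round."""
    g = sum(n * gk for n, gk in zip(sizes, client_grads)) / sum(sizes)
    g = g + weight_decay * w         # weight decay applied at the server
    v = momentum * v + g             # server-side momentum buffer
    return w - lr * v, v
```

Because momentum and weight decay are applied only at the server on the aggregated gradient, each round costs a single upload per client, matching the communication pattern of ordinary federated learning while keeping the optimizer state centralized.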
