
This report reviews and reproduces the foundational work of McMahan et al. (2017) on Federated Learning (FL), focusing on the Federated Averaging (FedAvg) algorithm. The report highlights how FL enables collaborative model training on decentralized, privacy-sensitive data without transferring raw data to a central server. We summarize the methodology, key innovations, and empirical results of the original study, and complement them with our own PyTorch implementation. We replicate experiments on the MNIST dataset using a Multi-Layer Perceptron (MLP), observe convergence behavior, and discuss performance metrics including accuracy and communication efficiency. We also discuss limitations and directions for future work, such as privacy guarantees and dynamic client participation.
Keywords: federated learning, FedAvg, decentralized data, communication-efficient learning, privacy-preserving ML, non-IID, PyTorch, MNIST, machine learning, model aggregation
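The core of the FedAvg algorithm summarized above is a server-side aggregation step: client models trained locally are averaged, weighted by each client's local dataset size. A minimal sketch of that step is shown below in plain Python; the function name `fedavg` and the dict-of-lists model representation are illustrative assumptions, not the report's actual implementation (which uses PyTorch parameter tensors).

```python
def fedavg(client_models, client_sizes):
    """Weighted average of client parameters, weighted by local data size.

    client_models: list of dicts mapping parameter name -> list of floats
                   (a stand-in for a PyTorch state_dict).
    client_sizes:  number of local training examples on each client.
    """
    total = sum(client_sizes)
    global_model = {}
    for name in client_models[0]:
        # Average each parameter element-wise across clients,
        # weighting client k by n_k / n_total as in FedAvg.
        global_model[name] = [
            sum(m[name][i] * n for m, n in zip(client_models, client_sizes)) / total
            for i in range(len(client_models[0][name]))
        ]
    return global_model

# Toy example: two clients sharing one parameter vector "w".
clients = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}]
sizes = [10, 30]
print(fedavg(clients, sizes))  # the second client contributes 3x the weight
```

In a PyTorch implementation, the same computation would typically run over the tensors in each client's `state_dict` rather than plain lists, but the weighting logic is identical.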
