Software
Data sources: ZENODO

On the Detectability of Active Gradient Inversion Attacks in Federated Learning (IEEE S&P '26) - Source Code

Authors: Parrella, Giuseppe; Mazzocca, Carlo; Foggia, Pasquale; Carletti, Vincenzo; Vento, Mario

Abstract

This artifact accompanies the paper "On the Detectability of Active Gradient Inversion Attacks in Federated Learning," accepted for publication at the IEEE Symposium on Security and Privacy (IEEE S&P) 2026. Federated learning allows multiple clients to collaboratively train a machine learning model while keeping their private data on-site. However, the gradients exchanged during training remain vulnerable to gradient inversion attacks, which enable a malicious server to reconstruct the clients' local data. In active attacks, the server deliberately manipulates the global model to facilitate this reconstruction. This repository provides the official implementation to reproduce our comprehensive analysis of four state-of-the-art active gradient inversion attacks. It also contains the source code for our novel, lightweight client-side detection techniques, which identify statistically improbable weight structures as well as anomalous loss and gradient dynamics, enabling clients to detect active attacks effectively without modifying the standard federated learning protocol. Please refer to the documentation included in the repository for detailed instructions on setting up the environment, running the minimal working example, and reproducing the experimental results.
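
The actual detection code lives in the repository; purely as a rough illustration of the idea described above, the sketch below shows one way a client-side check of this kind could be structured. It is a minimal sketch under our own assumptions: the function names, thresholds, and statistics are illustrative and are not taken from the paper's implementation.

import numpy as np

def duplicate_row_fraction(weight, tol=1e-6):
    # Fraction of rows in a weight matrix that duplicate a later row.
    # Large blocks of identical rows are statistically improbable after
    # ordinary training and are one signature of planted trap weights.
    w = np.asarray(weight, dtype=np.float64)
    n = w.shape[0]
    dup = 0
    for i in range(n):
        for j in range(i + 1, n):
            if np.max(np.abs(w[i] - w[j])) < tol:
                dup += 1
                break
    return dup / n

def zscore(value, history, eps=1e-12):
    # Standard score of the current observation against past rounds.
    h = np.asarray(history, dtype=np.float64)
    return abs(value - h.mean()) / (h.std() + eps)

def looks_malicious(weight, loss, grad_norm, loss_hist, grad_hist,
                    dup_thresh=0.25, z_thresh=4.0):
    # Flag a round if the received weights contain improbable duplicated
    # structure, or if the local loss / gradient norm deviates sharply
    # from the client's own history. Thresholds here are illustrative.
    if duplicate_row_fraction(weight) > dup_thresh:
        return True
    if len(loss_hist) >= 5 and zscore(loss, loss_hist) > z_thresh:
        return True
    if len(grad_hist) >= 5 and zscore(grad_norm, grad_hist) > z_thresh:
        return True
    return False

# Example: a layer with many identical rows is flagged; an honest one is not.
rng = np.random.default_rng(0)
honest = rng.normal(size=(64, 128))
trapped = honest.copy()
trapped[:32] = trapped[0]  # 32 identical rows planted by a hypothetical attacker
loss_hist, grad_hist = [1.0, 0.95, 0.92, 0.9, 0.88], [1.1, 1.0, 1.0, 0.9, 1.0]
print(looks_malicious(honest, 0.9, 1.0, loss_hist, grad_hist))   # False
print(looks_malicious(trapped, 0.9, 1.0, loss_hist, grad_hist))  # True

In a check of this flavor, a client would run the test each round on the global weights it receives and on its local loss and gradient-norm history, and could refuse to upload an update (or raise an alarm) for flagged rounds, leaving the federated learning protocol itself unchanged.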
