As machine learning (ML) has become pervasive across fields (industry, healthcare, social networks), privacy concerns regarding the data used for training have gained critical importance. In settings where several parties wish to collaboratively train a common model without jeopardizing their sensitive data, the need for a private training protocol is particularly stringent: the data must be protected both from the model's end-users and from the other actors in the training phase. In this context of secure collaborative learning, Differential Privacy (DP) and Fully Homomorphic Encryption (FHE) are two complementary countermeasures of growing interest for thwarting privacy attacks in ML systems. Central to many collaborative training protocols, in the line of PATE, is majority-voting aggregation. In this paper, we therefore design SHIELD, a probabilistic approximate majority-voting operator which is faster to execute homomorphically than existing approaches based on exact argmax computation over a histogram of votes. As an additional benefit, the inaccuracy of SHIELD is used as a feature to provably provide DP guarantees. Although SHIELD may have other applications, we focus here on one setting and seamlessly integrate it into the SPEED collaborative training framework of \cite{grivet2021speed} to improve its computational efficiency. After thoroughly describing the FHE implementation of our algorithm and its DP analysis, we present experimental results. To the best of our knowledge, this is the first work in which relaxing the accuracy of an algorithm is constructively used as a degree of freedom to achieve better FHE performance.
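SHIELD itself operates homomorphically over encrypted votes, but the plaintext aggregation it approximates can be sketched as a PATE-style noisy argmax over a histogram of teacher votes. The sketch below is illustrative only (the function name, noise scale, and Laplace mechanism are assumptions drawn from the standard PATE construction, not from the SHIELD operator described in the paper):

```python
import numpy as np

def noisy_majority_vote(teacher_labels, num_classes, epsilon, rng=None):
    """PATE-style aggregation: histogram the teachers' votes, perturb
    each count with Laplace noise of scale 1/epsilon, and return the
    argmax. Smaller epsilon means more noise and stronger privacy.
    Plaintext illustration only -- not the homomorphic SHIELD operator.
    """
    rng = np.random.default_rng() if rng is None else rng
    hist = np.bincount(teacher_labels, minlength=num_classes)
    noisy = hist + rng.laplace(scale=1.0 / epsilon, size=num_classes)
    return int(np.argmax(noisy))

# Example: 10 teachers voting among 3 classes, with class 2 in the majority.
votes = np.array([0, 2, 2, 2, 1, 2, 0, 2, 2, 1])
label = noisy_majority_vote(votes, num_classes=3, epsilon=5.0)
```

Computing the exact argmax of the histogram under FHE is expensive; SHIELD's contribution is to replace it with a cheaper probabilistic approximation whose inaccuracy doubles as the source of the DP noise.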
[INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI], [INFO.INFO-LG] Computer Science [cs]/Machine Learning [cs.LG], privacy, artificial intelligence, machine learning, private training protocol, Differential Privacy (DP), Fully Homomorphic Encryption (FHE), Federated Learning, collaborative training
