A library for running membership inference attacks (MIA) against machine learning models. Check out the documentation.

MIA are attacks against the privacy of training data. In MIA, an attacker tries to guess whether a given example was used to train a target model, using only queries to the model. See the paper by Shokri et al. for details. Currently, you can use the library to evaluate the robustness of your Keras or PyTorch models to MIA.

Features:
- Implements the original shadow model attack
- Customizable: can use any scikit-learn Estimator-like object as a shadow or attack model
- Tested with Keras and PyTorch
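To make the shadow model attack concrete, here is a minimal, self-contained sketch using only scikit-learn. It deliberately does not use this library's API: all names are illustrative, and it trains a single attack model rather than one attack model per class as in Shokri et al.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data: the first 2000 rows play the role of the target's world,
# the rest is data the attacker uses to train shadow models.
X, y = make_classification(n_samples=6000, n_features=20, random_state=0)

# Target model: trained on "members"; "non-members" were never seen by it.
X_mem, X_non, y_mem, y_non = train_test_split(
    X[:2000], y[:2000], test_size=0.5, random_state=0)
target = RandomForestClassifier(random_state=0).fit(X_mem, y_mem)

# Shadow models: the attacker trains models similar to the target on data
# it controls, so it knows which examples were in each training set.
X_sh, y_sh = X[2000:], y[2000:]
att_X, att_y = [], []
for i in range(4):
    X_in, X_out, y_in, y_out = train_test_split(
        X_sh, y_sh, train_size=500, test_size=500, random_state=i)
    shadow = RandomForestClassifier(random_state=i).fit(X_in, y_in)
    # Attack features: the shadow model's confidence vector per example,
    # labeled 1 for training members and 0 for non-members.
    att_X += [shadow.predict_proba(X_in), shadow.predict_proba(X_out)]
    att_y += [np.ones(len(X_in)), np.zeros(len(X_out))]

# Attack model: learns to tell member confidence vectors from non-member
# ones. (Shokri et al. train one attack model per class; a single model
# is a simplification here.)
attack = RandomForestClassifier(random_state=0).fit(
    np.vstack(att_X), np.concatenate(att_y))

# Query the target and score membership: members should rank higher,
# because the overfit target is more confident on its own training data.
print("members:    ", attack.predict_proba(target.predict_proba(X_mem))[:, 1].mean())
print("non-members:", attack.predict_proba(target.predict_proba(X_non))[:, 1].mean())
```

The gap between the two printed scores is a rough measure of the target's vulnerability to MIA: the closer they are, the less membership signal the model leaks.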
Keywords: machine-learning, privacy, adversarial-machine-learning
| Indicator | Description | Value |
|---|---|---|
| citations | An alternative to the "influence" indicator; also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 2 |
| popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
| views | Usage count. | 36 |
| downloads | Usage count. | 2 |

Views and downloads provided by UsageCounts.