ZENODO
Article · 2022
License: CC BY
Data sources: Datacite

DARE: Delta debugging Aided Robustness Enhancement

Authors: Zhang Yingyi

Abstract

Description

With the aim of improving the universal robustness of deep learning models, and thus protecting against unknown attacks, we put forward a novel model training framework called DARE (Delta debugging Aided Robustness Enhancement). DARE consists of three stages: model transformation, data augmentation, and model tuning and synchronization. Model transformation constructs a regression model that is isomorphic to the original classification model by extending its underlying structure, making it more sensitive to the small perturbations that need to be suppressed. To match the transformed model, the second stage, data augmentation, applies a novel data collection and transformation strategy that mines the model's training history in a delta debugging fashion. Finally, the transformed model is trained on the collected data, and the fine-tuned weights are synchronized back to the original classification model to obtain a more robust model (an illustrative sketch of the transformation and synchronization steps follows the Environment section below). We have conducted an extensive evaluation on 9 DL models. The results show that our approach significantly outperforms existing adversarial training techniques.

Project Structure

```
├─ adv
│  ├─ fintune_adv.py   # Adversarial training
│  ├─ generate_adv.py  # Generate adversarial samples
├─ dare
│  ├─ dare.py          # Fine-tuning with DARE
│  ├─ dare-s.py        # A variant of DARE (removes the slicing process)
│  ├─ dare-sl.py       # A variant of DARE (removes the model transformation process)
│  ├─ get_ats.py       # Data augmentation
│  ├─ nnslicer.py      # Model slicing
├─ train
│  ├─ models
│  ├─ alexnet.py       # Training code for AlexNet
│  ├─ params.py        # Parameters for VGG16 and VGG19
│  ├─ train.py         # Training code for VGG16 and VGG19
│  ├─ utils.py         # Utility code for VGG16 and VGG19
```

Dataset and Models

To evaluate the performance of DARE, we have conducted an extensive empirical study. Specifically, we employed 3 widely used datasets from prior studies, i.e., CIFAR10, SVHN, and Fashion-MNIST. The details are presented as follows.

| Dataset | Description | Train Set | Test Set | Size | Link |
|---------|-------------|-----------|----------|------|------|
| cifar10 | Classic 10-class classification dataset | 50,000 | 10,000 | 32×32 | cifar10 |
| svhn | A real-world image dataset of street-view house numbers | 72,257 | 26,032 | 32×32 | svhn |
| Fashion-MNIST (FM) | A dataset of Zalando's article images | 50,000 | 10,000 | 28×28 | fashion mnist |

Furthermore, to validate the generality of `DARE`, we employed 3 different neural network architectures in the experiment.

| Model | Dataset | Model Size | Params | Acc (%) |
|---------|---------|-------|-------|------|
| VGG16   | CIFAR10 | 256.9 | 33.6M | 88.7 |
|         | SVHN    | 256.9 | 33.6M | 94.3 |
|         | FM      | 245.8 | 22.6M | 91.1 |
| VGG19   | CIFAR10 | 297.7 | 39.0M | 90.6 |
|         | SVHN    | 297.4 | 39.0M | 93.9 |
|         | FM      | 256.7 | 33.6M | 90.2 |
| AlexNet | CIFAR10 | 73.6  | 9.6M  | 83   |
|         | SVHN    | 73.6  | 9.6M  | 93.3 |
|         | FM      | 63.0  | 8.1M  | 90.0 |

The data, models, and other intermediate results can be obtained from the following link: https://mega.nz/folder/1alCBThJ#JLh9CC6lY0FpOIP6icgh8w

Reproducibility

Environment

To run and reproduce our results, please install the suggested versions of the key packages:

```
tensorflow==2.0.0
Keras==2.3.1
foolbox==2.3.0
```
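To make the model transformation and synchronization stages described above more concrete, here is a minimal Keras sketch written for this description; it is not the code in dare\dare.py. It assumes a built classifier that ends in a Dense softmax head, replaces that head with a linear (regression-style) one of the same shape, and copies the fine-tuned weights back afterwards. The helper names and the usage values in the comments are illustrative only.

```python
from tensorflow import keras

def to_regression_model(classifier: keras.Model) -> keras.Model:
    """Hypothetical stand-in for DARE's model transformation stage:
    build an isomorphic copy of `classifier` with a linear head, so
    small input perturbations show up in unbounded outputs instead
    of being squashed by the softmax."""
    # Work on an independent copy so the original classifier is untouched.
    base = keras.models.clone_model(classifier)
    base.set_weights(classifier.get_weights())
    old_head = base.layers[-1]        # assumed: Dense(num_classes, softmax)
    trunk = base.layers[-2].output    # penultimate activations
    new_head = keras.layers.Dense(old_head.units, activation="linear",
                                  name="regression_head")(trunk)
    regression = keras.Model(inputs=base.inputs, outputs=new_head)
    # Reuse the classifier's head weights so both models start out
    # with identical parameters.
    regression.get_layer("regression_head").set_weights(old_head.get_weights())
    return regression

def sync_weights(regression: keras.Model, classifier: keras.Model) -> None:
    """Synchronization step in miniature: for the chain-structured
    models used here the two weight lists line up one-to-one, so the
    fine-tuned weights can be copied straight back."""
    classifier.set_weights(regression.get_weights())

# Hypothetical usage with augmented data (x_aug, y_aug) produced by the
# data augmentation stage:
#   reg = to_regression_model(vgg16)
#   reg.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-7), loss="mse")
#   reg.fit(x_aug, y_aug, epochs=5, batch_size=128)
#   sync_weights(reg, vgg16)
```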
Preparation

You need to follow these steps to run DARE end to end (illustrative sketches of steps 1–2 and of the robustness measurement in step 4 follow at the end of this section):

1. Randomly split the original training set of the dataset into two halves: one half is used for model training (x_train) and the other half for model fine-tuning (x_validation).
2. Run the corresponding code in the train folder and train the model on x_train. Note that all historical models (training checkpoints) need to be saved during training.
3. Run dare\nnslicer.py to slice the model and save the slicing results, then run dare\get_ats.py to traverse the historical models and compute their behavior on x_validation for data augmentation.
4. Randomly select 5,000 images from x_train and use adv\generate_adv.py to generate adversarial samples for measuring the robustness of the model.
5. Using the results from step 3, run dare\dare.py to fine-tune the model; all fine-tuned models are saved in a folder. Then run dare\test.py to compute their robustness.

Note: since the structure and parameters of different models can differ considerably, adding a new model requires modifying the layer names and output shapes referenced in dare\nnslicer.py.

Parameters

Model Slicing

The size of a model slice is controlled by the per parameter in dare\nnslicer.py: the larger per is, the larger the slice. In our experiments it is set to 0.95.

Model Tuning and Synchronization

The main parameter during fine-tuning is the learning rate, adjusted via the lr_rate parameter. In our experiments, VGG16 and VGG19 achieved their best results with lr_rate mostly set to 10e-7, and AlexNet achieved its best results with lr_rate mostly set to 10e-3.
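As a concrete illustration of steps 1 and 2, the sketch below (written for this description, with a toy CNN standing in for the VGG/AlexNet code under train\) splits the CIFAR10 training set into x_train and x_validation halves and saves a checkpoint after every epoch so the full training history is available for the later delta-debugging analysis. The seed, directory name, and epoch count are illustrative choices, not values from the paper.

```python
import os
import numpy as np
from tensorflow import keras

# Step 1: randomly split the original training set into two halves.
(x_all, y_all), _ = keras.datasets.cifar10.load_data()
x_all = x_all.astype("float32") / 255.0

rng = np.random.RandomState(42)                    # illustrative fixed seed
idx = rng.permutation(len(x_all))
half = len(x_all) // 2
x_train, y_train = x_all[idx[:half]], y_all[idx[:half]]             # model training
x_validation, y_validation = x_all[idx[half:]], y_all[idx[half:]]   # fine-tuning

# Step 2: train on x_train and keep every historical model.
model = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

os.makedirs("history", exist_ok=True)
checkpoint = keras.callbacks.ModelCheckpoint(
    "history/model_epoch_{epoch:03d}.h5",          # one checkpoint per epoch
    save_best_only=False)

model.fit(x_train, y_train,
          validation_data=(x_validation, y_validation),
          epochs=30, batch_size=128, callbacks=[checkpoint])
```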
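For step 4, adv\generate_adv.py relies on foolbox 2.3.0; to keep this description self-contained, the following library-free FGSM sketch shows the same idea: perturb selected training images along the loss gradient and report the fraction the model still classifies correctly. It illustrates the measurement only and is not the attack configuration used in the paper; the epsilon value and function names are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

def fgsm(model: keras.Model, x: np.ndarray, y: np.ndarray,
         epsilon: float = 0.03) -> np.ndarray:
    """One-step FGSM: move each input in the direction that increases
    the classification loss (epsilon is an illustrative budget)."""
    x_t = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x_t)
        preds = model(x_t, training=False)
        loss = keras.losses.sparse_categorical_crossentropy(y, preds)
    grad = tape.gradient(loss, x_t)
    x_adv = x_t + epsilon * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0).numpy()

def robust_accuracy(model: keras.Model, x_adv: np.ndarray,
                    y: np.ndarray) -> float:
    """Fraction of adversarial samples still classified correctly."""
    preds = np.argmax(model.predict(x_adv, batch_size=128), axis=1)
    return float(np.mean(preds == y.flatten()))

# Illustrative usage on 5,000 randomly chosen training images:
#   sel = np.random.choice(len(x_train), 5000, replace=False)
#   x_adv = fgsm(model, x_train[sel], y_train[sel])
#   print("robust accuracy:", robust_accuracy(model, x_adv, y_train[sel]))
```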
