This repository contains the official implementation of the paper *DANAA: Towards Transferable Attacks with Double Adversarial Neuron Attribution*. The DANAA framework generates adversarial examples with enhanced transferability by leveraging double adversarial neuron attribution, which attributes the model's output to mid-layer neurons.
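The exact DANAA update rule is defined in the paper and in `DANAA.py`; the general idea behind neuron attribution, however, can be illustrated with a minimal, hypothetical NumPy sketch: a neuron's contribution is approximated by accumulating gradient times step along a path from a baseline to the input (the toy `layer` function below stands in for a real network layer and is not part of the repository).

```python
import numpy as np

# Illustrative sketch of path-based neuron attribution.
# This is NOT the exact DANAA algorithm; it only shows the
# "accumulate gradient * step along a path" idea the method builds on.

def layer_grad(x):
    # Analytic gradient of a toy layer f(x) = x**2.
    return 2 * x

def path_attribution(x, baseline, steps=100):
    """Accumulate gradient * step along a straight path baseline -> x."""
    delta = (x - baseline) / steps
    attr = np.zeros_like(x)
    point = baseline.astype(float).copy()
    for _ in range(steps):
        attr += layer_grad(point) * delta
        point += delta
    return attr

x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attr = path_attribution(x, baseline)
# For f(x) = x**2 with a zero baseline, the accumulated attribution
# recovers f(x) - f(baseline) = x**2 up to discretization error.
print(attr)
```

DANAA's contribution, per the paper's title, is to perform this attribution along an adversarial (non-linear) path rather than a straight-line one.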
The implementation requires the following software and libraries:
- Python 3.6.13
- Keras 2.2.4
- NumPy 1.16.2
- TensorFlow 1.14.0
- TQDM 4.63.1
- Pillow 6.0.0
- SciPy 1.2.1
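Assuming the standard PyPI package names for these libraries, the pinned versions can be installed in one step (a Python 3.6 environment is required for these old releases):

```shell
# Install the pinned dependency versions (assumed PyPI package names).
pip install tensorflow==1.14.0 keras==2.2.4 numpy==1.16.2 \
            tqdm==4.63.1 Pillow==6.0.0 scipy==1.2.1
```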
For the experiments, pre-trained models are required. The following table lists the models along with their download links:

| Model | Source |
| --- | --- |
| Inception v3 | Download |
| Inception v4 | Download |
| Inception-ResNet-v2 | Download |
| ResNet v2 152 | Download |
| Inception v3 adv | Download |
| Inception-ResNet-v2 adv | Download |
| Inception v3 adv ens3 | Download |
| Inception v3 adv ens4 | Download |
| Inception-ResNet-v2 adv ens3 | Download |
The non-adversarial models are sourced from TensorFlow's Slim model library, while the adversarially trained models can be found in the adversarial ImageNet models repository. Download these checkpoints and place them in the `models` directory within the repository.
To execute the DANAA attack, use the following command format, adjusting the parameters for the desired model and settings:

```shell
python DANAA.py --model_name <model> --attack_method DANAA --layer_name <layer> --ens 30 --output_dir ./outputs/DANAA/ --scale 0.25
```
For instance, to run DANAA on the Inception v3 model, the command would be:

```shell
python DANAA.py --model_name inception_v3 --attack_method DANAA --layer_name InceptionV3/InceptionV3/Mixed_5b/concat --ens 30 --output_dir ./outputs/DANAA/ --scale 0.25
```
The framework also supports comparison with other attack methods, including NAA, MIM, NRDM, FIA, and FDA. To run one of these, simply replace the `--attack_method` parameter with the desired attack name.
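Since only the `--attack_method` flag changes between methods, the comparison runs can be scripted. The loop below is a dry-run sketch that only prints the command for each method, reusing the Inception v3 flags from the example above (adjust `--model_name` and `--layer_name` for other models):

```shell
# Dry-run sketch: print the command line for each supported attack method.
# Flags mirror the Inception v3 example; pipe to sh to actually run them.
for method in DANAA NAA MIM NRDM FIA FDA; do
  echo "python DANAA.py --model_name inception_v3 --attack_method ${method}" \
       "--layer_name InceptionV3/InceptionV3/Mixed_5b/concat --ens 30" \
       "--output_dir ./outputs/${method}/ --scale 0.25"
done
```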
To verify the effectiveness of the generated adversarial examples, use the `verify.py` script as follows:

```shell
python verify.py --ori_path ./dataset/images/ --output_dir ./outputs/DANAA/
```
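Verification boils down to measuring how often the target models change their prediction on the adversarial images. The helper below is a hypothetical sketch of that computation, not the actual interface of `verify.py`:

```python
import numpy as np

def attack_success_rate(clean_preds, adv_preds):
    """Fraction of samples whose predicted label changed after the attack."""
    clean_preds = np.asarray(clean_preds)
    adv_preds = np.asarray(adv_preds)
    return float(np.mean(clean_preds != adv_preds))

# Hypothetical predicted labels for 5 images, before and after the attack.
clean = [207, 12, 980, 3, 555]
adv   = [207, 44, 116, 3, 21]
print(attack_success_rate(clean, adv))  # → 0.6 (3 of 5 labels changed)
```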
For academic use, please cite the following paper:

```
@inproceedings{jin2023danaa,
  title={DANAA: Towards transferable attacks with double adversarial neuron attribution},
  author={Jin, Zhibo and Zhu, Zhiyu and Wang, Xinyi and Zhang, Jiayu and Shen, Jun and Chen, Huaming},
  booktitle={International Conference on Advanced Data Mining and Applications},
  pages={456--470},
  year={2023},
  organization={Springer}
}
```
The code is adapted from the NAA repository.