code-assasin / mufiacode

Code for the multiplicative filter attack (MUFIA) from the paper "Frequency-Based Vulnerability Analysis of Deep Learning Models against Image Corruptions".

Home Page: https://arxiv.org/abs/2306.07178

License: MIT License

Python 83.14% Shell 0.44% Jupyter Notebook 16.42%
adversarial-attacks adversarial-examples adversarial-machine-learning cifar10 cifar100 computer-vision dct-coefficients domain-generalization filters frequency-analysis imagenet machine-learning ml-safety pytorch robustness


MUFIA - Multiplicative Filter Attack

This is the official repository of the paper "Frequency-Based Vulnerability Analysis of Deep Learning Models against Image Corruptions" (arXiv).

Deep learning models often face challenges when handling real-world image corruptions. In response, researchers have developed image corruption datasets to evaluate how well deep neural networks handle such corruptions. However, these datasets have a significant limitation: they do not cover all corruptions encountered in real-life scenarios. To address this gap, we present MUFIA (Multiplicative Filter Attack), an algorithm designed to identify the specific types of corruptions that can cause models to fail. Our algorithm finds the combination of image frequency components that renders a model susceptible to misclassification while preserving semantic similarity to the original image. We find that even state-of-the-art models trained to be robust against known common corruptions struggle against the low-visibility corruptions crafted by MUFIA. This highlights the need for more comprehensive approaches to enhance model robustness against a wider range of real-world image corruptions.
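The core idea of a multiplicative filter, scaling an image's frequency components elementwise in the DCT domain, can be illustrated with a minimal NumPy/SciPy sketch. This is an illustrative toy, not the repository's PyTorch implementation; the function name and shapes here are assumptions:

```python
import numpy as np
from scipy.fft import dctn, idctn

def apply_multiplicative_filter(image, filt):
    """Scale the image's DCT coefficients elementwise by `filt`,
    then transform back to pixel space."""
    coeffs = dctn(image, norm="ortho")         # 2D DCT-II of the image
    return idctn(coeffs * filt, norm="ortho")  # inverse DCT of filtered coefficients

rng = np.random.default_rng(0)
img = rng.random((8, 8))

# An all-ones filter is the identity: the image is unchanged.
out = apply_multiplicative_filter(img, np.ones((8, 8)))
assert np.allclose(out, img)

# Zeroing every coefficient except DC collapses the image to its mean.
dc_only = np.zeros((8, 8))
dc_only[0, 0] = 1.0
flat = apply_multiplicative_filter(img, dc_only)
assert np.allclose(flat, img.mean())
```

MUFIA searches over such filters for one that flips the model's prediction while keeping the filtered image semantically close to the original.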

Dependencies

This code has been tested with Python 3.8.10 and PyTorch 2.0.0+cu117. To install the required dependencies, run:

cd misc
pip install -r requirements.txt

Getting started

  • We set the directories for the datasets, models, and wandb configs in configs/defaults.py. Please adjust these directories accordingly.

  • We provide a script scripts/test.py with command-line arguments to test the variety of models and datasets mentioned in our paper. To simplify the process, we also provide a bash script:

    cd scripts
    bash run.sh
  • We also provide an IPython notebook, notebooks/playground.ipynb, for playing around with the attack and making visualization easier.
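For intuition, the optimization behind such an attack can be sketched as learning a per-frequency filter by gradient descent: maximize the classifier's loss while penalizing deviation from the identity (all-ones) filter. The sketch below is a toy, with a random linear "model" standing in for a classifier and an L2 penalty standing in for the paper's LPIPS similarity term; none of these names correspond to the actual scripts:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins: a random "classifier" and one input batch.
model = torch.nn.Linear(16, 3)
x = torch.randn(4, 16)          # pretend these are flattened DCT coefficients
y = torch.tensor([0, 1, 2, 0])  # true labels

# Learnable multiplicative filter, initialized to the identity (all ones).
filt = torch.ones(16, requires_grad=True)
opt = torch.optim.Adam([filt], lr=0.1)

for _ in range(50):
    logits = model(x * filt)                  # filter applied in the frequency domain
    adv_loss = -F.cross_entropy(logits, y)    # push toward misclassification
    sim_loss = (filt - 1).pow(2).mean()       # stay close to the identity filter
    loss = adv_loss + 10.0 * sim_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The weight on the similarity term trades off attack strength against how perceptually close the filtered image stays to the original.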

Results

Upon running the script, the results will be logged to wandb. The results for the models mentioned in the paper are as follows:

| Dataset  | Model    | Architecture    | Clean | CC    | MUFIA | LPIPS  |
|----------|----------|-----------------|-------|-------|-------|--------|
| CIFAR10  | Standard | ResNet-50       | 95.25 | 73.46 | 0     | 0.076  |
| CIFAR10  | Prime    | ResNet-18       | 93.06 | 89.05 | 2.27  | 0.163  |
| CIFAR10  | Augmix   | ResNeXt29_32x4d | 95.83 | 89.09 | 0     | 0.119  |
| CIFAR10  | Card     | WideResNet-18-2 | 96.56 | 92.78 | 38.71 | 0.181  |
| CIFAR100 | Standard | ResNet-56       | 72.63 | 43.93 | 0     | 0.09   |
| CIFAR100 | Prime    | ResNet-18       | 77.60 | 68.28 | 0.76  | 0.132  |
| CIFAR100 | Augmix   | ResNeXt29_32x4d | 78.90 | 65.14 | 0.05  | 0.1536 |
| CIFAR100 | Card     | WideResNet-18-2 | 79.93 | 71.08 | 13.31 | 0.148  |
| ImageNet | Standard | ResNet-50       | 76.72 | 39.48 | 0.712 | 0.196  |
| ImageNet | Prime    | ResNet-50       | 75.3  | 56.4  | 0.45  | 0.224  |
| ImageNet | Augmix   | ResNet-50       | 77.34 | 49.33 | 0.438 | 0.243  |
| ImageNet | Deit-B   | DeiT Base       | 81.38 | 67.55 | 6.748 | 0.221  |

Accuracy (%) on the clean uncorrupted dataset (Clean), the common corruptions dataset (CC), and after our attack (MUFIA) for standard and robust models on different datasets. Our attack algorithm, MUFIA, exposes previously unseen corruptions that greatly reduce the accuracy of almost all models. Notably, while these models perform well on the common corruption dataset (CC column), they struggle when confronted with the new corruptions introduced by MUFIA. Furthermore, our attack achieves this while generating adversarial images that maintain a high degree of semantic similarity, as indicated by the LPIPS values.

License

This project is licensed under the MIT License - see the LICENSE.md file for details.

Acknowledgments

Our code repository is based on the following repositories. Credits to the respective authors for open-sourcing their code.

Citing this work

@article{machiraju2023frequency,
  title={Frequency-Based Vulnerability Analysis of Deep Learning Models against Image Corruptions},
  author={Machiraju, Harshitha and Herzog, Michael H and Frossard, Pascal},
  journal={arXiv preprint arXiv:2306.07178},
  year={2023}
}

