
This project is a fork of harry24k/adversarial-attacks-pytorch.


A PyTorch implementation of adversarial attacks and utils.

License: MIT License


Adversarial-Attacks-Pytorch

This is a lightweight repository of adversarial attacks for PyTorch.

It provides popular attack methods and some utilities.

Table of Contents

  1. Usage
  2. Attacks and Papers
  3. Demos
  4. Update Records

Usage

Dependencies

  • torch 1.2.0
  • python 3.6

Installation

  • pip install torchattacks or
  • git clone https://github.com/Harry24k/adversairal-attacks-pytorch
import torchattacks
pgd_attack = torchattacks.PGD(model, eps=4/255, alpha=8/255)
adversarial_images = pgd_attack(images, labels)

Precautions

  • WARNING :: All images should be scaled to [0, 1] with transforms.ToTensor() before being used in attacks.
  • WARNING :: All models should return ONLY ONE vector of shape (N, C), where C = number of classes.
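
For illustration, a minimal end-to-end sketch that satisfies both warnings is shown below; the dataset, model, and parameter values are only examples, not part of this repository.

import torch
import torchvision
import torchvision.transforms as transforms
import torchattacks

# transforms.ToTensor() scales pixel values to [0, 1], as the first warning requires.
transform = transforms.ToTensor()
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
loader = torch.utils.data.DataLoader(testset, batch_size=32, shuffle=False)

# The model returns a single (N, C) tensor of logits, as the second warning requires.
model = torchvision.models.resnet18(num_classes=10)
model.eval()

attack = torchattacks.PGD(model, eps=4/255, alpha=8/255)

images, labels = next(iter(loader))
adversarial_images = attack(images, labels)   # still scaled to [0, 1]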

Attacks and Papers

The papers and the methods, with a brief summary and example. All attacks in this repository are provided as classes. If you want attacks built as functions, please refer to the repositories linked below.

  • Explaining and Harnessing Adversarial Examples : Paper, Repo

    • FGSM
  • DeepFool: a simple and accurate method to fool deep neural networks : Paper

    • DeepFool
  • Adversarial Examples in the Physical World : Paper, Repo

    • BIM
    • StepLL
  • Towards Evaluating the Robustness of Neural Networks : Paper, Repo

    • CW(L2)
  • Ensemble Adversarial Training: Attacks and Defenses : Paper, Repo

    • RFGSM
  • Towards Deep Learning Models Resistant to Adversarial Attacks : Paper, Repo

    • PGD
    • RPGD
  • Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network" : Paper

    • APGD
[Image table comparing clean and adversarial examples for each attack: FGSM, BIM, StepLL, RFGSM, CW, PGD, RPGD, and DeepFool.]
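
As a rough sketch, the attack classes listed above are constructed in the same way as the PGD example in the Usage section; the keyword arguments below are illustrative and may differ slightly between versions, and 'model', 'images', and 'labels' are assumed to be defined as in that example.

import torchattacks

# Each attack wraps the target model; calling the object returns adversarial images.
fgsm     = torchattacks.FGSM(model, eps=8/255)
bim      = torchattacks.BIM(model, eps=8/255, alpha=2/255)
cw       = torchattacks.CW(model)                 # L2 Carlini-Wagner attack
pgd      = torchattacks.PGD(model, eps=4/255, alpha=8/255)
deepfool = torchattacks.DeepFool(model)

for attack in (fgsm, bim, cw, pgd, deepfool):
    adversarial_images = attack(images, labels)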

Demos

  • White-Box Attack with ImageNet (code): Makes adversarial examples from the ImageNet dataset to fool Inception v3. Because the full ImageNet dataset is too large, only the 'Giant Panda' class is used.
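
In outline, the white-box demo amounts to something like the sketch below; the image file, preprocessing, and parameters are illustrative placeholders rather than the demo's exact code.

import torch
import torchvision
import torchvision.transforms as transforms
from PIL import Image
import torchattacks

model = torchvision.models.inception_v3(pretrained=True).eval()

# Inception v3 expects 299x299 inputs; ToTensor keeps pixel values in [0, 1].
preprocess = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
])

image = preprocess(Image.open('giant_panda.png').convert('RGB')).unsqueeze(0)  # hypothetical file
label = torch.tensor([388])  # ImageNet class index of 'giant panda'

attack = torchattacks.PGD(model, eps=4/255, alpha=8/255)
adversarial_image = attack(image, label)

print(model(adversarial_image).argmax(dim=1))  # typically no longer class 388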

  • Black-Box Attack with CIFAR10 (code): This demo provides an example of a black-box attack using two different models. First, adversarial examples are crafted against a holdout model on CIFAR10 and saved as a torch dataset. Second, the adversarial dataset is used to attack a target model.
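
Roughly, the transfer setup looks like the sketch below; 'holdout_model', 'target_model', and 'cifar10_loader' are placeholders for the two independently trained classifiers and the clean data loader used in the demo.

import torch
from torch.utils.data import TensorDataset, DataLoader
import torchattacks

# Step 1: craft adversarial examples against the holdout model and save them.
attack = torchattacks.PGD(holdout_model, eps=8/255, alpha=2/255)

adv_images, adv_labels = [], []
for images, labels in cifar10_loader:
    adv_images.append(attack(images, labels).cpu())
    adv_labels.append(labels.cpu())

adv_dataset = TensorDataset(torch.cat(adv_images), torch.cat(adv_labels))
torch.save(adv_dataset, 'adv_cifar10.pt')

# Step 2: evaluate the target model on the saved adversarial dataset.
adv_loader = DataLoader(torch.load('adv_cifar10.pt'), batch_size=128)

correct, total = 0, 0
for images, labels in adv_loader:
    predictions = target_model(images).argmax(dim=1)
    correct += (predictions == labels).sum().item()
    total += labels.size(0)
print('Accuracy of the target model on transferred examples:', correct / total)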

  • Adversarial Training with MNIST (code): This demo shows how to perform adversarial training with this repository. The MNIST dataset and a custom model are used. Adversarial training is performed with PGD, and FGSM is then used to test the trained model.
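
A condensed sketch of the idea, assuming 'model', 'train_loader', and 'test_loader' are already set up for MNIST (the loop and hyperparameters are illustrative, not the demo's exact code):

import torch
import torch.nn as nn
import torchattacks

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
train_attack = torchattacks.PGD(model, eps=0.3, alpha=0.1)

# Adversarial training: fit the model on PGD examples instead of clean images.
model.train()
for epoch in range(5):
    for images, labels in train_loader:
        adversarial_images = train_attack(images, labels)
        optimizer.zero_grad()
        loss = criterion(model(adversarial_images), labels)
        loss.backward()
        optimizer.step()

# Test robustness with a different attack (FGSM).
model.eval()
test_attack = torchattacks.FGSM(model, eps=0.3)
correct, total = 0, 0
for images, labels in test_loader:
    adversarial_images = test_attack(images, labels)
    predictions = model(adversarial_images).argmax(dim=1)
    correct += (predictions == labels).sum().item()
    total += labels.size(0)
print('Robust accuracy against FGSM:', correct / total)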

Update Records

~ Version 0.3

  • New Attacks : FGSM, IFGSM, IterLL, RFGSM, CW(L2), PGD are added.
  • Demos are uploaded.

Version 0.4

  • DO NOT USE : '__init__.py' is omitted.

Version 0.5

  • Package name changed : 'attacks' is changed to 'torchattacks'.
  • New Attack : APGD is added.
  • attack.py : 'update_model' method is added.

Version 0.6

  • Error Solved :
    • Before this version, even after generating adversarial images, the model remained in evaluation mode.
    • To solve this, the following methods were modified:
      • A '_switch_model' method is added to attack.py. It automatically returns the model to its previous mode after adversarial images are generated; while they are being generated, the model is switched to evaluation mode.
      • The '__call__' methods of all attacks are changed to 'forward'. Instead, a '__call__' method is added to 'attack.py'.
  • attack.py : To make it easy to convert images from float to uint8, 'set_mode' and '_to_uint' are added.
    • 'set_mode' determines whether all outputs are returned as 'int' or 'float', via '_to_uint'.
    • '_to_uint' converts all outputs to uint8.
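
A simplified, hypothetical sketch of this behaviour (not the actual contents of attack.py) might look as follows:

import torch

class Attack:
    def __init__(self, model):
        self.model = model
        self.mode = 'float'

    def set_mode(self, mode):
        # 'float' keeps outputs in [0, 1]; 'int' converts them to uint8 via _to_uint.
        self.mode = mode

    def _to_uint(self, images):
        return (images * 255).type(torch.uint8)

    def forward(self, images, labels):
        raise NotImplementedError  # each attack implements its own forward

    def __call__(self, images, labels):
        was_training = self.model.training
        self.model.eval()                      # attacks run against the model in eval mode
        adversarial_images = self.forward(images, labels)
        if was_training:
            self.model.train()                 # restore the previous mode (_switch_model)
        if self.mode == 'int':
            adversarial_images = self._to_uint(adversarial_images)
        return adversarial_images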

Version 0.7

  • All attacks are modified :
    • clone().detach() is used instead of .data.
    • torch.autograd.grad is used instead of .backward() and .grad :
      • This showed a 2% reduction in computation time.
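
For illustration only, the change amounts to something like the following, assuming 'model', 'criterion', 'images', and 'labels' are defined:

import torch

# Old style: accumulate gradients into .grad via backward(), then read them back.
images.requires_grad = True
loss = criterion(model(images), labels)
loss.backward()
grad = images.grad.data

# New style: ask autograd for the gradient directly, without touching .grad.
images = images.clone().detach().requires_grad_(True)
loss = criterion(model(images), labels)
grad = torch.autograd.grad(loss, images)[0]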

Version 0.8

  • New Attack : RPGD is added.
  • attack.py : The 'update_model' method is deprecated. Because torch models are passed by reference, there is no need to update them.
  • cw.py : In the CW attack, masked_select now uses a mask with dtype torch.bool instead of torch.uint8.
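
The dtype change concerns calls of the following form (an illustrative snippet, not the repository's code):

import torch

scores = torch.tensor([0.1, 0.9, 0.4])
mask = scores > 0.5                          # comparisons now yield a torch.bool mask
selected = torch.masked_select(scores, mask)
# Passing a torch.uint8 mask here is deprecated in newer PyTorch versions.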

Version 0.9

  • New Attack : DeepFool is added.
  • Some attacks are renamed :
    • I-FGSM -> BIM
    • IterLL -> StepLL

Version 1.0

  • attack.py :
    • load : 'load' is deprecated. Use TensorDataset and DataLoader instead.
    • save : Fixed the problem of computing invalid accuracy when the attack mode is set to 'int'.
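
With 'load' deprecated, reading back a saved adversarial dataset would look roughly like this; the file name, tensor layout, and 'model' are assumptions for illustration:

import torch
from torch.utils.data import TensorDataset, DataLoader

# Assume the adversarial images and labels were saved earlier as a pair of tensors.
adversarial_images, adversarial_labels = torch.load('adv_data.pt')
# If the images were saved as uint8 (mode 'int'), rescale them back to [0, 1] first.
if adversarial_images.dtype == torch.uint8:
    adversarial_images = adversarial_images.float() / 255
adversarial_dataset = TensorDataset(adversarial_images, adversarial_labels)
adversarial_loader = DataLoader(adversarial_dataset, batch_size=128, shuffle=False)

for images, labels in adversarial_loader:
    outputs = model(images)   # evaluate a model on the reloaded adversarial data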

Version 1.1
