kuan-li / robust-local-lipschitz

This project is a fork of yangarbiter/robust-local-lipschitz.


Home Page: https://arxiv.org/abs/2003.02460


A Closer Look at Accuracy vs. Robustness

This repo contains the implementation of experiments in the paper

A Closer Look at Accuracy vs. Robustness

Authors: Yao-Yuan Yang*, Cyrus Rashtchian*, Hongyang Zhang, Ruslan Salakhutdinov, Kamalika Chaudhuri (* equal contribution)

Abstract

Current methods for training robust networks lead to a drop in test accuracy, which has led prior works to posit that a robustness-accuracy tradeoff may be inevitable in deep learning. We take a closer look at this phenomenon and first show that real image datasets are actually separated. With this property in mind, we then prove that robustness and accuracy should both be achievable for benchmark datasets through locally Lipschitz functions, and hence, there should be no inherent tradeoff between robustness and accuracy. Through extensive experiments with robustness methods, we argue that the gap between theory and practice arises from two limitations of current methods: either they fail to impose local Lipschitzness or they are insufficiently generalized. We explore combining dropout with robust training methods and obtain better generalization. We conclude that achieving robustness and accuracy in practice may require using methods that impose local Lipschitzness and augmenting them with deep learning generalization techniques.
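The local Lipschitzness discussed above can be checked empirically. The snippet below is only a minimal PyTorch sketch of such a measurement, not the implementation in this repo: for each input x it searches an L-infinity ball of radius eps for a point x' maximizing ||f(x) - f(x')||_1 / ||x - x'||_inf and reports the average ratio. The step size and iteration count are illustrative assumptions.

import torch

def local_lipschitz_estimate(model, x, eps=0.031, steps=10, step_size=0.0078):
    # Empirical local Lipschitz estimate around a batch of inputs x.
    model.eval()
    x = x.detach()
    with torch.no_grad():
        fx = model(x)
    # Start from a random point inside the L-infinity ball of radius eps.
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        diff = (model(x_adv) - fx).abs().sum(dim=1)
        dist = (x_adv - x).abs().flatten(1).max(dim=1).values.clamp_min(1e-12)
        ratio = (diff / dist).sum()
        grad, = torch.autograd.grad(ratio, x_adv)
        # Take a signed ascent step and project back into the ball.
        x_adv = x + (x_adv + step_size * grad.sign() - x).clamp(-eps, eps)
    with torch.no_grad():
        diff = (model(x_adv) - fx).abs().sum(dim=1)
        dist = (x_adv - x).abs().flatten(1).max(dim=1).values.clamp_min(1e-12)
        return (diff / dist).mean().item()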

Setup

Install the required libraries

pip install -r ./requirements.txt

Install cleverhans from its GitHub repository

pip install --upgrade git+https://github.com/tensorflow/cleverhans.git#egg=cleverhans

Generate the Restricted ImageNet dataset

Use the script ./scripts/restrictedImgNet.py to generate the Restricted ImageNet dataset and place the data in ./data/RestrictedImgNet/ in a format readable by torchvision's ImageFolder. For more details, please refer to lolip/dataset/__init__.py.
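As a quick sanity check, the generated folder should be loadable with torchvision's ImageFolder. The snippet below is a sketch under that assumption; the "train" subdirectory and the 112x112 resize (suggested by the resImgnet112v3 dataset name used later) are guesses, so refer to lolip/dataset/__init__.py for the exact layout.

from torchvision import datasets, transforms

# Hypothetical check that the generated data is ImageFolder-readable;
# the "train" subfolder and image size are assumptions, not repo settings.
data = datasets.ImageFolder(
    "./data/RestrictedImgNet/train",
    transform=transforms.Compose([
        transforms.Resize((112, 112)),
        transforms.ToTensor(),
    ]),
)
print(len(data), "images across", len(data.classes), "classes")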

Repository structure

Parameters

The default training parameters are set in lolip/models/__init__.py.

The network architectures are defined in lolip/models/torch_utils/archs.py.

Algorithm implementations

Defense Algorithms

Attack Algorithms

Example options for the model parameter (a sketch of how these strings are composed follows the list)

arch: ("CNN001", "CNN002", "WRN_40_10", "WRN_40_10_drop20", "WRN_40_10_drop50", "ResNet50", "ResNet50_drop50")

  • Natural: ce-tor-{arch}
  • TRADES (beta=6): strades6ce-tor-{arch}
  • Adversarial training: advce-tor-{arch}
  • RST (lambda=2): advbeta2ce-tor-{arch}
  • TULIP (gradient regularization): tulipce-tor-{arch}
  • LLR: sllrce-tor-{arch}
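Reading these options together with the example commands below, the model strings appear to follow a {loss}-{framework}-{arch}[-{options}] pattern (with "tor" presumably denoting the PyTorch backend). The helper below is only an illustrative sketch of that naming convention, not code from this repo.

def make_model_name(loss, arch, framework="tor", options=""):
    # Compose a model string such as "strades6ce-tor-WRN_40_10".
    # A trailing option like "adambs128" (Adam optimizer, batch size 128,
    # as in the Restricted ImageNet example below) is appended when given.
    parts = [loss, framework, arch]
    if options:
        parts.append(options)
    return "-".join(parts)

print(make_model_name("ce", "CNN001"))                            # ce-tor-CNN001
print(make_model_name("strades6ce", "WRN_40_10"))                 # strades6ce-tor-WRN_40_10
print(make_model_name("advce", "ResNet50", options="adambs128"))  # advce-tor-ResNet50-adambs128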

Examples

Run natural training with CNN001 on the MNIST dataset. The perturbation distance is set to 0.1 with the L-infinity norm. The batch size is 64, and the SGD optimizer is used (default parameters).

python ./main.py --experiment experiment01 \
  --no-hooks \
  --norm inf --eps 0.1 \
  --dataset mnist \
  --model ce-tor-CNN001 \
  --attack pgd \
  --random_seed 0

Run TRADES (beta=6) with Wide ResNet 40-10 on the CIFAR-10 dataset. The perturbation distance is set to 0.031 with the L-infinity norm. The batch size is 64, and the SGD optimizer is used.

python ./main.py --experiment experiment01 \
  --no-hooks \
  --norm inf --eps 0.031 \
  --dataset cifar10 \
  --model strades6ce-tor-WRN_40_10 \
  --attack pgd \
  --random_seed 0

Run adversarial training with ResNet50 on the Restricted ImageNet dataset. The perturbation distance is set to 0.005 with the L-infinity norm, attacking with PGD. The batch size is 128, and the Adam optimizer is used.

python ./main.py --experiment restrictedImgnet \
  --no-hooks \
  --norm inf --eps 0.005 \
  --dataset resImgnet112v3 \
  --model advce-tor-ResNet50-adambs128 \
  --attack pgd \
  --random_seed 0

Reproducing Results

Scripts

Appendix C: Proof-of-concept classifier

Run robust self-training (lambda=2) with Wide ResNet 40-10 on the CIFAR-10 dataset. The perturbation distance is set to 0.031 with the L-infinity norm. The batch size is 64, and the SGD optimizer is used.

python ./main.py --experiment hypo \
  --no-hooks \
  --norm inf --eps 0.031 \
  --dataset cifar10 \
  --model advbeta2ce-tor-WRN_40_10 \
  --attack pgd \
  --random_seed 0
