
cifar10_challenge's People

Contributors

amakelov, dtsip, ludwigschmidt, wh0

cifar10_challenge's Issues

About the loss in pgd_attack.py

The CW-style loss on line 36 of pgd_attack.py uses a negative sign, but there is no such sign in the original CW loss. Looking forward to your help.

 loss = -tf.nn.relu(correct_logit - wrong_logit + 50)
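One possible reading (an interpretation, not an official answer): pgd_attack.py steps along the sign of the gradient of its loss, i.e. it performs gradient ascent on the loss (see the "PGD steps along the sign of the gradient" issue below). Negating the CW hinge turns the usual minimization into an ascent objective: maximizing -relu(correct_logit - wrong_logit + kappa) is the same as minimizing the un-negated hinge. The toy numeric check below is purely illustrative (the scalar margin function is made up) and only confirms that the two step directions coincide.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Made-up scalar stand-in for correct_logit(x) - wrong_logit(x) + kappa.
def margin(x, kappa=50.0):
    return 2.0 * x - x + kappa

def num_grad(f, x, eps=1e-4):
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

x, step = 1.0, 0.1
# Signed ascent step on the repo-style loss L(x) = -relu(margin(x)) ...
x_ascent = x + step * np.sign(num_grad(lambda t: -relu(margin(t)), x))
# ... equals a signed descent step on the un-negated CW hinge relu(margin(x)).
x_descent = x - step * np.sign(num_grad(lambda t: relu(margin(t)), x))
assert np.isclose(x_ascent, x_descent)
```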

Image out of valid range for the first iteration of PGD attack

Hi,

I noticed that the image which is fed to the model to obtain the gradients for the first iteration of the PGD attack is not clipped to be in the valid image range.

Here, random noise is added to the original image and the resulting image is directly fed to the network for the first iteration without clipping.
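For illustration only, here is a minimal sketch (not the repository's code; the helper name is hypothetical) of what clipping the random start to both the epsilon-ball and the valid pixel range would look like before the first gradient computation:

```python
import numpy as np

def random_start(x_nat, epsilon):
    """Hypothetical helper: uniform random start, clipped before the first forward pass."""
    x_nat = x_nat.astype(np.float32)  # avoid uint8 overflow in the arithmetic below
    x = x_nat + np.random.uniform(-epsilon, epsilon, x_nat.shape)
    x = np.clip(x, x_nat - epsilon, x_nat + epsilon)  # stay inside the L-inf ball
    return np.clip(x, 0.0, 255.0)                     # stay inside the valid image range
```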

PyTorch definition of the model

Is there a PyTorch definition and PyTorch model weights of the architecture used for the white-box leaderboard?
We would like to try our attack on your challenge but unfortunately our code is written in PyTorch.

If there is any other way, please let me know.

PGD steps along the sign of the gradient

More of a question than an issue.

It can be inferred from here that PGD steps along the sign of the gradient.

Is there any reason it does not simply step along the gradient?
i.e. x += gradient(x)*step_size instead of x += sign(gradient(x))*step_size

Thanks
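For readers comparing the two update rules, here is a small sketch. The usual motivation (stated here as a general observation, not as the authors' answer) is that the sign step is the steepest-ascent direction under the L-infinity norm, which matches the L-infinity threat model, and it gives every pixel a fixed step length regardless of how large or small the raw gradients happen to be:

```python
import numpy as np

def pgd_step_sign(x, grad, step_size):
    # L-infinity steepest ascent: every pixel moves by exactly step_size.
    return x + step_size * np.sign(grad)

def pgd_step_raw(x, grad, step_size):
    # Raw-gradient ascent: the per-pixel move scales with the gradient magnitude,
    # so the effective step length depends on how well-scaled the gradients are.
    return x + step_size * grad
```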

cifar10_input.py: function get_next_batch() has a small bug

Hi there,

Thanks a lot for the open sourced project!

I recently found that the function get_next_batch() in cifar10_input.py has a small bug.
In cifar10_input.py, lines 132–142:

```python
actual_batch_size = min(batch_size, self.n - self.batch_start)
if actual_batch_size < batch_size:
    if reshuffle_after_pass:
        self.cur_order = np.random.permutation(self.n)
    self.batch_start = 0
batch_end = self.batch_start + batch_size
batch_xs = self.xs[self.cur_order[self.batch_start : batch_end], ...]
batch_ys = self.ys[self.cur_order[self.batch_start : batch_end], ...]
self.batch_start += actual_batch_size
```

The final line here should be self.batch_start += batch_size, since the generated batch contains (batch_size) images and labels.

For example, right after every image has been taken out and we start over, actual_batch_size becomes 0, so self.batch_start is not updated on the first batch of the new pass and the same batch is generated twice.
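For concreteness, a sketch of the proposed fix (only the final line changes relative to the snippet quoted above):

```python
actual_batch_size = min(batch_size, self.n - self.batch_start)
if actual_batch_size < batch_size:
    if reshuffle_after_pass:
        self.cur_order = np.random.permutation(self.n)
    self.batch_start = 0
batch_end = self.batch_start + batch_size
batch_xs = self.xs[self.cur_order[self.batch_start : batch_end], ...]
batch_ys = self.ys[self.cur_order[self.batch_start : batch_end], ...]
self.batch_start += batch_size  # proposed fix: was `+= actual_batch_size`
```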

Naturally trained network gives 78% test accuracy under PGD attack

Hi,
Thank you very much for this repo; it's very helpful. I could reproduce the performance in your paper for the adversarially trained network. However, I observed that the naturally trained network has 78% accuracy under the PGD attack. First, I used fetch_model.py to download the naturally trained model and ran run_attack.py on attack.npy, which was generated using the adv_trained network; I got 78%. In case there was an issue with the released model, I trained another model from scratch on only natural images using your implementation and again got 78% test accuracy on PGD adversarial images. I used the default config file. Standard test accuracy is around 95%. There is still a drop, but I was expecting the accuracy under the PGD attack to be around 3%. Am I doing something wrong?

Thank you so much in advance.

Number of trainable parameters

I logged the number of trainable parameters here and got 45,901,914 params.
Using this function
np.sum([np.prod(v.get_shape().as_list()) for v in tf.trainable_variables()])

But when I look at the number of trainable parameters for the WideResNet in the original paper (https://arxiv.org/abs/1605.07146), I see 36.5M. Why does yours have so many more?
[screenshot: trainable_params_wideresnet]
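A way to investigate the discrepancy (a sketch, not an official explanation) is to break the count down per variable inside the same TF1 graph; one plausible cause, worth verifying against model.py, is that this repository's network stacks more residual units per stage than the 28-layer WRN-28-10 configuration from the paper:

```python
import numpy as np
import tensorflow as tf

# Run inside the same TF1 session/graph as the model to see which layers
# account for the extra parameters.
total = 0
for v in tf.trainable_variables():
    n = int(np.prod(v.get_shape().as_list()))
    total += n
    print('{:60s} {:>12,d}'.format(v.name, n))
print('total trainable parameters: {:,d}'.format(total))
```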

Also, is there a PyTorch version of this network? I noticed you referred to the robustness platform, but I don't see an implementation of the exact same network mentioned here in that repository (https://github.com/MadryLab/robustness/blob/master/robustness/cifar_models/resnet.py); I only see a wide ResNet-18, but that's it.

Thanks,

Image Channels

Hi Team,

For example, if a CNN model (say, for image classification) is trained on 1-channel (grayscale) inputs, how can we deal with the perturbations or the l-norm constraints? Any thoughts?
Thanks.
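Not an official answer, but conceptually the L-infinity constraint is applied per pixel, so a single-channel input changes nothing except the array shape. A generic sketch (grad_fn is a hypothetical callable returning the gradient of the loss with respect to the input):

```python
import numpy as np

def pgd_linf(x_nat, grad_fn, epsilon, step_size, num_steps):
    """Generic L-infinity PGD sketch; x_nat may be (H, W, 1) grayscale or (H, W, 3) RGB."""
    x_nat = x_nat.astype(np.float32)
    x = x_nat.copy()
    for _ in range(num_steps):
        x = x + step_size * np.sign(grad_fn(x))           # ascend the loss
        x = np.clip(x, x_nat - epsilon, x_nat + epsilon)  # project onto the eps-ball
        x = np.clip(x, 0.0, 255.0)                        # keep valid pixel values
    return x
```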

When generating uniform noise for the random start, floating-point values can produce invalid pixel values.

Here, replace
x = x_nat + np.random.uniform(-self.epsilon, self.epsilon, x_nat.shape)
with
x = x_nat + np.random.random_integers(int(-self.epsilon), int(self.epsilon), x_nat.shape)
x_nat is discrete, converted from uint8, but the uniform noise from np.random.uniform() is continuous (if we ignore machine word length).
For PGD adversarial training, I think a float type may be OK. However, when generating adversarial examples, I think we should restrict the adversarial space to a meaningful one, i.e. uint8.
What's more, in run_attack.py, we should make sure all pixel values in an adversarial image map to uint8.
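A sketch of the quantization step being described (illustrative, assuming an integer epsilon on the 0–255 scale, such as the 8 used in this challenge): run the attack in float, then round and re-project so every pixel maps back to a valid uint8 value:

```python
import numpy as np

def quantize_to_uint8(x_adv, x_nat, epsilon):
    x_nat = x_nat.astype(np.float32)
    x = np.rint(x_adv)                                # nearest integer pixel values
    x = np.clip(x, x_nat - epsilon, x_nat + epsilon)  # stay within the integer eps-ball
    x = np.clip(x, 0, 255)
    return x.astype(np.uint8)
```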

White-box results of madry_lab_challenges from the cleverhans examples

I ran the code in 'cleverhans/examples/madry_lab_challenges/cifar10/attack_model.py' with default parameter settings to attack the target model with the 'models/adv_trained' checkpoint, and I got the results below, which differ from those on the white-box leaderboard. I don't know why the resulting test accuracies are higher. Any help would be appreciated!
PGD: 0.5370
fgsm: 0.6330
cwl2 : 0.5420

How to determine "best" model

After training, we are left with ~80 models, saved every 1k iterations. What is your rule for selecting the "best" model to keep and use for further evaluations? I am especially wondering because I have noticed that when I train a model against an FGSM adversary only, simply selecting the checkpoint with the greatest robustness to the FGSM adversary may give poor clean-data accuracy. Essentially, how do you determine the tradeoff between robustness to the adversary you trained against and clean-data accuracy?

I will also specifically reference Table 5 in your paper (i.e., robustness to a white-box adversary). What were your criteria for choosing the models reported in this table?

Base network questions and implementation.

Does anybody know if there is a PyTorch implementation of the Wide ResNet specifically used in this repository? I have found some, but they are 30x10 instead of 28x10. Additionally, is the standard (non-wide) ResNet a ResNet-101?
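I am not aware of an official PyTorch port in this repository. As a reference point, below is a generic WRN-28-10 sketch following the original Wide ResNet paper (it comes out to roughly 36.5M parameters). Note that this is not a layer-for-layer port of the TensorFlow model here (which, per the parameter-count issue above, appears to be deeper/wider), so the released checkpoints would not load into it without a careful conversion.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Pre-activation residual block used in Wide ResNets (dropout omitted)."""
    def __init__(self, in_planes, out_planes, stride):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.conv1 = nn.Conv2d(in_planes, out_planes, 3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_planes)
        self.conv2 = nn.Conv2d(out_planes, out_planes, 3, stride=1, padding=1, bias=False)
        self.shortcut = None
        if stride != 1 or in_planes != out_planes:
            self.shortcut = nn.Conv2d(in_planes, out_planes, 1, stride=stride, bias=False)

    def forward(self, x):
        out = F.relu(self.bn1(x))
        shortcut = self.shortcut(out) if self.shortcut is not None else x
        out = self.conv1(out)
        out = self.conv2(F.relu(self.bn2(out)))
        return out + shortcut

class WideResNet(nn.Module):
    def __init__(self, depth=28, widen=10, num_classes=10):
        super().__init__()
        n = (depth - 4) // 6                       # residual blocks per stage
        widths = [16, 16 * widen, 32 * widen, 64 * widen]
        self.conv1 = nn.Conv2d(3, widths[0], 3, stride=1, padding=1, bias=False)
        self.stage1 = self._make_stage(widths[0], widths[1], n, stride=1)
        self.stage2 = self._make_stage(widths[1], widths[2], n, stride=2)
        self.stage3 = self._make_stage(widths[2], widths[3], n, stride=2)
        self.bn = nn.BatchNorm2d(widths[3])
        self.fc = nn.Linear(widths[3], num_classes)

    def _make_stage(self, in_planes, out_planes, n, stride):
        blocks = [BasicBlock(in_planes, out_planes, stride)]
        blocks += [BasicBlock(out_planes, out_planes, 1) for _ in range(n - 1)]
        return nn.Sequential(*blocks)

    def forward(self, x):
        out = self.conv1(x)
        out = self.stage3(self.stage2(self.stage1(out)))
        out = F.relu(self.bn(out))
        out = F.avg_pool2d(out, 8).flatten(1)      # 32x32 input -> 8x8 feature map
        return self.fc(out)

model = WideResNet()
print(sum(p.numel() for p in model.parameters() if p.requires_grad))  # ~36.5M
```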

Making adversarial examples during training

During training, could you explain why you use the gradients of the 'train' model to make the PGD adversarial examples? It seems unnatural, since batch normalization (in train mode) could hinder generating 'real' adversarial examples.
Thanks.
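I cannot speak for the authors, but for comparison here is one common alternative pattern, shown as a PyTorch-style sketch because the mode switch is explicit there: craft the adversarial batch with batch-norm statistics frozen (eval mode), then switch back to train mode for the weight update. The pgd_attack helper, model, and optimizer below are hypothetical placeholders.

```python
import torch.nn.functional as F

def adv_train_step(model, optimizer, pgd_attack, x_nat, y):
    """One adversarial-training step; `pgd_attack` is a hypothetical attack helper."""
    model.eval()                         # freeze batch-norm batch statistics while crafting
    x_adv = pgd_attack(model, x_nat, y)
    model.train()                        # train mode for the parameter update
    loss = F.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```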

Overflow when random_start is false

We believe there is overflow occurring in pgd_attack.py when random_start is False. Because x is of type uint8, x will overflow when the gradient step is added to it owing to the unsafe add. To fix this, we propose the change below. (Note: there is a similar issue with x_nat when the step_size in config.json is an integer.)

Interestingly, when we run 20-step PGD with no random start, step size of 2.0, and our fix, the adversarially-trained model achieves an adversarial accuracy of 45.81%. That is really close to the 20-step PGD on the cross-entropy loss with 10 random restarts white-box leaderboard result (45.21%). We also found that increasing the number of steps to 100 with a step size of 1.0 yields an adversarial accuracy of 45.37%, closing the gap further.

It seems that random-starts/random-restarts are unnecessary when you attack an adversarially-trained model. Any difference between a random start and non-random start would imply that either the attack needs more iterations or that gradient masking is occurring for those examples. We are currently investigating how this issue affects adversarial training.

Proposed change:

```diff
diff --git cifar10_input.py cifar10_input.py
index aa2eec4..334bba5 100644
--- cifar10_input.py
+++ cifar10_input.py
@@ -42,7 +42,7 @@ class CIFAR10Data(object):
         eval_filename = 'test_batch'
         metadata_filename = 'batches.meta'
 
-        train_images = np.zeros((50000, 32, 32, 3), dtype='uint8')
+        train_images = np.zeros((50000, 32, 32, 3), dtype='float32')
         train_labels = np.zeros(50000, dtype='int32')
         for ii, fname in enumerate(train_filenames):
             cur_images, cur_labels = self._load_datafile(os.path.join(path, fname))
```

Dataset normalization

Hello,

I am trying to re-implement your CIFAR10 adv. training in PyTorch and maybe some of the questions will be based on my limited knowledge of TensorFlow.

I have a couple of questions regarding CIFAR10 dataset normalization. In PyTorch, the entire dataset is usually normalized as it is loaded: normalization is added as one of the transformations (after converting the image to a tensor in the range [0, 1]). Also, the normalization is usually implemented by specifying the per-channel mean and stddev computed over the entire dataset. Hence, my questions are the following:

  1. What is the reason you implement "per_image_standardization" as part of the model rather than normalizing over the entire dataset as preprocessing? Is it to keep the original samples in the range 0–255 and to perform perturbations in that range?

  2. Is there any difference between implementing standardization/normalization using the per-channel mean and stddev computed over the entire dataset (the PyTorch case) and the mean and stddev computed for each separate image (the tf.image.per_image_standardization case)? As far as I can tell, the end goal is essentially the same in both cases: samples with zero mean and unit variance. But I think in the TensorFlow case, as the sample is perturbed, the normalization changes correspondingly, keeping the input distribution to the model consistent, with zero mean and unit variance.

Thank you, and sorry for the verbosity: I wanted to make sure I conveyed my concerns properly.
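To make the comparison concrete, here is a rough numpy sketch of the two schemes. The first function approximates what tf.image.per_image_standardization computes (per-image mean and a lower-bounded stddev); the second is the usual per-dataset transform, using commonly quoted CIFAR-10 statistics that you should verify for your own pipeline:

```python
import numpy as np

def per_image_standardization(x):
    """Roughly tf.image.per_image_standardization: statistics of this one image."""
    adjusted_stddev = max(x.std(), 1.0 / np.sqrt(x.size))  # lower bound avoids division by ~0
    return (x - x.mean()) / adjusted_stddev

# Per-dataset normalization (the usual PyTorch transform); constants are the
# commonly quoted CIFAR-10 training-set statistics for inputs scaled to [0, 1].
CIFAR10_MEAN = np.array([0.4914, 0.4822, 0.4465])
CIFAR10_STD  = np.array([0.2470, 0.2435, 0.2616])

def per_dataset_normalization(x01):
    return (x01 - CIFAR10_MEAN) / CIFAR10_STD
```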

Matching training statistics

I tried to match the training scheme of this network and I was unable to do so using what seem to be the same parameters.

I pull a random batch for each epoch of size batch_size = 48.

Epochs 0 - 40,000 LR = 0.1
Epochs 40,000 - 60,000 LR = 0.01
Epochs 60,000 - 80,000 LR = 0.001
In this case it nearly matches the procedure shown because I have an epoch for each batch, so it takes ~937 epochs to cycle once through my training dataset.

However, the cross entropy loss on my network is near 0 by the time I move past 60k epochs, and the network is only trained on the adversarial samples.

(1) Are the momentum parameters kept after the learning rate is updated? It seems as though a brand-new optimizer is being created at the 40k and 60k boundaries.

(2) Did you experience anything like this? I am using an off-the-shelf WideResNet-30 from here.

My training set is 45k images, and I have a validation set of 5k images. Each adversarial sample is computed using the training set. I am able to get ~100% accuracy on the natural training images and around 100% on the adversarial training images, but only 80% on the natural test images.

[training curves: xent_adv_train_nat, acc_adv_train_nat]
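Regarding question (1), the usual TF1 pattern, and I believe what train.py does (worth verifying), is a single MomentumOptimizer driven by a step-based piecewise-constant learning rate, so the momentum accumulators persist across the 40k/60k boundaries rather than a new optimizer being created. A sketch of that pattern (total_loss is a placeholder name for the model's loss tensor):

```python
import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
boundaries = [40000, 60000]          # training steps (batches), not passes over the data
values = [0.1, 0.01, 0.001]
learning_rate = tf.train.piecewise_constant(tf.cast(global_step, tf.int32), boundaries, values)
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)
train_op = optimizer.minimize(total_loss, global_step=global_step)  # total_loss: your loss tensor
```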

Submitting my result to the white-box CIFAR-10 leaderboard

Hello Madry Lab,
I am very happy to have read such a good paper, and thank you very much for providing the white-box MNIST and CIFAR-10 leaderboards. I recently (2020-08-15) submitted the results of my adversarial attack to you. If you have time, could you check my results and update the CIFAR-10 leaderboard?
Thank you very much!
My name is Ye Liu.

GoogLeNet with own data

Hi Team,

Can we extend this cifar10_challenge to a vehicle-classification dataset trained using a GoogLeNet model (TensorFlow)? Any thoughts?

pretrained model link expired

Can anyone provide a new link to download the pretrained models in fetch_model.py?
I got urllib.error.URLError: <urlopen error [Errno 101] Network is unreachable> when I ran python fetch_model.py natural.

About the accuracy of adversarial examples

I downloaded the two 'secret' models from the URLs in fetch_model.py and loaded the model weights. When I use adversarial examples generated by my own method, I find that the test accuracy of the naturally_trained model is even better than the accuracy of the adv_trained model. I don't know why that happens; can you give some explanation?

Questions about recreating paper results

I am working to recreate some of the results from your paper, specifically some CIFAR10 transfer results. I have noticed something in the tables that doesn't seem intuitive, so I was wondering if you could comment on it.

In Table 5 [Model = Wide-Natural, Adversary = FGSM], it appears the white-box model accuracy under attack is 32.7%. In Table 3 [Target = Wide-Natural, Source = Wide-Natural], the accuracy of the target model under FGSM attack is recorded as 21.3%. This is surprising to me because it means the black-box attack is more powerful than the white-box attack, which I have never observed before. Do you have any intuitions or explanations about this?

Thank you.

About the convergence of training.

Hello, thanks for your great work. I wonder, as training goes on, how should one judge the convergence of training? Just from the loss curves?
