
Comments (14)

ain-soph avatar ain-soph commented on May 24, 2024 1

Jesus, that's quite an old version. I'll check the results with the most up-to-date code.

from trojanzoo.

ain-soph avatar ain-soph commented on May 24, 2024 1

Thanks a lot. But it seems that DeepInspect (DI) doesn't work well either.

To be honest, I think their original method doesn't work very well compared with Neural Cleanse, which is pointed out in our paper.


ain-soph avatar ain-soph commented on May 24, 2024

Your mark alpha value is set to 0.0, which is a fully transparent watermark, so it is expected not to work.

I remember there was a change (around 3 months ago) to the alpha value to keep it consistent with the RGBA definition and to support importing RGBA watermark images. Alpha now means opacity rather than transparency.
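As a minimal sketch of the opacity semantics (an illustration, not trojanzoo's actual API; the function and variable names here are hypothetical), a standard alpha blend behaves like this:

```python
def apply_watermark(pixel, mark, alpha):
    """Blend a trigger (mark) pixel onto a clean pixel value.

    alpha is *opacity*: 0.0 leaves the image unchanged,
    1.0 stamps the mark at full strength.
    """
    return (1.0 - alpha) * pixel + alpha * mark

clean = 0.2   # a clean pixel intensity in [0, 1]
mark = 1.0    # a white trigger pixel

apply_watermark(clean, mark, 0.0)  # -> 0.2: alpha 0.0 means no visible trigger
apply_watermark(clean, mark, 1.0)  # -> 1.0: fully opaque trigger
```

Under the old transparency semantics, alpha 0.0 meant a fully visible trigger, so scripts written before the change silently flip behavior.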


ain-soph avatar ain-soph commented on May 24, 2024

The default value of mark alpha in the code is 1.0.


ain-soph avatar ain-soph commented on May 24, 2024

But it's interesting that your attack succeeds with a fully transparent watermark. I don't think that's correct. Could you share the command you used to run BadNet? I'll check whether there is a bug in my validation method.

The clean accuracy and the attack success rate shouldn't both exceed 90% with a fully transparent watermark.


programehr avatar programehr commented on May 24, 2024

Thank you for your response. Here's the script:

for i in {0..9}
do
    CUDA_VISIBLE_DEVICES=1 python3.9 ./examples/backdoor_attack.py --verbose 1 --dataset cifar10 --model resnet18_comp --attack badnet --device cuda --epoch 200 --save --mark_path square_white.png --mark_height 3 --mark_width 3 --height_offset 2 --width_offset 2 --batch_size 100 --pretrain >> attack_badnet_cifar_multirun2.txt

    CUDA_VISIBLE_DEVICES=1 python3.9 ./examples/backdoor_defense.py --verbose 1 --validate_interval 1 --dataset cifar10 --model resnet18_comp --attack badnet --defense deep_inspect --random_init --device cuda --save >> defense_di_attack_badnet_cifar_multirun2.txt

    CUDA_VISIBLE_DEVICES=1 python3.9 ./examples/backdoor_defense.py --verbose 1 --validate_interval 1 --dataset cifar10 --model resnet18_comp --attack badnet --defense neural_cleanse --random_init --device cuda --save >> defense_nc_attack_badnet_cifar_multirun2.txt

    CUDA_VISIBLE_DEVICES=1 python3.9 ./examples/backdoor_attack.py --verbose 1 --dataset mnist --model net --attack badnet --device cuda --epoch 200 --save --mark_path square_white.png --mark_height 3 --mark_width 3 --height_offset 2 --width_offset 2 --batch_size 100 >> attack_badnet_mnist_multirun2.txt

    CUDA_VISIBLE_DEVICES=1 python3.9 ./examples/backdoor_defense.py --verbose 1 --validate_interval 1 --dataset mnist --model net --attack badnet --defense deep_inspect --random_init --device cuda --save >> defense_di_attack_badnet_mnist_multirun2.txt

    CUDA_VISIBLE_DEVICES=1 python3.9 ./examples/backdoor_defense.py --verbose 1 --validate_interval 1 --dataset mnist --model net --attack badnet --defense neural_cleanse --random_init --device cuda --save >> defense_nc_attack_badnet_mnist_multirun2.txt
done


ain-soph avatar ain-soph commented on May 24, 2024

Which version of the code are you currently using? The latest release?
If you are using the current code on GitHub, it is Python 3.10 only.
There shouldn't be any performance difference, though.


programehr avatar programehr commented on May 24, 2024

I'm using 1.0.8.


ain-soph avatar ain-soph commented on May 24, 2024

In 1.0.8 the alpha value also stands for opacity, so the attack succeeds with a fully opaque watermark. This is expected.

Based on the Neural Cleanse results you provided, it's working well: the MAD score of the target class is significantly larger than those of the other classes.

Why do you say it's not working?


programehr avatar programehr commented on May 24, 2024

Did you check the MNIST results?


ain-soph avatar ain-soph commented on May 24, 2024

It seems to be working for Neural Cleanse as well:

The target class has a significantly smaller mask norm and loss.
The MAD score may not look like an outlier because some classes have very large loss and mask norm.
We should only consider the small outliers, not the large ones.

I have to admit that the MAD score doesn't always exceed 2.0, as the original paper claims. But in my view the method works, and the target class is an outlier based on my observations. You could also try metrics other than MAD.
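The "small outliers only" idea can be sketched as a one-sided anomaly test. This is a hypothetical helper, not trojanzoo's implementation; it assumes the lower-middle median and the usual 1.4826 consistency constant:

```python
def flag_small_outliers(values, threshold=2.0):
    """Return indices of classes that are anomalously SMALL.

    A backdoor target class needs only a tiny trigger mask (small norm)
    and reaches a low loss, so only deviations below the median matter.
    """
    s = sorted(values)
    med = s[(len(s) - 1) // 2]                     # lower-middle median
    devs = sorted(abs(v - med) for v in values)
    mad = 1.4826 * devs[(len(devs) - 1) // 2]      # normalized MAD
    return [i for i, v in enumerate(values)
            if v < med and (med - v) / mad > threshold]
```

A class with a very large mask norm can produce a large MAD score too, but a one-sided test like this ignores it and only flags classes on the small side of the median.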


programehr avatar programehr commented on May 24, 2024

Please consider this case:

mask norms: tensor([30.3323, 50.6468, 36.4013, 32.4547, 55.6689, 44.6483, 50.8236, 51.0138,
48.9748, 56.3015], device='cuda:0')
mask MAD: tensor([2.9064, 0.2607, 1.9602, 2.5755, 1.0436, 0.6745, 0.2882, 0.3179, 0.0000,
1.1422], device='cuda:0')
loss: tensor([0.0023, 0.0041, 0.0025, 0.0025, 0.0034, 0.0030, 0.0026, 0.0043, 0.0028,
0.0035])
loss MAD: tensor([1.0691, 2.4688, 0.6407, 0.6745, 0.9954, 0.2922, 0.5311, 2.7733, 0.0000,
1.2038])
(defense_nc_attack_badnet_mnist_multirun2.txt, line 1120)

It doesn't look much like an outlier, right?
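For what it's worth, the mask MAD scores above can be reproduced from the mask norms with a short sketch (assuming the lower-middle median, as `torch.median` returns for even-length inputs, and the 1.4826 consistency constant):

```python
def mad_scores(values):
    """Absolute deviation from the median, normalized by 1.4826 * MAD."""
    s = sorted(values)
    med = s[(len(s) - 1) // 2]                          # lower-middle median
    devs = [abs(v - med) for v in values]
    mad = 1.4826 * sorted(devs)[(len(devs) - 1) // 2]
    return [d / mad for d in devs]

# Mask norms from the run quoted above
mask_norms = [30.3323, 50.6468, 36.4013, 32.4547, 55.6689,
              44.6483, 50.8236, 51.0138, 48.9748, 56.3015]
scores = mad_scores(mask_norms)
# scores[0] ≈ 2.906 and scores[8] == 0.0, matching the reported mask MAD tensor
```

So the scores themselves are reproducible; the issue is that several non-target classes (e.g. class 3 at roughly 2.58) sit at a similar level, so no single class stands out.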


ain-soph avatar ain-soph commented on May 24, 2024

Yeah, you are correct; it doesn't look like an outlier for this run.

But I also observe that it works for many runs, especially almost every run on CIFAR10, so I think it's fine.

If you are looking for better performance, I would say this is just a re-implementation of Neural Cleanse, and I only try my best to keep it consistent with the original paper. That said, you are certainly welcome to inherit the trojanzoo code and modify it to improve performance.

Sorry, I didn't take MNIST into consideration in the TrojanZoo paper because it's too simple and doesn't show the typical trends seen on more complex datasets such as CIFAR and ImageNet.


programehr avatar programehr commented on May 24, 2024

Thanks a lot. But it seems that DeepInspect (DI) doesn't work well either.

