Comments (14)
Jesus, that's quite an old version. I'll check the results against the most up-to-date code.
from trojanzoo.
Thanks a lot. But it seems that Deep Inspect (DI) doesn't work well either.
To be honest, I think their original method doesn't work very well compared with Neural Cleanse, as pointed out in our paper.
Your mark alpha value is set to 0.0, which is a fully transparent watermark. It is expected not to work.
I remember there was a change (around 3 months ago) to the alpha value to keep it consistent with the RGBA definition and to support importing RGBA watermark images. Alpha no longer means transparency but opacity.
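Under the post-change semantics described here (alpha as opacity), trigger stamping reduces to standard alpha blending. A minimal sketch, not the actual trojanzoo implementation:

```python
# Opacity-style blending of a trigger ("mark") pixel onto an image pixel.
# With alpha as OPACITY: alpha = 0.0 leaves the image unchanged (fully
# transparent trigger), alpha = 1.0 pastes the trigger verbatim.
# Illustrative sketch only; function name and scalar form are assumptions.

def apply_mark(pixel, mark_pixel, alpha):
    """Blend one trigger pixel onto one image pixel (alpha = opacity)."""
    return alpha * mark_pixel + (1.0 - alpha) * pixel

print(apply_mark(0.2, 1.0, 0.0))  # transparent: image pixel survives -> 0.2
print(apply_mark(0.2, 1.0, 1.0))  # opaque: trigger pixel replaces it -> 1.0
```

This is why an alpha of 0.0 under opacity semantics should make any trigger-dependent attack collapse: the poisoned image is pixel-for-pixel identical to the clean one.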
The default value of mark alpha in the code is 1.0.
But it's interesting that you attacked successfully with a fully transparent watermark. I don't think that's correct. Could you share the command you used to run BadNet? I'll check whether there is a bug in my validation method.
The clean accuracy and attack success rate shouldn't both reach over 90% under a fully transparent watermark setting.
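The sanity check being described can be sketched as follows (an illustration with hypothetical predictions, not the trojanzoo evaluation code; the exclusion convention is an assumption):

```python
# Attack success rate (ASR) is commonly the fraction of NON-target-class
# samples classified as the target class once the trigger is applied.
# With a fully transparent trigger, the poisoned input equals the clean
# input, so a model with high clean accuracy should drive ASR toward the
# error rate, not above 90%.

def attack_success_rate(preds_on_triggered, true_labels, target_class):
    hits, total = 0, 0
    for pred, label in zip(preds_on_triggered, true_labels):
        if label == target_class:
            continue  # convention: skip samples already in the target class
        total += 1
        hits += int(pred == target_class)
    return hits / total

# e.g. target class 0: two of the three non-target samples flip to class 0
print(attack_success_rate([0, 0, 1, 2], [1, 2, 0, 2], 0))
```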
Thank you for your response. Here's the script:
for i in {0..9}
do
    CUDA_VISIBLE_DEVICES=1 python3.9 ./examples/backdoor_attack.py --verbose 1 --dataset cifar10 --model resnet18_comp --attack badnet --device cuda --epoch 200 --save --mark_path square_white.png --mark_height 3 --mark_width 3 --height_offset 2 --width_offset 2 --batch_size 100 --pretrain >> attack_badnet_cifar_multirun2.txt
    CUDA_VISIBLE_DEVICES=1 python3.9 ./examples/backdoor_defense.py --verbose 1 --validate_interval 1 --dataset cifar10 --model resnet18_comp --attack badnet --defense deep_inspect --random_init --device cuda --save >> defense_di_attack_badnet_cifar_multirun2.txt
    CUDA_VISIBLE_DEVICES=1 python3.9 ./examples/backdoor_defense.py --verbose 1 --validate_interval 1 --dataset cifar10 --model resnet18_comp --attack badnet --defense neural_cleanse --random_init --device cuda --save >> defense_nc_attack_badnet_cifar_multirun2.txt
    CUDA_VISIBLE_DEVICES=1 python3.9 ./examples/backdoor_attack.py --verbose 1 --dataset mnist --model net --attack badnet --device cuda --epoch 200 --save --mark_path square_white.png --mark_height 3 --mark_width 3 --height_offset 2 --width_offset 2 --batch_size 100 >> attack_badnet_mnist_multirun2.txt
    CUDA_VISIBLE_DEVICES=1 python3.9 ./examples/backdoor_defense.py --verbose 1 --validate_interval 1 --dataset mnist --model net --attack badnet --defense deep_inspect --random_init --device cuda --save >> defense_di_attack_badnet_mnist_multirun2.txt
    CUDA_VISIBLE_DEVICES=1 python3.9 ./examples/backdoor_defense.py --verbose 1 --validate_interval 1 --dataset mnist --model net --attack badnet --defense neural_cleanse --random_init --device cuda --save >> defense_nc_attack_badnet_mnist_multirun2.txt
done
Which version of the code are you currently using? The latest release?
If you are using the current code on GitHub, it is Python 3.10 only.
There shouldn't be any performance difference, though.
I'm using 1.0.8.
In 1.0.8, the alpha value still stands for transparency, so 0.0 means a fully opaque watermark. The attack succeeding is expected.
Based on the Neural Cleanse results you provided, I see it's working well. The MAD score of the target class is significantly larger than those of the other classes.
Why do you say it's not working?
Did you check the MNIST results?
It seems to be working for Neural Cleanse as well?
The target class has a significantly smaller mask norm and loss.
Maybe the MAD score is not an outlier because there are some classes with very large loss and mask norm.
We should only consider the small outliers rather than the large ones.
But I have to say that the MAD score does not always exceed 2.0, as the original paper claims. Still, from my personal view it works, and the target class is an outlier based on my observation. Maybe you can try some metrics other than MAD.
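The "small outliers only" rule above can be sketched as a one-sided check. This is an illustration, not the exact trojanzoo decision rule; the 2.0 threshold comes from the Neural Cleanse paper, and the lower-median convention is an assumption:

```python
# One-sided anomaly check: flag only classes whose mask norm is BELOW the
# median AND whose MAD score exceeds the 2.0 threshold from the Neural
# Cleanse paper. A backdoored class needs an unusually SMALL reversed
# trigger, so large-norm outliers are ignored.
# Illustrative sketch; function name and threshold handling are assumptions.

def flag_small_outliers(norms, mad_scores, threshold=2.0):
    med = sorted(norms)[(len(norms) - 1) // 2]  # lower median, torch-style
    return [cls for cls, (norm, score) in enumerate(zip(norms, mad_scores))
            if norm < med and score > threshold]

# Hypothetical data: only class 0 is both below the median and anomalous.
print(flag_small_outliers([10, 50, 52, 51, 49], [3.0, 0.5, 0.4, 0.3, 0.1]))  # -> [0]
```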
Please consider this case:
mask norms: tensor([30.3323, 50.6468, 36.4013, 32.4547, 55.6689, 44.6483, 50.8236, 51.0138,
48.9748, 56.3015], device='cuda:0')
mask MAD: tensor([2.9064, 0.2607, 1.9602, 2.5755, 1.0436, 0.6745, 0.2882, 0.3179, 0.0000,
1.1422], device='cuda:0')
loss: tensor([0.0023, 0.0041, 0.0025, 0.0025, 0.0034, 0.0030, 0.0026, 0.0043, 0.0028,
0.0035])
loss MAD: tensor([1.0691, 2.4688, 0.6407, 0.6745, 0.9954, 0.2922, 0.5311, 2.7733, 0.0000,
1.2038])
(defense_nc_attack_badnet_mnist_multirun2.txt, line 1120)
It doesn't look much like an outlier, right?
Yeah, you are correct; it doesn't look like an outlier for this run.
But I also observe that it works for many runs, and for almost every run on CIFAR10 in particular. So I think it's fine.
If you are seeking a performance improvement, I would say this is just a re-implementation of Neural Cleanse, and I'll only try my best to keep it consistent with the original paper. Certainly, you are welcome to inherit the trojanzoo code and modify it to improve performance.
Sorry that I didn't take MNIST into consideration in the TrojanZoo paper; it's too simple and doesn't show any typical trend compared with more complex datasets such as CIFAR and ImageNet.