mdzhangst / narcissus-backdoor-attack
This project is a fork of reds-lab/narcissus.
The official implementation of the Narcissus clean-label backdoor attack: it poisons a face recognition dataset with only THREE images, without changing any labels, and achieves a 99.89% attack success rate.
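The core idea of clean-label poisoning is that the attacker perturbs a few images of the target class with a small, norm-bounded trigger while leaving their labels untouched. The following is a minimal sketch of that idea in NumPy; all names (`poison_clean_label`, `eps`, `n_poison`) are illustrative assumptions, not the official Narcissus implementation, which additionally optimizes the trigger itself.

```python
import numpy as np

# Hypothetical sketch of clean-label poisoning (names are illustrative,
# not from the official Narcissus code): blend a small, norm-bounded
# trigger into a few target-class images while keeping their labels.

def poison_clean_label(images, labels, target_class, trigger,
                       eps=16 / 255, n_poison=3):
    """Add `trigger` (clipped to [-eps, eps]) to up to `n_poison`
    images of `target_class`.

    images:  float array in [0, 1], shape (N, H, W, C)
    labels:  int array, shape (N,) -- NOT modified; keeping the original
             labels is what makes the attack "clean-label"
    trigger: float array, shape (H, W, C)

    Returns the poisoned image array and the indices that were modified.
    """
    poisoned = images.copy()
    # Pick the first n_poison samples whose label is the target class.
    idx = np.where(labels == target_class)[0][:n_poison]
    # Bound the perturbation so the change stays visually subtle.
    delta = np.clip(trigger, -eps, eps)
    poisoned[idx] = np.clip(poisoned[idx] + delta, 0.0, 1.0)
    return poisoned, idx
```

Bounding the perturbation by `eps` keeps the poisoned images visually close to the originals, which is why such attacks can evade manual dataset inspection.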
Home Page: https://arxiv.org/pdf/2204.05255.pdf
License: MIT License