anirudhs123 / attention-subnetworks-based-adversarial-detection
This work demonstrates a new use for attention heads. Self-attention heads are characteristic of Transformer models and have been well studied for interpretability and pruning; here, they form the basis of a novel adversarial detection model.
Home Page: https://anirudhs123.github.io/Attention-Subnetworks-based-Adversarial-Detection
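The description mentions detecting adversarial inputs from attention-head behavior. A minimal illustrative sketch of that general idea (not the repository's actual method) is below: it treats per-head attention entropy as a detection feature, simulating "clean" inputs as sharp (low-entropy) attention maps and "adversarial" inputs as diffuse ones, then applies a simple threshold. The `make_attention` helper and the sharpness parameter are hypothetical stand-ins for real Transformer attention maps.

```python
import numpy as np

rng = np.random.default_rng(0)

def head_entropies(attn):
    # attn: (heads, seq, seq) row-stochastic attention maps.
    # Returns the mean attention entropy for each head.
    p = np.clip(attn, 1e-12, 1.0)
    return -(p * np.log(p)).sum(-1).mean(-1)

def make_attention(heads=8, seq=16, sharpness=1.0):
    # Hypothetical stand-in for real attention maps: softmax over
    # random logits whose scale controls how peaked each row is.
    logits = rng.normal(scale=sharpness, size=(heads, seq, seq))
    e = np.exp(logits - logits.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

# Toy assumption: clean inputs yield sharper (lower-entropy) attention,
# while adversarial inputs produce diffuse, near-uniform attention.
clean_entropy = head_entropies(make_attention(sharpness=4.0)).mean()
adv_entropy = head_entropies(make_attention(sharpness=0.5)).mean()

# A simple midpoint threshold flags the high-entropy input as adversarial.
threshold = (clean_entropy + adv_entropy) / 2
is_adversarial = adv_entropy > threshold
print(is_adversarial)
```

In practice a detector of this kind would be fit on features from a real model's attention layers rather than a hand-set threshold; the sketch only shows the feature-then-classify shape of the approach.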