Topic: adversarial-attacks Goto Github
Something interesting about adversarial-attacks
adversarial-attacks,A simple PyTorch implementation of FGSM and I-FGSM
User: 1konny
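The entry above names the classic signed-gradient attacks. A minimal sketch of both, assuming a logistic-regression victim model so that the input gradient has a closed form and no autograd framework is needed (the model, weights, and numbers below are illustrative, not taken from the repository):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step against a logistic-regression model.

    For this model the input gradient of the cross-entropy loss is
    (sigmoid(w.x + b) - y) * w, so it can be written out directly.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w                  # d(loss)/dx
    return x + eps * np.sign(grad_x)      # single signed-gradient step

def i_fgsm(x, y, w, b, eps, alpha, steps):
    """Iterative FGSM: repeat small FGSM steps, clipping back into the
    eps-ball around the original input after every step."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = fgsm(x_adv, y, w, b, alpha)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Tiny demo: a point the model classifies correctly (w.x + b = 2 > 0)
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.0]), 1
x_adv = i_fgsm(x, y, w, b, eps=1.5, alpha=0.5, steps=5)
# The attack pushes x across the decision boundary while staying
# within the eps-ball around the original input.
```

I-FGSM differs from FGSM only in taking several small steps and projecting back into the eps-ball after each one.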
adversarial-attacks,Code for our NeurIPS 2019 paper "You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle"
User: a1600012888
adversarial-attacks,Advbox is a toolbox for generating adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. Advbox provides a command-line tool to generate adversarial examples with zero coding.
Organization: advboxes
adversarial-attacks,PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. 🏆 Best Paper Awards @ NeurIPS ML Safety Workshop 2022
Organization: agencyenterprise
adversarial-attacks,TrojanZoo provides a universal PyTorch platform for conducting security research (especially on backdoor attacks and defenses) on image classification in deep learning.
User: ain-soph
Home Page: https://ain-soph.github.io/trojanzoo
adversarial-attacks,Official TensorFlow implementation of "Adversarial Training for Free!", which trains robust models at no extra cost compared to natural training.
User: ashafahi
Home Page: https://arxiv.org/abs/1904.12843
adversarial-attacks,Self-hardening firewall for large language models
Organization: automorphic-ai
Home Page: https://automorphic.ai
adversarial-attacks,A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Organization: bethgelab
Home Page: https://foolbox.jonasrauber.de
adversarial-attacks,Adversary Emulation Framework
Organization: bishopfox
adversarial-attacks,A Toolbox for Adversarial Robustness Research
Organization: borealisai
adversarial-attacks,AIShield Watchtower: Dive Deep into AI's Secrets! 🔍 Open-source tool by AIShield for AI model insights & vulnerability scans. Secure your AI supply chain today! ⚙️🛡️
Organization: bosch-aisecurity-aishield
Home Page: https://www.boschaishield.com/
adversarial-attacks,Adversarial attacks and defenses on Graph Neural Networks.
User: chandlerbang
adversarial-attacks,Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
User: chandlerbang
Home Page: https://arxiv.org/abs/2005.10203
adversarial-attacks,Code for the NeurIPS 2020 paper "Adversarial Weight Perturbation Helps Robust Generalization"
User: csdongxian
adversarial-attacks,Implementation of the paper "Adversarial Attacks on Graph Neural Networks via Meta Learning".
User: danielzuegner
Home Page: https://www.kdd.in.tum.de/gnn-meta-attack
adversarial-attacks,Implementation of the paper "Adversarial Attacks on Neural Networks for Graph Data".
User: danielzuegner
Home Page: https://www.cs.cit.tum.de/daml/forschung/nettack/
adversarial-attacks,⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
User: deadbits
Home Page: https://vigil.deadbits.ai/
adversarial-attacks,A PyTorch adversarial library for attack and defense methods on images and graphs
User: dse-msu
adversarial-attacks,Code for "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
User: fra31
Home Page: https://arxiv.org/abs/2003.01690
adversarial-attacks,Awesome Resources for Advanced Computer Vision Topics
User: haofanwang
adversarial-attacks,PyTorch implementation of adversarial attacks [torchattacks].
User: harry24k
Home Page: https://adversarial-attacks-pytorch.readthedocs.io/en/latest/index.html
adversarial-attacks,A PyTorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks"
User: harry24k
adversarial-attacks,💡 Adversarial attacks on explanations and how to defend them
User: hbaniecki
Home Page: https://doi.org/10.1016/j.inffus.2024.102303
adversarial-attacks,A Harder ImageNet Test Set (CVPR 2021)
User: hendrycks
adversarial-attacks,A suite for hunting suspicious targets, exposing domains, and discovering phishing
Organization: huntdownproject
Home Page: https://huntdownproject.github.io/
adversarial-attacks,Library containing PyTorch implementations of various adversarial attacks and resources
User: jeromerony
adversarial-attacks,Code for the CVPR 2019 article "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses"
User: jeromerony
adversarial-attacks,A Model for Natural Language Attack on Text Classification and Inference
User: jind11
adversarial-attacks,Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published in ICLR2018)
User: kabkabm
adversarial-attacks,Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
User: koukyosyumei
adversarial-attacks,Raising the Cost of Malicious AI-Powered Image Editing
Organization: madrylab
Home Page: https://gradientscience.org/photoguard/
adversarial-attacks,Data augmentation for NLP
User: makcedward
Home Page: https://makcedward.github.io/
adversarial-attacks,Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
User: max-andr
Home Page: https://arxiv.org/abs/1912.00049
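Square Attack is score-based: it needs only loss queries, never gradients. A gradient-free random-search sketch in that spirit, with a made-up linear scorer standing in for the black-box model (everything below is illustrative; the paper's actual algorithm uses a more careful proposal and step-size schedule):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black box: we may query the loss but not its gradient.
# Here the "model" is a fixed linear scorer over an 8x8 image.
W = rng.standard_normal((8, 8))

def loss(img):                      # higher = closer to misclassification
    return float(np.sum(W * img))

def square_attack(img, eps, n_iters, side):
    """Random-search sketch: propose an eps-magnitude perturbation on a
    random square patch, keep it only if the queried loss improves."""
    x = img.copy()
    best = loss(x)
    h, w = img.shape
    for _ in range(n_iters):
        r = rng.integers(0, h - side + 1)
        c = rng.integers(0, w - side + 1)
        cand = x.copy()
        # set the whole patch to a random corner of the eps-ball
        cand[r:r+side, c:c+side] = (img[r:r+side, c:c+side]
                                    + eps * rng.choice([-1.0, 1.0]))
        cand = np.clip(cand, img - eps, img + eps)
        cand_loss = loss(cand)      # one model query per proposal
        if cand_loss > best:        # greedy accept
            x, best = cand, cand_loss
    return x

x0 = np.zeros((8, 8))
x_adv = square_attack(x0, eps=0.1, n_iters=200, side=2)
```

The greedy accept/reject loop is what makes the attack query-efficient: every model query either improves the perturbation or is discarded.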
adversarial-attacks,A unified evaluation framework for large language models
Organization: microsoft
Home Page: http://aka.ms/promptbench
adversarial-attacks,🔥🔥Defending Against Deepfakes Using Adversarial Attacks on Conditional Image Translation Networks
User: natanielruiz
adversarial-attacks,TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP
Organization: qdata
Home Page: https://textattack.readthedocs.io/en/master/
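Word-substitution attacks of the kind TextAttack implements can be illustrated with a toy greedy search. The bag-of-words scorer and synonym table below are hypothetical stand-ins for a real victim model and transformation set, not TextAttack's API:

```python
# Hypothetical victim: a bag-of-words sentiment scorer (score > 0 means
# "positive"), plus a tiny synonym table as the allowed transformations.
WEIGHTS = {"great": 2.0, "good": 1.0, "fine": 0.2, "bad": -1.0,
           "awful": -2.0, "poor": -1.0}
SYNONYMS = {"great": ["fine", "good"], "good": ["fine"],
            "bad": ["poor"], "awful": ["bad"]}

def score(words):
    return sum(WEIGHTS.get(w, 0.0) for w in words)

def greedy_word_attack(words):
    """Greedy word-substitution attack: at each step, try every synonym
    swap and commit the one that lowers the score most, until the
    prediction flips (score <= 0) or no swap helps."""
    words = list(words)
    while score(words) > 0:
        best_drop, best_swap = 0.0, None
        for i, w in enumerate(words):
            for s in SYNONYMS.get(w, []):
                drop = score(words) - score(words[:i] + [s] + words[i+1:])
                if drop > best_drop:
                    best_drop, best_swap = drop, (i, s)
        if best_swap is None:
            break                   # no substitution improves the attack
        i, s = best_swap
        words[i] = s
    return words

adv = greedy_word_attack("the movie was great not bad".split())
# One swap ("great" -> "fine") is enough to flip this toy prediction.
```

Real frameworks add semantic-similarity constraints so the perturbed sentence stays fluent and close in meaning; the greedy search skeleton is the same.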
adversarial-attacks,DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
User: ryderling
adversarial-attacks,A curated list of adversarial attacks and defenses papers on graph-structured data.
Organization: safe-graph
adversarial-attacks,Python toolbox to evaluate graph vulnerability and robustness (CIKM 2021)
User: safreita1
Home Page: https://graph-tiger.readthedocs.io
adversarial-attacks,Implementation of Papers on Adversarial Examples
User: sarathknv
adversarial-attacks,Physical adversarial attack for fooling the Faster R-CNN object detector
User: shangtse
adversarial-attacks,A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analyzes incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with a Graph Convolutional Network.
User: shubhomoydas
adversarial-attacks,The goal of this survey is twofold: (i) to present recent advances in adversarial machine learning (AML) for the security of recommender systems (RS), i.e., attacking and defending recommendation models; and (ii) to show another successful application of AML in generative adversarial networks (GANs) for generative applications, thanks to their ability to learn (high-dimensional) data distributions. We provide an exhaustive literature review of 74 articles published in major RS and ML journals and conferences. This review serves as a reference for the RS community working on the security of RS or on generative models using GANs to improve their quality.
Organization: sisinflab
adversarial-attacks,A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
User: tao-bai
adversarial-attacks,A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
Organization: thu-ml
Home Page: https://thu-ml-ares.rtfd.io
adversarial-attacks,A reading list for large model safety, security, and privacy.
User: thuccslab
Home Page: https://github.com/ThuCCSLab/Awesome-LM-SSP
adversarial-attacks,An Open-Source Package for Textual Adversarial Attack.
Organization: thunlp
Home Page: https://openattack.readthedocs.io/
adversarial-attacks,Must-read Papers on Textual Adversarial Attack and Defense
Organization: thunlp
adversarial-attacks,Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Organization: trusted-ai
Home Page: https://adversarial-robustness-toolbox.readthedocs.io/en/latest/
adversarial-attacks,Anti-DreamBooth: Protecting users from personalized text-to-image synthesis (ICCV'23)
Organization: vinairesearch
Home Page: https://anti-dreambooth.github.io/