
Comments (3)

pd90506 commented on July 30, 2024

This method is not limited to multi-class problems. In a binary case there is only one adversarial class, which reduces the method to a form similar to IG; but whereas IG requires a straight-line path and a starting reference point, AGI is still able to find a path automatically.
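To make the contrast concrete, here is a minimal numpy sketch of classical IG, showing the fixed straight-line path from a reference baseline to the input, which is the ingredient AGI replaces with an adversarially discovered path. The toy linear model f(x) = w @ x and its values are assumptions for illustration only, not code from the AGI repository.

```python
import numpy as np

# Toy linear model f(x) = w @ x; its gradient is the constant vector w.
w = np.array([1.0, -2.0, 3.0])
def grad_f(x):
    return w

x = np.array([0.5, 1.0, -1.0])
baseline = np.zeros_like(x)  # IG's required starting reference point
alphas = np.linspace(0.0, 1.0, 50)

# Riemann-sum approximation of the path integral along the straight line
# baseline + alpha * (x - baseline), alpha in [0, 1].
avg_grad = np.mean([grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0)
ig = (x - baseline) * avg_grad

# For a linear model with a zero baseline, IG recovers w * x exactly
# (the completeness axiom).
assert np.allclose(ig, w * x)
```

AGI drops both the fixed baseline and the straight line: the path is built step by step by following adversarial gradients, which is why it still works when no natural reference point exists.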

from agi.

leleogere commented on July 30, 2024

Thank you for your answer, that is what I thought but I wasn't sure.

Anyway, thank you for your work; I managed to get some pretty amazing results with this method that I could not get with classical IG.

One last question: is it normal that the delta is not normalized on the following lines?

AGI/AGI_main.py

Lines 89 to 90 in 4b28c8b

grad_lab_norm = torch.norm(data_grad_lab, p=2)
delta = epsilon * data_grad_adv.sign()

According to the algorithm below, I would expect something like delta = epsilon * (data_grad_adv / grad_lab_norm).sign() (especially as the variable grad_lab_norm does not seem to be used anywhere else).

(screenshot of the algorithm)

There could be an issue when the gradient is zero, but that could be solved by clamping the norm to some small value:

delta = epsilon * (data_grad_adv / torch.clamp(grad_lab_norm, min=1e-8)).sign()


pd90506 commented on July 30, 2024


Yes, you're correct! But since we only take the sign, it shouldn't make any difference. Your suggestion should work as well.


