
danielzuegner / nettack

217 stars · 7 watchers · 55 forks · 497 KB

Implementation of the paper "Adversarial Attacks on Neural Networks for Graph Data".

Home Page: https://www.cs.cit.tum.de/daml/forschung/nettack/

License: MIT License

Languages: Jupyter Notebook 40.62%, Python 59.38%
Topics: machine-learning, adversarial-attacks, graph-mining, deep-learning, neural-networks

nettack's People

Contributors

danielzuegner


nettack's Issues

Will the default execution of the code exactly reproduce the example image?

If we strictly follow the setup in the code, will it produce exactly the same output as shown in the example image?

I tried to run your demo code without any modification (I only converted it to a Python file for convenience). However, the attack is not very effective, and the correct class is shown as class 1 instead of class 5 as in the example output (node id 0, seed 15). Since I did not modify anything, I am trying to figure out the possible reason.
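
A likely source of such divergence is unseeded or version-dependent randomness in NumPy and TensorFlow, so runs are not guaranteed to be bit-for-bit reproducible across environments. A minimal sketch of pinning the seeds before building the surrogate model and running the attack (the seed value follows the demo; exact reproducibility across TensorFlow versions is still not guaranteed):

import random
import numpy as np
import tensorflow as tf

seed = 15                  # same value as in the demo notebook
random.seed(seed)          # Python's built-in RNG
np.random.seed(seed)       # NumPy RNG
tf.set_random_seed(seed)   # TensorFlow 1.x graph-level seed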

Computation of score functions in feature attacks

It seems that the code does not follow the sentence in the paper, "The elements where the gradient points outside the allowable direction should not be perturbed since they would only hinder the attack – thus, the old score stays unchanged." In addition, the gradients are not sorted by their absolute values but by their original (signed) values.
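
For reference, here is a minimal NumPy sketch of the behaviour the quoted sentence describes as I read it; the names gradients, features, and scores_old are hypothetical and not taken from the repository:

import numpy as np

def rank_feature_flips(gradients, features, scores_old):
    # A binary feature can only be flipped 0 -> 1 or 1 -> 0, so a flip is
    # "allowable" only if the gradient points in that direction:
    # positive gradient for a 0-entry, negative gradient for a 1-entry.
    feasible = ((features == 0) & (gradients > 0)) | ((features == 1) & (gradients < 0))

    # Infeasible entries keep the old score unchanged; feasible entries are
    # scored by the magnitude (absolute value) of their gradient.
    scores = np.where(feasible, scores_old + np.abs(gradients), scores_old)

    # Candidate (node, feature) flips ranked best-first.
    order = np.argsort(-scores, axis=None)
    return scores, order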

A problem with the node features

Hi, I was honored to read your paper, and I would like to use your ideas on my own data set, but it did not work very well. I think this is because of my data set.
In my data set the node features are real numbers between -1 and 1, which is different from your datasets.
When I set perturb_features = True, the program always selects the same feature of the same node to attack, like this:

Starting attack
Attack node with ID 4544 using structure and feature perturbations
Attacking the node indirectly via 5 influencer nodes
Performing 100 perturbations

Influencer nodes: [3463 1829 3198 1837 4399]

...1/100 perturbations ...

Edge perturbation: [1837 1079]

...2/100 perturbations ...

Edge perturbation: [1837 1824]

...3/100 perturbations ...

Feature perturbation: [1829 35]

...4/100 perturbations ...

Feature perturbation: [1829 35]

...5/100 perturbations ...

Feature perturbation: [1829 35]

...6/100 perturbations ...

As you can see, in perturbations 3, 4, 5 and onward the attack always selects feature [1829, 35].
What do you think the problem is, and how should I change the code? Looking forward to your reply!
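
Not an official answer, but one likely explanation: the feature attack in this code assumes binary (bag-of-words style) features that are flipped between 0 and 1, so with continuous values in [-1, 1] the "flip" may never actually move an entry to a different state, and the same (node, feature) pair keeps being selected. A minimal preprocessing sketch that binarizes the features before constructing the attack (the threshold of 0 is an arbitrary choice, not something taken from the repository):

import numpy as np
import scipy.sparse as sp

def binarize_features(X_obs, threshold=0.0):
    # Map continuous features in [-1, 1] to {0, 1} so that flipping an entry
    # is a well-defined perturbation; the threshold is a modeling choice.
    X_dense = np.asarray(X_obs.todense()) if sp.issparse(X_obs) else np.asarray(X_obs)
    return sp.csr_matrix((X_dense > threshold).astype(np.float32))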

How to perform structure perturbations on Polblogs?

Hi Daniel,

I noticed that when running the code, it cannot produce results for the polblogs dataset. Issue #3 has already raised this problem. How can I modify the code to perform only structural perturbations and obtain perturbation results for the polblogs dataset?
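
For what it's worth, a sketch of what a structure-only attack could look like, assuming the keyword arguments used in the demo notebook (perturb_structure, perturb_features); the exact argument names should be checked against the Nettack class in your version of the code:

# `nettack` is the Nettack instance built earlier in the demo notebook.
perturb_structure = True    # keep edge perturbations
perturb_features = False    # skip feature perturbations (polblogs has no feature matrix)
n_perturbations = 20        # arbitrary budget, for illustration only

nettack.attack_surrogate(n_perturbations,
                         perturb_structure=perturb_structure,
                         perturb_features=perturb_features)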

AttributeError: A1 not found in demo when perturb_structure = False

Hi,

The following error occurred when setting perturb_structure = False in the demo.

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
      1 classification_margins_corrupted = []
      2 class_distrs_retrain = []
----> 3 gcn_retrain = GCN.GCN(sizes, nettack.adj_preprocessed, nettack.X_obs.tocsr(), "gcn_retrain", gpu_id=gpu_id, seed=seed)
      4 for _ in range(retrain_iters):
      5     print("... {}/{} ".format(_+1, retrain_iters))

nettack-master\nettack\GCN.py in __init__(self, sizes, An, X_obs, name, with_relu, params_dict, gpu_id, seed)
     75         self.training = tf.placeholder_with_default(False, shape=())
     76
---> 77         self.An = tf.SparseTensor(np.array(An.nonzero()).T, An[An.nonzero()].A1, An.shape)
     78         self.An = tf.cast(self.An, tf.float32)
     79         self.X_sparse = tf.SparseTensor(np.array(X_obs.nonzero()).T, X_obs[X_obs.nonzero()].A1, X_obs.shape)

...\scipy\sparse\base.py in __getattr__(self, attr)
    645             return self.getnnz()
    646         else:
--> 647             raise AttributeError(attr + " not found")
    648
    649     def transpose(self, axes=None, copy=False):

AttributeError: A1 not found

I assume the error occurs because, when the Nettack class is initialized, self.adj_preprocessed is of type scipy.sparse.lil.lil_matrix, and when no structure perturbations are performed it keeps that type; when structure perturbations are performed, self.adj_preprocessed ends up as a scipy.sparse.csr.csr_matrix by the end of the poisoning process, which is why the error only appears with perturb_structure = False.
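
If that diagnosis is right, one possible workaround (a sketch, not an official fix) is to force the preprocessed adjacency to CSR before handing it to GCN.GCN, since indexing a CSR matrix returns a numpy matrix that does provide .A1; the variable names follow the demo notebook:

import scipy.sparse as sp

# Force CSR regardless of whether structure perturbations were performed.
adj_retrain = sp.csr_matrix(nettack.adj_preprocessed)

gcn_retrain = GCN.GCN(sizes, adj_retrain, nettack.X_obs.tocsr(),
                      "gcn_retrain", gpu_id=gpu_id, seed=seed)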

No Features for polblogs?

It seems there is no feature matrix available, at least for the uploaded polblogs dataset. How did you compute the feature matrix (X_obs) in this case?
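
Not the authors' answer, but a common stand-in when a graph ships without node attributes is to use the identity matrix as the feature matrix, so every node gets a one-hot indicator feature; a minimal sketch (A_obs here is just a placeholder name for the loaded polblogs adjacency matrix):

import scipy.sparse as sp

def identity_features(num_nodes):
    # One-hot indicator feature per node, a common placeholder for
    # attribute-free graphs such as polblogs.
    return sp.identity(num_nodes, format="csr", dtype="float32")

X_obs = identity_features(A_obs.shape[0])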

Question about deriving Eq.(17)

I derived Eq. (17) myself, but found that in the proof the square of A hat does not equal the right-hand side: in the definition of A there is no tilde on a, yet in the square of A hat the tilde appears. This puzzles me. How is Eq. (17) derived?
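
For context, here is a sketch of the standard GCN definitions which, as far as I can tell, explain where the tilde comes from: the adjacency matrix is augmented with self-loops before normalization, so every entry of the squared normalized matrix is naturally written in terms of the tilde quantities.

\tilde{A} = A + I_N, \qquad
\tilde{d}_u = \sum_v \tilde{a}_{uv}, \qquad
\hat{A} = \tilde{D}^{-1/2} \, \tilde{A} \, \tilde{D}^{-1/2}

\left[\hat{A}^2\right]_{uv}
  = \sum_w \hat{a}_{uw} \, \hat{a}_{wv}
  = \frac{1}{\sqrt{\tilde{d}_u \tilde{d}_v}} \sum_w \frac{\tilde{a}_{uw} \, \tilde{a}_{wv}}{\tilde{d}_w}

So even though the plain A carries no tilde, once the self-loops and the symmetric normalization are applied, the entries of the squared propagation matrix are expressed through \tilde{a} and \tilde{d}.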

Possibilities of extending the nettack to the task of graph classification

Hi Daniel,
I know that Nettack is specifically designed for node classification under the GCN model, but would it be possible to extend it to graph classification with GCN?

For example, a hacky version would be to add a virtual node connected to all the nodes within an individual graph, and then apply Nettack to attack the virtual node in order to mimic an attack on the whole graph.
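
A rough sketch of that virtual-node construction (not something this repository provides): append one extra node connected to every node of the graph, give it an all-zero feature row, and then target that node with Nettack.

import numpy as np
import scipy.sparse as sp

def add_virtual_node(adj, X):
    # `adj`: sparse adjacency matrix of one graph, `X`: its sparse feature matrix.
    # The virtual node (index n) is connected to all existing nodes and gets an
    # all-zero feature row; both choices are arbitrary modeling decisions.
    n = adj.shape[0]
    ones = sp.csr_matrix(np.ones((n, 1)))
    adj_aug = sp.bmat([[adj, ones], [ones.T, None]], format="csr")
    X_aug = sp.vstack([X, sp.csr_matrix((1, X.shape[1]))], format="csr")
    return adj_aug, X_aug, n  # attack node index n with Nettack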

Or, as another approach, instead of having a single target node, how about attacking a set of nodes simultaneously? Those nodes would all come from the same graph, with the goal of making the whole graph mispredicted.

About the pytorch version

Currently, Nettack runs the attack on the CPU, which becomes very slow when n_influencer is large.

Is any re-implementation based on PyTorch available?

Any hints would be helpful. Thanks in advance.

Question about surrogate model

Hi Daniel,
Sorry to bother you. After reading your code, I have a question about the type of the attack: do the parameters of the surrogate model continue to be updated, as during training, while the attack runs? Is this actually an evasion attack?
