
FINDER's Issues

Error message when building with make

Hi, I ran into the error message below:

"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.39.33519\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\NSGL-113-01\AppData\Local\Programs\Python\Python310\include -IC:\Users\NSGL-113-01\AppData\Local\Programs\Python\Python310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.39.33519\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\cppwinrt" /EHsc /Tpgraph.cpp /Fobuild\temp.win-amd64-cpython-310\Release\graph.obj -std=c++11
cl : Command line warning D9002 : ignoring unknown option '-std=c++11'
graph.cpp
graph.cpp(17325): error C2105: '++' needs l-value
graph.cpp(17327): error C2105: '--' needs l-value
graph.cpp(17636): error C2105: '++' needs l-value
graph.cpp(17638): error C2105: '--' needs l-value
graph.cpp(17886): error C2105: '++' needs l-value
graph.cpp(17888): error C2105: '--' needs l-value
graph.cpp(19537): warning C4996: '_PyUnicode_get_wstr_length': deprecated in 3.3
graph.cpp(19553): warning C4996: '_PyUnicode_get_wstr_length': deprecated in 3.3
graph.cpp(20297): warning C4996: 'PyUnicode_FromUnicode': deprecated in 3.3
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.39.33519\bin\HostX86\x64\cl.exe' failed with exit code 2

How can I fix it?

Node Importance

Hi there,
I have read the paper and inspected all of the code on Code Ocean.

Was any work done on identifying which node in the set of key players is the most important to the ANC?
I am looking for a method of ranking the solution nodes from most important to least important.

Thanks,
Chris

Working with nx.Graph objects when there are disconnected nodes in the graph

If you have an nx.Graph object where a node is disconnected from (or removed from) the graph, FINDER will fail to analyze it.

Code for reproduction:

>>> import numpy as np
>>> import networkx as nx
>>> test = np.array([[0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 1], [1, 1, 1, 0]])
>>> g = nx.from_numpy_array(test)
>>> g.remove_node(2)
>>> 
>>> # some more detail:
>>> a, b = zip(*g.edges())
>>> a
(0, 1)
>>> b    # <<< Here is the error!
(3, 3)
>>> len(g.nodes())
3

I have already created a pull request (#12) for this; I am filing this issue only for convenient tracking.

About the critical node results

Hello, I am currently using FINDER to process my data.
I would like to ask about the ordering of node labels in the result file (for example, Crime_degree.txt), so that I can determine which nodes are important.
What do the values in the MaxCCList__Strategy_Crime file represent? Do they represent the maximum closeness centrality, or something else? I could not find the specific meaning of MaxCCList in the paper or the supplementary materials.
In summary, how can I determine the importance ranking of nodes from these two files?

Questions about the action space and the calculation of Q(s,a)

Dear authors

Thanks for your paper. Since I have been conducting related research, I would like to ask about some details; I hope you can help me.

  1. I want to ask about the way you calculate Q(s,a). After obtaining the node embeddings and the graph embedding, do you feed them into a neural network to compute Q(s,a), i.e. concatenate the node embedding with the graph embedding and pass the result through an MLP? (A minimal sketch of what I mean is given after this list.)

  2. The representation of the action. May I ask how you define the action space? I mean, if the DRL agent outputs a scalar as the action and the graph's node labels are permuted, how does the policy network output the same action? Sorry if I missed this in your supplementary information; it seems there is no clear definition. Did I overlook it?

  3. The node embeddings, especially for a larger graph (50 or 100 nodes), can end up similar after passing through the GNN. For example, node A and node B may have the same embedding, so the policy network cannot determine which node to remove. Have you also noticed this problem?

  4. If the action space is fixed during training, how does it scale to larger graphs at test time? I think it is restricted by both the GNN module and the DRL action space.
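
Here is the minimal sketch I mentioned in question 1 (my own code, not the authors'; the embedding dimension, pooling choice, and MLP sizes are placeholders):

import numpy as np

# Score each candidate node by concatenating its embedding with the graph
# embedding and passing the pair through a small MLP; Q(s, a) is then the
# score of node a, and the agent removes the argmax node.
rng = np.random.default_rng(0)
n_nodes, emb_dim, hidden = 5, 64, 32

node_emb = rng.normal(size=(n_nodes, emb_dim))    # per-node embeddings from the GNN
graph_emb = node_emb.sum(axis=0)                  # graph embedding, e.g. a sum pooling

W1 = rng.normal(size=(2 * emb_dim, hidden)) * 0.1   # randomly initialised MLP weights
W2 = rng.normal(size=(hidden, 1)) * 0.1

def q_values(node_emb, graph_emb):
    g = np.broadcast_to(graph_emb, node_emb.shape)   # repeat graph embedding per node
    h = np.concatenate([node_emb, g], axis=1)        # shape (n_nodes, 2 * emb_dim)
    h = np.maximum(h @ W1, 0.0)                      # ReLU hidden layer
    return (h @ W2).ravel()                          # one Q(s, a) per candidate node a

print(q_values(node_emb, graph_emb).argmax())        # index of the node the policy would remove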

Hope you can help me. Thanks

I have found some problems when running FINDER on another dataset.

The dataset is SALITY from https://github.com/iBigQ/botnet-graphs

I know that FINDER does not support graphs whose nodes are labeled with strings, so I renumbered the nodes with integers and saved the graph as an edge list (my relabeling step is sketched after the traceback below). But FINDER gave me an exception like this.

exception:

  File "FINDER.pyx", line 721, in FINDER.FINDER.EvaluateRealData
  File "FINDER.pyx", line 732, in FINDER.FINDER.EvaluateRealData
  File "FINDER.pyx", line 753, in FINDER.FINDER.GetSolution
  File "FINDER.pyx", line 491, in FINDER.FINDER.PredictWithCurrentQNet
  File "FINDER.pyx", line 487, in FINDER.FINDER.Predict
AssertionError
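
For reference, this is roughly the relabeling step I used (a minimal sketch; the file names are placeholders):

import networkx as nx

# Relabel string-named nodes to consecutive integers 0..N-1 and save as a
# plain "u v" edge list, which is the format I then feed to FINDER.
g = nx.read_edgelist("sality_original.txt")              # nodes are read as strings
g = nx.convert_node_labels_to_integers(g, first_label=0, ordering="sorted")
nx.write_edgelist(g, "sality_int.txt", data=False)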

About Cython

I would like to ask why `operator` cannot be found in Cython?

Invalid argument: Cannot multiply A and B because inner dimension does not match: 59899 vs. 59783.

number of nodes:59783
number of edges:69429
2023-11-06 15:06:02.409266: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
2 root error(s) found.
(0) Invalid argument: Cannot multiply A and B because inner dimension does not match: 59899 vs. 59783. Did you forget a transpose? Dimensions of A: [59899, 59899). Dimensions of B: [59783,64]
[[node SparseTensorDenseMatMul/SparseTensorDenseMatMul (defined at /py-project/FINDER/FINDER_CN_cost/testReal.py:25) ]]
[[MatMul_22/_119]]
(1) Invalid argument: Cannot multiply A and B because inner dimension does not match: 59899 vs. 59783. Did you forget a transpose? Dimensions of A: [59899, 59899). Dimensions of B: [59783,64]
[[node SparseTensorDenseMatMul/SparseTensorDenseMatMul (defined at /py-project/FINDER/FINDER_CN_cost/testReal.py:25) ]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'SparseTensorDenseMatMul/SparseTensorDenseMatMul':
File "/py-project/FINDER/FINDER_CN_cost/testReal.py", line 143, in
main()
File "/py-project/FINDER/FINDER_CN_cost/testReal.py", line 137, in main
GetSolution(0.01, model_file)
File "/py-project/FINDER/FINDER_CN_cost/testReal.py", line 25, in GetSolution
dqn = FINDER()
File "/miniconda3/envs/py37/lib/python3.7/site-packages/tensorflow/python/ops/sparse_ops.py", line 2364, in sparse_tensor_dense_matmul
adjoint_b=adjoint_b)
File "/miniconda3/envs/py37/lib/python3.7/site-packages/tensorflow/python/ops/gen_sparse_ops.py", line 3053, in sparse_tensor_dense_mat_mul
adjoint_b=adjoint_b, name=name)
File "/miniconda3/envs/py37/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/miniconda3/envs/py37/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/miniconda3/envs/py37/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
op_def=op_def)
File "/miniconda3/envs/py37/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 2005, in init
self._traceback = tf_stack.extract_stack()
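
For context, the 59899 vs. 59783 mismatch looks to me (this is only a guess) as if the node labels in my edge list are not consecutive integers starting from 0, so the sparse matrix ends up with max-label + 1 rows while only 59783 node embeddings exist. This is the quick check I would run (the file name is a placeholder):

import networkx as nx

# A quick self-check: if the labels are not the consecutive integers 0..N-1,
# max(label) + 1 can exceed the node count, matching 59899 vs. 59783.
g = nx.read_edgelist("my_edgelist.txt", nodetype=int)
labels = sorted(g.nodes())
print("nodes:", g.number_of_nodes(), "max label + 1:", labels[-1] + 1)
print("consecutive from 0:", labels == list(range(len(labels))))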

train the model

Excuse me, how can I adjust the parameters to speed up model training? Should I change MAX_ITERATION or LEARNING_RATE?
And this:

iter 8400 eps 0.202 average size of vc: 0.28585756567588727
testing 100 graphs time: 12.65684605s
300 iterations total time: 120.95875800s
WARNING:tensorflow:Issue encountered when serializing trainable_variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
tf.float32 has type DType, but expected one of: int, long, bool
WARNING:tensorflow:Issue encountered when serializing variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
tf.float32 has type DType, but expected one of: int, long, bool
model has been saved success!
iter 8700 eps 0.1735 average size of vc: 0.2842680456584987
testing 100 graphs time: 12.28663921s
300 iterations total time: 122.96724600s
WARNING:tensorflow:Issue encountered when serializing trainable_variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
tf.float32 has type DType, but expected one of: int, long, bool
WARNING:tensorflow:Issue encountered when serializing variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
tf.float32 has type DType, but expected one of: int, long, bool
model has been saved success!

Is this right? I hope someone can help me. Thanks!

About the reward settings and playing the game

Hello! I read the FINDER paper recently, and two questions puzzle me.

  1. In the article, you define the reward as the decrease in ANC; however, the computation of ANC needs the node removal sequence. How should I obtain the removal sequence? By using FINDER, HDA, or other methods to remove nodes? (My understanding of the ANC computation is sketched below.)
  2. In the supplementary, the FINDER Algorithm S3 shows that SGD is performed after each experience is stored; however, in the last paragraph of II.D.2 (training algorithm), it seems that SGD is performed after each episode. What does an episode mean here? Removing a single node, or removing nodes in a graph until the terminal state?
Many thanks for your work! I look forward to your answers.
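
For reference, this is how I currently understand the ANC computation (my own sketch, using pairwise connectivity as the connectivity measure sigma; please correct me if this is not what the paper means):

import networkx as nx

def pairwise_connectivity(g):
    # sigma(G): number of connected node pairs, summed over the components
    return sum(n * (n - 1) // 2
               for n in (len(c) for c in nx.connected_components(g)))

def anc(g, removal_sequence):
    # Accumulated normalized connectivity for a given removal order:
    # average sigma(G minus {v_1..v_k}) / sigma(G) over the removals,
    # normalised by the total number of nodes N, as I read the paper.
    sigma0 = pairwise_connectivity(g)
    residual = g.copy()
    total = 0.0
    for v in removal_sequence:
        residual.remove_node(v)
        total += pairwise_connectivity(residual) / sigma0
    return total / g.number_of_nodes()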

Failure with no error message

Hi there,
I have been using FINDER, and it works great on the 9 provided real datasets. It also works well on a Watts-Strogatz graph generated by NetworkX.
I am attempting to use it on some bespoke data. I have ensured that the graph is all one connected component using NetworkX, and it looks like the provided image:
[image attached]

I will also provide the TXT file.

Upon running this, the code simply pauses and then ends, like so:
[image attached]

Any ideas would be appreciated.
data.txt

Key error with Critical Node Cost problem

Hi there,
I am getting a KeyError when attempting to run your example datasets (such as Crime) with the provided .gml cost data.

I re-downloaded and rebuilt a new version of FINDER from your repo, but I am still seeing the KeyError. Please find the error log below.

[image attached]

Do you know what is causing this?

Many Thanks,
Chris

Implementing FINDER on a new dataset

I am having issues implementing FINDER on certain datasets. It works fine on some datasets, but for others I end up with this exception:
File "FINDER.pyx", line 490, in FINDER.FINDER.Predict
assert (pos == len(raw_output))
AssertionError

I am hoping you could clarify the purpose of the assertion.
Thank you!

Modifying Connectivity Measures

Hello,
In the article, the authors claim that, in principle, the algorithm can work with any properly defined connectivity measure. I examined the supplementary information document; however, I could not find a description of how to change the connectivity measure. Is this modification too complex to be described? If not, how can I do that?
Thank you.
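
To make the question concrete, this is my understanding of what a connectivity measure sigma(G) is (my own sketch, not the repository's C++ code): the CN variant uses pairwise connectivity (the number of still-connected node pairs), while, as far as I understand, the ND variant uses the size of the giant connected component, e.g.:

import networkx as nx

def sigma_gcc(g):
    # sigma(G) as the size of the giant (largest) connected component;
    # swapping in a different sigma is what "changing the connectivity
    # measure" would mean in my reading of the paper.
    if g.number_of_nodes() == 0:
        return 0
    return max(len(c) for c in nx.connected_components(g))

print(sigma_gcc(nx.karate_club_graph()))   # 34 while the graph is fully connected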

Additions to README for Windows Users

For anyone else trying to install on Windows 10:

  1. You must have the CUDA 10.0 Toolkit installed; it can be found here.
  2. You need to copy the cudnn64_7.dll downloaded from here.
  3. The command to run the Synthetic Tests is as follows: set CUDA_VISIBLE_DEVICES=-1 & python testSynthetic.py
