Tiiiger / SGC
Official implementation for the paper "Simplifying Graph Convolutional Networks"
License: MIT License
Hi,
I am interested in applying SGC to some other text classification datasets. How did you preprocess the dataset?
Many thanks in advance.
Hi, I have been working on some GNNs recently. Thanks for your excellent work! I have some questions about SGC and hope you can give some advice.
Looking forward to your reply, thanks!
Hello,
Can you explain the fields in features? I do not know whether they are node attributes or node vectors.
Hello, I have run your code and it is dramatically faster than other GCN frameworks.
But could you tell me why the number of hidden units is set to 0? I tried to change it, but it makes no difference to the test accuracy.
Thanks!
GCN can handle directed graphs when the adjacency matrix is allowed to be asymmetric and normalized with D^-1 * A. Is it possible to have SGC handle directed graphs as well? Thanks
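One way to try this, sketched below (my own illustration, not code from this repo): replace the symmetric normalization with the row normalization D^-1 (A + I), which stays well defined when A is asymmetric. Whether the paper's spectral analysis still applies to this asymmetric operator is a separate question.

import numpy as np
import scipy.sparse as sp

def row_normalize_directed(adj):
    # D^-1 (A + I): out-degree row normalization, usable for an
    # asymmetric (directed) adjacency matrix
    adj = adj + sp.eye(adj.shape[0])             # add self-loops
    deg = np.asarray(adj.sum(axis=1)).flatten()  # out-degrees, all >= 1
    return sp.diags(1.0 / deg) @ adj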
scikit-learn is listed twice, and the hyperopt dependency is missing one '=' sign.
Hi! I'm very interested in your work. When I ran the code on the Reddit dataset, I encountered this error:
adj = adj.cuda()
RuntimeError: torch.cuda.sparse.FloatTensor is not enabled.
Could you please explain what this line in the code does:
adj = adj + adj.T.multiply(adj.T > adj) - adj.multiply(adj.T > adj)
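In effect, that line symmetrizes the adjacency matrix by taking the elementwise maximum of adj and its transpose: wherever adj.T > adj the transposed entry wins, otherwise the original entry is kept. A small check of that reading (my own sketch):

import numpy as np
import scipy.sparse as sp

adj = sp.csr_matrix(np.array([[0., 1., 0.],
                              [0., 0., 2.],
                              [3., 0., 0.]]))
sym = adj + adj.T.multiply(adj.T > adj) - adj.multiply(adj.T > adj)
# equals the elementwise max of adj and adj.T
assert (sym.toarray() == np.maximum(adj.toarray(), adj.T.toarray())).all()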
For your 20NG results reported in your original paper (88.5 ± 0.1), was the model trained on the full public 20NG train set, of size 11314, or were the reported results generated using the code currently in this repo, which appears to exclude the validation set from the data used to train the model? (The latter would use a training set of size 10183.)
Also, I notice that in Table 4 you reported a different result (87.9 ± 0.2) for the cited Yao GCN paper than they reported themselves (0.8634 ± 0.0009). Do you know why that is the case?
Thanks!
Thank you Tiiiger for making the changes referenced in the closed issue Memory Error #6, but it is still not working for me. I am posting the error message here.
Traceback (most recent call last):
File "C:\Users\Desktop\NewProjects\SGC\downstream\TextSGC\train.py", line 105, in
adj_dense = sparse_to_torch_dense(sp_adj, device='cpu')
File "C:\Users\Desktop\NewProjects\SGC\downstream\TextSGC\utils.py", line 127, in sparse_to_torch_dense
dense = sparse.todense().astype(np.float32)
File "C:\Users\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\sparse\base.py", line 849, in todense
return np.asmatrix(self.toarray(order=order, out=out))
File "C:\Users\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\sparse\coo.py", line 317, in toarray
B = self._process_toarray_args(order, out)
File "C:\Users\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\sparse\base.py", line 1187, in _process_toarray_args
return np.zeros(self.shape, dtype=self.dtype, order=order)
MemoryError
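One possible workaround, sketched below (mine, not the repo's fix): skip sparse_to_torch_dense entirely and run the propagation with sparse-dense products, so the full dense adjacency is never allocated. The helper names are illustrative.

import numpy as np
import torch

def sparse_mx_to_torch_sparse(sp_mat):
    # scipy sparse matrix -> torch sparse COO tensor
    coo = sp_mat.tocoo().astype(np.float32)
    indices = torch.from_numpy(np.vstack((coo.row, coo.col)).astype(np.int64))
    values = torch.from_numpy(coo.data)
    return torch.sparse_coo_tensor(indices, values, coo.shape)

def sgc_precompute_sparse(features, sp_adj, degree):
    # K propagation steps S^K X without ever densifying S
    adj = sparse_mx_to_torch_sparse(sp_adj)
    for _ in range(degree):
        features = torch.sparse.mm(adj, features)
    return features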
I am new to graph neural networks. I have read the paper and the code, but I have a question: as we can see in the model code, self.W = nn.Linear(nfeat, nclass) is the whole SGC model. Why does it look like just a plain fully-connected (BP) neural network? It confuses me a lot.
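That observation is essentially right, and it is the point of the paper: the trainable part of SGC is a single linear layer (multinomial logistic regression), while the graph structure enters only through the fixed, parameter-free propagation S^K X computed beforehand. This is also why the hidden-unit setting has no effect. A minimal sketch of that split (illustrative, close to but not copied from the repo):

import torch
import torch.nn as nn

def sgc_precompute(features, adj, degree):
    # parameter-free propagation: X <- S^K X, done once before training
    for _ in range(degree):
        features = torch.spmm(adj, features)
    return features

class SGC(nn.Module):
    def __init__(self, nfeat, nclass):
        super().__init__()
        self.W = nn.Linear(nfeat, nclass)  # the only trainable weights

    def forward(self, x):  # x is the precomputed S^K X
        return self.W(x)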
While running train.py from downstream/TextSGC, a MemoryError occurs:
"SGC\downstream\TextSGC\utils.py", line 177, in sparse_to_torch_dense
dense = sparse.todense().astype(np.int32)"
Hi, firstly thank you for your excellent work. I had a question when reading the 'Spectral Analysis' part of your paper, as follows:
I wonder what effects are caused when negative filter coefficients exist; in other papers I find that authors often avoid generating negative filter coefficients.
Looking forward to your insightful understanding, thanks! :)
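For intuition: the paper analyzes K-step propagation as the spectral filter g(lambda) = (1 - lambda)^K over the normalized Laplacian's eigenvalues in [0, 2]. The tiny sketch below (mine) shows the coefficients turning negative for lambda > 1 when K is odd, i.e. those frequency components get sign-flipped rather than merely attenuated; the paper's augmented (self-loop) normalization shrinks the spectrum, which mitigates this.

import numpy as np

lam = np.linspace(0.0, 2.0, 9)  # eigenvalue range of the normalized Laplacian
for K in (1, 2, 3):
    # filter coefficients (1 - lam)^K; negative for lam > 1 when K is odd
    print(K, np.round((1.0 - lam) ** K, 3))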
adj = adj + adj.T + sp.eye(adj.shape[0])
adj = adj + sp.eye(adj.shape[0])
Hi Tiiiger,
I am very interested in your recent SGC work.
I want to apply your SGC code to semi-supervised user geolocation, which belongs to the downstream tasks in your paper.
The GEOTEXT dataset is OK, but when I turn to TWITTER-US and TWITTER-WORLD, it crashes. The error is listed as follows:
File "/home/wtl/桌面/wtlCode/geoSGC/dataProcess.py", line 96, in process_data
features = torch.FloatTensor(features.to_dense())
RuntimeError: $ Torch: not enough memory: you tried to allocate 417GB. Buy new RAM! at /pytorch/aten/src/TH/THGeneral.cpp:201
I have tried different versions of Python and torch, such as Python 2.7 + torch 1.0.1.post2 and Python 3.5 + torch 1.0.1.post2, but failed. I also googled for a solution, but many methods did not work.
Have you run into a similar error, and how did you fix it? My machine runs Ubuntu 16.04 with 40GB of memory.
Many thanks for your help.
-Morton
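For scale, a dense float32 matrix needs rows * cols * 4 bytes, which is where a request like 417GB comes from once a large sparse feature matrix is densified; the dimensions below are made up purely for illustration. Keeping the features sparse (as in the sparse-propagation sketch under the TextSGC memory issue above) avoids the allocation entirely.

def dense_bytes(rows, cols, bytes_per_elem=4):
    # storage for a dense float32 matrix
    return rows * cols * bytes_per_elem

# hypothetical Twitter-scale dimensions, for illustration only:
print(dense_bytes(440_000, 250_000) / 1e9, "GB")  # ~440 GB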
I wonder how to compute the red line in Figure 2 of your paper.
I re-implemented SGC with pytorch-geometric and tested it on PPI, but I got only a 0.45-0.5 micro-F1 score. I wonder whether my code has something wrong or SGC just does not work on PPI?
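Hard to say without seeing the code, but one common pitfall worth ruling out (the sketch below is mine, not a verified reproduction): PPI is a multi-label, multi-graph benchmark, so the model must output raw logits trained with BCEWithLogitsLoss rather than cross-entropy, and SGConv's caching must be off because S^K X differs per graph.

import torch
from torch_geometric.nn import SGConv

class MultiLabelSGC(torch.nn.Module):
    def __init__(self, in_channels, num_labels, K=2):
        super().__init__()
        # cached=False: PPI consists of many graphs, so the propagated
        # features cannot be cached across batches
        self.conv = SGConv(in_channels, num_labels, K=K, cached=False)

    def forward(self, x, edge_index):
        # raw logits; train with torch.nn.BCEWithLogitsLoss for multi-label
        return self.conv(x, edge_index)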
I'm confused about these lines of code in the utils.py file:
features[test_idx_reorder, :] = features[test_idx_range, :] # line 61
labels[test_idx_reorder, :] = labels[test_idx_range, :] # line 65
What is the point of this operation? I'm looking forward to your reply.
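My reading of those lines, for what it's worth: in the Planetoid files the test rows are stored in the shuffled order given by test.index, so after stacking allx and tx, row test_idx_range[j] holds the features of node test_idx_reorder[j]; the assignment writes each test row back to its true node id. A toy check (variable names match the repo, the data is made up):

import numpy as np

test_idx_reorder = np.array([7, 5, 6])      # node ids in the order tx was saved
test_idx_range = np.sort(test_idx_reorder)  # row positions 5, 6, 7 after vstack
features = np.zeros((8, 1))
features[5:8, 0] = [70, 50, 60]             # tx rows in file order (nodes 7, 5, 6)
features[test_idx_reorder, :] = features[test_idx_range, :]
assert list(features[[5, 6, 7], 0]) == [50, 60, 70]  # rows now match node ids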
How can I reproduce the results in Table 5? I mean, how do you preprocess the Twitter data?
When I clicked on this link, I was taken to my own network drive. I cannot find the preprocessed version of the data. I do not know what is wrong. I hope you can help me. Thank you very much.
The default normalization is set to "FirstOrderGCN", which is not implemented in normalization.py. I believe it should be replaced by "AugNormAdj" as is the case in load_reddit_data().
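For readers hitting the same error: "AugNormAdj" refers to the paper's augmented (self-loop) symmetric normalization. A sketch of what it computes (my paraphrase of the paper's S, not the repo's exact code):

import numpy as np
import scipy.sparse as sp

def aug_normalized_adjacency(adj):
    # D~^{-1/2} (A + I) D~^{-1/2}; self-loops guarantee every degree >= 1
    adj = adj + sp.eye(adj.shape[0])
    deg = np.asarray(adj.sum(axis=1)).flatten()
    d_inv_sqrt = sp.diags(deg ** -0.5)
    return d_inv_sqrt @ adj @ d_inv_sqrt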
Quick question: it looks like SGC does not generate node embeddings. Instead, SGC generates an N*C output, where C refers to the number of classes. Does this imply that SGC cannot generate node embeddings?
When I tried to run tuning.py, I got an error.
How can I fix this? Could you specify which version of hyperopt you used?
Traceback (most recent call last):
File "tuning.py", line 33, in
best = fmin(sgc_objective, space=space, algo=tpe.suggest, max_evals=60)
File "/Users/liujc/anaconda3/lib/python3.6/site-packages/hyperopt/fmin.py", line 407, in fmin
rval.exhaust()
File "/Users/liujc/anaconda3/lib/python3.6/site-packages/hyperopt/fmin.py", line 262, in exhaust
self.run(self.max_evals - n_done, block_until_done=self.asynchronous)
File "/Users/liujc/anaconda3/lib/python3.6/site-packages/hyperopt/fmin.py", line 198, in run
disable=not self.show_progressbar, dynamic_ncols=True,
File "/Users/liujc/anaconda3/lib/python3.6/site-packages/tqdm/_tqdm.py", line 850, in init
self.set_postfix(refresh=False, **postfix)
TypeError: set_postfix() argument after ** must be a mapping, not str
Exception ignored in: <object repr() failed>
Traceback (most recent call last):
File "/Users/liujc/anaconda3/lib/python3.6/site-packages/tqdm/_tqdm.py", line 893, in del
self.close()
File "/Users/liujc/anaconda3/lib/python3.6/site-packages/tqdm/_tqdm.py", line 1111, in close
pos = self.pos
AttributeError: 'tqdm' object has no attribute 'pos'
Hi there,
I found you used the LBFGS optimizer for the Reddit dataset, and you also state this in the paper. I wonder why you chose this optimizer. Why not just use SGD?
Also, what kind of optimizer did you use for text classification and semi-supervised geolocation classification when you reported the efficiency numbers? Different optimizers have different speed and memory efficiency.
Could you provide any insights? Thanks!
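A plausible reason (my reading, not the authors' statement): once the features are precomputed, SGC training is plain multinomial logistic regression, a convex problem on which full-batch L-BFGS converges in very few steps. A minimal usage sketch with torch.optim.LBFGS:

import torch
import torch.nn.functional as F

def train_lbfgs(model, feats, labels, lr=1.0, max_iter=100):
    # full-batch L-BFGS; step() re-evaluates the closure internally
    opt = torch.optim.LBFGS(model.parameters(), lr=lr, max_iter=max_iter)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(model(feats), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return model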
RuntimeError: cublas runtime error : the GPU program failed to execute at /opt/conda/conda-bld/pytorch_1556653145446/work/aten/src/THC/THCBlas.cu:259
How can I solve this problem?
What is the meaning of this assignment statement?
features[test_idx_reorder, :] = features[test_idx_range, :]
I found that the training time improvements of SGC vary across datasets. For example, SGC trained 28 times faster than GCN on the Pubmed dataset, while it is only < 5 times faster than GCN on TWITTER-WORLD, Cora, and Citeseer. I wonder what the reason for these results is. Is there a theoretical guarantee, or are these only empirical results?
The other question is how we can quantitatively analyze the training time of these GCNs. Are there any approaches to do this? I think it is not enough to analyze only the time complexity of the matrix multiplications during forward/backward propagation while ignoring the time consumed by the non-linear transformations.
I would appreciate your help if you could provide answers to or insights on the above questions.
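On the measurement question, one common approach (a sketch of mine, not the paper's protocol) is to wall-clock whole training runs with explicit CUDA synchronization, since kernels launch asynchronously and an unsynchronized timer undercounts GPU time:

import time
import torch

def timed(fn, *args):
    # synchronize so queued CUDA kernels are included in the measurement
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    result = fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return result, time.perf_counter() - start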