Comments (7)
Thanks! In addition, I have some questions and suggestions.
Q1: When I run ./go.sh 0 NCI1 random2 or ./go.sh 0 NCI1 random3, it shows nan loss in unsupervised_TU:
+ for seed in 0 1 2 3 4
+ CUDA_VISIBLE_DEVICES=0
+ python gsimclr.py --DS NCI1 --lr 0.01 --local --num-gc-layers 3 --aug random2 --seed 0
4110
37
================
lr: 0.01
num_features: 37
hidden_dim: 32
num_gc_layers: 3
================
tensor(nan, device='cuda:0', grad_fn=<NegBackward>)
tensor(nan, device='cuda:0', grad_fn=<NegBackward>)
However, when I run ./go.sh 0 NCI1 random4, it goes back to normal. I would like to know why this is happening.
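As a side note, one quick way to surface this failure early (a minimal sketch assuming PyTorch; the actual training loop in gsimclr.py may differ) is to guard the backward step so training halts on the first NaN instead of printing nan tensors epoch after epoch:

```python
import torch

def guarded_backward(loss):
    # Stop as soon as the contrastive loss becomes NaN instead of
    # silently continuing to print nan tensors.
    if torch.isnan(loss).any():
        raise RuntimeError(
            'NaN loss: check augmentation ratios and log/exp inputs '
            'for overflow before continuing training.')
    loss.backward()

# A finite loss passes through and backpropagates normally.
loss = torch.randn(1, requires_grad=True).sum()
guarded_backward(loss)
```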
Q2: How is the GraphCL loss implemented in unsupervised_Cora_Citeseer? I cannot find where the similarity between c_1 and c_2 is computed.
Q3: What is shuf_fts used for, and what is the function of h_0 and h_2 in Discriminator? In Discriminator2 there is no c_x = c_x.expand_as(h_pl). I hope you can give more detailed comments, thank you!
S1: For unsupervised_TU, downloading the TU dataset works fine; however, in semisupervised_TU, auto-downloading the dataset does not work properly. I found that this is due to the version of torch-geometric used: downloading works fine with version 1.1.0 but fails with version 1.5.0. This is the cause of problem #1.
S2: The installation of cortex-DIM is omitted from the required env yaml, since the cortex-DIM folder exists only in semisupervised_TU/finetuning.
from graphcl.
I have just fixed this bug; you can try again.
Hi @flyingtango,
Thanks for the detailed feedback. I will try to double-check things within this week.
Hi @flyingtango,
Q1. I fixed the bugs. It seems the node-dropping ratio was incorrect previously, which stood out in random2 (which samples only from node dropping & subgraph) compared with random4 --> value overflow --> nan values and gradients.
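To illustrate the fix, here is a toy sketch (made-up function name, not the repo's actual drop_nodes implementation): reading aug_ratio as the fraction of nodes to drop keeps 80% of nodes at ratio 0.2, whereas the flipped reading keeps only 20%, which can leave tiny graphs and push the contrastive loss toward overflow:

```python
import numpy as np

def drop_nodes_demo(num_nodes, aug_ratio=0.2, flipped=False, seed=0):
    # Correct reading: aug_ratio is the fraction of nodes to DROP.
    # The flipped (buggy) reading treats it as the fraction to KEEP,
    # so it drops (1 - aug_ratio) of the nodes instead.
    rng = np.random.default_rng(seed)
    drop_num = int(num_nodes * (1 - aug_ratio)) if flipped \
        else int(num_nodes * aug_ratio)
    dropped = set(rng.choice(num_nodes, drop_num, replace=False).tolist())
    return [i for i in range(num_nodes) if i not in dropped]

print(len(drop_nodes_demo(100)))                # 80 nodes remain
print(len(drop_nodes_demo(100, flipped=True)))  # only 20 remain
```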
Q2. @yongduosui can you give some comments on this?
Q3. Would you mind pointing to the location of that code? Since the implementation was a division of labour, I would like to find the right person to address the question.
S1. Yes, that's right. Since we ran experiments in a variety of settings, I referred to the SOTA in each setting first (see the acknowledgement part of each experiment) --> then implemented our version --> thus the environments of the experiments are separate. I am sorry for the inconvenience.
S2. Sorry for the mistake. It should exist in the unsupervised_TU dir rather than the semisupervised_TU dir. I have already put it in the right place.
@flyingtango
Q2. Please check the paper Deep Graph Infomax [1], which maximizes mutual information between patch representations and corresponding high-level summaries of graphs. We additionally use augmented-graph information when maximizing mutual information, which is equivalent to optimizing the GraphCL loss. For more details, you can compare the theoretical proof in the Appendix of our paper with Deep Graph Infomax [1].
[1] Petar Veličković, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. Deep graph infomax. arXiv preprint arXiv:1809.10341, 2018.
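For intuition, here is a minimal sketch of the DGI-style objective (assumed shapes and names, not the repo's exact code): a bilinear discriminator scores (node embedding, graph summary) pairs, with embeddings of shuffled features supplying the negatives; this is also where the c_x = c_x.expand_as(h_pl) broadcast comes in:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Bilinear critic scoring (patch, summary) pairs, DGI-style."""
    def __init__(self, dim):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, c, h_pos, h_neg):
        c = c.unsqueeze(1).expand_as(h_pos)        # broadcast summary over nodes
        pos = self.bilinear(h_pos, c).squeeze(-1)  # scores for real pairs
        neg = self.bilinear(h_neg, c).squeeze(-1)  # scores for shuffled pairs
        return pos, neg

batch, nodes, dim = 2, 5, 8
disc = Discriminator(dim)
c = torch.randn(batch, dim)          # graph summaries
h_pos = torch.randn(batch, nodes, dim)  # node embeddings (positives)
h_neg = torch.randn(batch, nodes, dim)  # embeddings of shuffled features
pos, neg = disc(c, h_pos, h_neg)
# BCE pushes positive-pair scores up and negative-pair scores down,
# a lower bound on the mutual information between patches and summary.
loss = nn.BCEWithLogitsLoss()(
    torch.cat([pos, neg], 1),
    torch.cat([torch.ones_like(pos), torch.zeros_like(neg)], 1))
```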
Hi @yyou1996,
Thanks for your bug fix! I also ran into Q1 previously.
Now, after the update, is the ratio of dropped nodes 20%? It seems that previously the ratio of remaining nodes was 20% by mistake.
@ha-lins Yes, the augmentation ratio is the dropping ratio rather than the remaining ratio.