thunlp / AGE
Source code and dataset for KDD 2020 paper "Adaptive Graph Encoder for Attributed Graph Embedding"
Hello! First of all, thank you for your work on graph embedding. Can this model be applied to weighted graphs? And does it still perform well when the graph has no node attributes?
Hi,
I congratulate you on this brilliant work. Unfortunately, I was not able to reproduce the paper results. I trained the model several times on Cora and Citeseer using the provided hyper-parameters.
Best results on Cora after 4 trials: ACC=74.889, NMI=55.762
Best results on Citeseer after 4 trials: ACC=58.791, NMI=36.117
Repeating the experiments does not seem to give any improvement.
Could the authors shed some light on this issue?
The problem is related to the scikit-learn version.
What do the file suffixes "allx ally graph index tx ty x y" mean, and how did you generate them?
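For reference, these suffixes match the Planetoid data format (Yang et al., 2016) used by many GCN codebases. A minimal sketch of how the pieces fit together; the shapes and values below are illustrative only:

```python
import numpy as np

# Planetoid format (Yang et al., 2016), as used in many GCN repositories:
#   x  / y   : features / one-hot labels of the labeled training nodes
#   tx / ty  : features / one-hot labels of the test nodes
#   allx/ally: features / labels of all training nodes (labeled + unlabeled)
#   graph    : dict mapping node id -> list of neighbor ids
#   index    : ids of the test nodes within the full graph
# The full feature matrix is recovered by stacking allx on top of tx.
allx = np.zeros((5, 3))              # 5 training nodes, 3 features (illustrative)
tx = np.ones((2, 3))                 # 2 test nodes
features = np.vstack([allx, tx])     # (7, 3) full feature matrix
graph = {0: [1], 1: [0, 2], 2: [1]}  # adjacency as a neighbor-list dict
```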
Thanks for your great work!
I think releasing your implementations of the baselines would be very helpful to the community~
Hello, could you upload the Wiki dataset?
In the "Run" section, the headers "lowth_st" and "upth_ed" should be swapped.
Thanks for your outstanding work!
While reading your paper, I noticed that two experiments are carried out, node clustering and link prediction, and both appear to be done on a single graph. I wonder whether I can use AGE for multi-graph tasks, such as comparing nodes across different graphs or comparing whole graphs.
As described in the README, I am confused about why the positive threshold decays during adaptive learning while the negative threshold increases during training.
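For intuition, the adaptive thresholds in the README can be read as a linear schedule over training. A minimal sketch, where the start/end values and the interpolation are illustrative assumptions, not the authors' tuned settings:

```python
def update_thresholds(epoch, num_epochs,
                      pos_start=1.0, pos_end=0.5,   # positive threshold decays
                      neg_start=0.0, neg_end=0.5):  # negative threshold grows
    """Linearly interpolate the similarity thresholds used to select
    positive and negative training pairs. As training progresses, the
    positive threshold loosens (admitting more pairs as positives) while
    the negative threshold tightens (admitting fewer pairs as negatives)."""
    alpha = epoch / max(1, num_epochs - 1)
    pos = pos_start + alpha * (pos_end - pos_start)
    neg = neg_start + alpha * (neg_end - neg_start)
    return pos, neg
```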
Hi,
I'm trying to understand whether the code can be generalized to multiple graphs. I don't want to compare them as in the issue above; instead, I have a dataset of many smaller graphs (about 1000 nodes (variable) x 500 features (fixed)), and I want to annotate the edges of each one. From what I understand, however, all the datasets you use consist of a single large graph.
I see two possible solutions.
Combine everything into a single large dataset and reuse the code as-is; however, this doesn't scale to new graphs and scales poorly in GPU memory.
Alternatively, use batching from torch-geometric, apply the Laplacian smoothing to each batch, sample positive and negative examples from each batch, and train for more epochs. However, I think the Laplacian smoothing might cause issues across different graphs, since their features can be very different. Maybe I could just use a batch of one, apply the smoothing per graph, and sample more pairs?
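The batch-of-one idea could be sketched as below. This assumes the generalized Laplacian smoothing filter H = I - k * L_sym described in the AGE paper, applied independently to each small graph; the values of k and the number of layers t here are illustrative, not the authors' tuned settings:

```python
import numpy as np

def smooth_features(adj, X, k=2/3, t=2):
    """Apply t layers of Laplacian smoothing H = I - k * L_sym to features X.

    adj: dense (n, n) adjacency without self-loops; X: (n, d) feature matrix.
    k and t are illustrative hyper-parameters.
    """
    n = adj.shape[0]
    a = adj + np.eye(n)                   # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    a_norm = d_inv_sqrt @ a @ d_inv_sqrt  # symmetric normalization
    lap = np.eye(n) - a_norm              # symmetric normalized Laplacian
    h = np.eye(n) - k * lap               # smoothing filter H = I - k * L_sym
    for _ in range(t):
        X = h @ X                         # one smoothing layer per iteration
    return X

# Batch-of-one usage: smooth each small graph independently, then sample
# positive/negative pairs within that graph only.
adjs = [np.array([[0.0, 1.0], [1.0, 0.0]])]
feats = [np.random.rand(2, 4)]
smoothed = [smooth_features(a, x) for a, x in zip(adjs, feats)]
```

Smoothing each graph separately avoids mixing feature statistics across graphs, at the cost of less efficient batching.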
Any tips or suggestions about which solution would be better?
NOTE: I don't have any labeled data, so my only measures are unsupervised (e.g. silhouette, modularity).
Thanks