lee-zix / re-gcn

re-gcn's Introduction

Temporal Knowledge Graph Reasoning Based on Evolutional Representation Learning

This is the official code release of the following paper:

Zixuan Li, Xiaolong Jin, Wei Li, Saiping Guan, Jiafeng Guo, Huawei Shen, Yuanzhuo Wang and Xueqi Cheng. Temporal Knowledge Graph Reasoning Based on Evolutional Representation Learning. SIGIR 2021.

Figure: the RE-GCN model architecture.

Quick Start

Environment variables & dependencies

conda create -n regcn python=3.7

conda activate regcn

pip install -r requirement.txt

Process data

First, unpack the data archive:

tar -zxvf data-release.tar.gz

For the three ICEWS datasets (ICEWS18, ICEWS14, and ICEWS05-15), go into the corresponding dataset folder under the ./data directory and run the following commands to construct the static graph.

cd ./data/<dataset>
python ent2word.py
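As a rough, hypothetical illustration of what this step produces (the real ent2word.py may use different ids, relations, and file layout): the static graph links each entity to the words occurring in its name and writes those edges to e-w-graph.txt. A minimal sketch under that assumption:

def build_entity_word_graph(entity2id_path="entity2id.txt", out_path="e-w-graph.txt"):
    # Hypothetical sketch only: link every entity id to the ids of the words in its name.
    word2id, edges = {}, []
    with open(entity2id_path, encoding="utf-8") as f:
        for line in f:
            name, ent_id = line.rstrip("\n").rsplit("\t", 1)
            for word in name.replace("(", " ").replace(")", " ").split():
                wid = word2id.setdefault(word, len(word2id))
                edges.append((int(ent_id), 0, wid))      # relation id 0 is a placeholder
    with open(out_path, "w", encoding="utf-8") as f:
        for s, r, o in edges:
            f.write(f"{s}\t{r}\t{o}\n")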

Train models

Then the following commands can be used to train the proposed models. By default, dev set evaluation results will be printed when training terminates.

  1. Make a directory to save models:
mkdir models
  2. Train models:
cd src
python main.py -d ICEWS14s --train-history-len 3 --test-history-len 3 --dilate-len 1 --lr 0.001 --n-layers 2 --evaluate-every 1 --gpu=0 --n-hidden 200 --self-loop --decoder convtranse --encoder uvrgcn --layer-norm --weight 0.5  --entity-prediction --relation-prediction --add-static-graph --angle 10 --discount 1 --task-weight 0.7 --gpu 0

Evaluate models

To generate the evaluation results of a pre-trained model, simply add the --test flag to the commands above.

For example, the following command performs single-step inference and prints the evaluation results (with ground truth history).

python main.py -d ICEWS14s --train-history-len 3 --test-history-len 3 --dilate-len 1 --lr 0.001 --n-layers 2 --evaluate-every 1 --gpu=0 --n-hidden 200 --self-loop --decoder convtranse --encoder uvrgcn --layer-norm --weight 0.5  --entity-prediction --relation-prediction --add-static-graph --angle 10 --discount 1 --task-weight 0.7 --gpu 0 --test

The following command performs multi-step inference and prints the evaluation results (without ground truth history).

python main.py -d ICEWS14s --train-history-len 3 --test-history-len 3 --dilate-len 1 --lr 0.001 --n-layers 2 --evaluate-every 1 --gpu=0 --n-hidden 200 --self-loop --decoder convtranse --encoder uvrgcn --layer-norm --weight 0.5  --entity-prediction --relation-prediction --add-static-graph --angle 10 --discount 1 --task-weight 0.7 --gpu 0 --test --multi-step --topk 0

Change the hyperparameters

To reproduce the optimal results reported in the paper, change the hyperparameters and other experimental settings according to Section 5.1.4 of the paper (https://arxiv.org/abs/2104.10353).

Citation

If you find the resources in this repository helpful, please cite:

@inproceedings{li2021temporal,
  title={Temporal Knowledge Graph Reasoning Based on Evolutional Representation Learning},
  author={Li, Zixuan and Jin, Xiaolong and Li, Wei and Guan, Saiping and Guo, Jiafeng and Shen, Huawei and Wang, Yuanzhuo and Cheng, Xueqi},
  booktitle={SIGIR},
  year={2021}
}

re-gcn's People

Contributors

lee-zix

re-gcn's Issues

Confusion about baseline

Sorry to bother you. Could you publish, or share with me, the code of the RGCRN baseline? It would be very helpful to me. If possible, please send it to my email [email protected].

Filtered Result

Hi,

very nice work!
I am just wondering why you didn't report filtered results in your paper, since filtered results are in most cases much fairer than raw results.

Could you please also post the filtered results of your model?
Thanks!

Zifeng
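For readers unfamiliar with the distinction, here is a minimal, generic sketch (not this repository's evaluation code) of how a filtered rank differs from a raw rank: answers known to be correct for the same query, other than the one being tested, are masked out before computing the rank of the ground-truth object.

import torch

scores = torch.tensor([0.1, 0.9, 0.8, 0.3, 0.7])          # model scores for every candidate object
target = 2                                                 # index of the ground-truth object
known_true = {1, 4}                                        # other correct objects for the same (subject, relation) query

raw_rank = (scores > scores[target]).sum().item() + 1      # raw rank: 2

filtered = scores.clone()
filtered[list(known_true - {target})] = float("-inf")      # mask the other true answers
filtered_rank = (filtered > filtered[target]).sum().item() + 1    # filtered rank: 1

print(raw_rank, filtered_rank)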

Hello, how can the following error be resolved?

Traceback (most recent call last):
File "E:\知识图谱\RE-GCN-master\src\main.py", line 442, in
run_experiment(args)
File "E:\知识图谱\RE-GCN-master\src\main.py", line 234, in run_experiment
loss_e, loss_r, loss_static = model.get_loss(history_glist, output[0], static_graph, use_cuda)
File "E:\知识图谱\RE-GCN-master\src\rrgcn.py", line 212, in get_loss
evolve_embs, static_emb, r_emb, _, _ = self.forward(glist, static_graph, use_cuda)
File "E:\知识图谱\RE-GCN-master\src\rrgcn.py", line 160, in forward
temp_e = self.h[g.r_to_e]
IndexError: tensors used as indices must be long, byte or bool tensors. I have not modified the code apart from changing the file encoding to UTF-8; everything else is the same as on GitHub. Why does this error occur?
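For what it's worth, this error usually means the index tensor (here g.r_to_e) does not have dtype long on that PyTorch version; casting it to long before indexing (e.g. self.h[g.r_to_e.long()], assuming g.r_to_e is a tensor) typically resolves it. A self-contained sketch of the symptom and the workaround:

import torch

h = torch.randn(5, 4)                              # stand-in for the entity embedding table self.h
idx = torch.tensor([0, 2, 3], dtype=torch.int32)   # a non-long index tensor triggers the error on affected PyTorch versions
# temp_e = h[idx]                                  # raises: tensors used as indices must be long, byte or bool tensors
temp_e = h[idx.long()]                             # casting the index tensor to long avoids it
print(temp_e.shape)                                # torch.Size([3, 4])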

Ask for codes

Hi, I have read your paper and it is very helpful! I would appreciate it if you could commit the code for this paper :-)

Adaptation to a "different" static graph

Hi!

First of all, thank you for sharing the code for your publication.

I am working to adapt your framework to my problem, where the "static" relations involve nodes that are already in the knowledge graph. As a consequence, when setting static_graph.ndata['h'] (rrgcn.py, line 146), I don't need to concatenate dynamic embeddings with word embeddings (which I have deleted), and thus when I take the output of the static layer I don't have to limit it to the first self.num_ents elements.

However, with this setting, the entity-prediction loss is nan. I've tried to solve this problem by setting self.h to the sum of the dynamic and static embeddings (rrgcn.py, lines 150 and 176). In this way, the problem with the entity-prediction loss is solved, but the static loss takes very large values (>2000) and does not decrease over the training epochs. The result is that the static graph doesn't help improve performance.

Since this is a methodological question rather than a coding issue, I was wondering if you could help me with your feedback and guidance. Your ideas on how to adapt the methodology and code to my task would be of great value to me.

Thanks,
Marco

Confusion about dataset and experiment

Hello, I have some confusion about the GDELT dataset. The training set of GDELT has 1,734,399 facts in your paper, but it drops to about a million in the code. May I ask why the dataset used in the experiment differs from the dataset described in the paper?

The second question: when I use CyGNet to run experiments on the ICEWS14 dataset from your code, I get better results than in your paper.
The result I got is below:

python test.py --dataset ICEWS14s
Namespace(alpha=0.5, batch_size=1024, counts=4, dataset='ICEWS14s', entity='object', gpu=0, hidden_dim=200, lr=0.001, n_epochs=30, raw=False, regularization=0.01, time_stamp=1, valid_epoch=5)
num_times 366
start object testing:
Using best epoch: 7
test object-- Epoch 0007 | Best MRR 0.5029 | Hits@1 0.4366 | Hits@3 0.5419 | Hits@10 0.6185
start subject testing:
Using best epoch: 8
test subject-- Epoch 0008 | Best MRR 0.4795 | Hits@1 0.4084 | Hits@3 0.5197 | Hits@10 0.6051

final test --| Best MRR 0.4912 | Hits@1 0.4225 | Hits@3 0.5308 | Hits@10 0.6118

Confusion about the experimental setup

I have read your paper and it is very helpful! I'm curious about the experimental setup without ground truth. I tried some experiments, but I couldn't reproduce the results reported in the paper for the without-ground-truth setting. It would be very helpful if you could share the experimental setup for this part.

cuda out of memory

I tried to train a model on my own dataset, but ran into a "CUDA out of memory" error. Is there any good way to solve this problem?

The results for WIKI dataset are different from the paper

In your paper, the Hits@10 for WIKI is 53.88. But when I run the code, I get a Hits@10 of 0.833, which is almost the same as for the YAGO dataset.
I am curious how you ran the experiments. Can the model distinguish between YAGO and WIKI?

Confusion about the skip_connect parameter

I found that skip_connect=True never takes effect in your code.
It is used in a layer as follows:

            for i, layer in enumerate(self.layers):
                layer(g, [], init_rel_emb[i])

The layer then executes the following code, and the boolean expression len(prev_h) != 0 and self.skip_connect is always False, because prev_h is passed as an empty list.

    def forward(self, g, prev_h, emb_rel):
        self.rel_emb = emb_rel
        if len(prev_h) != 0 and self.skip_connect:
            skip_weight = F.sigmoid(torch.mm(prev_h, self.skip_connect_weight) + self.skip_connect_bias) 

So this parameter has no effect. Is there something wrong?
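For readers wondering what the skip-connection branch would compute if it were reached, here is a self-contained toy of that gated skip connection. The gate formula mirrors the snippet above; how the gate is then used to blend states is an assumption, since that part is not shown in the issue.

import torch

hidden = 8
prev_h = torch.randn(5, hidden)                    # previous layer's node states
curr_h = torch.randn(5, hidden)                    # current layer's node states
skip_connect_weight = torch.randn(hidden, hidden)
skip_connect_bias = torch.zeros(hidden)

# Gate, as in the snippet: sigmoid(prev_h @ W + b)
skip_weight = torch.sigmoid(prev_h @ skip_connect_weight + skip_connect_bias)

# One common convention (assumed here): gate-weighted blend of current and previous states.
out = skip_weight * curr_h + (1 - skip_weight) * prev_h
print(out.shape)                                   # torch.Size([5, 8])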

confusion about dataset

[Screenshot of a few sample lines from the dataset file]

Can someone tell me what "24" and "0" mean? What I know is that all the lines with 24 are events that happen on 2018-01-02, but I don't know why 2018-01-02 is encoded as 24, or what the last column represents.
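For context, and not as an authoritative answer: each line in the ICEWS18 files appears to hold subject id, relation id, object id, and a timestamp measured in hours since the first day of the dataset (so 2018-01-02 becomes 24); the trailing column is typically 0 and ignored by the data loaders. A small sketch of decoding a line under those assumptions (the ids in the example are made up):

from datetime import date, timedelta

def decode_line(line, start=date(2018, 1, 1)):
    # columns: subject_id, relation_id, object_id, timestamp_in_hours[, unused]
    s, r, o, t = map(int, line.split()[:4])
    event_date = start + timedelta(days=t // 24)   # one step per day, so 24 maps to 2018-01-02
    return s, r, o, event_date

print(decode_line("5260 57 6773 24 0"))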

command line for running on YAGO and wiki

Can you share the command lines for running on YAGO and wiki?

I got the following error:

FileNotFoundError: [Errno 2] No such file or directory: './data/YAGO/e-w-graph.txt'

Confusion about YAGO and WIKI

Thank you for sharing your code. I find that the YAGO and WIKI datasets in the code do not include ent2word.py, so e-w-graph.txt cannot be generated. I hope you can clear up my doubts.

Confusion about the experiment

Hello, I ran your model and noticed an issue: it seems to reach its best performance at around twenty-something epochs, and after that the loss keeps decreasing while the MRR also drops. However, the number of epochs in your experimental settings is 500. Is this overfitting? Why does performance drop as training continues? Also, using your parameters, the relation-prediction results seem worse than the numbers in the paper. Could you tell me whether something in my setup is wrong? Thank you!
