
conve's People

Contributors

bluesbreaker45, timdettmers, todpole3, zenogantner


conve's Issues

Dropout applied twice

Hey,

Is there any reason for the double dropout layers, once after the embedding in the previous lines and once again here?

ConvE/model.py

Line 39 in 34bb07d

e1_embedded_real = self.inp_drop(e1_embedded_real)

Thanks
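
A quick note on why this matters: two stacked Dropout(p) layers compound, so the effective drop rate is higher than the configured p. A minimal sketch (illustrative only, not the repo's code):

import torch
import torch.nn as nn

# Two successive Dropout(0.2) applications keep an activation with
# probability (1 - 0.2)^2 = 0.64, i.e. an effective drop rate of ~0.36.
drop = nn.Dropout(p=0.2)
drop.train()
x = torch.ones(1, 100000)
y = drop(drop(x))
print((y == 0).float().mean())  # roughly 0.36, not 0.2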

Ask about ConvE model

I tried to follow the code based on the concepts in the paper, but I did not find any code that takes a dot product with the object entity. Can you tell me where the object entity is fed in during forward propagation?
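
For context, ConvE uses 1-N scoring, so there is no per-triple dot product with a single object embedding; the hidden vector is multiplied against the full entity embedding matrix, scoring all candidate objects at once. A minimal sketch of that step (names and sizes are illustrative):

import torch

num_entities, dim, batch = 10000, 200, 128
emb_e = torch.nn.Embedding(num_entities, dim)  # object entity table

hidden = torch.randn(batch, dim)               # output of the conv + fc stack
scores = torch.mm(hidden, emb_e.weight.t())    # (batch, num_entities): one dot product per entity
probs = torch.sigmoid(scores)                  # fed to BCE against the multi-hot targets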

No module named "spodernet"

Thank you for your code!
I am a newcomer, so I have some questions.
I ran the command

CUDA_VISIBLE_DEVICES=0 python main.py model ConvE dataset FB15k-237 \
                                      input_drop 0.2 hidden_drop 0.3 feat_drop 0.2 \
                                      lr 0.003 process True

and after preprocessing there is an error saying: No module named "spodernet".
I found a folder named spodernet in "src", but I don't know how to install it. Can you help me?
Thank you.

Any way to try 1-(0.1 N) scoring?

Hi,

I noticed that in the new version of the paper you mention 1-(0.1 N) scoring, and I'd like to try that.
I didn't see any parameter for this in the code, though.
Do you have instructions on how to do that somewhere?
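
If 1-(0.1 N) refers to smoothing the 1-N binary targets (my reading; the epsilon value and the exact formula below are assumptions, not the repo's verbatim code), it could be applied to the multi-hot label vector just before the BCE loss:

import torch

def smooth_labels(e2_multi: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    # e2_multi: (batch, num_entities) multi-hot target vector
    n = e2_multi.size(1)
    return (1.0 - eps) * e2_multi + eps / n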

Detailed parameter settings of ComplEx on WN18RR and FB15k-237?

Hi, I have failed to reproduce the results of ComplEx on WN18RR and FB15k-237 reported in the ConvE paper. I am using my own implementation, and it reproduces the results on FB15k and WN18 correctly. Could you please tell me the optimal parameter settings of your ComplEx implementation on these two datasets?

Processing dataset YAGO3-10 raises an error

Processing dataset YAGO3-10
Traceback (most recent call last):
File "wrangle_KG.py", line 34, in
data = f.readlines() + data
File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 188: ordinal not in range(128)
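
YAGO3-10 contains non-ASCII entity names, so this looks like the files being read with the platform default codec (ASCII here). A likely workaround is to open them with an explicit UTF-8 encoding; a hedged sketch ('train.txt' stands in for whichever file wrangle_KG.py is reading):

# Read the dataset with an explicit encoding instead of the platform default.
with open('train.txt', encoding='utf-8') as f:
    data = f.readlines()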

Not able to reproduce results on FB15k-237

Hi, I am using the parameters given in the readme to run ConvE on FB15k-237, but the best results I can get are: Hits@10: 0.484, MR: 274, MRR: 0.309. I didn't change anything. Is there a specific parameter setting for FB15k-237? Please tell me how to reproduce the results. Thanks in advance!

Has anybody had the problem of memory explosion?

Hi Tim,
When I run this code, the memory usage keeps growing until it finally blows up.
I was wondering whether you have had the same problem and what the solution is.

Looking forward to your kind feedback.
Thanks,
tsingker88

Not able to replicate FB15k scores

In light of the major bug #18, I recomputed all reported scores for all datasets. While I was able to replicate the scores of the other datasets within reasonable limits (some scores decreased, and some increased), I was not able to replicate the scores of FB15k. I have the log files of the run that produced the initial results, but using the same parameters I am no longer able to obtain scores which are close to the reported ones. I am not sure what went wrong, but I hypothesize that I used a different version of FB15k where I removed inverse relationships and self-inverse relationships — at the time of these experiments I was not aware that the same issues had been addressed by FB15k-237 (which only removed inverse relationships). I apologize for any confusion and misreporting that this error entailed. The paper and the scores in this repo have been updated.

Erroneous reported scores:

  • MR: 64, MRR: 0.745, Hits@10: 0.873, Hits@3: 0.801, Hits@1: 0.670

New correct scores:

  • MR: 51, MRR: 0.657, Hits@10: 0.831, Hits@3: 0.723, Hits@1: 0.558

Spodernet `TypeError: 'NoneType' object is not callable` issue

I got an email with the following commands / errors.

1) 
 python main.py model ConvE dataset FB15k-237 process True
Traceback (most recent call last):
  File "main.py", line 181, in <module>
    main()
  File "main.py", line 153, in main
    for i, str2var in enumerate(train_batcher):
  File "/home/jisuk1/virtual_env_dir/torch/src/spodernet/spodernet/preprocessing/batching.py", line 369, in __next__
    self.publish_end_of_iter_event()
  File "/home/jisuk1/virtual_env_dir/torch/src/spodernet/spodernet/preprocessing/batching.py", line 302, in publish_end_of_iter_event
    obs.at_end_of_iter_event(self.state)
  File "/home/jisuk1/virtual_env_dir/torch/src/spodernet/spodernet/hooks.py", line 55, in at_end_of_iter_event
    metric = self.calculate_metric(state)
  File "/home/jisuk1/virtual_env_dir/torch/src/spodernet/spodernet/hooks.py", line 169, in calculate_metric
    return state.loss.item()
AttributeError: 'torch.FloatTensor' object has no attribute 'item'
Exception ignored in: <bound method Event.__del__ of <torch.cuda.Event 0x49e0c720>>
Traceback (most recent call last):
  File "/share/apps/lib/python3.5/site-packages/torch/cuda/streams.py", line 164, in __del__
TypeError: 'NoneType' object is not callable

2) 
python main.py model ConvE dataset WN18RR process True
Traceback (most recent call last):
  File "main.py", line 181, in <module>
    main()
  File "main.py", line 153, in main
    for i, str2var in enumerate(train_batcher):
  File "/home/jisuk1/virtual_env_dir/torch/src/spodernet/spodernet/preprocessing/batching.py", line 369, in __next__
    self.publish_end_of_iter_event()
  File "/home/jisuk1/virtual_env_dir/torch/src/spodernet/spodernet/preprocessing/batching.py", line 302, in publish_end_of_iter_event
    obs.at_end_of_iter_event(self.state)
  File "/home/jisuk1/virtual_env_dir/torch/src/spodernet/spodernet/hooks.py", line 55, in at_end_of_iter_event
    metric = self.calculate_metric(state)
  File "/home/jisuk1/virtual_env_dir/torch/src/spodernet/spodernet/hooks.py", line 169, in calculate_metric
    return state.loss.item()

What's wrong with FB15k and WN18?

You state

Used in the paper, but do not use these datasets for your research: FB15k and WN18.

What is the problem with using these datasets? They are commonly used in this domain.

How to deal with unseen entities in WN18RR and FB15K237?

Hi, I have noticed that test.txt/valid.txt of WN18RR and FB15k-237 contain entities that do not appear in train.txt. This means these entities will never be updated during training; how should these unseen entities be dealt with?
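
One way to quantify the problem is to count the affected triples. A hedged sketch, assuming tab-separated e1/rel/e2 rows and the standard file names:

def load_entities(path):
    entities = set()
    with open(path, encoding='utf-8') as f:
        for line in f:
            e1, rel, e2 = line.strip().split('\t')
            entities.update((e1, e2))
    return entities

train_entities = load_entities('train.txt')
with open('test.txt', encoding='utf-8') as f:
    # [0::2] keeps the head and tail entity of each triple
    unseen = [line for line in f
              if not set(line.strip().split('\t')[0::2]) <= train_entities]
print(len(unseen), 'test triples involve entities unseen in training')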

Something seems to be wrong when I switch to a new large dataset

I am trying to train ConvE on a new dataset which has 128148 entities and 22 relations. My training set contains 684729 triples, and my test set 171171 triples. Is it too large? I guess the error comes from
train_batcher = StreamBatcher(Config.dataset, 'train', Config.batch_size, randomize=True, keys=input_keys)
The error information is as follows:

Exception in thread Thread-3:
Traceback (most recent call last):
File "/home/anaconda2/envs/pytorch/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/home/tools/ConvE/src/spodernet/spodernet/preprocessing/batching.py", line 175, in run
shard_idx = self.rdm.choice(len(list(self.shard2batchidx.keys())), 1, p=self.shard_fractions)[0]
File "mtrand.pyx", line 1142, in mtrand.RandomState.choice
ValueError: a and p must have same size

Exception in thread Thread-1:
Traceback (most recent call last):
File "/home/anaconda2/envs/pytorch/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/home/tools/ConvE/src/spodernet/spodernet/preprocessing/batching.py", line 175, in run
shard_idx = self.rdm.choice(len(list(self.shard2batchidx.keys())), 1, p=self.shard_fractions)[0]
File "mtrand.pyx", line 1142, in mtrand.RandomState.choice
ValueError: a and p must have same size

Error in spodernet.preprocessing.batching.DataLoaderSlave.clean_cache

Hi,
Thanks a lot for sharing your code, it's been very helpful for me.
I ran your code with a smaller cache_size in the StreamBatcher objects because I had memory issues dealing with larger graphs (the full WordNet hierarchy).

I ran into some errors iterating over StreamBatcher objects in my setting.
I checked the code and found that you pop elements from the cache list while iterating over it:

def clean_cache(self, current_paths):
    # delete unused cached data
    for i in range(len(self.cache_order)):
        if self.cache_order[i] in current_paths: continue
        path = self.cache_order.pop(i)
        self.current_data.pop(path, None)
        GB_usage = self.determine_cache_size()
        if GB_usage < self.cache_size_GB: break

Simply reversing the order of the iteration fixed it for me:
for i in list(range(len(self.cache_order)))[::-1]

So the full function looks like this:

def clean_cache(self, current_paths):
    # delete unused cached data
    for i in list(range(len(self.cache_order)))[::-1]:
        if self.cache_order[i] in current_paths: continue
        path = self.cache_order.pop(i)
        self.current_data.pop(path, None)
        GB_usage = self.determine_cache_size()
        if GB_usage < self.cache_size_GB: break

I hope it helps, thanks again for sharing.
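
For anyone wondering why the original loop fails: popping at index i shifts every later element one slot left, so a forward scan both skips elements and eventually indexes past the shrunken list. A minimal demonstration:

cache_order = ['a', 'b', 'c', 'd']
for i in range(len(cache_order)):  # range is fixed at 4 up front
    cache_order.pop(i)             # raises IndexError once the list shrinks
# Iterating the indices in reverse, as in the fix above, never disturbs
# the positions that are still to be visited.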

Problem with batching on YAGO3-10

I encountered an error in the batching.
This always happens when I run on YAGO3-10 (after ca. 6 epochs).

I use a batch size of 128.

Exception in thread Thread-4:
Traceback (most recent call last):
  File "miniconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "ConvE/src/spodernet/spodernet/preprocessing/batching.py", line 154, in run
    start = self.rdm.randint(0, n-self.stream_batcher.batch_size+1)
  File "mtrand.pyx", line 993, in mtrand.RandomState.randint
ValueError: low >= high
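
The failing call is start = self.rdm.randint(0, n - self.stream_batcher.batch_size + 1), which requires n >= batch_size. My reading (an assumption, not a confirmed diagnosis) is that the current shard holds fewer rows than one batch, making the upper bound non-positive:

import numpy as np

rdm = np.random.RandomState(0)
n, batch_size = 100, 128                    # shard smaller than one batch
start = rdm.randint(0, n - batch_size + 1)  # ValueError: low >= high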

Please tell me your results using 3 decimal places on WN18RR

Thank you very much for your implementation. I just re-read your extended AAAI 2018 paper and see your updated results on WN18RR: MRR: 0.43 Hits@10: 0.52. You rounded the results to 2 decimal places. Could you please tell me your results using 3 decimal places? Thank you very much.

Question about "Inverse Model" in the paper.

I have understood how to detect inverse relations with the "Inverse Model" in the paper.
But I do not quite get the idea of "k matches" in testing:
"At test time, we check if the test triple has inverse matches outside the test set: if k matches are found, we sample a permutation of the top k ranks for these matches; if no match is found, we select a random rank for the test triple."
Does this mean that, if we have (s, r_i, o), we find all (o, r_j, s) in the training set (r_j \in R), and k is the number of such (o, r_j, s)?
If yes, why should we do so?
It seems this test looks like "relation prediction" instead of "entity prediction".
Maybe I do not really understand it.

About WN18RR

It seems that some entities in the test set do not appear in the training set. So are the roughly 210 test triples that involve them meaningless?

Changing embedding size fails (line numbers in Quirks section are not right)

Sorry, but I could not follow this description:

If you use a different embedding size, the ConvE concatenation size cannot be determined automatically and you have to set it yourself in line 106/107. Also the first dimension of the projection layer will change. You will need to comment out the print function (line 118) to get the needed dimension, and adjust the size of the fully connected layer in line 98

I could not follow this explanation (lines 106/107, 98, and 118 of which files?).
If I want to change the embedding size to, say, 50, in how many places do I have to make changes?

Can not reproduce results in the paper for WN18RR dataset

I have tried the generic command for reproducing the results in the paper on the WN18RR dataset, but it could not reproduce the MRR reported in the paper; I only managed to get 0.42.

Which hyperparameters can reproduce the 0.46 MRR reported in the paper?

Model not running through

I run it in an nvidia-docker container with Anaconda Python 3.6.4 and all the dependencies listed in the readme installed. The script does the preprocessing and the training of the first few epochs correctly; then the output is

########################################
           COMPLETED EPOCH: 435
 train Loss: 0.0012251        99% CI: (0.00098407, 0.0014662), n=68
 ########################################


saving to saved_models/FB15k-237_ConvE_0.2_0.3.model

 --------------------------------------------------
 dev_evaluation
 --------------------------------------------------

and it won't continue. I ran the following command:

CUDA_VISIBLE_DEVICES=0 python main.py \
 model ConvE \
 dataset FB15k-237 \
 input_drop 0.2 \
 hidden_drop 0.3 \
 feat_drop 0.2 \
 lr 0.003 \
 lr_decay 0.995 \
 process True

Getting cuda initialization error

I installed ConvE earlier and it was working, but I recently re-installed it and now I get the following CUDA error every time I run main.py.
[screenshot of the CUDA error]

Can someone please help me resolve this error?
Also, is it possible to run the embedding model without CUDA, just with the PyTorch CPU version?
I tried that too, but it gives me a "Torch not compiled with CUDA enabled" error.
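
For the CPU question: in principle the model runs on CPU if the .cuda() calls are skipped and GPU-saved checkpoints are remapped. A hedged sketch (the path is illustrative):

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
state = torch.load('saved_models/model.pt', map_location=device)
# ...then construct the model, call load_state_dict(state), and keep it on
# `device` instead of calling .cuda().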

Settings in FB15k

Hey!
To get the best results on FB15k, what should the parameters be set to? I used the same settings as for FB15k-237 and can only get Hits@10: 86%.

Indexes for subjects and objects are inconsistent.

I found that, for the same entity, its index when it appears as a subject differs from its index when it appears as an object. I think this is because spodernet does not know that the domains of the keys e1 and e2_multi1 are the same (all entities). This inconsistency leads to a disordered encoding of subjects and objects.

The inconsistent indexes can be observed by adding these two lines to the main function:

def main():
    if Config.process: preprocess(Config.dataset, delete_data=True)
    input_keys = ['e1', 'rel', 'rel_eval', 'e2', 'e2_multi1', 'e2_multi2']
    p = Pipeline(Config.dataset, keys=input_keys)
    p.load_vocabs()
    vocab = p.state['vocab']

    num_entities = vocab['e1'].num_token

    ######### add lines below ##############
    print(vocab['e1'].token2idx)
    print(vocab['e2_multi1'].token2idx)
    ########################################

    train_batcher = StreamBatcher(Config.dataset, 'train', Config.batch_size, randomize=True, keys=input_keys)
    dev_rank_batcher = StreamBatcher(Config.dataset, 'dev_ranking', Config.batch_size, randomize=False, loader_threads=4, keys=input_keys)
    test_rank_batcher = StreamBatcher(Config.dataset, 'test_ranking', Config.batch_size, randomize=False, loader_threads=4, keys=input_keys)

On the kinship dataset I got:

{'OOV': 0, '': 1, 'person100': 2, 'person80': 3, 'person37': 4, 'person72': 5, 'person49': 6, 'person39': 7, 'person12': 8, 'person87': 9, 'person10': 10, 'person48': 11, 'person63': 12, 'person36': 13, 'person45': 14, 'person68': 15, 'person40': 16, 'person82': 17, 'person90': 18, 'person13': 19, 'person17': 20, 'person69': 21, 'person103': 22, 'person0': 23, 'person65': 24, 'person11': 25, 'person9': 26, 'person92': 27, 'person62': 28, 'person102': 29, 'person66': 30, 'person70': 31, 'person73': 32, 'person18': 33, 'person60': 34, 'person26': 35, 'person50': 36, 'person89': 37, 'person38': 38, 'person81': 39, 'person14': 40, 'person21': 41, 'person53': 42, 'person67': 43, 'person28': 44, 'person24': 45, 'person95': 46, 'person51': 47, 'person3': 48, 'person41': 49, 'person99': 50, 'person96': 51, 'person7': 52, 'person54': 53, 'person15': 54, 'person1': 55, 'person29': 56, 'person78': 57, 'person31': 58, 'person83': 59, 'person33': 60, 'person2': 61, 'person58': 62, 'person52': 63, 'person79': 64, 'person27': 65, 'person32': 66, 'person76': 67, 'person85': 68, 'person101': 69, 'person8': 70, 'person35': 71, 'person23': 72, 'person5': 73, 'person64': 74, 'person77': 75, 'person86': 76, 'person93': 77, 'person91': 78, 'person4': 79, 'person42': 80, 'person22': 81, 'person19': 82, 'person47': 83, 'person20': 84, 'person46': 85, 'person25': 86, 'person75': 87, 'person71': 88, 'person56': 89, 'person43': 90, 'person88': 91, 'person6': 92, 'person57': 93, 'person94': 94, 'person61': 95, 'person16': 96, 'person98': 97, 'person55': 98, 'person97': 99, 'person74': 100, 'person84': 101, 'person30': 102, 'person34': 103, 'person59': 104, 'person44': 105}
{'OOV': 0, '': 1, 'person77': 2, 'person82': 3, 'person63': 4, 'person59': 5, 'person56': 6, 'person80': 7, 'person83': 8, 'person85': 9, 'person90': 10, 'person44': 11, 'person100': 12, 'person97': 13, 'person88': 14, 'person93': 15, 'person103': 16, 'person30': 17, 'person18': 18, 'person35': 19, 'person26': 20, 'person89': 21, 'person32': 22, 'person36': 23, 'person87': 24, 'person81': 25, 'person71': 26, 'person72': 27, 'person78': 28, 'person67': 29, 'person69': 30, 'person74': 31, 'person49': 32, 'person45': 33, 'person37': 34, 'person43': 35, 'person28': 36, 'person96': 37, 'person102': 38, 'person4': 39, 'person16': 40, 'person46': 41, 'person50': 42, 'person17': 43, 'person20': 44, 'person39': 45, 'person66': 46, 'person12': 47, 'person8': 48, 'person48': 49, 'person15': 50, 'person5': 51, 'person0': 52, 'person1': 53, 'person7': 54, 'person10': 55, 'person9': 56, 'person19': 57, 'person34': 58, 'person86': 59, 'person29': 60, 'person73': 61, 'person2': 62, 'person21': 63, 'person13': 64, 'person24': 65, 'person3': 66, 'person14': 67, 'person53': 68, 'person23': 69, 'person62': 70, 'person98': 71, 'person25': 72, 'person40': 73, 'person99': 74, 'person92': 75, 'person27': 76, 'person76': 77, 'person79': 78, 'person31': 79, 'person95': 80, 'person75': 81, 'person33': 82, 'person70': 83, 'person38': 84, 'person42': 85, 'person91': 86, 'person68': 87, 'person41': 88, 'person94': 89, 'person64': 90, 'person101': 91, 'person55': 92, 'person84': 93, 'person60': 94, 'person6': 95, 'person11': 96, 'person54': 97, 'person65': 98, 'person22': 99, 'person51': 100, 'person61': 101, 'person58': 102, 'person57': 103, 'person52': 104, 'person47': 105}

Is this intentional or a bug?
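
One way to guarantee consistent indices, independent of spodernet's per-key vocabularies, is to build a single entity index and use it for both subject and object keys. A hedged sketch:

def build_entity_vocab(paths):
    # One shared token2idx for all entities, so 'e1' and 'e2_multi1' agree.
    token2idx = {'OOV': 0, '': 1}
    for path in paths:
        with open(path, encoding='utf-8') as f:
            for line in f:
                e1, rel, e2 = line.strip().split('\t')
                for token in (e1, e2):
                    token2idx.setdefault(token, len(token2idx))
    return token2idx

entity_vocab = build_entity_vocab(['train.txt', 'valid.txt', 'test.txt'])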

Error occurs at 'for i, str2var in enumerate(train_batcher):'

Hi, I tried to use a subset of FB15k-237, but something went wrong at for i, str2var in enumerate(train_batcher): without any error info.

The method I used to create the subset was:

  • Choose some entities.
  • Delete all triples in train.txt, valid.txt and test.txt which contain the chosen entities.

In sum, I deleted 11070/272115 triples from train.txt, 516/17535 from valid.txt and 684/20466 from test.txt. (I also did this on WN18RR, and there was no error.)

Then I ran each model (ConvE, DistMult, ComplEx) on the FB15k-237 subset, but there was no output. Using sys.exit(0) I found that it stayed at for i, str2var in enumerate(train_batcher): for more than 12 hours without going to the next step or printing any error information.

What's wrong with it? Could you help me?

Some differences between the paper and the code

  1. The paper says early stopping is used, but it seems the code just trains for 1000 epochs?
  2. The paper says the L2 norm is enforced for DistMult and ComplEx, but I didn't see where that happens.
  3. I don't think we should see the test results during training, which is cheating. So why does main.py run the test every 3 epochs?
  4. The paper says DistMult and ComplEx use a margin-based loss and ConvE uses a cross-entropy loss, but what I see is that they all use torch.nn.BCELoss for model.loss?

These are quite a few questions, but I am working on a new model for this task, which is hard, so I hope you can give some answers; it would be a great help and I would appreciate it a lot. Thank you!
(Please forgive my poor English.)

ImportError: No module named bashmagic

Hey!

First of all, many thanks for open-sourcing your code. I was trying to run it but ran into some issues with dependencies.

Traceback (most recent call last):
  File "main.py", line 16, in <module>
    from spodernet.preprocessing.pipeline import Pipeline, DatasetStreamer
  File "/home/username/ConvE/src/spodernet/spodernet/preprocessing/pipeline.py", line 9, in <module>
    from spodernet.preprocessing.vocab import Vocab
  File "/home/username/ConvE/src/spodernet/spodernet/preprocessing/vocab.py", line 9, in <module>
    import bashmagic
ImportError: No module named bashmagic

Would appreciate if you can provide some insight to how to solve this issue.

Many thanks!

Config.batchsize inside model definitions

Hi,

Thank you for making your code available to the public. I've been trying to incorporate your model definitions into my existing pipeline (including dataloaders etc.), but failed to figure out how to adapt the code for the ComplEx and ConvE models, which use Config.batch_size (a global? changed by your dataloaders?) in their forward methods.

Can you explain why we need to reshape tensors according to the batch_size for the Complex and ConvE models?

The DistMult implementation does not make use of batch_size, and it integrates effortlessly with my dataloader.

P.s. I would have used your dataloaders -- which seem to be fairly advanced compared to my vanilla ones -- but there is no documentation for the spodernet project.

Exporting the learned embeddings

Hi,

Thanks for open-sourcing the code! I am interested in exporting the embeddings learned by ConvE to use with another task. Is there a straightforward way to export the mapping from entity / relation IDs to the learned embeddings?

Thanks,
Bhuwan Dhingra
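
A hedged sketch of one way to do this: the embedding matrices live in the model's state_dict, so they can be pulled out of a saved checkpoint. The key names 'emb_e.weight' / 'emb_rel.weight' and the file name are assumptions; check your own state_dict keys.

import torch

state = torch.load('saved_models/FB15k-237_ConvE_0.2_0.3.model', map_location='cpu')
entity_matrix = state['emb_e.weight']      # (num_entities, dim)
relation_matrix = state['emb_rel.weight']  # (num_relations, dim)
torch.save({'entities': entity_matrix, 'relations': relation_matrix}, 'embeddings.pt')

The row order follows the vocabulary built during preprocessing, so the entity vocabulary's token2idx mapping is needed to map rows back to entity and relation names.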

Source and original format of kinship.tar.gz

In your version of the kinship dataset, it is formatted like the following:

Term2	Person58	Person72
Term20	Person48	Person49
Term7	Person39	Person94
Term16	Person91	Person4
Term13	Person93	Person95
Term8	Person70	Person12
Term13	Person61	Person77
Term16	Person36	Person81

I wonder what the original values of the Term# relations are, and therefore what the source of the kinship dataset is. Also, I assume this format is Predicate, Subject, Object. Did you rearrange the columns before feeding the data to the model?

Thanks a lot.
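
If the pipeline expects subject-predicate-object rows, reordering the columns is a one-liner per line. A hedged sketch (file names are placeholders):

with open('kinship_raw.txt', encoding='utf-8') as fin, \
     open('train.txt', 'w', encoding='utf-8') as fout:
    for line in fin:
        pred, subj, obj = line.split()  # handles both tab- and space-separated rows
        fout.write(f'{subj}\t{pred}\t{obj}\n')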

Reopening issue #43 on data augmentation with reversed triples

Thanks for the answer in #43 (comment), but I don't quite get the point. As pointed out in [1], adding inverse relations to the training set affects the performance of the model. To cite their paper:

Third, we propose a different formulation of the objective, in which we model separately predicates and their inverse: for each predicate pred, we create an inverse predicate pred^-1 and create a triple (obj; pred^-1; sub) for each training triple (sub; pred; obj). At test time, queries of the form (?; pred; obj) are answered as (obj; pred^-1; ?). Similar formulations were previously used by Shen et al. (2016) and Joulin et al. (2017), but for different models for which there was no clear alternative, so the impact of this reformulation has never been evaluated.

... Learning and predicting with the inverse predicates, however, changes the picture entirely. First, with both CP and ComplEx, we obtain significant gains in performance on all the datasets. More precisely, we obtain state-of-the-art results with CP, matching those of ComplEx.

So does ConvE add inverse relations as [1] did in their paper? Then, according to [1], one can conclude that ConvE has profited from this data augmentation, unless an ablation study shows there is no difference, right? I think this is an important point concerning fair comparison against other existing methods; it can decide the acceptance or rejection of future knowledge graph embedding papers!

[1] Lacroix, Timothee, Nicolas Usunier, and Guillaume Obozinski. “Canonical Tensor Decomposition for Knowledge Base Completion.” In International Conference on Machine Learning, 2863–72, 2018. http://proceedings.mlr.press/v80/lacroix18a.html.

Clarification on Experiment Setup

Two clarification questions:

  1. Do you retrain the embeddings with the valid set triples added for test set prediction? An alternative is to train the embeddings using only the train set triples and obtain the results for both dev and test using the same set of embeddings. Looking at the code I think you're doing the latter; I just want to make sure.

  2. For datasets such as KINSHIP, do you randomly split the triples into train, dev, and test according to a certain ratio, or is there an official data split used by all papers? I'm asking because another KBC paper also released the KINSHIP dataset, and its data split is different from yours.

Thanks!

How many epochs?

Hello, I am a PhD Student at Roma Tre University and I'm working on a comparative analysis among link prediction models.

I have really appreciated your paper "Convolutional 2D Knowledge Graph Embeddings" and I would like to add ConvE to my experiments. I am trying to replicate your results and have started training with the configuration you describe in your README.md:

CUDA_VISIBLE_DEVICES=0 python main.py model ConvE input_drop 0.2 hidden_drop 0.3 \
                                      feat_drop 0.2 lr 0.003 lr_decay 0.995 \
                                      dataset DATASET_NAME

Unfortunately I cannot find any details (either in the readme or in the paper) on the termination condition you used in your training. Did you just stop after a certain number of epochs? If so, how many?
Thanks in advance for your help!

Andrea

Model not learning anything

Hi
I have a movie KG for which I am trying to train the ConvE model. It has 6 relation types, around 38k entities and 100k triples. I trained the ConvE model for 500 epochs with 300-dimensional vectors, keeping the rest of the parameters the same, but the MRR and Hits@10 values never go beyond 0.2. Each epoch takes less than a minute to finish. Also, the values stay the same from 20 epochs of training through 500 epochs. The training loss is 0.0003 and hardly changes.
Any idea how many epochs the model should need to learn the embeddings?

Following is a snapshot of the output I get:

2019-01-14 23:43:38.559798 (INFO): ########################################
2019-01-14 23:43:38.559814 (INFO): COMPLETED EPOCH: 500
2019-01-14 23:43:38.559827 (INFO): train Loss: 0.00030636 99% CI: (0.0003061, 0.00030663), n=61
2019-01-14 23:43:38.559837 (INFO): ########################################
2019-01-14 23:43:38.559848 (INFO):

saving to saved_models/D1_ConvE_128_500.model
2019-01-14 23:43:38.705671 (INFO):
2019-01-14 23:43:38.705735 (INFO): --------------------------------------------------
2019-01-14 23:43:38.705770 (INFO): dev_evaluation
2019-01-14 23:43:38.706107 (INFO): --------------------------------------------------
2019-01-14 23:43:38.706128 (INFO):
2019-01-14 23:45:19.866323 (INFO): Hits left @1: 0.3492588141025641
2019-01-14 23:45:19.866904 (INFO): Hits right @1: 0.05999599358974359
2019-01-14 23:45:19.868125 (INFO): Hits @1: 0.20462740384615385
2019-01-14 23:45:19.868674 (INFO): Hits left @2: 0.34935897435897434
2019-01-14 23:45:19.869216 (INFO): Hits right @2: 0.060196314102564104
2019-01-14 23:45:19.870324 (INFO): Hits @2: 0.20477764423076922
2019-01-14 23:45:19.870869 (INFO): Hits left @3: 0.34935897435897434
2019-01-14 23:45:19.871412 (INFO): Hits right @3: 0.060196314102564104
2019-01-14 23:45:19.872464 (INFO): Hits @3: 0.20477764423076922
2019-01-14 23:45:19.873017 (INFO): Hits left @4: 0.34935897435897434
2019-01-14 23:45:19.873587 (INFO): Hits right @4: 0.060196314102564104
2019-01-14 23:45:19.874643 (INFO): Hits @4: 0.20477764423076922
2019-01-14 23:45:19.875188 (INFO): Hits left @5: 0.34935897435897434
2019-01-14 23:45:19.875729 (INFO): Hits right @5: 0.060196314102564104
2019-01-14 23:45:19.876787 (INFO): Hits @5: 0.20477764423076922
2019-01-14 23:45:19.877350 (INFO): Hits left @6: 0.34935897435897434
2019-01-14 23:45:19.877890 (INFO): Hits right @6: 0.06029647435897436
2019-01-14 23:45:19.878956 (INFO): Hits @6: 0.20482772435897437
2019-01-14 23:45:19.879499 (INFO): Hits left @7: 0.34935897435897434
2019-01-14 23:45:19.880037 (INFO): Hits right @7: 0.060396634615384616
2019-01-14 23:45:19.881873 (INFO): Hits @7: 0.2048778044871795
2019-01-14 23:45:19.882422 (INFO): Hits left @8: 0.34935897435897434
2019-01-14 23:45:19.882968 (INFO): Hits right @8: 0.060396634615384616
2019-01-14 23:45:19.884053 (INFO): Hits @8: 0.2048778044871795
2019-01-14 23:45:19.884598 (INFO): Hits left @9: 0.34935897435897434
2019-01-14 23:45:19.885140 (INFO): Hits right @9: 0.060396634615384616
2019-01-14 23:45:19.886244 (INFO): Hits @9: 0.2048778044871795
2019-01-14 23:45:19.886789 (INFO): Hits left @10: 0.34935897435897434
2019-01-14 23:45:19.887327 (INFO): Hits right @10: 0.060396634615384616
2019-01-14 23:45:19.888419 (INFO): Hits @10: 0.2048778044871795
2019-01-14 23:45:19.889793 (INFO): Mean rank left: 12504.717748397436
2019-01-14 23:45:19.891131 (INFO): Mean rank right: 16421.001302083332
2019-01-14 23:45:19.893638 (INFO): Mean rank: 14462.859525240385
2019-01-14 23:45:19.894989 (INFO): Mean reciprocal rank left: 0.3493859660704812
2019-01-14 23:45:19.896470 (INFO): Mean reciprocal rank right: 0.06042333463160364
2019-01-14 23:45:19.899066 (INFO): Mean reciprocal rank: 0.20490465035104244

Incorrect masking of reverse relations in evaluation procedure

This is to document a bug brought to me by Victoria Lin from Salesforce Research. She noted the following:

The problem is caused by the design of the dictionary keys. For both directions, the relation part of the key is the same. This causes some false positives to be mixed into the ground truth sets.
Consider a relation of the construct:
(A, father of, B)*
(B, father of, C)*
The statement d_egraph[(e2, rel)].add(e1) added A as a correct answer for (B, father of, ?). As a result, A does not trigger a rank penalty in evaluation while it should. A model that predicts an entity ranking list [A, C, ...] receives a measure of rank 1 (while the correct measure should be 2).

* example altered for clarity.

In other words, for the test triples:

(Mike,  father of, John)
(John, father of, Tom)

We would have at test time for the masks of existing triples (as computed in wrangle_KG.py):

(John, fatherOf, ?) -> mask = {Mike, Tom}
(?, fatherOf, John) -> mask = {Mike, Tom}

while the correct masks should be:

(John, fatherOf, ?) -> mask = {Tom}
(?, fatherOf, John) -> mask = {Mike}
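
A hedged sketch of direction-aware filtering that produces the correct masks above: key each ground-truth set on (entity, relation, direction) instead of (entity, relation) alone.

from collections import defaultdict

masks = defaultdict(set)
triples = [('Mike', 'fatherOf', 'John'), ('John', 'fatherOf', 'Tom')]
for e1, rel, e2 in triples:
    masks[(e1, rel, 'tail')].add(e2)  # answers for (e1, rel, ?)
    masks[(e2, rel, 'head')].add(e1)  # answers for (?, rel, e2)

assert masks[('John', 'fatherOf', 'tail')] == {'Tom'}
assert masks[('John', 'fatherOf', 'head')] == {'Mike'}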

Fixing the issue was not simple since ConvE is, unlike other link predictors, directional due to 1-N scoring. If we want to score (E, rel, e2) in ConvE, where E are all entities, then we can only do this by computing (e2, rel, E). One could simply ignore the directionality of the model and provide different masks for correctness, but this decreases the scores for ConvE, since it would predict the same values for (e1, rel, E) and (e2, rel, E) although the labels are different.

The solution that I opted for was to introduce a "reverse relation" to indicate the direction of evaluation. If ConvE is evaluated from right to left, that is, (E, rel, e2) then we would compute the ConvE score with (e2, rel_reverse, E); for evaluations from left to right, the scoring remains the same (e1, rel, E).

This bugfix was implemented in d830ddf.

New Results

Currently, I do not have the compute resources to run a grid search for new values, but I found the following differences in scores. Here + means an increase in score (good for Hits and MRR) and - means a decrease in score (good for MR).

Better

  • UMLS
    • MR: -1, MRR: +0.13, Hits@10: +0.01, Hits@3: +0.06, Hits@1: +0.20
  • WN18RR:
    • MR: -1090, MRR: 0.0, Hits@10: +0.03, Hits@3: +0.01, Hits@1: -0.010
  • FB15k-237:
    • MR: -2, MRR: +0.009, Hits@10: +0.010, Hits@3: +0.006, Hits@1: -0.002

Almost no change

  • Kinship
    • MR: 0, MRR: -0.01, Hits@10: +0.01, Hits@3: 0.00, Hits@1: -0.02
  • WN18:
    • MR: -530, MRR: +0.001, Hits@10: +0.001, Hits@3: -0.001, Hits@1: 0.000

Worse

  • FB15k?
  • YAGO3-10?

There seems to be something wrong with the FB15k scores, and I have to investigate what exactly it is. I am currently still computing the YAGO3-10 scores.

I will update the paper once I have all the scores.

Data processing returns error

I used PyTorch v0.4 and Python 3.6.

There are two errors I found during data processing.

  1. running (python main.py model ConvE dataset FB15k-237 process True)
    -->
    Traceback (most recent call last):
    File "main.py", line 181, in <module>
    main()
    File "main.py", line 153, in main
    for i, str2var in enumerate(train_batcher):
    File "/home/jisuk1/virtual_env_dir/torch/src/spodernet/spodernet/preprocessing/batching.py", line 369, in __next__
    self.publish_end_of_iter_event()
    File "/home/jisuk1/virtual_env_dir/torch/src/spodernet/spodernet/preprocessing/batching.py", line 302, in publish_end_of_iter_event
    obs.at_end_of_iter_event(self.state)
    File "/home/jisuk1/virtual_env_dir/torch/src/spodernet/spodernet/hooks.py", line 59, in at_end_of_iter_event
    delta = metric - self.mean
    TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'
    Exception ignored in: <bound method Event.__del__ of <torch.cuda.Event 0x3b55aa0>>
    Traceback (most recent call last):
    File "/share/apps/lib/python3.5/site-packages/torch/cuda/streams.py", line 164, in __del__
    File "/share/apps/lib/python3.5/site-packages/torch/cuda/__init__.py", line 188, in check_error
    AttributeError: 'NoneType' object has no attribute 'SUCCESS'

  2. Preprocessing unicode error --> this can be fixed by individuals, but it would be good to have it fixed upstream.

Processing dataset YAGO3-10
Traceback (most recent call last):
File "wrangle_KG.py", line 36, in
data = f.readlines() + data
File "/home/jisuk1/anaconda3/envs/pytorch0.4/lib/python3.6/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 188: ordinal not in range(128)

If the dataset size is not an integer multiple of the batch size, the last incomplete batch is dropped

In spodernet/spodernet/preprocessing/batching.py (https://github.com/TimDettmers/spodernet/blob/c72b3047fec15920beb5a396993f10061eec6366/spodernet/preprocessing/batching.py), the number of batches is int(np.sum(config['counts']) / batch_size), which drops the last incomplete batch if the dataset size is not divisible by the batch size. Thus, in the training and test process, some data is ignored. In this case, I think keeping the last, smaller batch is more reasonable.
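
A hedged sketch of the suggested change: a ceiling division keeps the final, smaller batch.

import math

total_count, batch_size = 684729, 128
num_batches_floor = int(total_count / batch_size)       # current behaviour: tail dropped
num_batches_ceil = math.ceil(total_count / batch_size)  # keeps a final partial batch
print(num_batches_floor, num_batches_ceil)              # 5349 5350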

Question about "Number of Interactions for 1D vs 2D Convolutions"

Hi,

Thanks for your remarkable paper! I am sorry to say that I could not quite follow "Number of Interactions for 1D vs 2D Convolutions" in your paper. Specifically, why is the number of interactions proportional to k for 1D convolutions, and to n and k for 2D convolutions? And what is it for 3D convolutions? Sorry for my poor English.

Thanks,
Benben
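
A back-of-the-envelope count may help; this is my reading of the paper's argument, not its exact derivation, and the padding assumption is mine. With a 1D convolution over the concatenation [a; b], a width-k filter can mix entries of a and b only where it straddles the single concatenation point, so the number of such positions is proportional to k. With a 2D convolution over stacked 2D reshapings of a and b, the boundary between them is a full row, and a k x k filter straddles it along the whole width, so the count is proportional to n and k:

k = 3              # filter size
rows, cols = 4, 5  # each embedding reshaped to 4 x 5 (n = 20)

# 1D: [a ; b] has one boundary; with zero padding, a width-k filter
# straddles it in k - 1 positions, independent of n.
interactions_1d = k - 1

# 2D: stacking the two 4 x 5 maps gives an 8 x 5 map with a full-width
# boundary row; a k x k filter straddles it in (k - 1) * (cols + k - 1)
# positions, i.e. proportional to the embedding width.
interactions_2d = (k - 1) * (cols + k - 1)
print(interactions_1d, interactions_2d)  # 2 14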

AttributeError: 'Logger' object has no attribute 'f'

Traceback (most recent call last):
File "main.py", line 13, in <module>
from evaluation import ranking_and_hits
File "D:\Codes\ConvE\evaluation.py", line 5, in <module>
from spodernet.utils.global_config import Config
File "d:\codes\conve\src\spodernet\spodernet\utils\global_config.py", line 4, in <module>
log = Logger('global_config.py.txt')
File "d:\codes\conve\src\spodernet\spodernet\utils\logger.py", line 60, in __init__
path = join(get_logger_path(), file_name)
File "d:\codes\conve\src\spodernet\spodernet\utils\logger.py", line 18, in get_logger_path
return join(get_home_path(), '.data', 'log_files')
File "d:\codes\conve\src\spodernet\spodernet\utils\logger.py", line 15, in get_home_path
return os.environ['HOME']
File "C:\Users\shuaizhang\AppData\Local\Programs\Python\Python36\lib\os.py", line 669, in __getitem__
raise KeyError(key) from None
KeyError: 'HOME'
Exception ignored in: <bound method Logger.__del__ of <spodernet.utils.logger.Logger object at 0x0000029BBC69C080>>
Traceback (most recent call last):
File "d:\codes\conve\src\spodernet\spodernet\utils\logger.py", line 71, in __del__
self.f.close()
AttributeError: 'Logger' object has no attribute 'f'

Reusing saved models

If my training is interrupted at some point, how can I use the model in the 'saved_models' directory to continue with the next epoch?
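
A hedged sketch of resuming, assuming the checkpoint is a saved state_dict (the path pattern comes from the logs earlier on this page):

import torch

def resume(model: torch.nn.Module,
           path: str = 'saved_models/FB15k-237_ConvE_0.2_0.3.model') -> torch.nn.Module:
    # 'model' must be a freshly constructed ConvE instance with the same
    # hyperparameters as the saved run.
    model.load_state_dict(torch.load(path, map_location='cpu'))
    model.train()  # then continue the epoch loop from here
    return model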
