chainsofreasoning's People

Contributors: rajarshd
chainsofreasoning's Issues

Some questions about "/vacab/domain-Label.gz"

{"domain": {"1": 0, "-1": 1}, "name": "label"}

Why "1" is 0 and "-1"is 1? If use this label, the positive examples in the test set and development set are approximately ten times more than the negative ones... this seems to be the opposite, right?
By the way, whether the pytorch re-implmentation has been finished? I've been trying to reproduce it using Pyotorch recently, and if you already release one, it's great. : )
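
Not from the repository, but here is a minimal sketch of how one could check the mapping and the label distribution. It assumes the quoted JSON is the full content of domain-Label.gz and that the int-encoded data files carry the label id as the last whitespace-separated field of each line; both of these, and the paths, are assumptions for illustration only.

```python
import gzip
import json
from collections import Counter

# Load the label vocabulary quoted above:
# {"domain": {"1": 0, "-1": 1}, "name": "label"}
with gzip.open("vocab/domain-Label.gz", "rt") as f:  # illustrative path
    vocab = json.load(f)

id_to_label = {v: k for k, v in vocab["domain"].items()}  # {0: "1", 1: "-1"}

# Count label ids in one split, ASSUMING the label id is the last
# whitespace-separated field of each line (not confirmed by the repo).
counts = Counter()
with open("output/test/test.txt.0.int") as f:  # illustrative path
    for line in f:
        counts[int(line.split()[-1])] += 1

for label_id, n in counts.most_common():
    print(id_to_label.get(label_id, "?"), n)
```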

Evaluation

Hi, thank you for your code. I want to know whether you performed link prediction for the evaluation.
For each test triple (h, r, t) in the WebClue test dataset, do you predict (h, r, ?) by substituting every entity in place of ?, i.e.:
predict(h, r, e1)
predict(h, r, e2)
...
predict(h, r, en)
Then, compute Average Precision for (h, r, ?)

Is it correct?
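
To make the question concrete, here is a minimal sketch of the average precision computation described above. The entity names and scores are made up for illustration; nothing here is taken from the repository's evaluation code.

```python
def average_precision(scores, positives):
    """scores: {entity: model score}; positives: set of correct tail entities."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    hits, ap = 0, 0.0
    for rank, entity in enumerate(ranked, start=1):
        if entity in positives:
            hits += 1
            ap += hits / rank          # precision at this rank
    return ap / max(len(positives), 1)

# Toy example: three candidate tails for a single (h, r, ?) query.
scores = {"e1": 0.9, "e2": 0.4, "e3": 0.7}
print(average_precision(scores, positives={"e1", "e3"}))  # (1/1 + 2/2) / 2 = 1.0
```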

call for help

When I ran the code, I could not find the file MyBCECriterion anywhere in the ChainsofReasoning repository. Can you help me get MyBCECriterion.lua? I would really appreciate your response.

Test issues

Hello, I have studied your code for a long time, but I just noticed that you don't seem to have pushed your test code here.
I have already run modules 0-20; could you please tell me how to reproduce the results reported in your paper?
Also, the link in this repository points to the 5 Jul 2016 (v1) version of the paper. Is that intentional, or did you just forget to update it to the v2 version?

This is really important for my group, thanks for your answer.

Train Result Error

When I run this program on Ubuntu 18.04, with the global variables set so the GPU is not used, I get this output:
Iter: 2
avg loss in epoch = nan
total elapsed = 0.885171
time per batch = inf
examples/sec = 0.000000

I don't know why.

example failed -- where or how to generate the file train.list

mldl@mldlUB1604:/ub16_prj/ChainsofReasoning/run_scripts$ bash train.sh ./config.sh
experiment_dir ../
experiment_file ..//0.txt
output_dir results/lse
data_dir ../examples/data_small_output/_architecture_structure_address/
gpu_id -1
numEpoch 20
numEntityTypes 7
includeEntityTypes 1
includeEntity 0
numFeatureTemplates 10
relationEmbeddingDim 250
entityTypeEmbeddingDim 100
entityEmbeddingDim 50
rnnHidSize 250
topK 2
K 5
Learning Rate 1e-3
Learning Rate Decay 0.0167
rnnType rnn
epsilon 1e-8
gradClipNorm 5
gradientStepCounter 100000
saveFrequency 1
batchSize 32
useGradClip 1
package_path
useAdam 1
paramInit 0.1
evaluationFrequency 5
createExptDir 1
useReLU 1
l2 1e-3
rnnInitialization 1
regularize 0
numLayers 1
useDropout 0
relationVocabSize 51390
entityVocabSize 1542690
entityTypeVocabSize 2218
dropout 0.3
Executing:
th ..//model/OneModel.lua -dataDir ../examples/data_small_output/_architecture_structure_address/ -tokenFeatures 0 -minibatch 32 -gpuid -1 -learningRate 1e-3 -l2 1e-3 -numEpochs 20 -useAdam 1 -saveFrequency 1 -evaluationFrequency 5 -model results/lse/2017-07-05-23-01-50/_architecture_structure_address/model -rnnType rnn -exptDir results/lse/2017-07-05-23-01-50/_architecture_structure_address -relationVocabSize 51390 -entityTypeVocabSize 2218 -relationEmbeddingDim 250 -entityTypeEmbeddingDim 100 -numFeatureTemplates 10 -numEntityTypes 7 -includeEntityTypes 1 -includeEntity 0 -entityVocabSize 1542690 -entityEmbeddingDim 50 -rnnHidSize 250 -topK 2 -epsilon 1e-8 -gradClipNorm 5 -gradientStepCounter 100000 -useGradClip 1 -paramInit 0.1 -createExptDir 1 -useReLU 1 -rnnInitialization 1 -learningRateDecay 0.0167 -regularize 0 -numLayers 1 -useDropout 0 -dropout 0.3 -K 5
Log file is results/lse/2017-07-05-23-01-50/_architecture_structure_address/log.txt
nn.Sequential {
  [input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> output]
  (1): nn.SplitTable
  (2): nn.Sequential {
    [input -> (1) -> (2) -> output]
    (1): nn.ConcatTableNoGrad {
      input
        |`-> (1): nn.Sequential {
        |      [input -> (1) -> (2) -> (3) -> output]
        |      (1): nn.NarrowTable
        |      (2): nn.ParallelTable {
        |        input
        |          |`-> (1): nn.LookupTable
        |          |`-> (2): nn.LookupTable
        |          |`-> (3): nn.LookupTable
        |          |`-> (4): nn.LookupTable
        |          |`-> (5): nn.LookupTable
        |          |`-> (6): nn.LookupTable
        |           `-> (7): nn.LookupTable
        |           ... -> output
        |      }
        |      (3): nn.CAddTable
        |    }
         `-> (2): nn.Sequential {
               [input -> (1) -> (2) -> output]
               (1): nn.SelectTable(-1)
               (2): nn.LookupTable
             }
         ... -> output
    }
    (2): nn.JoinTable
  }
  (3): nn.SplitTable
  (4): nn.Sequencer @ nn.Recurrence @ nn.MaskZero @ nn.Sequential {
    [input -> (1) -> (2) -> (3) -> output]
    (1): nn.ParallelTable {
      input
        |`-> (1): nn.Linear(350 -> 250)
         `-> (2): nn.Linear(250 -> 250)
         ... -> output
    }
    (2): nn.CAddTable
    (3): nn.ReLU
  }
  (5): nn.SelectTable(-1)
  (6): nn.Linear(250 -> 46)
}
Reducer is LogSumExp
reading file list from ../examples/data_small_output/_architecture_structure_address/train.list
/home/mldl/torch/install/bin/luajit: ../model/batcher/BatcherFileList.lua:20: bad argument #1 to 'lines' (../examples/data_small_output/_architecture_structure_address/train.list: No such file or directory)
stack traceback:
[C]: in function 'lines'
../model/batcher/BatcherFileList.lua:20: in function '__init'
/home/mldl/torch/install/share/lua/5.1/torch/init.lua:91: in function </home/mldl/torch/install/share/lua/5.1/torch/init.lua:87>
[C]: in function 'BatcherFileList'
..//model/OneModel.lua:318: in main chunk
[C]: in function 'dofile'
...mldl/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50
mldl@mldlUB1604:/ub16_prj/ChainsofReasoning/run_scripts$
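
Not an official answer, but here is a minimal sketch of how a train.list could be produced, assuming (as the BatcherFileList.lua call above suggests) that it is simply a newline-separated list of paths to the preprocessed training batch files in the data directory. The directory layout and the "train/*" pattern below are assumptions, not taken from the repository.

```python
import glob
import os

data_dir = "../examples/data_small_output/_architecture_structure_address/"

# Collect the preprocessed training files; the "train/*" pattern is an
# assumption about how the data-formatting scripts name their output.
train_files = sorted(glob.glob(os.path.join(data_dir, "train", "*")))

# Write one absolute path per line, which is what a file-list reader
# based on io.lines() would consume.
with open(os.path.join(data_dir, "train.list"), "w") as f:
    for path in train_files:
        f.write(os.path.abspath(path) + "\n")

print("wrote {} paths to train.list".format(len(train_files)))
```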

train-trainBatcher:getBatch()

Hello, while studying your code I found a problem in training. Can you suggest a solution?
In MyOptimizer.lua, I find that the code does not compute the error or update the model when I train on the CPU.
In the function MyOptimizer:train(trainBatcher), the batcher is called as local minibatch_targets, minibatch_inputs, num, classId = trainBatcher:getBatch().
But when I study the getBatch function in BatcherFileList.lua, I find that it only handles the CUDA case and does nothing when using the CPU. Therefore it returns nil, and the while loop breaks before the parameters are ever updated.
So I'm really confused about how to train the model on the CPU. I'm not sure whether I have misunderstood something; could you clarify?

run examples failed and find some bug in "BatcherFileList:getBatch"

Hi, when I run the example data with the demo config, it fails. There seem to be some bugs around "MyOptimizer.lua" line 181.
By the way, I am not using a GPU, and "BatcherFileList:getBatch()" always returns nil. I fixed that bug and reran the example data with the demo config, but it still failed:

/home/momo/torch/install/share/lua/5.1/nn/Select.lua:11: attempt to compare nil with number
stack traceback:
/home/momo/torch/install/share/lua/5.1/nn/Select.lua:11: in function </home/momo/torch/install/share/lua/5.1/nn/Select.lua:9>
[C]: in function 'xpcall'
/home/momo/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/momo/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
../model/optimizer/MyOptimizer.lua:183: in function 'opfunc'
/home/momo/torch/install/share/lua/5.1/optim/adam.lua:37: in function 'optimMethod'
../model/optimizer/MyOptimizer.lua:212: in function 'trainBatch'
../model/optimizer/MyOptimizer.lua:133: in function <../model/optimizer/MyOptimizer.lua:90>
..//model/OneModel.lua:412: in main chunk
[C]: in function 'dofile'
...momo/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
[C]: in function 'error'
/home/momo/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
/home/momo/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
../model/optimizer/MyOptimizer.lua:183: in function 'opfunc'
/home/momo/torch/install/share/lua/5.1/optim/adam.lua:37: in function 'optimMethod'
../model/optimizer/MyOptimizer.lua:212: in function 'trainBatch'
../model/optimizer/MyOptimizer.lua:133: in function <../model/optimizer/MyOptimizer.lua:90>
..//model/OneModel.lua:412: in main chunk
[C]: in function 'dofile'
...momo/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

Total num batches 0

Whoops! Just noticed that it is actually not training but only iterating:

/bin/bash train.sh ./config.sh
...
Making a pass of the data to count the batches
Total num batches 0

Iter: 1
avg loss in epoch = nan
total elapsed = 0.000831
time per batch = inf
examples/sec = 0.000000
saving to results/lse/2017-02-10-16-05-43/_architecture_structure_address/model-1
...
Iter: 20
avg loss in epoch = nan
total elapsed = 0.379223
time per batch = inf
examples/sec = 0.000000

Could you check whether it is running like this on your side? Maybe I broke something on the way already :)

TextKBQA also failed on example...

I'm posting this issue on this project since I cannot do that on the TextKBQA repository.

Num questions 1
No pretrained entity & word embeddings available. Learning entity embeddings from scratch
Traceback (most recent call last):
  File "/Users/yike.ke/yike_prj/TextKBQA/code/train.py", line 353, in <module>
    t = Trainer()
  File "/Users/yike.ke/yike_prj/TextKBQA/code/train.py", line 45, in __init__
    separate_key_lstm=separate_key_lstm)
  File "/Users/yike.ke/yike_prj/TextKBQA/code/KBQA.py", line 291, in __init__
    super(TextKBQA, self).__init__(**kwargs)
  File "/Users/yike.ke/yike_prj/TextKBQA/code/KBQA.py", line 67, in __init__
    self.entity_lookup_table_extended = tf.concat(0, [self.entity_lookup_table, self.entity_dummy_mem])
  File "/Library/Python/2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1029, in concat
    dtype=dtypes.int32).get_shape(
  File "/Library/Python/2.7/site-packages/tensorflow/python/framework/ops.py", line 639, in convert_to_tensor
    as_ref=False)
  File "/Library/Python/2.7/site-packages/tensorflow/python/framework/ops.py", line 704, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/Library/Python/2.7/site-packages/tensorflow/python/framework/constant_op.py", line 113, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/Library/Python/2.7/site-packages/tensorflow/python/framework/constant_op.py", line 102, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/Library/Python/2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 370, in make_tensor_proto
    _AssertCompatible(values, dtype)
  File "/Library/Python/2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 302, in _AssertCompatible
    (dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got <tf.Variable 'entity_lookup_table:0' shape=(1817564, 50) dtype=float32_ref> of type 'Variable' instead.
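
For reference, this looks like the TensorFlow 1.x API change in which tf.concat switched from concat(axis, values) to concat(values, axis). A hedged sketch of the likely adjustment (not an official patch from the TextKBQA authors) is below; the tensors stand in for the lookup table and dummy memory variables.

```python
import tensorflow as tf

a = tf.zeros([3, 50])   # stands in for entity_lookup_table
b = tf.zeros([1, 50])   # stands in for entity_dummy_mem

# TensorFlow < 1.0 signature used in KBQA.py: tf.concat(axis, values)
# extended = tf.concat(0, [a, b])   # on TF 1.x this raises a TypeError like the one above

# TensorFlow >= 1.0 signature: tf.concat(values, axis)
extended = tf.concat([a, b], 0)     # shape (4, 50)
```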

Could not get the reported MAP score.

Hi Rajarshd, thanks for sharing the eval code.
I have tried running the latest code, but the result looks a little odd. I wrote a script that calculates MAP from the test.scores files generated with the latest model (e.g. test.scores.model-20) for the 46 relations, and I only got MAP = 0.39 on the test set. Is there any extra work needed to get the reported result, or anything in particular I should pay attention to?
Additionally, there are many different variants of the proposed model in your paper, but which one corresponds to the default config in the code?
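
For comparison only, here is a minimal sketch of one way MAP could be computed from per-relation score files. The test.scores format assumed here (one "query_id label score" triple per line) and the glob pattern are guesses for illustration and may not match what the eval code actually writes.

```python
import glob
from collections import defaultdict

def average_precision(ranked_labels):
    # ranked_labels: 1/0 labels sorted by descending model score
    hits, ap = 0, 0.0
    for rank, label in enumerate(ranked_labels, start=1):
        if label == 1:
            hits += 1
            ap += hits / rank
    return ap / max(hits, 1)

relation_aps = []
for path in glob.glob("results/lse/*/*/test.scores.model-20"):  # illustrative glob
    per_query = defaultdict(list)
    with open(path) as f:
        for line in f:
            qid, label, score = line.split()          # ASSUMED file format
            per_query[qid].append((float(score), int(label)))
    aps = [average_precision([lab for _, lab in sorted(v, reverse=True)])
           for v in per_query.values()]
    if aps:
        relation_aps.append(sum(aps) / len(aps))      # AP averaged over queries

print("MAP over %d relations: %.4f"
      % (len(relation_aps), sum(relation_aps) / max(len(relation_aps), 1)))
```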

error loading module 'Util' from file

Hello, when I run movie_data_format.sh I hit a problem loading the module 'Util' from the folder, and I don't know how it happened. I have searched Google for answers but found nothing useful. I am using Python 2.7 and Lua 5.1; here is the error:
Missed entity pair count 0
train
converting train to torch files
output/train/train.txt.10.int
/home/zju/lkw/lkwenv1/torch/install/bin/lua: ...w/lkwenv1/torch/install/share/lua/5.1/trepl/init.lua:389: error loading module 'Util' from file './Util.lua':
./Util.lua:2: unexpected symbol near '#'
stack traceback:
[C]: in function 'error'
...w/lkwenv1/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
int2torch.lua:4: in main chunk
[C]: in function 'dofile'
.../torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: ?
int2torch failed!
Failed for relation
output/train/train.txt.11.int through output/train/train.txt.15.int
(the same "unexpected symbol near '#'" traceback, followed by "int2torch failed!" and "Failed for relation", repeats for each of these files)
......
Hope to hear from you! Have a good day.
