
cfo's Introduction

CFO

Code repository for CFO: Conditional Focused Neural Question Answering with Large-scale Knowledge Bases

Installation and Preprocessing

  1. Refer to Virtuoso.md to install and configure the software
  2. Make sure Torch7 is installed, together with the following dependencies
    • logroll: luarocks install logroll
    • nngraph: luarocks install nngraph
  3. After installing and configuring Virtuoso, run bash data_preprocess.sh to finish preprocessing
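The prerequisites above can be sanity-checked with a small shell sketch; it only assumes that `th` and `luarocks` are expected on your PATH:

```shell
# Report whether each required command is available on PATH.
have() { command -v "$1" >/dev/null 2>&1 && echo "found: $1" || echo "missing: $1"; }

have th          # torch7 command-line interpreter
have luarocks    # package manager used to install logroll and nngraph
```

If either line reports `missing`, finish the torch7 installation before running `data_preprocess.sh`.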

Training

  1. Focused Labeling

    cd FocusedLabeling
    th train_crf.lua
    
  2. Entity Type Vector

    cd EntityTypeVec
    th train_ent_typevec.lua
    
  3. RNN based Relation Network

    cd RelationRNN
    th train_rel_rnn.lua
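The three stages above are separate scripts; a hypothetical wrapper that prints them in order (drop the `echo` to actually run them from the repository root — this is a sketch, not part of the repo):

```shell
# Dry-run of the three training stages; each entry is "directory script".
for stage in "FocusedLabeling train_crf.lua" \
             "EntityTypeVec train_ent_typevec.lua" \
             "RelationRNN train_rel_rnn.lua"; do
    set -- $stage                       # split into $1 (dir) and $2 (script)
    echo "cd $1 && th $2 && cd .."
done
```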
    

Inference

In the following, define SPLIT='valid' or 'test'.
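The ${SPLIT} variable threads through every file name produced by the steps below; a minimal sketch of the naming convention (the comments map each name to its step):

```shell
SPLIT='valid'                               # or SPLIT='test'

# Intermediate files in steps 1-2 are all derived from ${SPLIT}:
echo "label.${SPLIT}.txt"                   # step 1: labeling input (text)
echo "label.${SPLIT}.t7"                    # step 1: same data, torch format
echo "label.result.${SPLIT}"                # step 1: CRF output
echo "QAData.label.${SPLIT}.cpickle"        # step 2: candidate query result
```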

  1. Run focused labeling on validation/test data

    cd FocusedLabeling
    
    python generate_inference_data.py --split ${SPLIT}
    
    th process_inference.lua -testSplit ${SPLIT}
    th infer_crf.lua \
        -testData inference-data/label.${SPLIT}.t7 \
        -modelFile "path-to-pretrained-model"
    
    • python generate_inference_data.py --split ${SPLIT} will create the file label.${SPLIT}.txt in the folder FocusedLabeling/inference-data;
    • th process_inference.lua will turn the text file label.${SPLIT}.txt into label.${SPLIT}.t7 in torch format (both in the folder FocusedLabeling/inference-data);
    • th infer_crf.lua ... will generate the file label.result.${SPLIT} in the folder FocusedLabeling.
  2. Query candidates based on focused labeling

    cd Inference
    mkdir ${SPLIT} && cd ${SPLIT}
    python ../query_candidates.py 6 \
           ../../PreprocessData/QAData.${SPLIT}.pkl \
           ../../FocusedLabeling/label.result.${SPLIT} \
           ../../KnowledgeBase/type.top-500.pkl
    

    This step will generate the file QAData.label.${SPLIT}.cpickle in the folder Inference/${SPLIT}.

  3. Generate score data based on the query results

    cd Inference/${SPLIT}
    python ../generate_score_data.py QAData.label.${SPLIT}.cpickle
    

    This step will generate the following files in the same folder Inference/${SPLIT}:

    • rel.single.${SPLIT}.txt (candidate relations for those with only a single candidate subject)
    • rel.multi.${SPLIT}.txt (candidate relations for those with multiple candidate subjects)
    • type.multi.${SPLIT}.txt (candidate entities for those with multiple candidate subjects)
    • single.${SPLIT}.cpickle
    • multi.${SPLIT}.cpickle
  4. Run relation inference

    cd RelationRNN
    mkdir inference-data
    th process_inference.lua -testSplit ${SPLIT}
    th infer_rel_rnn.lua -testData inference-data/rel.single.${SPLIT}.t7
    th infer_rel_rnn.lua -testData inference-data/rel.multi.${SPLIT}.t7
    

    This step will generate the files score.rel.single.${SPLIT} and score.rel.multi.${SPLIT} in the folder RelationRNN.

  5. Run entity inference

    cd EntityTypeVec
    mkdir inference-data
    th process_inference.lua -testSplit ${SPLIT}
    th infer_ent_typevec.lua -testData inference-data/ent.${SPLIT}.t7
    

    This step will generate the file score.ent.multi.multi.${SPLIT} in the folder EntityTypeVec.

  6. Run joint disambiguation

    cd Inference/${SPLIT}
    python ../joint_disambiguation.py multi.${SPLIT}.cpickle \
           ../../RelationRNN/score.rel.multi.${SPLIT} \
           ../../EntityTypeVec/score.ent.multi.multi.${SPLIT}
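Putting steps 1-6 together, a hypothetical dry-run driver; run() only prints each command, so drop the `echo` to execute them from the repository root (paths and arguments follow the steps above):

```shell
#!/bin/sh
# Dry-run of the full inference pipeline (steps 1-6).
set -e
SPLIT=${1:-valid}                   # 'valid' or 'test'
run() { echo "+ $*"; }              # print instead of execute

# 1. focused labeling
run "cd FocusedLabeling && python generate_inference_data.py --split ${SPLIT}"
run "th process_inference.lua -testSplit ${SPLIT}"
run "th infer_crf.lua -testData inference-data/label.${SPLIT}.t7 -modelFile path-to-pretrained-model"
# 2. query candidates
run "cd Inference && mkdir -p ${SPLIT} && cd ${SPLIT}"
run "python ../query_candidates.py 6 ../../PreprocessData/QAData.${SPLIT}.pkl ../../FocusedLabeling/label.result.${SPLIT} ../../KnowledgeBase/type.top-500.pkl"
# 3. score data
run "python ../generate_score_data.py QAData.label.${SPLIT}.cpickle"
# 4. relation inference
run "cd RelationRNN && th infer_rel_rnn.lua -testData inference-data/rel.single.${SPLIT}.t7"
run "th infer_rel_rnn.lua -testData inference-data/rel.multi.${SPLIT}.t7"
# 5. entity inference
run "cd EntityTypeVec && th infer_ent_typevec.lua -testData inference-data/ent.${SPLIT}.t7"
# 6. joint disambiguation
run "cd Inference/${SPLIT} && python ../joint_disambiguation.py multi.${SPLIT}.cpickle ../../RelationRNN/score.rel.multi.${SPLIT} ../../EntityTypeVec/score.ent.multi.multi.${SPLIT}"
```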
    

cfo's People

Contributors

donglixp, zihangdai


cfo's Issues

Cannot run train_crf.lua

After pre-processing, I encountered a problem when trying to run train_crf.lua.

The error output is:

/home/sy/torch/install/bin/luajit: /home/sy/virtuoso-opensource/src/model/CRF.lua:192: invalid arguments: DoubleTensor number DoubleTensor
expected arguments: [DoubleTensor] DoubleTensor index LongTensor
stack traceback:
[C]: in function 'gather'
/home/sy/virtuoso-opensource/src/model/CRF.lua:192: in function 'forward'
train_crf.lua:130: in main chunk
[C]: in function 'dofile'
...e/sy/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x004064f0

Could you please help me figure it out?

Problems with Inference

When running generate_score_data.py under the Inference folder, it requires the cand_rel and cand_sub attributes.

if hasattr(data, 'cand_sub') and hasattr(data, 'cand_rel') and len(data.cand_rel) > 0 and data.relation in data.cand_rel and data.subject in data.cand_sub:

I found that query_candidates.py in the same folder can generate these two attributes, but it requires these arguments:

print 'usage: python query_candidate_relation.py num_processes QAData_cpickle_file attention_score_file [[type_dict]]'

Could anyone tell me how to get the attention_score_file, or how to get a pickle file containing the cand_sub and cand_rel attributes?

Issues with RelationRNN training: maxSeqLen, and zero or infinite loss

With the default maxSeqLen, training fails with a cuBLAS runtime error:

➜  RelationRNN git:(master) ✗ th train_rel_rnn.lua               
[INFO - 2018_05_02_20:11:11] - "--------------------------------------------------"
[INFO - 2018_05_02_20:11:11] - "SeqRankingLoader Configurations:"
[INFO - 2018_05_02_20:11:11] - "    number of batch : 296"
[INFO - 2018_05_02_20:11:11] - "    data batch size : 256"
[INFO - 2018_05_02_20:11:11] - "    neg sample size : 1024"
[INFO - 2018_05_02_20:11:11] - "    neg sample range: 7524"
[INFO - 2018_05_02_20:11:11] - "--------------------------------------------------"
[INFO - 2018_05_02_20:11:11] - "BiGRU Configuration:"
[INFO - 2018_05_02_20:11:11] - "    inputSize   :   300"
[INFO - 2018_05_02_20:11:11] - "    hiddenSize  :   256"
[INFO - 2018_05_02_20:11:11] - "    maxSeqLen   :    40"
[INFO - 2018_05_02_20:11:11] - "    maxBatch    :   256"
[INFO - 2018_05_02_20:11:11] - "--------------------------------------------------"
[INFO - 2018_05_02_20:11:11] - "BiGRU Configuration:"
[INFO - 2018_05_02_20:11:11] - "    inputSize   :   512"
[INFO - 2018_05_02_20:11:11] - "    hiddenSize  :   256"
[INFO - 2018_05_02_20:11:11] - "    maxSeqLen   :    40"
[INFO - 2018_05_02_20:11:11] - "    maxBatch    :   256"
/home/vimos/.torch/install/bin/luajit: /home/vimos/.torch/install/share/lua/5.1/nn/Container.lua:67: 
In 5 module of nn.Sequential:
/home/vimos/Data/git/QA/CFO/src/model/BiGRU.lua:241: cublas runtime error : an internal operation failed at /home/vimos/.torch/extra/cutorch/lib/THC/THCBlas.cu:246
stack traceback:
	[C]: in function 'mm'
	/home/vimos/Data/git/QA/CFO/src/model/BiGRU.lua:241: in function 'updateGradInput'
	/home/vimos/.torch/install/share/lua/5.1/nn/Module.lua:31: in function </home/vimos/.torch/install/share/lua/5.1/nn/Module.lua:29>
	[C]: in function 'xpcall'
	/home/vimos/.torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
	/home/vimos/.torch/install/share/lua/5.1/nn/Sequential.lua:84: in function 'backward'
	train_rel_rnn.lua:174: in main chunk
	[C]: in function 'dofile'
	...mos/.torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
	[C]: at 0x559ae9bad710

WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
	[C]: in function 'error'
	/home/vimos/.torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
	/home/vimos/.torch/install/share/lua/5.1/nn/Sequential.lua:84: in function 'backward'
	train_rel_rnn.lua:174: in main chunk
	[C]: in function 'dofile'
	...mos/.torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
	[C]: at 0x559ae9bad710
THCudaCheckWarn FAIL file=/home/vimos/.torch/extra/cutorch/lib/THC/THCStream.cpp line=50 error=77 : an illegal memory access was encountered
THCudaCheckWarn FAIL file=/home/vimos/.torch/extra/cutorch/lib/THC/THCStream.cpp line=50 error=77 : an illegal memory access was encountered

This crash can be avoided by using a larger maxSeqLen:

➜  RelationRNN git:(master) ✗ th train_rel_rnn.lua -maxSeqLen 42
[INFO - 2018_05_02_20:11:52] - "--------------------------------------------------"
[INFO - 2018_05_02_20:11:52] - "SeqRankingLoader Configurations:"
[INFO - 2018_05_02_20:11:52] - "    number of batch : 296"
[INFO - 2018_05_02_20:11:52] - "    data batch size : 256"
[INFO - 2018_05_02_20:11:52] - "    neg sample size : 1024"
[INFO - 2018_05_02_20:11:52] - "    neg sample range: 7524"
[INFO - 2018_05_02_20:11:52] - "--------------------------------------------------"
[INFO - 2018_05_02_20:11:52] - "BiGRU Configuration:"
[INFO - 2018_05_02_20:11:52] - "    inputSize   :   300"
[INFO - 2018_05_02_20:11:52] - "    hiddenSize  :   256"
[INFO - 2018_05_02_20:11:52] - "    maxSeqLen   :    42"
[INFO - 2018_05_02_20:11:52] - "    maxBatch    :   256"
[INFO - 2018_05_02_20:11:52] - "--------------------------------------------------"
[INFO - 2018_05_02_20:11:52] - "BiGRU Configuration:"
[INFO - 2018_05_02_20:11:52] - "    inputSize   :   512"
[INFO - 2018_05_02_20:11:52] - "    hiddenSize  :   256"
[INFO - 2018_05_02_20:11:52] - "    maxSeqLen   :    42"
[INFO - 2018_05_02_20:11:52] - "    maxBatch    :   256"
[INFO - 2018_05_02_20:11:56] - "iter  100, loss = 0.00198258"........] ETA: 3h29m | Step: 42ms       
[INFO - 2018_05_02_20:12:00] - "iter  200, loss = 0.00000000"........] ETA: 3h25m | Step: 41ms       
[INFO - 2018_05_02_20:12:04] - "epoch   1, loss 0.00066979"..........] ETA: 3h28m | Step: 42ms       
[INFO - 2018_05_02_20:12:04] - "iter  300, loss = 0.00000000"........] ETA: 3h27m | Step: 42ms       
[INFO - 2018_05_02_20:12:09] - "iter  400, loss = 0.00000000"........] ETA: 3h28m | Step: 42ms       
[INFO - 2018_05_02_20:12:13] - "iter  500, loss = 0.00000000"........] ETA: 3h25m | Step: 41ms       
[INFO - 2018_05_02_20:12:17] - "epoch   2, loss 0.00000000"..........] ETA: 3h26m | Step: 41ms       
[INFO - 2018_05_02_20:12:17] - "iter  600, loss = 0.00000000"........] ETA: 3h26m | Step: 41ms       
[INFO - 2018_05_02_20:12:21] - "iter  700, loss = 0.00000000"........] ETA: 3h28m | Step: 42ms       
[INFO - 2018_05_02_20:12:25] - "iter  800, loss = 0.00000000"........] ETA: 3h27m | Step: 42ms       
[INFO - 2018_05_02_20:12:29] - "epoch   3, loss 0.00000000"..........] ETA: 3h25m | Step: 41ms       
[INFO - 2018_05_02_20:12:30] - "iter  900, loss = 0.00000000"........] ETA: 3h25m | Step: 41ms       
[INFO - 2018_05_02_20:12:34] - "iter 1000, loss = 0.00000000"........] ETA: 3h27m | Step: 42ms 

But the loss drops to zero after the first epoch, or blows up to infinity:

➜  RelationRNN git:(master) ✗ th train_rel_rnn.lua -maxSeqLen 42 -seed 12
[INFO - 2018_05_02_20:26:49] - "--------------------------------------------------"
[INFO - 2018_05_02_20:26:49] - "SeqRankingLoader Configurations:"
[INFO - 2018_05_02_20:26:49] - "    number of batch : 296"
[INFO - 2018_05_02_20:26:49] - "    data batch size : 256"
[INFO - 2018_05_02_20:26:49] - "    neg sample size : 1024"
[INFO - 2018_05_02_20:26:49] - "    neg sample range: 7524"
[INFO - 2018_05_02_20:26:49] - "--------------------------------------------------"
[INFO - 2018_05_02_20:26:49] - "BiGRU Configuration:"
[INFO - 2018_05_02_20:26:49] - "    inputSize   :   300"
[INFO - 2018_05_02_20:26:49] - "    hiddenSize  :   256"
[INFO - 2018_05_02_20:26:49] - "    maxSeqLen   :    42"
[INFO - 2018_05_02_20:26:49] - "    maxBatch    :   256"
[INFO - 2018_05_02_20:26:49] - "--------------------------------------------------"
[INFO - 2018_05_02_20:26:49] - "BiGRU Configuration:"
[INFO - 2018_05_02_20:26:49] - "    inputSize   :   512"
[INFO - 2018_05_02_20:26:49] - "    hiddenSize  :   256"
[INFO - 2018_05_02_20:26:49] - "    maxSeqLen   :    42"
[INFO - 2018_05_02_20:26:49] - "    maxBatch    :   256"
[INFO - 2018_05_02_20:26:53] - "iter  100, loss = 81231552070126006809284050944.00000000" 41ms       
[INFO - 2018_05_02_20:26:57] - "iter  200, loss = 0.00000000"........] ETA: 3h15m | Step: 39ms       
[INFO - 2018_05_02_20:27:01] - "epoch   1, loss 27443091915583111597203128320.00000000"p: 40ms       
[INFO - 2018_05_02_20:27:01] - "iter  300, loss = 0.00000000"........] ETA: 3h17m | Step: 40ms  
