The code for the Bi-Stage Prefix Tuning framework will be maintained and updated in this repo. The novelty of this paper is that text-based KG reasoning can be sped up with Bi-Stage Prefix Tuning while reusing the same LLM (either BERT or GPT). To measure semantic transferability, we develop an Antiphrasis Evaluation Protocol, in which understanding of novel relations is measured by the performance drop: hence, down is up! Feel free to check our implementation and prediction examples for more details.
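For intuition, here is a minimal sketch of embedding-level prefix tuning with a frozen BERT backbone, the kind of setup Fig. 1 depicts. The `PrefixEncoder` class, the prefix length, and the example query string are illustrative assumptions, not the code from this repo.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class PrefixEncoder(nn.Module):
    """Trainable prefix vectors prepended to the (frozen) LM's token embeddings."""
    def __init__(self, prefix_len: int = 10, hidden_size: int = 768):
        super().__init__()
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden_size) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden); prepend the shared prefix to every sequence
        prefix = self.prefix.unsqueeze(0).expand(token_embeds.size(0), -1, -1)
        return torch.cat([prefix, token_embeds], dim=1)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
for p in bert.parameters():        # backbone stays frozen; only the prefix parameters are tuned
    p.requires_grad = False

prefix_encoder = PrefixEncoder()
inputs = tokenizer("Bunting | hypernym | ?", return_tensors="pt")   # toy textual triple query
word_embeds = bert.embeddings.word_embeddings(inputs["input_ids"])
outputs = bert(inputs_embeds=prefix_encoder(word_embeds))
print(outputs.last_hidden_state.shape)   # (1, prefix_len + seq_len, 768)
```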
Fig. 1 Bi-stage Prefix-Tuning of KG reasoning.

Unzip the released data archive:

unzip all_files.zip -d DESTINATION
Extract the knowledge graphs into the data folder so that the layout looks like the following:
wiki5m_ind
├── train.txt
├── valid.txt
├── test.txt
├── wikidata5m_entity.txt
├── wikidata5m_relation.txt
└── wikidata5m_text.txt
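As a quick sanity check after extraction, the triple files can be read line by line. The snippet below assumes a `data/wiki5m_ind/` location and the tab-separated head/relation/tail format of the Wikidata5M dump; adjust it to your actual layout.

```python
from pathlib import Path

def load_triples(path: str):
    """Read one tab-separated (head, relation, tail) triple per line."""
    triples = []
    with Path(path).open(encoding="utf-8") as f:
        for line in f:
            head, relation, tail = line.rstrip("\n").split("\t")
            triples.append((head, relation, tail))
    return triples

train = load_triples("data/wiki5m_ind/train.txt")
print(f"{len(train)} training triples, e.g. {train[0]}")
```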
Install the dependencies and preprocess the dataset:

pip install -r requirement.txt
bash scripts/preprocess.sh WN18RR
| Dataset | Checkpoints |
|---|---|
| WN18RR Bunting BERT | Checkpoint |
| WN18RR Bunting GPT | Checkpoint |
| Wikidata5M-transductive | Checkpoint |
After downloading a checkpoint, move it into the checkpoint folder, e.g.

mv ${CHECKPOINT_BILINK_WN18RR} checkpoint/bilink_bert
To evaluate the model, please run
bash scripts/eval.sh ${CHECKPOINT} WN18RR
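Evaluation reports standard link-prediction ranking metrics. As a reference for how such numbers are typically computed, here is a small, repo-independent sketch of MRR and Hits@k over a score matrix; the variable names and toy scores are made up, and the repo's eval script may differ.

```python
import numpy as np

def ranking_metrics(scores: np.ndarray, target_idx: np.ndarray) -> dict:
    """Compute MRR and Hits@k from a (num_queries, num_entities) score matrix."""
    # 1-based rank of the gold entity: count candidates scored strictly higher, plus one
    gold = scores[np.arange(len(scores)), target_idx]
    ranks = (scores > gold[:, None]).sum(axis=1) + 1
    return {
        "MRR": float((1.0 / ranks).mean()),
        "Hits@1": float((ranks <= 1).mean()),
        "Hits@10": float((ranks <= 10).mean()),
    }

# toy example: 2 queries scored over 5 candidate entities
scores = np.array([[0.1, 0.9, 0.3, 0.2, 0.0],
                   [0.5, 0.1, 0.7, 0.6, 0.2]])
print(ranking_metrics(scores, np.array([1, 3])))  # gold tails at indices 1 and 3
```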
Please change pretrained-model from bert-base-uncased to gpt2 when evaluating Bunting GPT.
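In Hugging Face terms, that swap amounts to changing the backbone identifier passed to the loaders. The snippet below only illustrates this switch (`gpt2` is the Hugging Face model id for GPT-2); it is not the repo's evaluation code.

```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical illustration of the backbone swap: the same loading code serves both
# checkpoints, only the pretrained-model name changes.
backbone = "gpt2"                      # use "bert-base-uncased" for Bunting BERT
tokenizer = AutoTokenizer.from_pretrained(backbone)
model = AutoModel.from_pretrained(backbone)
if tokenizer.pad_token is None:        # GPT-2 ships without a padding token
    tokenizer.pad_token = tokenizer.eos_token
```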
Fig. 3 Comparison of relevance scores between wrongly predicted tails and their labels.