
continual_fewshot_relation_learning's People

Contributors

qcwthu
continual_fewshot_relation_learning's Issues

Candidate Relations

Hello, thanks for your excellent work.
I would like to know how the candidate relations are produced. Are they from the original dataset? If not, how do you choose the candidates for each sample?

Compared method and evaluation metric

Hi, thanks for your excellent work.
However, I still have a question about the compared methods. From the 'eval_model' function in the provided code, a prediction for an instance is regarded as correct as long as its true label score is higher than the scores of the negative candidate labels. This 'accuracy' metric is not as strict as the 'accuracy' used in other CRE models, such as RP-CRE (ACL 2021) and subsequent papers, since those compare the true label score against the scores of all seen relations.
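The difference between the two metrics can be sketched as follows (the function names and score vector are illustrative, not taken from the repo's eval_model):

```python
# Sketch of the two accuracy metrics discussed above. A model scores every
# seen relation for one instance; the metrics differ only in which scores
# the true label must beat.
import numpy as np

def candidate_accuracy(scores, true_idx, negative_idx):
    """Looser metric: correct if the true label outscores only the
    sampled negative candidate relations."""
    return float(scores[true_idx] >= scores[negative_idx].max())

def seen_relation_accuracy(scores, true_idx):
    """Stricter CRE metric: correct only if the true label outscores
    ALL seen relations (i.e. it is the argmax)."""
    return float(true_idx == int(np.argmax(scores)))

scores = np.array([0.2, 0.9, 0.6, 0.8])          # scores over 4 seen relations
true_idx = 3
print(candidate_accuracy(scores, true_idx, np.array([0, 2])))  # 1.0: beats negatives 0 and 2
print(seen_relation_accuracy(scores, true_idx))                # 0.0: relation 1 scores higher
```

The same prediction can therefore count as correct under the candidate-based metric while counting as wrong under the all-seen-relations metric, which is exactly the discrepancy raised in this issue.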

RP-CRE was published well before your paper, yet the paper does not compare against it. I'm wondering why.

I re-ran RP-CRE on your 10-way-5-shot dataset under your evaluation metric (with the number of selected samples per relation set to 1) and obtained a much higher score on every task (around 80% on the final task). Even under the stricter evaluation metric, RP-CRE still achieves a slightly higher score than your reported one. There may, of course, be issues in how I applied RP-CRE to your dataset.

Could you please apply RP-CRE to your datasets and provide scores under fair settings, or explain why RP-CRE is not applicable to these datasets? Alternatively, could you provide your model's scores under the stricter evaluation metric?

Thank you very much!

Dataloader questions about "Continual Few-shot Relation Learning via Embedding Space Regularization and Data Augmentation"

Hi, Chengwei, thanks for your excellent work!
In your work, the data is divided into three parts: training_data, valid_data and test_data.
1. My first question is why the correct label is included in the candidate options in valid_data but not in training_data, as shown in the figure below. Is there any difference between them?
2. Besides, I noticed that the data in valid_data and test_data are exactly the same. In my understanding, valid_data should serve as the source of query-set samples for tuning the model's parameters during training, but after studying the training code I found that the validation data is not used in the training phase. So how do you use valid_data?
3. I found that there is a small overlap between the training data and valid_data (test_data); is there a special reason for that?
Finally, I want to experiment with a new model built on your training paradigm, so I want to confirm that my understanding of it is correct.
Taking the FewRel dataset as an example: in the base training stage (k=0), 10 relations are selected, with 100 training samples per relation. In each incremental stage (k>0), a group of 10 relations is selected, with 2/5/10 samples per relation as training data depending on the task (2-shot, 5-shot, 10-shot). Then 1-2 samples per relation, disjoint from the training data, are selected as validation data to fine-tune the model. Finally, test samples disjoint from both the training and validation data are used to evaluate the model (test data are drawn only from the relations seen so far).
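If it helps to pin the description down, the paradigm above can be sketched roughly as follows (all helper names and the exact split sizes are my own illustration of this description, not code from the repo):

```python
# Sketch of the N-way K-shot continual setup described above:
# base stage (k=0) uses 100 training samples per relation; incremental
# stages use K samples; valid/test splits are disjoint from training.
import random

def sample_task(relations, per_relation_pool, k, n_way=10, n_train=100,
                base=False):
    """Pick n_way relations and split each relation's sample pool into
    disjoint train / valid / test subsets."""
    rels = random.sample(relations, n_way)
    task = {}
    for r in rels:
        pool = per_relation_pool[r][:]       # copy so the pool is untouched
        random.shuffle(pool)
        n = n_train if base else k           # 100 samples at k=0, else K-shot
        task[r] = {
            "train": pool[:n],
            "valid": pool[n:n + 2],          # 1-2 held-out samples per relation
            "test":  pool[n + 2:],           # the rest, for evaluation
        }
    return task
```

Note that point 3 of the question reports a small train/valid overlap in the released data, so the strictly disjoint splits sketched here reflect the questioner's expectation rather than the repo's actual loader.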
I haven't been studying this field for long, so I would really appreciate it if you could point out any mistakes in my description above. That would help me a lot. Thanks!

Any plan to release the baseline IDLVQ-C?

Hi, @qcwthu
Thanks for your excellent work about incremental few-shot relation learning. It is really interesting and insightful.
I would like to know whether the baselines, especially IDLVQ-C, could be made available.

Questions about specific experimental results

Hi, Chengwei,

CFRL is a very interesting and meaningful work. Congratulations on its acceptance by ACL!

In the experimental part, a large number of figures (Figure 3/5/6/7) are drawn to vividly show the performance of each method. Could the original and official results of these figures be released in the form of table (like Table 1) for subsequent research?

Thanks very much!

Question about result on BERT

Thank you for sharing.
I ran the experiment with BERT, but there is a large gap between my results and those in the paper; can you explain why? Could you also share the corresponding pre-trained relation model?

Request for the distant data

Hi, Chengwei,

CFRL is a very interesting and meaningful work. Congratulations on its acceptance by ACL!

There is a directory named "distantdata" in the "data" dir, but it isn't accessible at the moment.

Could you post it?

Question about experiment for BERT

Thanks for your excellent work about incremental few-shot relation learning. It is really interesting and insightful.
I tried to experiment with BERT, but the accuracy is far from the results in the paper.
Would you be willing to share how to run CFRL with BERT? Could I get this part of the code?
My email address is [email protected]. Hope to hear from you!

Question about experimental results for BERT

Hi, @qcwthu
Thanks for your excellent work about incremental few-shot relation learning. It is really interesting and insightful.
I tried to experiment with BERT, but the accuracy is far from the results in the paper.
Would you be willing to share how to run CFRL with BERT?

ImportError

I followed the steps, but when I run bash runall.sh it fails with "cannot import name 'logging' from 'transformers.utils'".
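For what it's worth, this error usually means the installed transformers release does not provide transformers.utils.logging (it was added around v3.1; that boundary is an assumption, so check the repo's requirements file for the exact pin). A quick, generic way to check module availability before launching the script:

```python
# Sketch: verify that the submodule the script imports actually exists in
# the installed environment before running runall.sh.
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` resolves to an installed module or submodule."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # a parent package in the dotted path is itself missing
        return False

if __name__ == "__main__":
    print(module_available("transformers.utils.logging"))
```

If this prints False, upgrading or pinning transformers to the version listed in the repo's requirements file should resolve the import.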
