
Comments (14)

todpole3 avatar todpole3 commented on September 28, 2024

Do you mean changing the action dropout rate from 0.0 to 0.1? A rate of 0.9 is very aggressive dropout, and 1.0 implies dropping every action and randomly sampling an edge.

If so, a 0.1 dropout rate shouldn't make such a huge difference. Would you mind posting the action dropout code you added to the original MINERVA code? Also, how many iterations did you train for to observe this difference in results? It would be great if you could plot the training curves before and after adding action dropout for comparison.

from multihopkg.

David-Lee-1990 avatar David-Lee-1990 commented on September 28, 2024

@todpole3 I re-edited my issue to show more detailed information about the training results.


todpole3 avatar todpole3 commented on September 28, 2024

@David-Lee-1990 In the MINERVA code, is pre_distribution used in the gradient computation?

For us, we only use action dropout to encourage diverse sampling; the policy gradient is still computed using the original probability vector.
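The scheme described above, sampling from a perturbed distribution while computing the gradient from the unperturbed one, can be sketched as follows. This is a minimal illustration, not code from either repository; the function and variable names are my own.

```python
import numpy as np

def sample_with_action_dropout(probs, dropout_rate, rng):
    """Sample an action from a perturbed distribution, but return the
    log-probability under the ORIGINAL distribution, so the policy
    gradient is unaffected by the perturbation."""
    # Mask out each action independently with probability dropout_rate.
    keep = rng.random(len(probs)) >= dropout_rate
    if not keep.any():
        keep[:] = True  # all actions dropped: fall back to the full set
    perturbed = np.where(keep, probs, 0.0)
    total = perturbed.sum()
    if total == 0.0:
        # Surviving actions all had zero probability: sample uniformly.
        perturbed = keep.astype(float)
        total = perturbed.sum()
    perturbed = perturbed / total
    action = rng.choice(len(probs), p=perturbed)
    # Gradient still flows through the unperturbed policy probabilities.
    return action, np.log(probs[action])

rng = np.random.default_rng(0)
probs = np.array([0.7, 0.2, 0.1])
action, logp = sample_with_action_dropout(probs, 0.1, rng)
```

With dropout_rate=1.0 every action is masked, which (via the fallback) degenerates to sampling among all actions regardless of the policy, matching the "dropout everything and randomly sample an edge" reading above.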


David-Lee-1990 avatar David-Lee-1990 commented on September 28, 2024

In MINERVA there is no action dropout, and I tried to add this improvement to it. Following your idea, I use dropout to encourage diverse sampling while the policy gradient is still computed using the original distribution. I tested two versions: one relation-only and the other not. Both versions show results similar to what I stated in the issue.


todpole3 avatar todpole3 commented on September 28, 2024

@David-Lee-1990 My question is: after adding "action dropout", did you use the updated probability vector pre_distribution to compute the policy gradient?


David-Lee-1990 avatar David-Lee-1990 commented on September 28, 2024

No, I use the original one.


todpole3 avatar todpole3 commented on September 28, 2024

@David-Lee-1990 I cannot spot anything wrong with the code snippet you posted. Thanks for sharing. It might have something to do with its integration with the rest of the MINERVA code.

Technically you only disturbed the sampling probability by a small factor (and your policy gradient computation still follows the standard formula), so the result shouldn't change this significantly either way.

Would you mind running a sanity-check experiment with the dropout rate set to 0.01 and seeing how the result turns out? Technically the change should be very small. Then maybe try 0.02 and 0.05 and see whether the results change gradually?
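One way to quantify "the change should be very small" is to estimate the expected total-variation distance between the original action distribution and its dropped-and-renormalized version at each rate. A rough Monte-Carlo sketch (illustrative names and toy distribution, not from either code base):

```python
import numpy as np

def expected_tv_distance(probs, dropout_rate, n_trials=2000, seed=0):
    """Monte-Carlo estimate of the expected total-variation distance
    between an action distribution and its action-dropout perturbation."""
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(n_trials):
        keep = rng.random(len(probs)) >= dropout_rate
        if not keep.any():
            keep[:] = True  # all dropped: keep everything
        perturbed = np.where(keep, probs, 0.0)
        perturbed = perturbed / perturbed.sum()
        # TV distance = half the L1 distance between the distributions.
        dists.append(0.5 * np.abs(perturbed - probs).sum())
    return float(np.mean(dists))

probs = np.array([0.5, 0.3, 0.15, 0.05])
for rate in (0.01, 0.02, 0.05, 0.10):
    print(rate, round(expected_tv_distance(probs, rate), 4))
```

The expected perturbation grows roughly in proportion to the dropout rate, which is why results that jump sharply at tiny rates would point to an integration bug rather than the dropout itself.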


David-Lee-1990 avatar David-Lee-1990 commented on September 28, 2024

Hi, I ran sanity-check experiments setting the keep rate to each of [1.0, 0.99, 0.98, 0.97, 0.95, 0.93, 0.90]. The hits@1 results on the training batches are as follows:

1.0 vs 0.99: [plot: result-99]
1.0 vs 0.98: [plot: result-98]
1.0 vs 0.97: [plot: result-97]
1.0 vs 0.95: [plot: result-95]
1.0 vs 0.93: [plot: result-93]
1.0 vs 0.90: [plot: result-9]


todpole3 avatar todpole3 commented on September 28, 2024

@David-Lee-1990 Very interesting. I want to look deeper into this issue.

The most noticeable difference is that the dev result you reported without action dropout is close to what we get with 0.1 action dropout, and significantly higher than what we get without action dropout.

Besides action dropout rate, did you use the same set of hyperparameters as we did in the configuration files?
If not, would you mind sharing your set of hyperparameters? I want to see if I can reproduce the same results on our code repo.

And one more question: did you observe a similar trend on other datasets using the MINERVA code + action dropout?


todpole3 avatar todpole3 commented on September 28, 2024

I tested two versions: one relation-only and the other not. Both versions show results similar to what I stated in the issue.

@David-Lee-1990 Are the plots shown above generated with the relation-only version or not?


David-Lee-1990 avatar David-Lee-1990 commented on September 28, 2024

The most noticeable difference is that the dev result you reported without action dropout is close to what we get with 0.1 action dropout, and significantly higher than what we get without action dropout.

@todpole3 About the dev result, I need to clarify that I used the "sum" method when calculating hits@k and MRR, which differs from the "max" method you used. The "sum" method ranks a predicted entity by adding up the probabilities of all paths that predict the same end entity. The following code is from MINERVA, where lse computes the log-sum:

[screenshot: MINERVA lse code]

I also tested the "max" method on WN18RR; MRR on the dev set is as follows.

[plot: result]

For comparison, I paste the "sum" and "max" results together here:

[plot: result]
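The difference between the two aggregation methods can be sketched with toy beam data. This is an illustration of the idea only, not code from MINERVA or MultiHopKG; `np.logaddexp.reduce` plays the role of MINERVA's `lse`, and all names are hypothetical.

```python
import numpy as np

def score_entities(end_entities, log_probs, method):
    """Aggregate beam paths that reach the same end entity.

    "sum" adds the probabilities of all paths reaching an entity
    (log-sum-exp in log space); "max" keeps only the best path per entity.
    """
    scores = {}
    for e, lp in zip(end_entities, log_probs):
        scores.setdefault(e, []).append(lp)
    agg = np.logaddexp.reduce if method == "sum" else max
    return {e: float(agg(lps)) for e, lps in scores.items()}

# Three beam paths reach entity B, one reaches entity A.
ends = ["A", "B", "B", "B"]
lps = [np.log(0.4), np.log(0.25), np.log(0.2), np.log(0.15)]
sum_scores = score_entities(ends, lps, "sum")
max_scores = score_entities(ends, lps, "max")
```

Here "sum" ranks B first (0.25 + 0.2 + 0.15 = 0.6 > 0.4) while "max" ranks A first (0.4 > 0.25), so the two methods can disagree on hits@1 even for identical beams, which is why the evaluation method matters when comparing dev results.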


David-Lee-1990 avatar David-Lee-1990 commented on September 28, 2024

I tested two versions: one relation-only and the other not. Both versions show results similar to what I stated in the issue.

@David-Lee-1990 Are the plots shown above generated with relation-only or not?

Relation-only.


David-Lee-1990 avatar David-Lee-1990 commented on September 28, 2024

Besides action dropout rate, did you use the same set of hyperparameters as we did in the configuration files?
If not, would you mind sharing your set of hyperparameters? I want to see if I can reproduce the same results on our code repo.

I give my hyperparameters using your notation as follows:

group_examples_by_query="False"
use_action_space_bucketing="False"
bandwidth=200
entity_dim=100
relation_dim=100
history_dim=100
history_num_layers=1
train_num_rollouts=20
dev_num_rollouts=40
num_epochs=1000 # following MINERVA, each epoch I randomly sample batch_size training examples
train_batch_size=128
dev_batch_size=128
learning_rate=0.001
grad_norm=5
emb_dropout_rate=0
ff_dropout_rate=0
action_dropout_rate=1.0
beta=0.05
relation_only="True"
beam_size=100


David-Lee-1990 avatar David-Lee-1990 commented on September 28, 2024

And one more question, did you observe similar trend on other datasets using MINERVA code + action dropout?
@todpole3 Following your advice, I tested action dropout on NELL-995 today; its performance is as follows.

[plot: result]

[plot: result-mrr]

