
mbpo_pytorch's People

Contributors

songminjae, xingyu-lin


mbpo_pytorch's Issues

Error when trying to run

Hey,
thank you for your work, but unfortunately I'm not able to run your code; I'm getting the in-place operation error below. Oddly, this only seems to happen to me: I just cloned the repo and ran your example command.

File "mbpo.py", line 267, in
main()
File "mbpo.py", line 263, in main
train(args, env_sampler, predict_env, agent, env_pool, model_pool)
File "mbpo.py", line 124, in train
train_policy_steps += train_policy_repeats(args, total_step, train_policy_steps, cur_step, env_pool, model_pool, agent)
File "mbpo.py", line 220, in train_policy_repeats
agent.update_parameters((batch_state, batch_action, batch_reward, batch_next_state, batch_done), args.policy_train_batch_size, i
)
File "/shared/sebastian/replication-mbpo/sac/sac.py", line 89, in update_parameters
policy_loss.backward()
File "/shared/sebastian/miniconda3/envs/rrc_simulation/lib/python3.6/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/shared/sebastian/miniconda3/envs/rrc_simulation/lib/python3.6/site-packages/torch/autograd/init.py", line 132, in backw
ard
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTenso
r [256, 1]], which is output 0 of TBackward, is at version 3; expected version 2 instead. Hint: the backtrace further above shows th
e operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
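For anyone hitting this: the error typically surfaces on PyTorch >= 1.5, where optimizer.step() bumps the version counter of the parameters it updates in place, so a later backward() that still needs those parameters fails. Below is a minimal self-contained sketch of the failure pattern and a reordering that avoids it, assuming this ordering is the cause here; the toy critic/policy are illustrative stand-ins, not the repo's actual code:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the SAC critic and policy (illustrative only).
critic = nn.Linear(4, 1)
policy = nn.Linear(4, 4)
critic_optim = torch.optim.Adam(critic.parameters(), lr=1e-3)
policy_optim = torch.optim.Adam(policy.parameters(), lr=1e-3)

state = torch.randn(256, 4)
critic_loss = (critic(state) - 1.0).pow(2).mean()
policy_loss = -critic(policy(state)).mean()  # this graph saves the critic's weights

# Failing order (PyTorch >= 1.5): critic_optim.step() rewrites the critic's
# weights in place before policy_loss.backward() has consumed them.
#   critic_loss.backward(); critic_optim.step(); policy_loss.backward()  # RuntimeError

# Working order: backpropagate policy_loss before any step() mutates the critic.
policy_optim.zero_grad()
policy_loss.backward()
policy_optim.step()        # mutates only policy weights, not in critic_loss's graph

critic_optim.zero_grad()   # also clears critic grads contributed by policy_loss
critic_loss.backward()
critic_optim.step()
```

The general rule is to call every backward() that depends on a parameter before the step() that mutates it.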

Epoch length?

Hi,

Thank you for your code. It is really helpful.

Could you please check line 115 in main_mbpo.py? Since start_step grows with every epoch, the condition cur_step >= start_step + args.epoch_length makes the effective epoch length grow with it. Is that a bug? Should we use

cur_step >= args.epoch_length

instead? Correct me if I am wrong.

Thanks

For reference, the current code:

```python
cur_step = total_step - start_step

if cur_step >= start_step + args.epoch_length and len(env_pool) > args.min_pool_size:
    break
```
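To make the effect concrete, here is a small sketch (following the reporter's reading of the code, with an illustrative value for args.epoch_length) of how the epoch length grows under the current condition:

```python
epoch_length = 1000  # illustrative stand-in for args.epoch_length

# Current condition: an epoch ends when cur_step >= start_step + epoch_length,
# and cur_step = total_step - start_step, so each epoch lasts
# start_step + epoch_length environment steps.
start_step = 0
for epoch in range(4):
    steps_this_epoch = start_step + epoch_length
    print(epoch, steps_this_epoch)   # -> 1000, 2000, 4000, 8000
    start_step += steps_this_epoch

# With the suggested condition cur_step >= epoch_length, every epoch
# would instead last exactly epoch_length steps.
```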

Could you please add a requirements.txt file?

Hi,
Really appreciate your PyTorch reimplementation of MBPO!
However, there are several possible versions of TF and PyTorch, and the numpy versions they depend on differ from the one mujoco_py needs, which leads to dependency conflicts.

Could you add the requirements.txt of your environment so I can reproduce the experiments? Thanks a lot!
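Until the author posts one, here is a hypothetical starting point; every pin below is a guess inferred from the Python 3.6 / older-torch traceback in the first issue above, not the author's tested environment:

```
# Hypothetical requirements.txt sketch -- version pins are assumptions,
# NOT the author's actual environment.
torch==1.4.0         # pre-1.5 releases avoid the in-place autograd error above
gym==0.17.2
mujoco-py==2.0.2.13
numpy==1.17.5
tensorflow==1.15.0   # only if a TF code path is actually used
```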

cannot reproduce

Hi,
I ran the Hopper experiment with the provided command, and the reward between 65k and 68k environment steps stays between 400 and 700, which is much lower than in the provided figure.

[screenshot: Hopper training curve]

Is there anything I might have missed?

cannot reproduce Again

[screenshot: Walker2d training curve]

As the figure above shows, performance on the Walker2d environment does not match the figure you put in the README. What could I do?

rollout_batch_size

rollout_batch_size defaults to 100k, which I don't understand. Does this mean that even if the real data pool holds only about 5k transitions, each transition is sampled roughly 20 times, producing 100k model-generated transitions every time that function is called?
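For context, this matches how MBPO-style rollouts usually work: start states are drawn with replacement from the real pool, so they repeat whenever the pool is smaller than the batch. A minimal sketch under that assumption (predict_env_step and policy_act are illustrative stand-ins, not the repo's actual functions):

```python
import numpy as np

def rollout_model(env_pool_states, predict_env_step, policy_act,
                  rollout_batch_size=100_000, rollout_length=1):
    """Illustrative MBPO-style rollout: sample rollout_batch_size start
    states (with replacement) from the real pool, then take short
    model-based steps from each of them."""
    n = len(env_pool_states)
    idx = np.random.randint(0, n, size=rollout_batch_size)  # repeats when n < batch
    state = env_pool_states[idx]
    model_data = []
    for _ in range(rollout_length):
        action = policy_act(state)
        next_state, reward, done = predict_env_step(state, action)
        model_data.append((state, action, reward, next_state, done))
        state = next_state
    return model_data

# Dummy usage: with ~5k real states and a 100k batch, each real state starts
# roughly 20 model rollouts on average, matching the arithmetic above.
states = np.random.randn(5000, 11).astype(np.float32)
step = lambda s, a: (s + 0.01 * a, np.zeros(len(s)), np.zeros(len(s), dtype=bool))
act = lambda s: np.random.randn(*s.shape).astype(np.float32)
data = rollout_model(states, step, act)
```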

Missing utils

Hello,
Thanks for your awesome PyTorch reimplementation! I'd like to give it a try, but I cannot find the utils referenced in the main_mbpo.py file. May I have your help? Thanks!
