
Comments (5)

dylan-asmar commented on August 30, 2024

Sorry for the delay in response.

For the CarRacingEnv, the state space is the same as the observation space defined here. The reward function you linked (line 203) takes a single environment and returns a single reward. Since the velocity components are the 4th and 5th entries of the state vector, it uses env.state[4:5] for the velocity.
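To illustrate the point that the reward function operates on a single state, here is a minimal sketch in Python (the repo itself is Julia; the state layout, weights, and function names here are hypothetical): a state-dependent reward reads the velocity components out of one state vector and returns one scalar.

```python
import numpy as np

def reward(state, target_speed=2.0, w_speed=1.0):
    # Hypothetical state layout: [x, y, heading, vx, vy, ...].
    # The velocity components are the 4th and 5th entries (1-indexed),
    # i.e. state[3:5] with Python's 0-indexing (Julia would write state[4:5]).
    vx, vy = state[3], state[4]
    speed = np.hypot(vx, vy)
    # Penalize deviation from a target speed; one scalar for one state.
    return float(-w_speed * (speed - target_speed) ** 2)

s = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 0.0])
r = reward(s)
```

The key property is the signature: one state in, one scalar out; the batching over samples happens in the caller, not here.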

In the calculate_trajectory_costs function, since we have K samples, we create K different environments and simulate them across T time steps (here). The control cost on line 204 is a scalar, since it is for sample k at time step t.

After stepping through all K samples for the T time steps, we have a trajectory cost for each sample, giving a vector of size K.

So the reward function returns the reward for the environment at a given state (contained within the environment struct). The calculate_trajectory_costs function computes the cost of each of the K samples across the time horizon by calling the reward function at each time step for each sample (and combining it with the control cost).
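The loop structure described above can be sketched in Python (hypothetical step/reward/control_cost stand-ins, not the repo's actual Julia code): copy the initial state K times, roll each copy forward T steps, and accumulate -reward plus control cost per sample.

```python
import numpy as np

def rollout_costs(state0, controls, step, reward, control_cost):
    """controls: (K, T, m) sampled control sequences.
    step, reward, and control_cost are hypothetical stand-ins for the
    environment dynamics, state-dependent reward, and control penalty."""
    K, T, _ = controls.shape
    traj_cost = np.zeros(K)          # one running cost per sample
    for k in range(K):               # K independent simulated environments
        state = state0.copy()
        for t in range(T):           # T time steps per sample
            state = step(state, controls[k, t])
            # scalar cost contribution for sample k at time t
            traj_cost[k] += -reward(state) + control_cost(controls[k, t])
    return traj_cost                 # vector of size K

# Toy example: 1-D integrator with quadratic costs.
step = lambda s, u: s + u
reward = lambda s: -float(s @ s)
control_cost = lambda u: 0.5 * float(u @ u)
costs = rollout_costs(np.zeros(1), np.zeros((3, 4, 1)), step, reward, control_cost)
```

Note that reward and control_cost each return a scalar inside the double loop; the K-vector of trajectory costs only exists in the caller's accumulator.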

The function defined here

function calculate_trajectory_costs(pol::MPPI_Policy, env::AbstractEnv)

is the main function for calculating the trajectory costs of each sample for most environments that are a subtype of AbstractEnv. The function defined here
function calculate_trajectory_costs(pol::MPPI_Policy, env::EnvpoolEnv)

is the same function, but for use with the EnvpoolEnv environment. This function is for MuJoCo environments and uses EnvPool to help with running numerous MuJoCo simulations at once. In this function, reward(env) returns a vector of size K which is the number of environments. So we only need to loop over the T time steps in this function.
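The vectorized variant can be sketched the same way (again in Python, with hypothetical batched dynamics mimicking how EnvPool steps many simulators at once): the state is a (K, n) batch, reward returns a length-K vector, so only the loop over T remains.

```python
import numpy as np

def rollout_costs_batched(states, controls, step, reward, control_cost):
    """states: (K, n) batch of environment states, one row per sample;
    controls: (K, T, m). step, reward, and control_cost operate on
    whole batches at once (hypothetical stand-ins for EnvPool calls)."""
    K, T, _ = controls.shape
    traj_cost = np.zeros(K)
    for t in range(T):               # only the time loop; K is vectorized
        states = step(states, controls[:, t])
        # reward(states) is a vector of size K, one entry per environment
        traj_cost += -reward(states) + control_cost(controls[:, t])
    return traj_cost

# Toy batched example: 1-D integrator with quadratic costs.
step = lambda S, U: S + U
reward = lambda S: -np.sum(S * S, axis=1)
control_cost = lambda U: 0.5 * np.sum(U * U, axis=1)
costs = rollout_costs_batched(np.zeros((3, 1)), np.ones((3, 2, 1)), step, reward, control_cost)
```

The result is the same K-vector of trajectory costs as the per-sample loop; the difference is purely where the batching lives.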

I hope this helps clear up some of the confusion. Let me know if you have any more questions.

from mpopis.

dylan-asmar commented on August 30, 2024

Hello!

The code was structured to allow for extensions to other environments based on CommonRLInterface. The reward function for each environment is an extension of the reward function defined in CommonRLInterface.

Code location for the reward function for different environments:

Line 174, which you linked, is where we calculate Line 9 of the pseudocode: we incrementally add the costs across all time steps. reward(env) is the state-dependent cost at each time step and returns the terminal cost at the final time step, so it represents φ(X) + c(X) from the pseudocode. Line 167 is where the control costs are calculated, which is the third term in Line 9 of the pseudocode in the paper.
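In the standard MPPI formulation (written here in its usual form; the exact symbols in the paper's pseudocode may differ slightly), the per-sample cost being accumulated is:

```latex
S(\tau_k) = \underbrace{\phi(x_T)}_{\text{terminal cost}}
  + \sum_{t=0}^{T-1} \Big[
    \underbrace{c(x_t)}_{\text{state cost, i.e. } -\mathrm{reward}(\mathrm{env})}
    + \underbrace{\lambda\, u_t^{\top} \Sigma^{-1} \varepsilon_t^{k}}_{\text{control cost}}
  \Big]
```

The first two terms are what reward(env) supplies (terminal cost folded into the final step), and the third term is the control cost computed separately.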

I hope this helps clear up any confusion.


dylan-asmar commented on August 30, 2024

Let me know if you still have any questions about this.


YihanLi126 commented on August 30, 2024

> Hello!
>
> The code was structured to allow for extensions to other environments based on CommonRLInterface. The reward function for each environment is an extension of the reward function defined in CommonRLInterface.
>
> Code location for the reward function for different environments:
>
> Line 174, which you linked, is where we calculate Line 9 of the pseudocode: we incrementally add the costs across all time steps. reward(env) is the state-dependent cost at each time step and returns the terminal cost at the final time step, so it represents φ(X) + c(X) from the pseudocode. Line 167 is where the control costs are calculated, which is the third term in Line 9 of the pseudocode in the paper.
>
> I hope this helps clear up any confusion.

Yes, the explanation is clear to me, and thank you for your reply!
I'll let you know if I have any questions in my future work.


YihanLi126 commented on August 30, 2024

Hello!
I have a small question about the dimension of the reward:

function RLBase.reward(env::CarRacingEnv{T}) where {T}

I'm a little confused about the I/O of the environment state. It seems the function here uses env.state[4:5] to calculate the velocity cost. Is that for a single state, or for a series of states? (That is, what is the data structure of the environment state?) I ask because both trajectory_cost and control_costs here are arrays with K elements:
trajectory_cost = trajectory_cost - reward(env) + control_costs

If the reward function calculates the cost for a series of states, where can I find the function that applies the system dynamics to produce the state from the input? Is it the step function here:
function _step!(env::CarRacingEnv, a::Vector{Float64})

And what are the purposes of the two cost functions here, for the two kinds of environments:

function calculate_trajectory_costs(pol::MPPI_Policy, env::EnvpoolEnv)

function calculate_trajectory_costs(pol::MPPI_Policy, env::AbstractEnv)

Can I get some information about the differences between them and the purposes they serve?

Thank you for your patience!

