
j3soon / omniisaacgymenvs-dofbotreacher


Dofbot Reacher Reinforcement Learning Sim2Real Environment for Omniverse Isaac Gym/Sim

License: Other

Python 98.32% Jupyter Notebook 0.36% Shell 0.91% Dockerfile 0.18% Kit 0.23%
gym isaac isaac-gym omniverse reinforcement-learning sim2real dofbot omni-gym sim-to-real isaac-sim

omniisaacgymenvs-dofbotreacher's Issues

DofBot Sim2Real Error

I'm currently encountering some challenges with the sim2real transition. To give a clearer picture of the issue, I've attached three videos with demonstrations, along with a screenshot.

I receive error messages when I set Sim2Real to True and then run the Sim2Real command. I would greatly appreciate your guidance through a debugging process, or any insights on potential solutions.

Screenshot from 2023-12-07 18-02-42
https://github.com/j3soon/OmniIsaacGymEnvs-DofbotReacher/assets/1728575/c2ff81a4-07e4-4ad7-af1a-1df884d6ddc1
https://github.com/j3soon/OmniIsaacGymEnvs-DofbotReacher/assets/1728575/8614bc2e-31ed-4ed9-a994-0a29fd45fe06
https://github.com/j3soon/OmniIsaacGymEnvs-DofbotReacher/assets/1728575/e3a85944-c4de-46e3-af59-95c922ba8e1f

Thank you in advance for your time and assistance. Looking forward to your valuable input.

Windows Patch

Just wondering what steps I need to take to get it set up on Windows, thanks!

Error when I custom this repo

First, I want to say thank you for making this repo; it's really helpful.

Now I'm trying to customize it so the Dofbot can pick up an object (a DynamicCuboid). I'm stuck at the first step: placing a dynamic cuboid in the environment.

I did it as below:

```python
def get_object(self):
    self.object_start_translation = torch.tensor([0.0, 0.0, 0.0], device=self.device)
    self.object_start_orientation = torch.tensor([1.0, 0.0, 0.0, 0.0], device=self.device)

    # self.object_usd_path = "C:\\Users\\tpnei\\Downloads\\random_cube.usd"
    # add_reference_to_stage(self.object_usd_path, self.default_zero_env_path + "/object")

    obj = DynamicCuboid(
        prim_path=self.default_zero_env_path + "/object/object",
        name="object",
        position=np.array([0.221, 0.221, 0.2]),
        scale=np.array([0.03, 0.03, 0.07]),
        color=np.array([0, 0, 1.0]),
        # translation=self.object_start_translation,
        orientation=self.object_start_orientation,
    )
    self._sim_config.apply_articulation_settings(
        "object", get_prim_at_path(obj.prim_path),
        self._sim_config.parse_actor_config("object")
    )
```

and the goal:

```python
def get_goal(self):
    self.goal_displacement_tensor = torch.tensor([0.0, 0.0, 0.0], device=self.device)
    self.goal_start_translation = torch.tensor([0.0, 0.0, 0.0], device=self.device) + self.goal_displacement_tensor
    self.goal_start_orientation = torch.tensor([1.0, 0.0, 0.0, 0.0], device=self.device)

    # self.goal_usd_path = "C:\\Users\\tpnei\\Downloads\\random_cube.usd"
    # add_reference_to_stage(self.goal_usd_path, self.default_zero_env_path + "/goal")

    goal = DynamicCuboid(
        prim_path=self.default_zero_env_path + "/goal/object",
        name="goal",
        position=np.array([0.221, 0.221, 0.2]),
        scale=np.array([0.03, 0.03, 0.07]),
        color=np.array([0, 0, 1.0]),
        # translation=self.goal_start_translation,
        orientation=self.goal_start_orientation,
    )
    self._sim_config.apply_articulation_settings(
        "goal", get_prim_at_path(goal.prim_path),
        self._sim_config.parse_actor_config("goal_object")
    )
```

and also `set_up_scene`:

```python
def set_up_scene(self, scene: Scene) -> None:
    self._stage = get_current_stage()
    # self._assets_root_path = 'omniverse://localhost/Projects/J3soon/Isaac/2022.1'
    self.get_arm()
    self.get_object()
    self.get_goal()

    super().set_up_scene(scene)
    self._arms = self.get_arm_view(scene)
    scene.add(self._arms)
    self._objects = RigidPrimView(
        prim_paths_expr="/World/envs/env_.*/object/object",
        name="object_view",
        reset_xform_properties=False,
    )
    scene.add(self._objects)
    self._goals = RigidPrimView(
        prim_paths_expr="/World/envs/env_.*/goal/object",
        name="goal_view",
        reset_xform_properties=False,
    )
    scene.add(self._goals)
```

I am new to this, so I really don't understand the log output when I run `python omniisaacgymenvs/scripts/dummy_dofbot_policy.py task=DofbotReacher test=True num_envs=1`:

2023-12-19 03:47:41 [35,763ms] [Error] [carb.graphics-direct3d.plugin] NGX EvaluateFeature failed: 0xbad00002
2023-12-19 03:47:41 [35,766ms] [Error] [carb.graphics-direct3d.plugin] Failed to evaluate DLSS feature.
2023-12-19 03:47:41 [35,766ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x8007000e
2023-12-19 03:47:41 [35,766ms] [Error] [carb.graphics-direct3d.plugin] CommandList::Close() failed.
2023-12-19 03:47:41 [35,768ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x8007000e
2023-12-19 03:47:41 [35,768ms] [Error] [carb.graphics-direct3d.plugin] Closing command list failed.
2023-12-19 03:47:41 [35,768ms] [Error] [carb.scenerenderer-rtx.plugin] Failed to execute RenderGraph on device 0. Error Code: 1
2023-12-19 03:47:41 [35,770ms] [Error] [gpu.foundation.plugin] Failed to submit RenderGraph commands
2023-12-19 03:47:43 [37,368ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x8007000e
2023-12-19 03:47:43 [37,368ms] [Error] [carb.graphics-direct3d.plugin] CreateHeap failed.
2023-12-19 03:47:43 [37,368ms] [Error] [gpu.foundation.plugin] Texture creation failed for the device: 0.
2023-12-19 03:47:43 [37,368ms] [Error] [gpu.foundation.plugin] Failed to update params for RenderOp 9
2023-12-19 03:47:43 [37,369ms] [Error] [gpu.foundation.plugin] Failed to update params for RenderOp GBuffer RT. Will not execute this or subsequent RenderGraph operations. Aborting RenderGraph execution
2023-12-19 03:47:44 [37,998ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x8007000e
2023-12-19 03:47:44 [37,998ms] [Error] [carb.graphics-direct3d.plugin] CreateHeap failed.
2023-12-19 03:47:44 [37,998ms] [Error] [gpu.foundation.plugin] subAllocate() failed for AccelStruct size: 74318464
2023-12-19 03:47:44 [37,998ms] [Error] [gpu.foundation.plugin] AccelStruct creation failed for the device: 0.
2023-12-19 03:47:44 [37,999ms] [Error] [rtx.scenedb.plugin] getResourceFromAccelStructDesc() failed for compaction size: 74318464
2023-12-19 03:47:44 [38,039ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x8007000e
2023-12-19 03:47:44 [38,039ms] [Error] [carb.graphics-direct3d.plugin] CreateHeap failed.
2023-12-19 03:47:44 [38,040ms] [Error] [gpu.foundation.plugin] subAllocate() failed for AccelStruct size: 11307264
2023-12-19 03:47:44 [38,040ms] [Error] [gpu.foundation.plugin] AccelStruct creation failed for the device: 0.
2023-12-19 03:47:44 [38,040ms] [Error] [rtx.scenedb.plugin] getResourceFromAccelStructDesc() failed for compaction size: 11307264
2023-12-19 03:47:44 [38,041ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x8007000e
2023-12-19 03:47:44 [38,041ms] [Error] [carb.graphics-direct3d.plugin] CreateHeap failed.
2023-12-19 03:47:44 [38,041ms] [Error] [gpu.foundation.plugin] subAllocate() failed for AccelStruct size: 11239040
2023-12-19 03:47:44 [38,042ms] [Error] [gpu.foundation.plugin] AccelStruct creation failed for the device: 0.
2023-12-19 03:47:44 [38,042ms] [Error] [rtx.scenedb.plugin] getResourceFromAccelStructDesc() failed for compaction size: 11239040
2023-12-19 03:47:44 [38,043ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x8007000e
2023-12-19 03:47:44 [38,043ms] [Error] [carb.graphics-direct3d.plugin] CreateHeap failed.
2023-12-19 03:47:44 [38,043ms] [Error] [gpu.foundation.plugin] subAllocate() failed for AccelStruct size: 13257088
2023-12-19 03:47:44 [38,043ms] [Error] [gpu.foundation.plugin] AccelStruct creation failed for the device: 0.
2023-12-19 03:47:44 [38,044ms] [Error] [rtx.scenedb.plugin] getResourceFromAccelStructDesc() failed for compaction size: 13257088
2023-12-19 03:47:45 [39,420ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x8007000e
2023-12-19 03:47:45 [39,420ms] [Error] [carb.graphics-direct3d.plugin] Closing command list failed.
2023-12-19 03:47:45 [39,421ms] [Error] [gpu.foundation.plugin] Failed to submit RenderGraph commands
2023-12-19 03:47:45 [39,469ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x8007000e
2023-12-19 03:47:45 [39,469ms] [Error] [carb.graphics-direct3d.plugin] CommandList::Close() failed.
2023-12-19 03:47:45 [39,470ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x8007000e
2023-12-19 03:47:45 [39,470ms] [Error] [carb.graphics-direct3d.plugin] Closing command list failed.
2023-12-19 03:47:45 [39,470ms] [Error] [gpu.foundation.plugin] Failed to submit RenderGraph commands
2023-12-19 03:47:45 [39,510ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x8007000e
2023-12-19 03:47:45 [39,510ms] [Error] [carb.graphics-direct3d.plugin] CreateHeap failed.
2023-12-19 03:47:45 [39,510ms] [Error] [gpu.foundation.plugin] Unable to allocate buffer
2023-12-19 03:47:45 [39,511ms] [Error] [gpu.foundation.plugin] Buffer creation failed for the device: 0.
2023-12-19 03:47:45 [39,511ms] [Error] [gpu.foundation.plugin] Failed to update params for RenderOp 8
2023-12-19 03:47:45 [39,625ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x8007000e
2023-12-19 03:47:45 [39,625ms] [Error] [carb.graphics-direct3d.plugin] Closing command list failed.
2023-12-19 03:47:45 [39,625ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x8007000e
2023-12-19 03:47:45 [39,626ms] [Error] [carb.graphics-direct3d.plugin] Closing command list failed.

I really need your (or anyone's) help. Thanks for reading!

How to use exported .pth file

Hi,

I am working through the training example for the Dofbot reacher and want to deploy the result to a real arm.

I am wondering how I can take the saved .pth, load it on an actual Dofbot, and run it. What inputs will the loaded model need, and what will its outputs be?

I'm aware that I may be able to use PyTorch to load the model, but I don't know what inputs I will need to give it.
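For reference, this is the general pattern I'd expect for saving and loading a .pth with PyTorch. The network architecture and the 29-in / 6-out shapes here are my assumptions for illustration, not the repo's actual model, and real rl_games checkpoints carry extra keys (optimizer state, normalizer statistics) that depend on the version:

```python
import torch
import torch.nn as nn

# Placeholder policy (NOT the repo's network): 29 observations in,
# 6 joint targets out.
policy = nn.Sequential(nn.Linear(29, 64), nn.ELU(), nn.Linear(64, 6))

# Save and reload the weights the way a .pth checkpoint is usually handled.
torch.save({"model": policy.state_dict()}, "policy.pth")
checkpoint = torch.load("policy.pth", map_location="cpu")
policy.load_state_dict(checkpoint["model"])
policy.eval()

# One observation in, one action vector out.
obs = torch.zeros(1, 29)
with torch.no_grad():
    action = policy(obs)
print(action.shape)  # torch.Size([1, 6])
```

Printing `checkpoint.keys()` on the real file should reveal how the exported weights are organized.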

In the Config for this task the observation type is set to full. What exactly does the “full” observation look like?

I'm assuming the full state looks like this, based on the dofbot_reacher.py code:

    self.num_obs_dict = {
        "full": 29,
        # 6: dofbot joints position (action space)
        # 6: dofbot joints velocity
        # 3: goal position
        # 4: goal rotation
        # 4: goal relative rotation
        # 6: previous action
    }

Can you explain what each of these values in the observation vector is?
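If the component counts above are right, the 29-dim "full" observation would just be those pieces concatenated. The ordering below is my guess, not quoted from dofbot_reacher.py, but it shows how the dimensions add up:

```python
import torch

num_envs = 1
joint_pos    = torch.zeros(num_envs, 6)  # Dofbot joint positions (action space)
joint_vel    = torch.zeros(num_envs, 6)  # Dofbot joint velocities
goal_pos     = torch.zeros(num_envs, 3)  # goal position (x, y, z)
goal_rot     = torch.zeros(num_envs, 4)  # goal orientation quaternion
goal_rel_rot = torch.zeros(num_envs, 4)  # goal rotation relative to the end-effector
prev_actions = torch.zeros(num_envs, 6)  # actions applied in the previous step

# 6 + 6 + 3 + 4 + 4 + 6 = 29
obs = torch.cat([joint_pos, joint_vel, goal_pos, goal_rot,
                 goal_rel_rot, prev_actions], dim=-1)
print(obs.shape)  # torch.Size([1, 29])
```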

How Are Actions Being Generated

How exactly are actions being generated from the model for the Dofbot? The exported model I have has a mu, logstd, and value output. How are these being used? Can you show me in the code where predictions are being made for the actions?
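For context on the mu/logstd/value outputs: in Gaussian policy heads of the kind rl_games uses, mu and logstd parameterize a Normal distribution over actions, and value is only needed for training. The sketch below uses placeholder tensors and is a common pattern, not a quote from this repo's code:

```python
import torch

mu     = torch.zeros(6)          # network mean output, one per joint
logstd = torch.full((6,), -1.0)  # network log standard deviation

# During training: sample a stochastic action from N(mu, exp(logstd)).
dist = torch.distributions.Normal(mu, logstd.exp())
stochastic_action = dist.sample()

# At evaluation / deployment: take the mean directly, clamped to the
# normalized [-1, 1] range before rescaling to the joint limits.
deterministic_action = torch.clamp(mu, -1.0, 1.0)
print(deterministic_action.shape)  # torch.Size([6])
```

The actual sampling/clamping details may differ by rl_games version, so the relevant place to look is the policy's `forward`/`get_action` path in the training library rather than the task file.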
