
igibsonchallenge2021's Introduction

iGibson Challenge 2021 @ CVPR2021 Embodied AI Workshop

This repository contains starter code for iGibson Challenge 2021 brought to you by Stanford Vision and Learning Lab and Robotics @ Google. For an overview of the challenge, visit the challenge website. For an overview of the workshop, visit the workshop website.

Tasks

The iGibson Challenge 2021 uses the iGibson simulator [1] and is composed of two navigation tasks that represent important skills for autonomous visual navigation:

  • Interactive Navigation: the agent is required to reach a navigation goal specified by a coordinate (as in PointNav [2]) given visual information (RGB+D images). The agent is allowed (or even encouraged) to collide and interact with the environment in order to push obstacles away and clear the path. Note that all objects in our scenes are assigned realistic physical weights and are fully interactable. However, as in the real world, some objects are light enough for the robot to move while others are not. Along with the furniture objects originally in the scenes, we also add additional objects (e.g. shoes and toys) from the Google Scanned Objects dataset to simulate real-world clutter. We will use the Interactive Navigation Score (INS) [3] to evaluate agents' performance in this task.

  • Social Navigation: the agent is required to navigate to a goal specified by a coordinate while moving around pedestrians in the environment. Pedestrians in the scene move towards randomly sampled locations, and their movement is simulated using the ORCA model [4] integrated in iGibson [1], similar to the simulation environments in [5]. The agent must not collide with or come closer than 0.3 meters to any pedestrian; doing so terminates the episode. It should also maintain a comfortable distance of at least 0.5 meters to pedestrians; coming closer than that is penalized but does not terminate the episode (the sketch after this list illustrates the two distance thresholds). We will use the average of STL (Success weighted by Time Length) and PSC (Personal Space Compliance) to evaluate the agents' performance. More details can be found in the "Evaluation Metrics" section below.
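
To make these distance thresholds concrete, here is a minimal sketch of the per-timestep check. This is our illustration, not the challenge's actual implementation, and it assumes access to the robot's and pedestrians' 2D positions:

    import numpy as np

    # Distance thresholds from the task description above.
    TERMINATION_DISTANCE = 0.3     # closer than this to any pedestrian terminates the episode
    PERSONAL_SPACE_DISTANCE = 0.5  # closer than this is penalized but not terminal


    def check_pedestrian_distances(robot_xy, pedestrian_xys):
        """Return (terminate, violates_personal_space) for a single timestep."""
        if len(pedestrian_xys) == 0:
            return False, False
        dists = [np.linalg.norm(np.asarray(robot_xy) - np.asarray(p)) for p in pedestrian_xys]
        min_dist = min(dists)
        return min_dist < TERMINATION_DISTANCE, min_dist < PERSONAL_SPACE_DISTANCE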

Evaluation Metrics

  • Interactive Navigation: We will use the Interactive Navigation Score (INS) as our evaluation metric. INS is the average of Path Efficiency and Effort Efficiency. Path Efficiency is equivalent to SPL (Success weighted by Path Length). Effort Efficiency captures both the excess displaced mass (kinematic effort) and the excess applied force (dynamic effort) from interaction with objects. We argue that the agent needs to strike a healthy balance between taking a shorter path to the goal and causing less disturbance to the environment. More details can be found in our paper.

  • Social Navigation: We will use the average of STL (Success weighted by Time Length) and PSC (Personal Space Compliance) as our evaluation metric. STL is computed as success * (time_spent_by_ORCA_agent / time_spent_by_robot_agent), where time_spent_by_ORCA_agent is the number of timesteps that an oracle ORCA agent takes to reach the same goal assigned to the robot; the ratio is clipped at 1. In the context of Social Navigation, we argue STL is more applicable than SPL because a robot agent could achieve a perfect SPL by "waiting out" all pedestrians before it makes a move, which defeats the purpose of the task. PSC (Personal Space Compliance) is computed as the percentage of timesteps in which the robot agent complies with the pedestrians' personal space (distance >= 0.5 meters). We argue that the agent needs to strike a healthy balance between taking a shorter time to reach the goal and incurring fewer personal space violations. A minimal sketch of this computation is shown below.
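
For reference, here is a minimal sketch of how these episode-level Social Navigation metrics could be computed; the function and argument names are ours, not the evaluation server's:

    def stl(success, orca_timesteps, robot_timesteps):
        """Success weighted by Time Length: success * min(1, T_orca / T_robot)."""
        if robot_timesteps <= 0:
            return 0.0
        return float(success) * min(1.0, orca_timesteps / robot_timesteps)


    def psc(violation_timesteps, total_timesteps):
        """Personal Space Compliance: fraction of timesteps with distance >= 0.5 m to all pedestrians."""
        if total_timesteps <= 0:
            return 1.0
        return 1.0 - violation_timesteps / total_timesteps


    def social_nav_score(success, orca_timesteps, robot_timesteps, violation_timesteps, total_timesteps):
        """Final Social Navigation metric: the average of STL and PSC."""
        return 0.5 * (stl(success, orca_timesteps, robot_timesteps) +
                      psc(violation_timesteps, total_timesteps))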

Dataset

We provide a total of 8 scenes reconstructed from real-world apartments for training in iGibson. All objects in the scenes are assigned realistic weights and are fully interactable. For Interactive Navigation, we also provide 20 additional small objects (e.g. shoes and toys) from the Google Scanned Objects dataset. For fairness, please only use these scenes and objects for training.

For evaluation, we have 2 unseen scenes in our dev split and 5 unseen scenes in our test split. We also use 10 unseen small objects (they will share the same object categories as the 20 training small objects, but they will be different object instances).

Visualizations for the 8 training scenes.


Setup

We adopt the following task setup:

  • Observation: (1) Goal position relative to the robot in polar coordinates, (2) current linear and angular velocities, (3) RGB+D images.
  • Action: Desired normalized linear and angular velocity.
  • Reward: We provide some basic reward functions for reaching the goal and making progress. Feel free to create your own (an illustrative example is sketched after this list).
  • Termination conditions: The episode terminates after 500 timesteps, or when the robot collides with any pedestrian in the Social Navigation task.
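
As a purely illustrative example of a custom reward (not the starter code's built-in reward), the sketch below rewards geodesic progress toward the goal and adds a terminal bonus or penalty; all names and weights here are hypothetical:

    def custom_reward(prev_geodesic_dist, curr_geodesic_dist, reached_goal, collided_with_pedestrian,
                      progress_weight=10.0, success_reward=10.0, failure_penalty=-10.0):
        """Toy dense reward: pay for progress toward the goal, plus a terminal bonus or penalty."""
        reward = progress_weight * (prev_geodesic_dist - curr_geodesic_dist)
        if reached_goal:
            reward += success_reward
        if collided_with_pedestrian:  # Social Navigation failure case
            reward += failure_penalty
        return reward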

The tech specs for the robot and the camera sensor can be found here.

For Interactive Navigation, we place N additional small objects (e.g. toys, shoes) near the robot's shortest path to the goal (N is proportional to the path length). These objects are generally physically lighter than the objects originally in the scenes (e.g. tables, chairs).

For Social Navigation, we place M pedestrians randomly in the scenes that pursue their own random goals during the episode while respecting each other's personal space (M is proportional to the physical size of the scene). The pedestrians have the same maximum speed as the robot. They are aware of the robot so they won't walk straight into the robot. However, they also won't yield to the robot: if the robot moves straight towards the pedestrians, it will hit them and the episode will fail.

Participation Guidelines

Participate in the contest by registering on the EvalAI challenge page and creating a team. Participants will upload docker containers with their agents, which are evaluated on an AWS GPU-enabled instance. Before pushing a submission for remote evaluation, participants should test the submission docker locally to make sure it works. Instructions for training, local evaluation, and online submission are provided below.

Local Evaluation

  • Step 1: Clone the challenge repository

    git clone https://github.com/StanfordVL/iGibsonChallenge2021.git
    cd iGibsonChallenge2021

    Three example agents are provided in simple_agent.py and rl_agent.py: RandomAgent, ForwardOnlyAgent, and SACAgent.

    Here is the RandomAgent defined in simple_agent.py.

    import numpy as np

    # Indices into the 2-dimensional action vector.
    ACTION_DIM = 2
    LINEAR_VEL_DIM = 0
    ANGULAR_VEL_DIM = 1
    
    
    class RandomAgent:
        def __init__(self):
            pass
    
        def reset(self):
            pass
    
        def act(self, observations):
            # Sample a random normalized (linear, angular) velocity command.
            action = np.random.uniform(low=-1, high=1, size=(ACTION_DIM,))
            return action

    Please implement your own agent and instantiate it in agent.py.
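
    For example, a slightly less trivial agent could steer toward the goal using the polar goal observation. The sketch below is only an illustration: it assumes the goal distance and angle are the first two entries of a 'task_obs' vector, which you should verify against the actual observation space in the starter code.

    import numpy as np

    class TurnAndGoAgent:
        """Toy heuristic agent with the same reset()/act() interface as RandomAgent."""

        def reset(self):
            pass

        def act(self, observations):
            # Assumed layout (verify in the starter code): task_obs = [dist_to_goal, angle_to_goal, ...]
            dist_to_goal, angle_to_goal = observations['task_obs'][:2]
            if abs(angle_to_goal) < 0.3:
                linear = min(1.0, dist_to_goal)  # drive when roughly facing the goal, slow down near it
            else:
                linear = 0.0                      # otherwise turn in place first
            angular = float(np.clip(angle_to_goal, -1.0, 1.0))
            return np.array([linear, angular])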

  • Step 2: Install nvidia-docker2, following the guide: https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0).

  • Step 3: Modify the provided Dockerfile to accommodate any dependencies. A minimal Dockerfile is shown below.

    FROM gibsonchallenge/gibson_challenge_2021:latest
    ENV PATH /miniconda/envs/gibson/bin:$PATH
    
    ADD agent.py /agent.py
    ADD simple_agent.py /simple_agent.py
    ADD rl_agent.py /rl_agent.py
    
    ADD submission.sh /submission.sh
    WORKDIR /

    Then build your docker container with docker build . -t my_submission, where my_submission is the docker image name you want to use.

  • Step 4:

    Download challenge data by running ./download.sh and the data will be decompressed in gibson_challenge_data_2021.

  • Step 5:

    Evaluate locally:

    You can run ./test_minival_locally.sh --docker-name my_submission

    If things work properly, you should see terminal output like the following at the end:

    ...
    Episode: 1/3
    Episode: 2/3
    Episode: 3/3
    Avg success: 0.0
    Avg stl: 0.0
    Avg psc: 1.0
    Avg episode_return: -0.6209138999323173
    ...
    

    The script evaluates Social Navigation by default. If you want to evaluate Interactive Navigation, you need to change CONFIG_FILE, TASK, and EPISODE_DIR in the script and make them consistent. It is recommended that you use the TASK environment variable to switch agents in agent.py if you intend to use different policies for the two tasks, as in the sketch below.
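
    A minimal sketch of such a switch in agent.py is shown below. It assumes that RandomAgent and ForwardOnlyAgent are both defined in simple_agent.py (as suggested by the starter code listing above); in practice you would substitute your own policies.

    import os

    from simple_agent import RandomAgent, ForwardOnlyAgent  # assumed location of ForwardOnlyAgent


    def get_agent():
        """Pick a policy based on the TASK environment variable set by the evaluation scripts."""
        task = os.environ.get('TASK', 'social')
        if task == 'interactive':
            return ForwardOnlyAgent()  # replace with your Interactive Navigation policy
        return RandomAgent()           # replace with your Social Navigation policy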

Online submission

Follow instructions in the submit tab of the EvalAI challenge page to submit your docker image. Note that you will need a version of EvalAI >= 1.2.3. Here we reproduce part of those instructions for convenience:

# Installing EvalAI Command Line Interface
pip install "evalai>=1.2.3"

# Set EvalAI account token
evalai set_token <your EvalAI participant token>

# Push docker image to EvalAI docker registry
evalai push my_submission:latest --phase <phase-name>

The valid challenge phases are: igibson-minival-social-808, igibson-minival-interactive-808, igibson-dev-social-808, igibson-dev-interactive-808, igibson-test-social-808, igibson-test-interactive-808.

Our iGibson Challenge 2021 consists of four phases:

  • Minival Phase (igibson-minival-social-808, igibson-minival-interactive-808): The purpose of this phase is to make sure your policy can be successfully submitted and evaluated. Participants are expected to download our starter code and submit a baseline policy, even a trivial one, to our evaluation server to verify that their entire pipeline is correct.
  • Dev Phase (igibson-dev-social-808, igibson-dev-interactive-808): This phase is split into Interactive Navigation and Social Navigation tasks. Participants are expected to submit their solutions to each of the tasks separately. You may use the exact same policy for both tasks if you want, but you still need to submit twice. The results will be evaluated on the dataset dev split and the leaderboard will be updated within 24 hours.
  • Test Phase (igibson-test-social-808, igibson-test-interactive-808): This phase is also split into Interactive Navigation and Social Navigation. Participants are expected to submit a maximum of 5 solutions during the last 15 days of the challenge. The solutions will be evaluated on the dataset test split and the results will NOT be made available until the end of the challenge.
  • Winner Demo Phase: To increase visibility, the best three entries of each task of our challenge will have the opportunity to showcase their solutions in live or recorded video format during CVPR2021! All the top runners will be able to highlight their solutions and findings to the CVPR audience. Feel free to check out our presentation and our participants' presentations from our challenge last year on YouTube.

Training

Using Docker

Train with the minival split (with only one of the training scenes, Rs_int): ./train_minival_locally.sh --docker-name my_submission

Train with the train split (with all eight training scenes): ./train_locally.sh --docker-name my_submission

Not using Docker

  • Step 0: install anaconda and create a python3.6 environment

    conda create -n gibson python=3.6
    conda activate gibson
    
  • Step 1: install CUDA and cuDNN. We tested with CUDA 10.0 and 10.1, and cuDNN 7.6.5.

  • Step 2: install EGL dependency

    sudo apt-get install libegl1-mesa-dev
    
  • Step 3: install iGibson from source by following the documentation. Please use the cvpr21_challenge branch instead of the master branch.

    cd iGibson
    git fetch
    git checkout cvpr21_challenge
    pip install -e .
    
  • Step 4: Download challenge data by running ./download.sh and the data will be decompressed in gibson_challenge_data_2021. Create the folder iGibson/gibson2/data and move the content of gibson_challenge_data_2021 there.

  • Step 5: install our fork of tf-agents. Please use the cvpr21_challenge branch instead of the master branch.

    cd agents
    git fetch
    git checkout cvpr21_challenge
    pip install tensorflow-gpu==1.15.0
    pip install -e .
    
  • Step 6: start training (with only one of the training scenes, Rs_int)!

    cd agents
    ./tf_agents/agents/sac/examples/v1/train_minival.sh
    

    This will train in a single scene, specified by model_id in the config file.

  • Step 7: scale up training (with all eight training scenes)!

    cd agents
    ./tf_agents/agents/sac/examples/v1/train.sh
    

    The full training takes around 100 GB of CPU memory and 10 GB of GPU memory.

Feel free to skip Steps 5-7 if you want to use other frameworks for training. This is just example starter code for your reference.

References

[1] iGibson, a Simulation Environment for Interactive Tasks in Large Realistic Scenes. Bokui Shen, Fei Xia, Chengshu Li, Roberto Martín-Martín, Linxi Fan, Guanzhi Wang, Shyamal Buch, Claudia D'Arpino, Sanjana Srivastava, Lyne P Tchapmi, Micael E Tchapmi, Kent Vainio, Li Fei-Fei, Silvio Savarese. Preprint arXiv:2012.02924, 2020.

[2] On evaluation of embodied navigation agents. Peter Anderson, Angel Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, Amir R. Zamir. arXiv:1807.06757, 2018.

[3] Interactive Gibson: A Benchmark for Interactive Navigation in Cluttered Environments. Fei Xia, William B. Shen, Chengshu Li, Priya Kasimbeg, Micael Tchapmi, Alexander Toshev, Roberto Martín-Martín, Silvio Savarese. arXiv preprint arXiv:1910.14442, 2019.

[4] RVO2 Library: Reciprocal Collision Avoidance for Real-Time Multi-Agent Simulation. Jur van den Berg, Stephen J. Guy, Jamie Snape, Ming C. Lin, and Dinesh Manocha, 2011.

[5] Robot Navigation in Constrained Pedestrian Environments using Reinforcement Learning. Claudia Pérez-D'Arpino, Can Liu, Patrick Goebel, Roberto Martín-Martín, Silvio Savarese. Preprint arXiv:2010.08600, 2020.

igibsonchallenge2021's People

Contributors

cdarpino, chengshuli, fxia22, roberto-martinmartin


igibsonchallenge2021's Issues

Cannot extract encrypted objs or meshes

I intend to use the social navigation features from the challenge for my research and am having issues running the code given some changes that have been made to iGibson. Since the objects and meshes are now encrypted and I am using the challenge branch of iGibson, I get the errors below for a variety of objects:

/.../gibson2/data/ig_dataset/scene_instances/20220125-125018_3046166092929093048_30752/: cannot extract anything useful from mesh '/.../gibson2/data/ig_dataset/objects/window/103238/shape/visual/link_2_m1_vm.encrypted.obj' b3Warning[examples/SharedMemory/plugins/tinyRendererPlugin/TinyRendererVisualShapeConverter.cpp,558]: issue extracting mesh from COLLADA/STL file /.../gibson2/data/ig_dataset/objects/window/103238/shape/visual/link_2_m1_vm.encrypted.obj

I've added the key to the data folder as per the new instructions, but I would need to add the decryption code from master. Have these compatibility changes been implemented anywhere, or are there any suggested fixes? I can use the challenge data, but I am wondering if there is official support for social navigation beyond the challenge code.

Broken pipe error using docker

Hi, I used the original docker to train a SAC agent, but I got a broken pipe error while training. Can you help me with this issue?

I0520 11:37:22.428182 140700759025472 mesh_renderer_cpu.py:335] Loading /opt/iGibson/gibson2/data/assets/models/person_meshes/person_2/meshes/person.obj
I0520 11:37:23.353936 140700759025472 parallel_py_environment.py:89] All processes started.
Traceback (most recent call last):
  File "train_eval.py", line 596, in <module>
    app.run(main)
  File "/miniconda/envs/gibson/lib/python3.6/site-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/miniconda/envs/gibson/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "train_eval.py", line 589, in main
    model_ids_eval=FLAGS.model_ids_eval,
  File "/miniconda/envs/gibson/lib/python3.6/site-packages/gin/config.py", line 1032, in wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/miniconda/envs/gibson/lib/python3.6/site-packages/gin/utils.py", line 49, in augment_exception_message_and_reraise
    six.raise_from(proxy.with_traceback(exception.__traceback__), None)
  File "<string>", line 3, in raise_from
  File "/miniconda/envs/gibson/lib/python3.6/site-packages/gin/config.py", line 1009, in wrapper
    return fn(*new_args, **new_kwargs)
  File "train_eval.py", line 239, in train_eval
    parallel_py_environment.ParallelPyEnvironment(tf_py_env))
  File "/miniconda/envs/gibson/lib/python3.6/site-packages/gin/config.py", line 1032, in wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/miniconda/envs/gibson/lib/python3.6/site-packages/gin/utils.py", line 49, in augment_exception_message_and_reraise
    six.raise_from(proxy.with_traceback(exception.__traceback__), None)
  File "<string>", line 3, in raise_from
  File "/miniconda/envs/gibson/lib/python3.6/site-packages/gin/config.py", line 1009, in wrapper
    return fn(*new_args, **new_kwargs)
  File "/opt/agents/tf_agents/environments/parallel_py_environment.py", line 75, in __init__
    if any(env.action_spec() != self._action_spec for env in self._envs):
  File "/opt/agents/tf_agents/environments/parallel_py_environment.py", line 75, in <genexpr>
    if any(env.action_spec() != self._action_spec for env in self._envs):
  File "/opt/agents/tf_agents/environments/parallel_py_environment.py", line 250, in action_spec
    self._action_spec = self.call('action_spec')()
  File "/opt/agents/tf_agents/environments/parallel_py_environment.py", line 285, in call
    self._conn.send((self._CALL, payload))
  File "/miniconda/envs/gibson/lib/python3.6/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/miniconda/envs/gibson/lib/python3.6/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/miniconda/envs/gibson/lib/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
In call to configurable 'ParallelPyEnvironment' (<function ParallelPyEnvironment.__init__ at 0x7ff6a7e94d08>)
In call to configurable 'train_eval' (<function train_eval at 0x7ff6c1b1dbf8>)
PyBullet Logging Information:
PyBullet Logging Information

OpenGL (GLSL): What may be the cause of the error "ERROR::SHADER::VERTEX::COMPILATION_FAILED"?

NVIDIA-SMI 430.50;
Driver Version: 430.50;
CUDA Version: 10.1;
Ubuntu 16.04;
4* GeForce RTX 2080 Ti Graphics Card;

Everything works when I follow the iGibson documentation section "Using Docker and remote GUI access via VNC" (http://svl.stanford.edu/igibson/docs/quickstart.html#using-docker-and-remote-gui-access-via-vnc): on my local machine, I can use any VNC client to access the remote GUI.

However, when I use the starter code for iGibsonChallenge2021 and run ./test_minival_locally.sh --docker-name my_submission, I get the following errors:

./test_minival_locally.sh --docker-name my_submission
INFO:root:Importing iGibson (gibson2 module)
INFO:root:Assets path: /opt/iGibson/gibson2/data/assets
INFO:root:Gibson Dataset path: /opt/iGibson/gibson2/data/g_dataset
INFO:root:iG Dataset path: /opt/iGibson/gibson2/data/ig_dataset
INFO:root:3D-FRONT Dataset path: /opt/iGibson/gibson2/data/threedfront_dataset
INFO:root:CubiCasa5K Dataset path: /opt/iGibson/gibson2/data/cubicasa_dataset
INFO:root:Example path: /opt/iGibson/gibson2/examples
INFO:root:Example config path: /opt/iGibson/gibson2/examples/configs
pybullet build time: Dec 23 2020 01:46:51
ERROR::SHADER::VERTEX::COMPILATION_FAILED
0:1(10): error: GLSL 4.50 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.40, 1.00 ES, and 3.00 ES

ERROR::SHADER::FRAGMENT::COMPILATION_FAILED
0:1(10): error: GLSL 4.50 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.40, 1.00 ES, and 3.00 ES

ERROR::SHADER::PROGRAM::LINKING_FAILED
error: linking with uncompiled/unspecialized shadererror: linking with uncompiled/unspecialized shader
ERROR::SHADER::VERTEX::COMPILATION_FAILED
0:1(10): error: GLSL 4.10 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.40, 1.00 ES, and 3.00 ES

ERROR::SHADER::FRAGMENT::COMPILATION_FAILED
0:1(10): error: GLSL 4.10 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.40, 1.00 ES, and 3.00 ES

ERROR::SHADER::PROGRAM::LINKING_FAILED
error: linking with uncompiled/unspecialized shadererror: linking with uncompiled/unspecialized shader
torch is not available, falling back to rendering to memory(instead of tensor)
Compiling GLSL shader: /opt/iGibson/gibson2/render/mesh_renderer/shaders/450/equirect2cube_cs.glsl
Traceback (most recent call last):
  File "agent.py", line 34, in <module>
    main()
  File "agent.py", line 30, in main
    challenge.submit(agent)
  File "/opt/iGibson/gibson2/challenge/challenge.py", line 49, in submit
    physics_timestep=1.0 / 40.0)
  File "/opt/iGibson/gibson2/envs/igibson_env.py", line 57, in __init__
    render_to_tensor=render_to_tensor)
  File "/opt/iGibson/gibson2/envs/env_base.py", line 77, in __init__
    rendering_settings=settings)
  File "/opt/iGibson/gibson2/simulator.py", line 86, in __init__
    self.load()
  File "/opt/iGibson/gibson2/simulator.py", line 131, in load
    rendering_settings=self.rendering_settings)
  File "/opt/iGibson/gibson2/render/mesh_renderer/mesh_renderer_cpu.py", line 208, in __init__
    self.setup_pbr()
  File "/opt/iGibson/gibson2/render/mesh_renderer/mesh_renderer_cpu.py", line 224, in setup_pbr
    self.rendering_settings.light_dimming_factor
RuntimeError: Shader compilation failed: /opt/iGibson/gibson2/render/mesh_renderer/shaders/450/equirect2cube_cs.glsl
0:1(10): error: GLSL 4.50 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.40, 1.00 ES, and 3.00 ES
0:0(0): error: Compute shaders require GLSL 4.30 or GLSL ES 3.10

AssertionError while running ./test_minival_locally.sh

Hi all, I encountered a problem when following the 'Local Evaluation' steps. When I ran ./test_minival_locally.sh --docker-name my_submission, it showed the error:

INFO:root:Importing iGibson (gibson2 module)
INFO:root:Assets path: /opt/iGibson/gibson2/data/assets
INFO:root:Gibson Dataset path: /opt/iGibson/gibson2/data/g_dataset
INFO:root:iG Dataset path: /opt/iGibson/gibson2/data/ig_dataset
INFO:root:3D-FRONT Dataset path: /opt/iGibson/gibson2/data/threedfront_dataset
INFO:root:CubiCasa5K Dataset path: /opt/iGibson/gibson2/data/cubicasa_dataset
INFO:root:Example path: /opt/iGibson/gibson2/examples
INFO:root:Example config path: /opt/iGibson/gibson2/examples/configs
pybullet build time: Dec 23 2020 01:46:51
torch is not available, falling back to rendering to memory(instead of tensor)
Traceback (most recent call last):
  File "agent.py", line 34, in <module>
    main()
  File "agent.py", line 30, in main
    challenge.submit(agent)
  File "/opt/iGibson/gibson2/challenge/challenge.py", line 33, in submit
    assert os.path.isdir(split_dir)
AssertionError

I used the same code shown in the document, but I skipped Step 2 (install nvidia-docker2). So, is nvidia-docker2 required? Should I change some parts of the Dockerfile? Thanks!

Got stuck when running test_minival_locally.sh and testing online

I tried to run test_minival_locally.sh --docker-name my_submission, but the procedure got stuck after "compiled per shaders" (seemingly while rendering the environments).


However, this command worked well when I directly ran it in my local terminal.

export CONFIG_FILE=~/eai/igibson-challenge-codes/iGibson/gibson2/examples/configs/locobot_social_nav.yaml; export TASK=social; export SPLIT=minival; export EPISODE_DIR=~/eai/igibson-challenge-codes/iGibson/gibson2/data/episodes_data/social_nav; CUDA_VISIBLE_DEVICES=5 python agent.py --agent-class SAC --ckpt-path ckpt

This problem also seems to happen during online evaluation. The submission appeared to be "running" for a long time after I submitted the docker, and finally (after 1 or 2 days) there were many "cancelled" results, 1 "finished", and still 1 "running".


All of these results are of one submission.

Question about training set

Hi team,

I am working with the iGibson scene dataset for a project and am interested in using the train/test/val splits being used for the iGibson challenge for our task.

I see that for the challenge, 8 scenes have been chosen for the train split with their visualization provided in the README. I also read in the README (here) that the Rs_int scene is one of the training scenes.

However, I am not sure that is correct. I have worked with the scenes enough to identify them by name, and I doubt that the Rs_int scene belongs to the train split. Can someone please confirm this? Thanks :)

How can I find and view the robot in the complex scene

The Viewer pops up automatically if I use 'gui' mode in Environment, and I can use left click, middle click, and CTRL + click to move the viewpoint and look at the robot. However, when I test my model in a complex scene, for example wainscott_1_int, it is hard for me to find my robot with CTRL + click and drag.

So, I have 4 questions:

  1. Is it possible to remove the roof of a scene so that I can easily find my robot?
  2. Can I get a top-down view of the environment?
  3. Can I use keyboard control (W, A, S, D to translate forward/left/backward/right), like in the gui or iggui mode of the Simulator, to find my robot when testing in the Environment?
  4. Can I get the traversability map of a scene when testing my model in the Environment?

Are we allowed to use the Gibson dataset of scenes?

Hi,

My team is interested in training our agents within the Habitat Simulator. We are able to correctly load the 3D models of the people and the objects, but not the 8 rooms themselves. Are we allowed to train using scenes from the Gibson dataset instead? Our team would be OK with training with only 8 of them if necessary.

Running speed

Hi, global steps per second on my machine is around 2.95. Is this slow? Also, does this number include training time, or only simulation time?

And is this the steps per second of a single environment, so that the total throughput when running parallel environments would be steps * n_environments?

I ran train.sh with 2 environments on Ubuntu 16.04, with an Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz and two Titan Xp GPUs.

Are there any suggestions for training on GeForce RTX 30XX graphics cards?

Hi,
I tried to install tensorflow-gpu==1.15.0 through a wheel package on my GeForce RTX 3070 and 3090
(https://developer.nvidia.com/blog/accelerating-tensorflow-on-a100-gpus/),
but it seems unable to complete training with CUDA.

Then I tried to install tensorflow-gpu==2.4.0, because I want to use CUDA on my GeForce RTX 3070 and 3090,
and it seems that this requires TF-agent v0.6.0.

I worry about whether the online submission can be completed if I use tensorflow-gpu==2.4.0 together with TF-agent v0.6.0.

I want to try the starter code with tensorflow-gpu==1.15.0 and TF-agent v0.3.0 on my GeForce RTX 3070 and 3090.
Could you please give me some suggestions?
Thank you in advance for your help.

Got stuck when evaluating multiple checkpoints

I want to check the performance of different checkpoints. I run the same script below on different GPUs with different checkpoints, by modifying the first line of the "checkpoint" file in the policy directory.

export CONFIG_FILE=/opt/igibson/gibson2/examples/configs/locobot_interactive_nav.yaml; export TASK=interactive; export SPLIT=minival; export EPISODE_DIR=/opt/igibson/gibson2/data/episodes_data/interactive_nav; CUDA_VISIBLE_DEVICES=1 python agent.py --agent-class SAC --ckpt-path /opt/agents/tf_agents/agents/sac/examples/v1/test

But I get stuck with a segmentation fault.

By the way, I notice that the results are written to TensorBoard, but I find it hard to forward the TensorBoard port out of a docker container. Could you provide a more convenient way to track different checkpoints of the model, such as visualizing or reading the TensorBoard event files directly?
