devendrachaplot / object-goal-navigation


Pytorch code for NeurIPS-20 Paper "Object Goal Navigation using Goal-Oriented Semantic Exploration"

Home Page: https://devendrachaplot.github.io/projects/semantic-exploration

License: MIT License

Languages: Python 98.56%, Dockerfile 1.44%
Topics: deep-learning, robotics, navigation, deep-reinforcement-learning, exploration, visual-navigation, object-goal-navigation, deep-rl, semantic-navigation, sem-exp

object-goal-navigation's Introduction

Object Goal Navigation using Goal-Oriented Semantic Exploration

This is a PyTorch implementation of the NeurIPS-20 paper:

Object Goal Navigation using Goal-Oriented Semantic Exploration
Devendra Singh Chaplot, Dhiraj Gandhi, Abhinav Gupta, Ruslan Salakhutdinov
Carnegie Mellon University, Facebook AI Research

Winner of the CVPR 2020 Habitat ObjectNav Challenge.

Project Website: https://devendrachaplot.github.io/projects/semantic-exploration

[example]

Overview:

The Goal-Oriented Semantic Exploration (SemExp) model consists of three modules: a Semantic Mapping Module, a Goal-Oriented Semantic Policy, and a deterministic Local Policy. As shown below, the Semantic Mapping model builds a semantic map over time. The Goal-Oriented Semantic Policy selects a long-term goal based on the semantic map to reach the given object goal efficiently. A deterministic local policy based on analytical planners is used to take low-level navigation actions to reach the long-term goal.

[overview figure]
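For orientation, the per-step interaction between the three modules can be summarized roughly as below. This is an illustrative sketch only; the class, function, and variable names are hypothetical and are not the actual API of this repository.

# Illustrative sketch of the SemExp control loop; names are hypothetical,
# not the classes/functions defined in this repository.
def run_episode(env, semantic_mapping, goal_policy, local_policy, object_goal):
    obs = env.reset()
    semantic_map = semantic_mapping.init_map()
    while not env.episode_over:
        # Semantic Mapping Module: fuse RGB-D observations and pose
        # into an allocentric semantic map.
        semantic_map = semantic_mapping.update(semantic_map, obs)
        # Goal-Oriented Semantic Policy: pick a long-term goal location
        # on the map, conditioned on the target object category
        # (sampled at a coarse time scale).
        long_term_goal = goal_policy.select_goal(semantic_map, object_goal)
        # Local Policy: deterministic, analytical planning (e.g. the
        # Fast Marching Method) to output low-level actions toward the goal.
        action = local_policy.plan(semantic_map, long_term_goal)
        obs = env.step(action)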

This repository contains:

  • Train and test episode datasets for the Object Goal Navigation task on the Gibson dataset in the Habitat simulator.
  • Code to train and evaluate the Semantic Exploration (SemExp) model on the Object Goal Navigation task.
  • A pretrained SemExp model.

Installing Dependencies

Installing habitat-sim:

git clone https://github.com/facebookresearch/habitat-sim.git
cd habitat-sim; git checkout tags/v0.1.5; 
pip install -r requirements.txt; 
python setup.py install --headless
python setup.py install # (for Mac OS)

Installing habitat-lab:

git clone https://github.com/facebookresearch/habitat-lab.git
cd habitat-lab; git checkout tags/v0.1.5; 
pip install -e .

Check habitat installation by running python examples/benchmark.py in the habitat-lab folder.

  • Install PyTorch according to your system configuration. The code is tested with PyTorch v1.6.0 and cudatoolkit v10.2. If you are using conda:
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 #(Linux with GPU)
conda install pytorch==1.6.0 torchvision==0.7.0 -c pytorch #(Mac OS)
  • Install detectron2 according to your system configuration. If you are using conda:
python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.6/index.html #(Linux with GPU)
CC=clang CXX=clang++ ARCHFLAGS="-arch x86_64" python -m pip install 'git+https://github.com/facebookresearch/detectron2.git' #(Mac OS)
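A quick, optional import check (a minimal sketch assuming the conda environment set up above) can confirm that PyTorch, CUDA, and detectron2 are all visible:

# Optional post-install sanity check; assumes the versions installed above.
import torch
import detectron2

print("torch:", torch.__version__)              # expect 1.6.0
print("cuda available:", torch.cuda.is_available())
print("detectron2:", detectron2.__version__)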

Docker and Singularity images:

We provide experimental docker and singularity images with all the dependencies installed; see Docker Instructions.

Setup

Clone the repository and install other requirements:

git clone https://github.com/devendrachaplot/Object-Goal-Navigation/
cd Object-Goal-Navigation/;
pip install -r requirements.txt

Downloading scene dataset

  • Download the Gibson scene dataset (the gibson_habitat_trainval scenes, i.e. the .glb and .navmesh files shown in the directory layout below) following the Gibson / habitat-sim dataset instructions, and place or symlink them at data/scene_datasets/gibson_semantic/.

Downloading episode dataset

  • Download the episode dataset:
wget --no-check-certificate 'https://drive.google.com/uc?export=download&id=1tslnZAkH8m3V5nP8pbtBmaR2XEfr8Rau' -O objectnav_gibson_v1.1.zip
  • Unzip the dataset into data/datasets/objectnav/gibson/v1.1/

Setting up datasets

The code requires the datasets in a data folder in the following format (same as habitat-lab):

Object-Goal-Navigation/
  data/
    scene_datasets/
      gibson_semantic/
        Adrian.glb
        Adrian.navmesh
        ...
    datasets/
      objectnav/
        gibson/
          v1.1/
            train/
            val/
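
Optionally, a small check like the following (a sketch; paths are taken from the tree above and assume it is run from the repository root) can confirm the layout before running the full test below:

# Optional layout check; paths follow the directory tree above.
import os

expected = [
    "data/scene_datasets/gibson_semantic",
    "data/datasets/objectnav/gibson/v1.1/train",
    "data/datasets/objectnav/gibson/v1.1/val",
]
for path in expected:
    status = "ok" if os.path.isdir(path) else "MISSING"
    print(f"{status:8s} {path}")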

Test setup

To verify that the data is set up correctly, run:

python test.py --agent random -n1 --num_eval_episodes 1 --auto_gpu_config 0

Usage

Training:

For training the SemExp model on the Object Goal Navigation task:

python main.py

Downloading pre-trained models

mkdir pretrained_models;
wget --no-check-certificate 'https://drive.google.com/uc?export=download&id=171ZA7XNu5vi3XLpuKs8DuGGZrYyuSjL0' -O pretrained_models/sem_exp.pth

For evaluation:

For evaluating the pre-trained model:

python main.py --split val --eval 1 --load pretrained_models/sem_exp.pth

For visualizing the agent observations and predicted semantic map, add -v 1 as an argument to the above command.
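For example, the full evaluation command with visualization enabled (the same command used in the issue reports below) is:

python main.py --split val --eval 1 --load pretrained_models/sem_exp.pth -v 1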

The pre-trained model should get 0.657 Success, 0.339 SPL and 1.474 DTG.
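Here Success is the fraction of successful episodes, SPL is Success weighted by Path Length, and DTG is the remaining distance to the goal at the end of the episode (lower is better). For reference, a minimal sketch of the standard SPL computation (Anderson et al., 2018); this is not the evaluation code used in this repository:

# Standard SPL metric (Anderson et al., 2018) -- reference sketch only.
# success[i]:  1.0 if episode i succeeded, else 0.0
# shortest[i]: geodesic shortest-path length from start to goal
# taken[i]:    path length actually traveled by the agent
def spl(success, shortest, taken):
    terms = [s * l / max(p, l) for s, l, p in zip(success, shortest, taken)]
    return sum(terms) / len(terms)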

For more detailed instructions, see INSTRUCTIONS.

Cite as

Chaplot, D.S., Gandhi, D., Gupta, A. and Salakhutdinov, R., 2020. Object Goal Navigation using Goal-Oriented Semantic Exploration. In Neural Information Processing Systems (NeurIPS-20). (PDF)

Bibtex:

@inproceedings{chaplot2020object,
  title={Object Goal Navigation using Goal-Oriented Semantic Exploration},
  author={Chaplot, Devendra Singh and Gandhi, Dhiraj and
          Gupta, Abhinav and Salakhutdinov, Ruslan},
  booktitle={Neural Information Processing Systems (NeurIPS)},
  year={2020}
}

Related Projects

Acknowledgements

This repository uses Habitat Lab implementation for running the RL environment. The implementation of PPO is borrowed from ikostrikov/pytorch-a2c-ppo-acktr-gail. The Mask-RCNN implementation is based on the detectron2 repository. We would also like to thank Shubham Tulsiani and Saurabh Gupta for their help in implementing some parts of the code.

object-goal-navigation's People

Contributors

devendrachaplot


object-goal-navigation's Issues

error with scikit-fmm==2019.1.30

When I run pip install -r requirements.txt to set up the environment, it fails with the following error:

$ pip install -r requirements.txt

Collecting scikit-fmm==2019.1.30 (from -r requirements.txt (line 1))
Using cached scikit-fmm-2019.1.30.tar.gz (418 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [16 lines of output]
/home/dji/anaconda3/envs/pointnav/lib/python3.9/site-packages/setuptools/_distutils/dist.py:268: UserWarning: Unknown distribution option: 'configuration'
warnings.warn(msg)
error: Multiple top-level packages discovered in a flat-layout: ['skfmm', 'profile'].

  To avoid accidental inclusion of unwanted files or directories,
  setuptools will not proceed with this build.
  
  If you are trying to create a single distribution with multiple packages
  on purpose, you should not rely on automatic discovery.
  Instead, consider the following options:
  
  1. set up custom discovery (`find` directive with `include` or `exclude`)
  2. use a `src-layout`
  3. explicitly set `py_modules` or `packages` with a list of names
  
  To find more information, look for "package discovery" on setuptools docs.
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

KeyError when evaluating the pretrained model provided by the authors

Process ForkServerProcess-2:
Traceback (most recent call last):
  File "/home/zyj/anaconda3/envs/semexp/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/zyj/anaconda3/envs/semexp/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/zyj/vln/Object-Goal-Navigation/envs/habitat/utils/vector_env.py", line 193, in _worker_env
    observations = env.reset()
  File "/home/zyj/vln/Object-Goal-Navigation/agents/sem_exp.py", line 62, in reset
    obs, info = super().reset()
  File "/home/zyj/vln/Object-Goal-Navigation/envs/habitat/objectgoal_env.py", line 329, in reset
    obs = self.load_new_episode()
  File "/home/zyj/vln/Object-Goal-Navigation/envs/habitat/objectgoal_env.py", line 106, in load_new_episode
    goal_name = episode["goal_name"]
KeyError: 'goal_name'
I0406 18:01:11.748121 174960 Simulator.cpp:39] Deconstructing Simulator
I0406 18:01:11.749099 174959 Renderer.cpp:33] Deconstructing Renderer
I0406 18:01:11.749117 174959 WindowlessContext.h:16] Deconstructing WindowlessContext
I0406 18:01:11.749120 174959 WindowlessContext.cpp:245] Deconstructing GL context

(The identical KeyError: 'goal_name' traceback is raised in every worker process, ForkServerProcess-1 through ForkServerProcess-5, each followed by the same simulator deconstruction log lines; the repeats are omitted here.)

Traceback (most recent call last):
  File "main.py", line 697, in <module>
    main()
  File "main.py", line 84, in main
    obs, infos = envs.reset()
  File "/home/zyj/vln/Object-Goal-Navigation/envs/__init__.py", line 24, in reset
    obs, info = self.venv.reset()
  File "/home/zyj/vln/Object-Goal-Navigation/envs/habitat/utils/vector_env.py", line 330, in reset
    results.append(read_fn())
  File "/home/zyj/anaconda3/envs/semexp/lib/python3.8/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "/home/zyj/anaconda3/envs/semexp/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
    buf = self._recv(4)
  File "/home/zyj/anaconda3/envs/semexp/lib/python3.8/multiprocessing/connection.py", line 383, in _recv
    raise EOFError
EOFError
Exception ignored in: <function VectorEnv.__del__ at 0x7f1d5c804820>
Traceback (most recent call last):
  File "/home/zyj/vln/Object-Goal-Navigation/envs/habitat/utils/vector_env.py", line 539, in __del__
    self.close()
  File "/home/zyj/vln/Object-Goal-Navigation/envs/habitat/utils/vector_env.py", line 403, in close
    read_fn()
  File "/home/zyj/anaconda3/envs/semexp/lib/python3.8/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "/home/zyj/anaconda3/envs/semexp/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
    buf = self._recv(4)
  File "/home/zyj/anaconda3/envs/semexp/lib/python3.8/multiprocessing/connection.py", line 383, in _recv
    raise EOFError
EOFError

Does anyone face the same issue? How can I solve this problem? Thank you. This KeyError: 'goal_name' only occurs during evaluation, not during training.

Issue about the generated occupancy map.

Hey! Thanks for your awesome work here!
I have a question as below:
When I visualize agent_view and the corresponding RGB (from obs) in model.Semantic_Mapping.forward, I find that the generated occupancy map is mirrored relative to the occupancy of the actual scene in the RGB. For example, in the figure below, the generated object A is on the right side of the agent in the map, while in the real RGB it is on the left. The same holds for several other major objects (red two-way arrows).
[screenshot]
At first, I thought this mismatch would disappear after the subsequent rotation and translation operations are applied. But when I also visualize translated, the mismatch still exists (shown below). Can you explain why this is? Thanks!

[screenshot]

Train_info.pbz2 of MP3D

Hey folks
Your work is brilliant, but I encountered a problem running on MP3D: the train_info.pbz2 file for MP3D is missing. Could you provide this file, or explain how it is generated? Thank you.

Running on Matterport3D

Hi,

Thanks for your amazing work. I am trying to get it to run on Matterport3D, but an info file (train_info.pbz2) is required. Would you be able to provide the code for generating that file?

Thanks!

Visualize Ground Truth Semantic Map

hi everyone : )
I found that the _visualize() function in semantic_exp.py is used to render the predicted semantic map, and it works pretty well.

But how can we get a ground-truth semantic map like this? Simply replacing inputs['sem_map_pred'] with scene_info[floor_idx]['sem_map'] (the ground-truth semantic map in objectgoal_env.py) didn't work well.
[image]

CMake error on macOS

When I try to install habitat-sim:

-- Check for working C compiler: /usr/bin/cc - broken
CMake Error at /Users/miniforge3/envs/semexp/lib/python3.8/site-packages/cmake/data/share/cmake-3.26/Modules/CMakeTestCCompiler.cmake:67 (message):
The C compiler

"/usr/bin/cc"

is not able to compile a simple test program.

It fails with the following output:

Change Dir: /Users/leyuansun/PycharmProjects/habitat-sim/build/CMakeFiles/CMakeScratch/TryCompile-K32F3s

Run Build Command(s):/Users/leyuansun/miniforge3/envs/semexp/lib/python3.8/site-packages/cmake/data/bin/cmake -E env VERBOSE=1 /usr/bin/make -f Makefile cmTC_6eb5b/fast && xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun

CMake will not be able to correctly generate this project.
Call Stack (most recent call first):
CMakeLists.txt:13 (project)

Running on real robot

Amazing work! I am wondering if I can run the pretrained model on a real robot. Could you share information on running the pretrained model on a real robot? Thank you for the help.

Questions about Semantic Mapping Module

Hi everybody! I have been trying to extract the Semantic Mapping Module as a baseline.

However, just as ugurbolat mentioned, the semantic mapping part is not learned; there are only projection/transformation functions to get the bird's-eye view.

When I use the Semantic Mapping module to update my local map, it doesn't work well; the local map generated by this part is far from the ground truth.
Has anyone else encountered the same issue? Were you able to get a proper map using this module?

The Semantic Map building in the real-world experiment

When I try to build a semantic map in a ROS environment using the Semantic_Mapping module you provided, the updates to the semantic map are misaligned. When you built the semantic map in the real-robot experiment, did you also use the Semantic_Mapping module in the code, and how did you set the parameters such as roof_thre and floor_thre?

How to get the sem_map of HM3D dataset?

The Gibson dataset is used in this repository, and the sem_map is obtained directly from the "val_info.pbz2" file, but the HM3D dataset does not have this file. How can the sem_map be obtained for HM3D? Can the HM3D dataset be tested with this algorithm?

Observation shows black images/screen

Hi,
When I run the evaluation python main.py --split val --eval 1 --load pretrained_models/sem_exp.pth -v 1, the output visualization shows a black screen for the observations and the semantic map does not show any objects. [See reference image below]
[screenshot]

I am using python 3.7.0, pytorch==1.9.0, torchvision==0.10.0, torchaudio==0.9.0, cudatoolkit=11.1.
My data seems to be in place:
The .glb and .navmesh files are under ~/Object-Goal-Navigation/data/scene_datasets/gibson_semantic/
The .glb.json.gz files are under ~/Object-Goal-Navigation/data/datasets/objectnav/gibson/v1.1/{split}/content/
The .gz and .pbz2 files are under ~/Object-Goal-Navigation/data/datasets/objectnav/gibson/v1.1/{split}/

Running the python test.py --agent random -n1 --num_eval_episodes 1 --auto_gpu_config 0 script shows Test successfully completed

Following is the console output for running python main.py --split val --eval 1 --load pretrained_models/sem_exp.pth -v 1
Auto GPU config:
Number of processes: 1
Number of processes on GPU 0: 1
Number of processes per GPU: 0
Dumping at ./tmp//models/exp1/
Namespace(agent='sem_exp', alpha=0.99, auto_gpu_config=1, camera_height=0.88, cat_pred_threshold=5.0, clip_param=0.2, collision_threshold=0.2, cuda=True, du_scale=1, dump_location='./tmp/', entropy_coef=0.001, env_frame_height=480, env_frame_width=640, eps=1e-05, eval=1, exp_name='exp1', exp_pred_threshold=1.0, floor_thr=50, frame_height=120, frame_width=160, gamma=0.99, global_downscaling=2, global_hidden_size=256, hfov=79.0, intrinsic_rew_coeff=0.02, load='pretrained_models/sem_exp.pth', log_interval=10, lr=2.5e-05, map_pred_threshold=1.0, map_resolution=5, map_size_cm=2400, max_d=100.0, max_depth=5.0, max_episode_length=500, max_grad_norm=0.5, min_d=1.5, min_depth=0.5, no_cuda=False, num_eval_episodes=200, num_global_steps=20, num_local_steps=25, num_mini_batch=1, num_processes=1, num_processes_on_first_gpu=1, num_processes_per_gpu=0, num_sem_categories=16, num_train_episodes=10000, num_training_frames=10000000, ppo_epoch=4, print_images=0, reward_coeff=0.1, save_interval=1, save_periodic=500000, seed=1, sem_gpu_id=-1, sem_pred_prob_thr=0.9, sim_gpu_id=1, split='val', success_dist=1.0, task_config='tasks/objectnav_gibson.yaml', tau=0.95, total_num_scenes=5, turn_angle=30, use_gae=False, use_recurrent_global=0, value_loss_coef=0.5, version='v1.1', vision_range=100, visualize=1)
Scenes per thread:
Thread 0: ['Collierville.glb', 'Corozal.glb', 'Darden.glb', 'Markleeville.glb', 'Wiconisco.glb']
2022-07-12 22:17:44,765 Initializing dataset PointNav-v1
2022-07-12 22:17:44,766 initializing sim Sim-v0
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0712 22:17:44.778574 252548 Simulator.cpp:96] Loading navmesh from data/scene_datasets/gibson_semantic//Collierville.navmesh
I0712 22:17:44.778640 252548 Simulator.cpp:98] Loaded.
I0712 22:17:44.778653 252548 SceneGraph.h:92] Created DrawableGroup:
Renderer: Mesa Intel(R) UHD Graphics (TGL GT1) by Intel
OpenGL version: 4.6 (Core Profile) Mesa 21.2.6
Using optional features:
GL_ARB_ES2_compatibility
GL_ARB_direct_state_access
GL_ARB_get_texture_sub_image
GL_ARB_invalidate_subdata
GL_ARB_multi_bind
GL_ARB_robustness
GL_ARB_separate_shader_objects
GL_ARB_texture_filter_anisotropic
GL_ARB_texture_storage
GL_ARB_texture_storage_multisample
GL_ARB_vertex_array_object
GL_KHR_debug
Using driver workarounds:
no-layout-qualifiers-on-old-glsl
mesa-implementation-color-read-format-dsa-explicit-binding
mesa-forward-compatible-line-width-range
I0712 22:17:44.809857 252548 ResourceManager.cpp:1380] Importing Basis files as ASTC 4x4
I0712 22:17:46.590839 252548 Simulator.cpp:167] Loading house from data/scene_datasets/gibson_semantic//Collierville.scn
I0712 22:17:46.590855 252548 Simulator.cpp:188] Not loading semantic mesh
I0712 22:17:46.591197 252548 simulator.py:146] Loaded navmesh data/scene_datasets/gibson_semantic//Collierville.navmesh
I0712 22:17:46.591398 252548 simulator.py:158] Recomputing navmesh for agent's height 0.88 and radius 0.18.
I0712 22:17:46.602424 252548 PathFinder.cpp:375] Building navmesh with 108x202 cells
I0712 22:17:46.676290 252548 PathFinder.cpp:643] Created navmesh with 246 vertices 126 polygons
I0712 22:17:46.676309 252548 Simulator.cpp:609] reconstruct navmesh successful
2022-07-12 22:17:46,678 Initializing task ObjectNav-v1
/home/arhanni/anaconda3/envs/ogn/lib/python3.7/site-packages/torchvision/transforms/transforms.py:281: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
"Argument interpolation should be of type InterpolationMode instead of int. "
[07/12 22:17:46 detectron2]: Arguments: Namespace(confidence_threshold=0.9, config_file='configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml', input=['input1.jpeg'], opts=['MODEL.WEIGHTS', 'detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl', 'MODEL.DEVICE', 'cuda:0'], output=None, video_input=None, webcam=False)
Changing scene: 0/data/scene_datasets/gibson_semantic//Collierville.glb
Loading episodes from: data/datasets/objectnav/gibson/v1.1/val/content/Collierville_episodes.json.gz
/home/arhanni/anaconda3/envs/ogn/lib/python3.7/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at /opt/conda/conda-bld/pytorch_1623448265233/work/aten/src/ATen/native/BinaryOps.cpp:467.)
return torch.floor_divide(self, other)
/home/arhanni/anaconda3/envs/ogn/lib/python3.7/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448265233/work/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Loading model pretrained_models/sem_exp.pth
/home/arhanni/anaconda3/envs/ogn/lib/python3.7/site-packages/torch/nn/functional.py:4044: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
"Default grid_sample and affine_grid behavior has changed "
/home/arhanni/anaconda3/envs/ogn/lib/python3.7/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448265233/work/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
QObject::moveToThread: Current thread (0x557aa5d2f580) is not the object's thread (0x557b53203880).
Cannot move to target thread (0x557aa5d2f580)

QObject::moveToThread: Current thread (0x557aa5d2f580) is not the object's thread (0x557b53203880).
Cannot move to target thread (0x557aa5d2f580)

QObject::moveToThread: Current thread (0x557aa5d2f580) is not the object's thread (0x557b53203880).
Cannot move to target thread (0x557aa5d2f580)
......

QObject::moveToThread: Current thread (0x557aa5d2f580) is not the object's thread (0x557b53203880).
Cannot move to target thread (0x557aa5d2f580)

Time: 00d 00h 00m 00s, num timesteps 0, FPS 0,
Rewards:
Losses:
Time: 00d 00h 00m 01s, num timesteps 10, FPS 8,
Rewards:
Losses:
Time: 00d 00h 00m 02s, num timesteps 20, FPS 9,
Rewards:
Losses:
Time: 00d 00h 00m 03s, num timesteps 30, FPS 9,
Rewards:
Losses:
Time: 00d 00h 00m 04s, num timesteps 40, FPS 9,
Rewards:
Losses:
...........

Following is what I have tried so far, without success.
I found the following issue reported on habitat-sim https://github.com/facebookresearch/habitat-sim/issues/1080.
I downloaded the 3DSceneGraph_tiny.zip and ran the tools/gen_gibson_semantics.sh script as specified here: https://github.com/facebookresearch/habitat-sim/blob/main/DATASETS.md. This script generated .ids and .scn files, which I copied to ~/Object-Goal-Navigation/data/scene_datasets/gibson_semantic/, but the visualization still shows a blank screen. The gen_gibson_semantics.sh script failed to generate the _semantic.ply files because it couldn't find the mesh.obj files, with the following error:

Allensville wrote 2557798 bytes wrote 27718 bytes ESP_CHECK failed: esp::logging::LoggingContext: No current logging context. tools/gen_gibson_semantics.sh: line 17: 237742 Aborted "${TOOLS_DIR}"/../build/utils/datatool/Datatool create_gibson_semantic_mesh "${OBJ_PATH}"/"${scene}"/mesh.obj "${OUT_PATH}"/"${scene}".ids "${OUT_PATH}"/"${scene}"_semantic.ply
The gibson_habitat_trainval.zip does not contain the .obj files.

I need the RGB observations to extract the semantic predictions and build my project on top of this. Any support would be appreciated!

AttributeError: 'Configuration' object has no attribute 'SCENE'

Hi
I am trying to reproduce your results; however, when I run the main.py file, I get the following error:
Traceback (most recent call last):
  File "/home/ma/rmoslemi/anaconda3/envs/habitat/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/ma/rmoslemi/anaconda3/envs/habitat/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/net/acadia13a/data/ramin/my_proj/object_navigation/envs/habitat/utils/vector_env.py", line 192, in _worker_env
    observations = env.reset()
  File "/net/acadia13a/data/ramin/my_proj/object_navigation/agents/sem_exp.py", line 62, in reset
    obs, info = super().reset()
  File "/net/acadia13a/data/ramin/my_proj/object_navigation/envs/habitat/objectgoal_env.py", line 321, in reset
    self.scene_name = self.habitat_env.sim.config.SCENE
AttributeError: 'Configuration' object has no attribute 'SCENE'
I0429 16:04:39.295415 23132 Simulator.cpp:49] Deconstructing Simulator
I0429 16:04:39.436203 23133 Renderer.cpp:34] Deconstructing Renderer

The error originates from:
obs, infos = envs.reset()

I took a look at envs/habitat/objectgoal_env.py and I could not find where self.habitat_env.sim.config.SCENE is defined for the ObjectGoal_Env class.

Just to add, I am using the stable versions of habitat-sim and habitat-lab. However, I do not think the error is related to the habitat versions.

Potential inconsistency in metric and reward computation

Why does starting_distance in ObjectGoal_Env include the object_boundary distance?

self.starting_distance = self.gt_planner.fmm_dist[self.starting_loc] \
    / 20.0 + self.object_boundary
self.prev_distance = self.starting_distance

The shortest path should only reach the object boundary, right? Not the object itself. This also propagates into the reward for action 0, since prev_distance includes the object boundary but curr_distance does not.

self.curr_distance = self.gt_planner.fmm_dist[curr_loc[0],
                                              curr_loc[1]] / 20.0
reward = (self.prev_distance - self.curr_distance) * \
    self.args.reward_coeff

The object_boundary should not be added to starting_distance. Happy to send a PR if this makes sense.
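
For clarity, the reporter's suggestion would amount to something like the following sketch (the reporter's proposed change, not a confirmed or merged fix):

# Sketch of the proposed change: use the same convention for both distances
# by not adding object_boundary to the starting distance.
self.starting_distance = self.gt_planner.fmm_dist[self.starting_loc] / 20.0
self.prev_distance = self.starting_distance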

Pitch angle destroys the semantic map

Hey folks,

I use the PR2 robot with your mapping examples and found that the additional camera tilt joint causes the map to get destroyed. Is there anything I can do to overcome this issue?

Transform map into the robot's frame?

Hey folks,

thanks for the awesome work here.
My question is as follows: is it possible to transform the map, which is given as input to the policy, into the agent's coordinate frame?

Thanks in advance !

Generating Ground Truth Semantic Map

Great work, and thanks for releasing your code. I am currently reproducing the work on the MP3D dataset only and would appreciate your responses to the following questions:

  1. Is the ground-truth semantic map for the MP3D environments also available, for training the model solely on the MP3D dataset?

  2. A follow-up to the previous question: any chance you might release the code for generating the ground-truth semantic map (referenced here: https://github.com/devendrachaplot/Object-Goal-Navigation/blob/master/envs/habitat/objectgoal_env.py#L112), for cases where we need to generate our own MP3D ground-truth semantic maps?

Thanks!

Issues installing detectron 2

Good evening. I have been trying to follow the installation procedure via the standard instructions, but I cannot manage to install detectron2 on an M1 Mac.

Has anyone succeeded in doing that?

request for help

Hi!
When I try to run "python test.py --agent random -n1 --num_eval_episodes 1 --auto_gpu_config 0", I get the following error:
ImportError: /home/.../miniconda3/envs/habitat/lib/python3.9/site-packages/_magnum.cpython-39-x86_64-linux-gnu.so: undefined symbol: _ZN6Assimp11AMFImporter16ParseNode_TexMapEb

Why is it so assigned here?

Hello, I don't understand why 'lmb' is used to assign a value to full_map directly here, given that 'lmb' will be reassigned for each scene e later on. It confuses me. Looking forward to your reply! Thanks!

[code screenshot]

Tips for extracting Semantic Mapping module as a baseline

Hi,

Thanks for the interesting work.

I would like to evaluate the Semantic Mapping module without the action planning/navigation part since the dataset I want to test already provides navigation.

As far as I've explored the project, there are two key related modules: (1) the Semantic_Mapping class, which does the local map and agent pose predictions, and (2) the Sem_Exp_Env_Agent class, which uses the pretrained SemanticPredMaskRCNN (detectron2) model for object detection and segmentation. I feel that the latter (2) is rather tightly coupled to the habitat environment-related parts.

What would be your tips/ideas for extracting the Semantic Mapping module from the codebase? Or do you think that there won't be any straightforward solution?

Would appreciate any kind of feedback.

Evaluate on custom dataset

Hello, thanks for sharing this work.

I was wondering if I could specify the scene to evaluate, rather than simply limiting the number of scenes evaluated simultaneously (I've tried to evaluate only the 1st and 2nd scenes because of my GPU limitation).

Furthermore, we want to test it in our own environments, beyond Gibson, since we can obtain 3D point clouds with our scanner. Could you give me some advice on how to do this, i.e., how to convert my 3D point clouds into the same format as Gibson so they can be used with the Object-Goal-Navigation project?

Any comments would be appreciated. Thanks!
