hssd's Introduction

HSSD: Habitat Synthetic Scene Dataset

This repository serves as a guide for training and evaluating ObjectNav agents on the HSSD, AI2THOR, and HM3D scene datasets using Habitat, and for reproducing the experiments presented in the HSSD paper.

Habitat Synthetic Scenes Dataset (HSSD): An Analysis of 3D Scene Scale and Realism Tradeoffs for ObjectGoal Navigation

(Teaser video: teaser_small.mp4)

About

We contribute the Habitat Synthetic Scenes Dataset, a dataset of 211 high-quality 3D scenes, and use it to test navigation agent generalization to realistic 3D environments. Our dataset represents real interiors and contains a diverse set of 18,656 models of real-world objects. We investigate the impact of synthetic 3D scene dataset scale and realism on the task of training embodied agents to find and navigate to objects (ObjectGoal navigation).

Prerequisites

To load HSSD and other scene datasets in simulation, and to train and evaluate embodied agents in them, you will need to install:

  1. Habitat-sim (v0.2.5)
conda install habitat-sim=0.2.5 -c conda-forge -c aihabitat
  2. Habitat-lab (v0.2.5)
git clone --branch v0.2.5 https://github.com/facebookresearch/habitat-lab.git
cd habitat-lab
pip install -e habitat-lab  # install habitat_lab
pip install -e habitat-baselines  # install habitat_baselines
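
To confirm that both libraries installed correctly, you can run a quick import check (a minimal sanity check; it only verifies that the packages import):

python -c "import habitat_sim, habitat, habitat_baselines; print('habitat-sim and habitat-lab OK')"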

Note: Even though the HSSD scene dataset is compatible with the latest habitat-sim and habitat-lab versions, the results in the paper were reported using the v0.2.5 branch of both Habitat libraries.

Scene datasets

HSSD:

HSSD is hosted on Hugging Face at huggingface.co/datasets/hssd/hssd-hab, where you can preview the dataset and find information about the folder structure, along with instructions on getting started.

To conveniently run the subsequent training and evaluation experiments, you can clone the dataset to the following path in your habitat-lab installation:

cd /path/to/habitat-lab
git clone https://huggingface.co/datasets/hssd/hssd-hab data/scene_datasets/hssd-hab
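
To verify that the scenes load, you can point habitat-sim directly at the dataset config. Below is a minimal sanity-check sketch (it assumes the clone path used above; 102343992 is one example scene ID from the dataset):

import habitat_sim

# Minimal simulator configuration pointing at the HSSD scene dataset.
sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.scene_dataset_config_file = "data/scene_datasets/hssd-hab/hssd-hab.scene_dataset_config.json"
sim_cfg.scene_id = "102343992"  # any scene ID from hssd-hab/scenes works

# An agent with no sensors is enough for a load test.
agent_cfg = habitat_sim.agent.AgentConfiguration()
sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))
print("Loaded scene:", sim_cfg.scene_id)
sim.close()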

(Optional) ProcTHOR-HAB:

You can find Habitat-compatible AI2THOR scene datasets (ProcTHOR, iTHOR, and RoboTHOR) here: huggingface.co/datasets/hssd/ai2thor-hab.

To conveniently run the subsequent training and evaluation experiments, you can clone the dataset to the following path in your habitat-lab installation:

cd /path/to/habitat-lab
git clone https://huggingface.co/datasets/hssd/ai2thor-hab data/scene_datasets/ai2thor-hab

(Optional) HM3D-semantics:

To download the HM3D scene dataset, refer to the instructions provided here.

ObjectNav episode datasets

To download episode datasets for HSSD, ProcTHOR-HAB, and HM3D-semantics, you will need to fetch the zipped files from the provided links and extract them to the corresponding paths specified below.

| Task | Scenes | Link | Extract path | Config to use | Archive size |
|------|--------|------|--------------|---------------|--------------|
| ObjectNav | HSSD-hab | objectnav_hssd-hab_v0.2.5.zip | data/datasets/objectnav/hssd-hab | datasets/objectnav/hssd-hab.yaml | 24 MB |
| ObjectNav | ProcTHOR-hab | objectnav_procthor-hab.zip | data/datasets/objectnav/procthor-hab | datasets/objectnav/procthor-hab.yaml | 755 MB |
| ObjectNav | HM3DSem-v0.2 | objectnav_hm3d_v2_locobot_multifloor.zip | data/datasets/objectnav/hm3d/v2/ | datasets/objectnav/hm3d_v2.yaml | 245 MB |
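
For example, after downloading the HSSD archive from the link above, extraction could look like this (a sketch; depending on the archive layout, you may need to adjust the target directory so that the splits land at the extract path listed in the table):

cd /path/to/habitat-lab
unzip objectnav_hssd-hab_v0.2.5.zip -d data/datasets/objectnav/hssd-hab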

Training and evaluation setup

Now that you have Habitat installed and the scene and episode datasets downloaded, you are all set to train and evaluate ObjectNav agents in simulation.

However, to reproduce specific experiments from the HSSD paper – with the specified agent embodiment, camera sensor resolution, and a CLIP visual encoder backbone – you will need Habitat to use the task and training configuration files provided in this repository.

For convenience, you can run the script below to move the necessary config files into the appropriate folders of your habitat-lab installation. This makes it significantly easier to invoke the train and eval commands using Habitat.

python setup.py --hab-lab-path /path/to/habitat-lab

Training and evaluation commands

Change directory to habitat-lab before running the subsequent commands.

cd /path/to/habitat-lab

Pre-train

You can pre-train an ObjectNav agent on HSSD, ProcTHOR-hab, or HM3D, using any variant of the following command:

python -u -m habitat_baselines.run --config-name=objectnav/hssd-200_ver_clip_{hssd-hab, procthor-hab, hm3d}.yaml

Checkpoints and tensorboard logs for training runs are saved here: /path/to/habitat-lab/data/training/objectnav.

Note: The above command will run training on 1 GPU. However, all the training experiments in the paper used 4 A40 GPUs and 16 CPU cores per GPU on a SLURM-based compute cluster. To schedule a similar training run, you can start with the sample batch script provided in the scripts/slurm directory of this repository and modify it as needed.
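
As a rough illustration, a batch script in the spirit of the provided one might look as follows (a hypothetical sketch, not the script shipped in scripts/slurm; the partition name is a placeholder, and resource flags vary across clusters):

#!/bin/bash
#SBATCH --job-name=hssd-objectnav
#SBATCH --gres=gpu:4                # paper experiments used 4 A40 GPUs
#SBATCH --ntasks-per-node=4         # one task per GPU
#SBATCH --cpus-per-task=16          # 16 CPU cores per GPU
#SBATCH --partition=YOUR_PARTITION  # placeholder

cd /path/to/habitat-lab
srun python -u -m habitat_baselines.run \
    --config-name=objectnav/hssd-200_ver_clip_hssd-hab.yaml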

Evaluate

You can evaluate trained models on the val datasets as follows:

python -u -m habitat_baselines.run --config-name=objectnav/hssd-200_ver_clip_{hssd-hab, procthor-hab, hm3d}.yaml habitat_baselines.evaluate=True

This will run evaluation using all training checkpoints. Eval performance metrics can also be visualized through the tensorboard logs saved in the path mentioned above. To also save videos of episodes from trained agents, you can modify the video_option flag in the config file passed above:

video_option: ["disk"]
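
Depending on the config schema of your habitat-baselines version, the same setting may also be available as a command-line override (an assumption based on the v0.2.5 structured configs, where video_option sits under habitat_baselines.eval):

python -u -m habitat_baselines.run --config-name=objectnav/hssd-200_ver_clip_hssd-hab.yaml habitat_baselines.evaluate=True habitat_baselines.eval.video_option='[disk]'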

Zero-shot evaluate on HM3D-semantics

To zero-shot evaluate models pre-trained above on HM3D-sem's val datasets, update the eval_ckpt_path_dir variable in the config files passed in the command below:

python -u -m habitat_baselines.run --config-name=objectnav/hssd-200_eval_zeroshot_{hssd-hab, procthor-hab}_to_hm3d.yaml habitat_baselines.evaluate=True

You can download weights for an HSSD-pretrained policy here.

Fine-tune on HM3D-semantics

To fine-tune models pre-trained on HSSD or ProcTHOR on the HM3D-sem training dataset:

  • Update config file habitat-baselines/habitat_baselines/config/objectnav/hssd-200_hm3d_finetune_ver_clip_{hssd-hab, procthor-hab}.yaml to specify the path to the pre-trained weights (a sketch of the relevant YAML fields appears at the end of this list):

    pretrained_weights: /path/to/weights_file
    
  • Finetune by running:

    python -u -m habitat_baselines.run --config-name=objectnav/hssd-200_hm3d_finetune_ver_clip_{hssd-hab, procthor-hab}.yaml
    
  • Evaluate fine-tuned models by running:

    python -u -m habitat_baselines.run --config-name=objectnav/hssd-200_hm3d_finetune_ver_clip_{hssd-hab, procthor-hab}.yaml habitat_baselines.evaluate=True habitat_baselines.load_resume_state_config=False
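
For reference, the pre-trained-weights fields typically live under the ddppo section of the RL config (a hedged sketch of the relevant YAML keys, not the exact contents of the provided config files):

habitat_baselines:
  rl:
    ddppo:
      pretrained: True  # load pretrained_weights before fine-tuning starts
      pretrained_weights: /path/to/weights_file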
    

Citing HSSD

If you use our dataset or find our work useful, please consider citing:

@article{khanna2023hssd,
    author={{Khanna*}, Mukul and {Mao*}, Yongsen and Jiang, Hanxiao and Haresh, Sanjay and Shacklett, Brennan and Batra, Dhruv and Clegg, Alexander and Undersander, Eric and Chang, Angel X. and Savva, Manolis},
    title={{Habitat Synthetic Scenes Dataset (HSSD-200): An Analysis of 3D Scene Scale and Realism Tradeoffs for ObjectGoal Navigation}},
    journal={arXiv preprint},
    year={2023},
    eprint={2306.11290},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

hssd's People

Contributors

luislofer89, mukulkhanna, sammaoys, sanjayharesh


hssd's Issues

Using HSSD with Habitat Lab

Hello! I'm trying to use HSSD with Habitat Lab, but am running into some issues. I'm running the following code:

import habitat
from habitat.config import read_write

config = habitat.get_config("benchmark/nav/objectnav/objectnav_hssd-hab.yaml", overrides=["habitat.dataset.split=val"])
split = "val"
with read_write(config):
    config.habitat.dataset.split = split
    config.habitat.dataset.data_path = f"./habitat-lab/data/datasets/objectnav/hssd-hab/{split}/{split}.json.gz"

env = habitat.Env(config=config)

to create a Habitat lab environment, but am getting the error:

AssertionError: ESP_CHECK failed: No Stage Attributes exists for requested scene 
'data/scene_datasets/hssd-hab/scenes/c/ProcTHOR-Val-682' in currently specified Scene 
Dataset `data/scene_datasets/ai2thor-hab/ai2thor-hab/ai2thor-hab.scene_dataset_config.json`. 
Likely the Scene Dataset Configuration requested was not found and so a new, empty Scene 
Dataset was created. Verify the Scene Dataset Configuration file name used.

From what I understand, the first path in the error is from appending the scene ID (c/ProcTHOR-Val-682) to the config.habitat.dataset.scenes_dir (which is data/scene_datasets/hssd-hab/scenes). This works for HM3D (their scene IDs are like hm3d/val/00877-4ok3usBNeis/4ok3usBNeis.basis.glb, which, when prepended with the scenes dir, gets you a path to that basis.glb file). However, that's not the convention that HSSD seems to take.

I thus had the following questions:

  • I am getting the config file objectnav_hssd-hab.yaml, but it seems like all the episodes discovered have prefix ProcTHOR (as the scenes are just all the ones from data/datasets/objectnav/hssd-hab/val/content). Is this intentional? Do they correspond to scenes in scene_datasets/hssd-hab or the optional scene_datasets/ai2thor-hab?
  • It seems like part of the issue is that, for HM3D, the config.habitat.dataset.scenes_dir joined with the scene ID gets the path to the .basis.glb file defining the scene. However, this does not seem to be the case for ProcTHOR. How do I go about resolving this issue?
  • Related to the above, I can't seem to find the equivalent glb file for each ProcTHOR scene. I printed out the deserialized contents of ProcTHOR-Val-0.json.gz and got the following:
goals_by_category
  ProcTHOR-Val-0_Bed [List of info of all beds in scene]
  ProcTHOR-Val-0_Chair [List of info of all chairs in scene]
  ...
episodes [List of all episodes in scene]
category_to_task_category_id
  Chair 0
  Bed 1
  ...
category_to_scene_annotation_category_id
  Chair 0
  Bed 1
  ...

Printing out each of the episodes, I get something like:

episode_id 0
scene_id c/ProcTHOR-Val-0
scene_dataset_config data/scene_datasets/ai2thor-hab/ai2thor-hab/ai2thor-hab.scene_dataset_config.json
additional_obj_config_paths []
start_position [-13.57502, 0.14062, 7.28813]
start_rotation [0, 0.51301, 0, 0.85839]
info
  geodesic_distance 1.05324, euclidean_distance 1.00184, closest_goal_object_id 98
goals []
start_room None
shortest_paths None
object_category Bed

However, none of this seems to specify which of the glb files to use. I guess scene_datasets/ai2thor-hab/ai2thor-hab/configs/stages/ProcTHOR/c/ProcTHOR-Val-0.stage_config.json does start with "../../../../assets/stages/ProcTHOR/c/ProcTHOR-Val-0.glb", but I'm not sure that's right, and I don't know how that can be found and used by Habitat Lab.

Any help with the above questions or general method for using Habitat Lab with HSSD would be extremely helpful. Please let me know if any additional information is required, and I'd be glad to provide it!

Git clone ai2thor-hab failed on Windows platform

(base) PS E:\EmbodiedAI\Habitat> git clone https://huggingface.co/datasets/hssd/ai2thor-hab
Cloning into 'ai2thor-hab'...
remote: Enumerating objects: 52316, done.
remote: Total 52316 (delta 0), reused 0 (delta 0), pack-reused 52316
Receiving objects: 100% (52316/52316), 45.44 MiB | 15.24 MiB/s, done.
Resolving deltas: 100% (29852/29852), done.
error: invalid path 'ai2thor-hab/assets/objects/FloorPlan201_physics-FP201:Cube.802.glb'
fatal: unable to checkout working tree
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'
(base) PS E:\EmbodiedAI\Habitat\ai2thor-hab> git checkout dev
error: invalid path 'ai2thor-hab/assets/objects/FloorPlan201_physics-FP201:Cube.802.glb'
error: invalid path 'ai2thor-hab/assets/objects/FloorPlan201_physics-FP201:Door.003.glb'
error: invalid path 'ai2thor-hab/assets/objects/FloorPlan201_physics-FP201:DoorFrame.glb'
error: invalid path 'ai2thor-hab/assets/objects/FloorPlan201_physics-FP201:polySurface1.glb'
error: invalid path 'ai2thor-hab/assets/objects/FloorPlan202_physics-FP202:Sphere.041.glb'
error: invalid path 'ai2thor-hab/assets/objects/FloorPlan203_physics-FP203:Cube.045.glb'
error: invalid path 'ai2thor-hab/assets/objects/FloorPlan203_physics-FP203:Cylinder.018.glb'
error: invalid path 'ai2thor-hab/assets/objects/FloorPlan203_physics-FP203:Cylinder.032.glb'
error: invalid path 'ai2thor-hab/assets/objects/FloorPlan203_physics-FP203:Door.001.glb'
error: invalid path 'ai2thor-hab/assets/objects/FloorPlan203_physics-FP203:Door.002.glb'
error: invalid path 'ai2thor-hab/assets/objects/FloorPlan203_physics-FP203:Door.glb'
error: invalid path 'ai2thor-hab/assets/objects/FloorPlan204_physics-FP203:Cylinder.032.glb'

Hi, this error seems to indicate that the Windows platform is not supported, perhaps because the original dataset was built on another OS? The asset file names contain `:` characters, which are invalid in Windows paths.

How to expand the categories shown in the semantic sensor

Dear authors, thanks for the great work.

While using the HSSD dataset with habitat-lab, I find that only 28 object categories appear in the semantic segmentation results. I tried to expand the category set to include other objects in the scene by:

  1. expanding the object list in data/scene_datasets/hssd-hab/semantics/hssd-hab_semantic_lexicon.json
  2. adding "semantic_id" to the object config files (data/scene_dataset/hssd-hab/objects/*.object_config.json)

However, these methods do not seem to work (I can still only get 28 categories in the semantic result).
How can I do this? Looking forward to your reply, thanks!

ProcThor conversion steps

Hi,
Thanks for open sourcing your work!

I have some modified ProcThor examples (i.e., generated by adjusting ProcThor's parameters).
Could you please provide some information about how to convert them into Habitat-Lab's format?

How to obtain fine-grained semantic labels using the semantic_sensor?

Dear authors,

Thank you for your wonderful dataset.
I found that only 28 categories of objects are used for objectnav, and their labels are defined in data/scene_datasets/hssd-hab/semantics/hssd-hab_semantic_lexicon.json. But in data/scene_datasets/hssd-hab/semantics/objects.csv, 466 categories are labeled. When I use a semantic_sensor in habitat-lab, I find that only the 28 categories of objects can be labeled EVEN IF I create my own hssd-hab_semantic_lexicon.json file with 466 categories of objects.

This is my hssd-hab_semantic_lexicon.json:
[screenshot]

These are the rgb and semantic observations:
[screenshots]

It seems that the semantic sensor still uses the original hssd-hab_semantic_lexicon.json rather than mine (even though I deleted the original file), so only part of the objects in the rgb image are segmented, and their semantic_id values still come from the original hssd-hab_semantic_lexicon.json.

I would like to know how I can modify the file so that the semantic_sensor detects other objects in the HSSD dataset.
I checked the official repos for habitat-lab and habitat-sim, but I couldn't find anything about the semantic_sensor in the HSSD dataset.

How to detect collisions with small items like chairs or vases

Hi, thanks for your wonderful work! We are now using it in our project.
While testing, we find that the agent sometimes goes straight through barriers, especially small items like chairs and vases, or even a bed. So I wonder whether HSSD supports such fine-grained collision detection. If so, how can I turn it on? Thank you so much :)

Articulated Object Interactions

Hi, great work on this dataset, very exciting.

In its social rearrangement task, the Habitat 3.0 paper shows that it is possible to interact with objects in this dataset for pick&place.

I was wondering if it is also possible to interact with the articulated objects in this dataset, in particular in two ways:

  • is it possible to open & close the doors (even just with magic simulator actions would be fine)?
  • is it possible to open any of the other articulated objects in the scenes, such as drawers and cabinets?

Thanks a lot for your help.

Load scene in blender/unreal

Hi, first of all great work, and thanks for sharing and releasing all the data.

I would be interested in using your dataset in my research and trying it out. However, I need to load the scenes in Blender/Unreal.
Is that possible?

I'm also curious: how does lighting work in your dataset? Do you define light sources/colors, or is it just a plain diffuse lighting condition?

Example code for object navigation

Hi, I'm new to object navigation tasks in Habitat. Can someone link example code for an object navigation task (not pointnav, and preferably using hssd)?
Thanks

Not in the authorized list! How can I get access to this dataset?

Hello, this is really awesome work, and thanks for sharing the dataset.

I really want to use this dataset for my research, but I just can't clone it for some reason. The details are as follows.

$ git clone https://huggingface.co/datasets/hssd/hssd-scenes
Cloning into 'hssd-scenes'...
remote: Access to dataset hssd/hssd-scenes is restricted and you are not in the authorized list. Visit https://huggingface.co/datasets/hssd/hssd-scenes to ask for access.
fatal: unable to access 'https://huggingface.co/datasets/hssd/hssd-scenes/': The requested URL returned error: 403

How can I download this dataset?

Lack of baked lighting

Hello, I used the habitat-viewer to load the dataset, but it seems to lack baked lighting. I am using habitat-lab v0.2.5, and I am wondering whether this is because I am not using version 0.2.3_hssd. Hoping for your reply!

Rendered images with black walls

Hi, I am using the codebase to run objectnav on the hssd / procthor datasets. In the procthor dataset, the rendered image sometimes has black walls and black counters, like this:
[screenshot]

In the hssd dataset, the rendered image usually has gray walls and black objects, like these:
[screenshots]
The image quality is not very good.

I wonder whether this is normal or something is wrong?

How to obtain an object list and object positions in a scene

Dear authors,
Thanks for your wonderful dataset!
When I run habitat-lab with the HSSD dataset, I cannot find a function to obtain the list of loaded objects in the current scene. How can I obtain the object list and the objects' positions in the current scene?
Furthermore, how can I obtain other metadata about the scene?
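
For anyone with the same question, below is a minimal sketch of one way to enumerate the rigid objects in a loaded scene (it assumes direct access to the underlying habitat_sim.Simulator, e.g. env.sim in habitat-lab, and uses habitat-sim's rigid object manager):

# `sim` is a habitat_sim.Simulator (in habitat-lab, accessible as env.sim).
rom = sim.get_rigid_object_manager()
for handle in rom.get_object_handles():
    obj = rom.get_object_by_handle(handle)
    # obj.translation is the object's position in world coordinates
    print(handle, obj.translation)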

Previewing HSSD dataset scenes in Habitat-2.0

Hi HSSD team,
Thank you for releasing a great dataset. I wanted to preview the HSSD scenes in Habitat-sim similar to the MP3D demo.

I arranged the MP3D scenes inside habitat-sim as follows:

habitat-sim
├── data
│      ├── scene_datasets
│      │      ├── mp3d
│      │      │     ├── 17DRP5sb8fy
│      │      │     ├── mp3d.scene_dataset_config.json
│      │      │     └── MP_TOS.pdf

When I run this demo from the docs, it produces the RGB, instance segmentation, and depth images (as expected).

import habitat_sim

import random
import matplotlib.pyplot as plt
import numpy as np

from utils import make_cfg, display_sample

test_scene    = "data/scene_datasets/mp3d/17DRP5sb8fy/17DRP5sb8fy.glb"
scene_dataset = "data/scene_datasets/mp3d/mp3d.scene_dataset_config.json"
max_frames    = 2

sim_settings  = {
    "width": 256,  # Spatial resolution of the observations
    "height": 256,
    "scene": test_scene,  # Scene path
    "scene_dataset": scene_dataset,
    "default_agent": 0,
    "sensor_height": 1.5,  # Height of sensors in meters
    "color_sensor": True,  # RGB sensor
    "semantic_sensor": True,  # Semantic sensor
    "depth_sensor": True,  # Depth sensor
    "seed": 1,
}

cfg   = make_cfg(sim_settings)
sim   = habitat_sim.Simulator(cfg)
scene = sim.semantic_scene

random.seed(sim_settings["seed"])
sim.seed(sim_settings["seed"])

# Set agent state
agent       = sim.initialize_agent(sim_settings["default_agent"])
agent_state = habitat_sim.AgentState()
agent_state.position = np.array([0.0, 0.072447, 0.0])
agent.set_state(agent_state)

# Get agent state
agent_state = agent.get_state()
print("agent_state: position", agent_state.position, "rotation", agent_state.rotation)

total_frames = 0
action_names = list(
    cfg.agents[
        sim_settings["default_agent"]
    ].action_space.keys()
)

while total_frames < max_frames:
    action = random.choice(action_names)
    print("action", action)
    observations = sim.step(action)
    rgb = observations["color_sensor"]
    semantic = observations["semantic_sensor"]
    depth = observations["depth_sensor"]

    display_sample(rgb, semantic, depth, instance= True, show= True)

    total_frames += 1

[output figure]

I wanted a similar preview of HSSD scenes. I downloaded the HSSD datasets from Hugging Face and placed them as follows:

habitat-sim
├── data
│      ├── hssd
│      │      ├── ai2thor-hab
│      │      ├── ai2thorhab-uncompressed
│      │      ├── hssd-hab
│      │      ├── hssd-models
│      │      ├── hssd-scenes
│      │
│      ├── scene_datasets
│      │      ├── mp3d

I then changed the values of the variables to:

test_scene   = "data/hssd/hssd-scenes/scenes/102343992.glb" 
scene_dataset = "data/hssd/hssd-hab/hssd-hab.scene_dataset_config.json"

I then ran the same demo file:

python demo.py

However, this time the code crashes with the following error message:

AssertionError: ESP_CHECK failed: Error loading general mesh data from file data/hssd/hssd-scenes/scenes/102343992.glb

How should one arrange the HSSD scenes inside the habitat-sim directory and what should be the values of test_scene and scene_dataset in the above demo file for previewing?

PS: My environment information is as follows:
[screenshot]

Files do not exist when loading hssd in habitat-lab

Dear authors,
I downloaded the dataset from your huggingface repo. Then, when I run python -u -m habitat_baselines.run --config-name=objectnav/hssd-200_ver_clip_hssd-hab.yaml, many errors appear, as shown in the image below.
[screenshot]

It seems like some files do not exist in the dataset.
I wonder if I downloaded the full dataset.

[Using HSSD with Habitat] Render asset template handle does not correspond to any existing file or primitive render asset, so registration is aborted.

Hi! I'm trying to preview the HSSD dataset scenes with habitat-lab (0.2.5) and habitat-sim (0.2.5) following the demo.py. However, when I try to instantiate the simulator:
sim = habitat_sim.Simulator(cfg)
the code crashes with the following error message, while it doesn't report any error when I view the scenes of MP3D.

>>> sim = habitat_sim.Simulator(cfg)
Renderer: NVIDIA TITAN X (Pascal)/PCIe/SSE2 by NVIDIA Corporation
OpenGL version: 4.6.0 NVIDIA 525.147.05
Using optional features:
    GL_ARB_vertex_array_object
    GL_ARB_separate_shader_objects
    GL_ARB_robustness
    GL_ARB_texture_storage
    GL_ARB_texture_view
    GL_ARB_framebuffer_no_attachments
    GL_ARB_invalidate_subdata
    GL_ARB_texture_storage_multisample
    GL_ARB_multi_bind
    GL_ARB_direct_state_access
    GL_ARB_get_texture_sub_image
    GL_ARB_texture_filter_anisotropic
    GL_KHR_debug
    GL_KHR_parallel_shader_compile
    GL_NV_depth_buffer_float
Using driver workarounds:
    no-forward-compatible-core-context
    nv-egl-incorrect-gl11-function-pointers
    no-layout-qualifiers-on-old-glsl
    nv-zero-context-profile-mask
    nv-implementation-color-read-format-dsa-broken
    nv-cubemap-inconsistent-compressed-image-size
    nv-cubemap-broken-full-compressed-image-query
    nv-compressed-block-size-in-bits
[22:00:06:793734]:[Error]:[Metadata] StageAttributesManager.cpp(87)::registerObjectFinalize : Render asset template handle `stages/102343992` specified in stage template with handle :stages/102343992does not correspond to any existing file or primitive render asset, so StageAttributes registration is aborted.
[22:00:06:794251]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `1efdc3d37dfab1eb9f99117bb84c59003d684811` specified in object template with handle : `1efdc3d37dfab1eb9f99117bb84c59003d684811` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:794330]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `1efdc3d37dfab1eb9f99117bb84c59003d684811` specified in object template with handle : `1efdc3d37dfab1eb9f99117bb84c59003d684811` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:794640]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `356ce92bc38493578fdf63b7f3edfaea8001c849` specified in object template with handle : `356ce92bc38493578fdf63b7f3edfaea8001c849` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:794963]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `27dd125228e0872193eb34907b03ef9caf98289f` specified in object template with handle : `27dd125228e0872193eb34907b03ef9caf98289f` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:795037]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `27dd125228e0872193eb34907b03ef9caf98289f` specified in object template with handle : `27dd125228e0872193eb34907b03ef9caf98289f` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:795394]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `b4197540cbbe2787185cd37ab9f1524b883cbdbe` specified in object template with handle : `b4197540cbbe2787185cd37ab9f1524b883cbdbe` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:795468]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `b4197540cbbe2787185cd37ab9f1524b883cbdbe` specified in object template with handle : `b4197540cbbe2787185cd37ab9f1524b883cbdbe` does not correspond to any existing file or primitive render asset, so registration is aborted.
......
[22:00:06:842435]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `9c61e7e98634fda17cd2276857cea3c03ad1024d` specified in object template with handle : `9c61e7e98634fda17cd2276857cea3c03ad1024d` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:842489]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `9c61e7e98634fda17cd2276857cea3c03ad1024d` specified in object template with handle : `9c61e7e98634fda17cd2276857cea3c03ad1024d` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:842538]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `9c61e7e98634fda17cd2276857cea3c03ad1024d` specified in object template with handle : `9c61e7e98634fda17cd2276857cea3c03ad1024d` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:842807]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `3753-0` specified in object template with handle : `3753-0` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:842847]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `3753-0` specified in object template with handle : `3753-0` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:843072]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `204-0` specified in object template with handle : `204-0` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:843297]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `220-0` specified in object template with handle : `220-0` does not correspond to any existing file or primitive render asset, so registration is aborted.
......
[22:00:06:851867]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `3753-3` specified in object template with handle : `3753-3` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:852086]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `218-18` specified in object template with handle : `218-18` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:852332]:[Error]:[Metadata] ObjectAttributesManager.cpp(258)::registerObjectFinalize : Render asset template handle : `218-19` specified in object template with handle : `218-19` does not correspond to any existing file or primitive render asset, so registration is aborted.
[22:00:06:853185]:[Warning]:[Metadata] MetadataMediator.cpp(489)::getFilePathForHandle : <getSemanticSceneDescriptorPathByHandle> : Unable to find file path for  , so returning empty string.
[22:00:06:855653]:[Error]:[Core] ManagedContainerBase.h(330)::checkExistsWithMessage : <Stage Template>::getObjectCopyByHandle:Stage Template managed object handle `` not found in ManagedContainer, so aborting.

My file structure is:

habitat-lab
├── data
│      ├── scene_datasets
│      │      ├── hssd-hab
│      │      │     ├── hssd-hab.scene_dataset_config.json
│      │      │     ├── hssd-hab-uncluttered.scene_dataset_config.json
│      │      │     ├── objects
│      │      │     │     └── ...
│      │      │     ├── scenes
│      │      │     │     └── ...
│      │      │     └── stages
│      │      │           └── ...

Please also find my code below.

import habitat_sim
import numpy as np
import matplotlib.pyplot as plt
import random
import os

test_scene    = "./data/scene_datasets/hssd-hab/scenes/102343992.scene_instance.json" 
scene_dataset = "./data/scene_datasets/hssd-hab/hssd-hab.scene_dataset_config.json"

rgb_sensor = True  # @param {type:"boolean"}
depth_sensor = True  # @param {type:"boolean"}
semantic_sensor = True  # @param {type:"boolean"}

sim_settings = {
    "width": 256,  # Spatial resolution of the observations
    "height": 256,
    "scene": test_scene,  # Scene path
    "scene_dataset": scene_dataset,
    "default_agent": 0,
    "sensor_height": 1.5,  # Height of sensors in meters
    "color_sensor": rgb_sensor,  # RGB sensor
    "depth_sensor": depth_sensor,  # Depth sensor
    "semantic_sensor": semantic_sensor,  # Semantic sensor
    "seed": 1,  # used in the random navigation
    "enable_physics": False,  # kinematics only
}

def make_cfg(settings):
    sim_cfg = habitat_sim.SimulatorConfiguration()
    sim_cfg.gpu_device_id = 0
    sim_cfg.scene_id = settings["scene"]
    sim_cfg.enable_physics = settings["enable_physics"]
    # Note: all sensors must have the same resolution
    sensors = {
        "color_sensor": {
            "sensor_type": habitat_sim.SensorType.COLOR,
            "resolution": [settings["height"], settings["width"]],
            "position": [0.0, settings["sensor_height"], 0.0],
        },
        "depth_sensor": {
            "sensor_type": habitat_sim.SensorType.DEPTH,
            "resolution": [settings["height"], settings["width"]],
            "position": [0.0, settings["sensor_height"], 0.0],
        },
        "semantic_sensor": {
            "sensor_type": habitat_sim.SensorType.SEMANTIC,
            "resolution": [settings["height"], settings["width"]],
            "position": [0.0, settings["sensor_height"], 0.0],
        },
    }
    sensor_specs = []
    for sensor_uuid, sensor_params in sensors.items():
        if settings[sensor_uuid]:
            sensor_spec = habitat_sim.CameraSensorSpec()
            sensor_spec.uuid = sensor_uuid
            sensor_spec.sensor_type = sensor_params["sensor_type"]
            sensor_spec.resolution = sensor_params["resolution"]
            sensor_spec.position = sensor_params["position"]
            sensor_specs.append(sensor_spec)
    # Here you can specify the amount of displacement in a forward action and the turn angle
    agent_cfg = habitat_sim.agent.AgentConfiguration()
    agent_cfg.sensor_specifications = sensor_specs
    agent_cfg.action_space = {
        "move_forward": habitat_sim.agent.ActionSpec(
            "move_forward", habitat_sim.agent.ActuationSpec(amount=0.25)
        ),
        "turn_left": habitat_sim.agent.ActionSpec(
            "turn_left", habitat_sim.agent.ActuationSpec(amount=30.0)
        ),
        "turn_right": habitat_sim.agent.ActionSpec(
            "turn_right", habitat_sim.agent.ActuationSpec(amount=30.0)
        ),
    }
    return habitat_sim.Configuration(sim_cfg, [agent_cfg])

cfg = make_cfg(sim_settings)
sim = habitat_sim.Simulator(cfg)

May I know if you have any idea why this error occurs? Thanks!
