
robothor-challenge's Introduction


2021 RoboTHOR Object Navigation Challenge

Welcome to the 2021 RoboTHOR Object Navigation (ObjectNav) Challenge hosted at the CVPR'21 Embodied-AI Workshop. The goal of this challenge is to build a model/agent that can navigate towards a given object in a room using the RoboTHOR embodied-AI environment. Please follow the instructions below to get started.

Installation

If you are planning to evaluate an agent trained in AllenAct, you may simply pip install ai2thor==2.7.2, skip the following instructions, and follow our example for evaluating AllenAct baselines below instead.

Otherwise, to begin working on your own model you must have a GPU (required for 3D rendering).

Local Installation

Clone or fork this repository

git clone https://github.com/allenai/robothor-challenge
cd robothor-challenge

Install ai2thor (we assume you are using Python version 3.6 or later):

pip3 install -r requirements.txt
python3 robothor_challenge/scripts/download_thor_build.py

Run evaluation on random agent

python3 runner.py -a agents.random_agent -c ./challenge_config.yaml -d ./dataset -o ./random_metrics.json.gz --debug --nprocesses 1

This command runs inference with the random agent over the debug split. You can pass the args (--train, --val, and/or --test) or --submission instead to run this agent on other splits.
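For example, to evaluate the same agent on the validation split with more worker processes (the output filename here is just illustrative):

python3 runner.py -a agents.random_agent -c ./challenge_config.yaml -d ./dataset -o ./val_metrics.json.gz --val --nprocesses 4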

Docker Installation

If you prefer to use docker, you may follow these instructions instead:

Build the ai2thor-docker image

git clone https://github.com/allenai/ai2thor-docker
cd ai2thor-docker && ./scripts/build.sh && cd ..

Then, build the robothor-challenge image

git clone https://github.com/allenai/robothor-challenge
cd robothor-challenge && docker build -t robothor-challenge .

Run evaluation with random agent

EVAL_CMD="python3 runner.py -a agents.random_agent -c ./challenge_config.yaml -d ./dataset -o ./random_metrics.json.gz --debug --nprocesses 1"

docker run --privileged --env="DISPLAY" -v /tmp/.X11-unix:/tmp/.X11-unix:rw -v $(pwd):/app/robothor-challenge -it robothor-challenge:latest bash -c "$EVAL_CMD"

This command runs inference with the random agent over the debug split. You can pass the args (--train, --val, and/or --test) or --submission instead to run this agent on other splits.

You can update the Dockerfile and example script as needed to set up your agent.

After installing and running the demo, you should see log messages that resemble the following:

2020-02-11 05:08:00,545 [INFO] robothor_challenge - Task Start id:59 scene:FloorPlan_Train1_1 target_object:BaseballBat|+04.00|+00.04|-04.77 initial_position:{'x': 7.25, 'y': 0.910344243, 'z': -4.708334} rotation:180
2020-02-11 05:08:00,895 [INFO] robothor_challenge - Agent action: MoveAhead
2020-02-11 05:08:00,928 [INFO] robothor_challenge - Agent action: RotateLeft
2020-02-11 05:08:00,989 [INFO] robothor_challenge - Agent action: Stop

Submitting to the Leaderboard

We will be using an AI2 Leaderboard to host the challenge. The team with the best submission made by May 31st (midnight, anywhere on earth) will be announced at the CVPR'21 Embodied-AI Workshop and invited to produce a video describing their approach. You will be submitting your metrics file (e.g. submission_metrics.json.gz as below) for evaluation. During leaderboard evaluation, we will validate your results and compute several metrics (success rate, SPL, proximity-only success rate, proximity-only SPL, and episode length). Submissions will be ranked on the leaderboard by SPL on the test set.
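For reference, SPL here is the standard "Success weighted by Path Length" metric for ObjectNav (Anderson et al., 2018). A minimal sketch of the computation, using illustrative field names rather than the leaderboard's exact schema:

def spl(episodes):
    # Success weighted by Path Length over a set of episodes.
    # Assumed fields per episode: success (bool), path_length (distance the agent
    # actually traveled) and shortest_path_length (optimal distance).
    total = 0.0
    for ep in episodes:
        if ep["success"]:
            total += ep["shortest_path_length"] / max(ep["path_length"], ep["shortest_path_length"])
    return total / len(episodes)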

To generate a submission, use the following evaluation command:

python3 runner.py -a agents.your_agent_module -c ./challenge_config.yaml -d ./dataset -o ./submission_metrics.json.gz --submission --nprocesses 8

We have provided an example submission file for you to view. The episodes in this example have been evaluated using our baselines (50% by a random agent and 50% by our baseline AllenAct agent).

If you are evaluating an agent trained in AllenAct, please follow our example in Using AllenAct Baselines instead.

You can make your submission at the following URL: https://leaderboard.allenai.org/robothor_objectnav/submissions/public

Agent

In order to generate the metrics.json.gz file for your agent, your agent must subclass robothor_challenge.agent.Agent and implement the act method. Please place this agent in the agents/ directory. For an episode to be successful, the agent must be within 1 meter of the target object and the object must also be visible to the agent. To declare success, respond with the Stop action. If Stop is not sent within the maximum number of steps (500), the episode is considered failed and the next episode is initialized. The agent in agents/random_agent.py takes a random action at each step. You must also implement a build() function to specify how the agent class should be initialized. Be sure any dependencies required by your agent are included in $PYTHONPATH.

agents/random_agent.py

from robothor_challenge.agent import Agent
import random

ALLOWED_ACTIONS = ["MoveAhead", "RotateRight", "RotateLeft", "LookUp", "LookDown", "Stop"]

class SimpleRandomAgent(Agent):
    def reset(self):
        pass

    def act(self, observations):
        rgb = observations["rgb"]           # np.uint8 : 480 x 640 x 3
        depth = observations["depth"]       # np.float32 : 480 x 640 (default: None)
        goal = observations["object_goal"]  # str : e.g. "AlarmClock"

        action = random.choice(ALLOWED_ACTIONS)

        return action

def build():
    agent_class = SimpleRandomAgent
    agent_kwargs = {}
    # resembles SimpleRandomAgent(**{})
    render_depth = False
    return agent_class, agent_kwargs, render_depth
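
If your agent needs depth input, build() is also where you request it. A minimal sketch (the agent name and policy are placeholders, not part of this repository):

from robothor_challenge.agent import Agent

class MyDepthAgent(Agent):
    def reset(self):
        pass

    def act(self, observations):
        rgb = observations["rgb"]           # np.uint8 : 480 x 640 x 3
        depth = observations["depth"]       # np.float32 : 480 x 640 (rendered because render_depth=True)
        goal = observations["object_goal"]  # str : e.g. "AlarmClock"
        # ... run your policy on (rgb, depth, goal) here ...
        return "MoveAhead"

def build():
    # Returning render_depth=True asks the runner to render a depth frame for each observation.
    return MyDepthAgent, {}, True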

Dataset

The dataset is divided into the following splits:

Split    # Episodes    Files
Debug    4             dataset/debug/episodes/FloorPlan_Train1_1.json.gz
Train    108000        dataset/train/episodes/FloorPlan_Train*.json.gz
Val      1800          dataset/val/episodes/FloorPlan_Val*.json.gz
Test     2040          dataset/test/episodes/FloorPlan_test-challenge*.json.gz

where each file is a compressed json file corresponding to a list of dictionaries. Each element of the list corresponds to a single episode of object navigation.
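Each file can be loaded directly with gzip and json; a minimal sketch using the debug split listed above:

import gzip
import json

# Load the debug split: a list of episode dictionaries.
with gzip.open("dataset/debug/episodes/FloorPlan_Train1_1.json.gz", "rt") as f:
    episodes = json.load(f)

print(len(episodes), episodes[0]["object_type"])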

Episode Structure

Here is an example of the structure of a single episode in our training set.

{
    "id": "FloorPlan_Train1_1_AlarmClock_0",
    "scene": "FloorPlan_Train1_1",
    "object_type": "AlarmClock",
    "initial_position": {
        "x": 3.75,
        "y": 0.9009997248649597,
        "z": -2.25
    },
    "initial_orientation": 150,
    "initial_horizon": 30,
    "shortest_path": [
        { "x": 3.75, "y": 0.0045, "z": -2.25 },
        ... ,
        { "x": 9.25, "y": 0.0045, "z": -2.75 }
    ],
    "shortest_path_length": 5.57
}

The keys "shortest_path" and "shortest_path_length" are hidden from episodes in the test split.

Target Objects

The following (12) target object types exist in the dataset:

  • Alarm Clock
  • Apple
  • Baseball Bat
  • Basketball
  • Bowl
  • Garbage Can
  • House Plant
  • Laptop
  • Mug
  • Spray Bottle
  • Television
  • Vase

All the episodes for each split (train/val/test) can be found within dataset/. There is also a "debug" split available. Configuration parameters for the environment can be found within dataset/challenge_config.yaml. These are the same values that will be used for generating the leaderboard. You are free to train your model with whatever parameters you choose, but these parameters will be reset to the original values for leaderboard evaluation.
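A minimal sketch of inspecting these values in Python (assuming PyYAML is installed):

import yaml

# Controller initialization parameters used for leaderboard evaluation,
# e.g. rotateStepDegrees, visibilityDistance, gridSize.
with open("dataset/challenge_config.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["initialize"])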

Utility Functions

Once you've created your agent class, you can instantiate the RobothorChallenge class and load a dataset split (the imports below assume the package layout of this repository):

from robothor_challenge import RobothorChallenge
import agents.random_agent as agent_module

cfg = 'challenge_config.yaml'
agent_class, agent_kwargs, render_depth = agent_module.build()
r = RobothorChallenge(cfg, agent_class, agent_kwargs, render_depth=render_depth)
train_episodes, train_dataset = r.load_split('dataset', 'train')

You can move to points in the dataset by calling the following functions in the RobothorChallenge class:

To move to a random point in the dataset for a particular scene and object_type:

event = r.move_to_random_dataset_point(train_dataset, "FloorPlan_Train2_1", "Apple")

To move to a specific dataset point (useful if you load the dataset yourself):

datapoint = random.choice(train_dataset["FloorPlan_Train2_1"]["Apple"])
event = r.move_to_point(datapoint)

To move to a random point in the scene, as given by the GetReachablePositions Unity function:

event = r.move_to_random_point("FloorPlan_Train1_1", y_rotation=180)

All of these return an Event object containing the frame and metadata (see the documentation). This is the data you will likely use for training.
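A sketch of how these events might be used to collect training data (continuing the snippet above; event.frame and event.metadata are standard ai2thor Event fields):

# Collect a few observations for offline training.
frames, poses = [], []
for _ in range(100):
    event = r.move_to_random_dataset_point(train_dataset, "FloorPlan_Train2_1", "Apple")
    frames.append(event.frame)              # RGB frame at the sampled pose
    poses.append(event.metadata["agent"])   # agent position / rotation / cameraHorizon metadata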

Using AllenAct Baselines

We have built support for this challenge into the AllenAct framework. This support includes:

  1. Several CNN->RNN baseline model architectures, along with our best pretrained model checkpoint (trained for 300M steps), obtaining a test-set success rate of ~26%.
  2. Reinforcement/imitation learning pipelines for training with Distributed Decentralized Proximal Policy Optimization (DD-PPO) and DAgger.
  3. Utility functions for visualization and caching (to improve training speed).

For more information, or to see how to evaluate a trained AllenAct model, see here.

Converting AllenAct metrics to evaluation trajectories

When using AllenAct, it is generally more convenient to run evaluation within AllenAct rather than using the evaluation script we provide in this repository. When doing this evaluation, the metrics returned by AllenAct are in a somewhat different format than expected when submitting to our leaderboard. Because of this we provide the robothor_challenge/scripts/convert_allenact_metrics.py script to convert metrics produced by AllenAct to those expected by our leaderboard submission format.

export ALLENACT_VAL_METRICS=/path/to/metrics__val_*.json
export ALLENACT_TEST_METRICS=/path/to/metrics__test_*.json

python3 -m robothor_challenge.scripts.convert_allenact_metrics -v $ALLENACT_VAL_METRICS -t $ALLENACT_TEST_METRICS -o submission_metrics.json.gz

robothor-challenge's People

Contributors

apoorvkh, roozbehm, alvarohg, lucaweihs, mattdeitke, schmmd, dhruvbatra


robothor-challenge's Issues

Are these two images correctly rendered & is it possible to get real world image samples?

Hi, I am wondering if these two images from the RoboTHOR simulation environment are correctly rendered. The second image is rendered after rotating the camera 30 degrees. I think the shape of the table is a little weird and the table gets closer to the camera. Is it supposed to be like this? Here is the code to reproduce the images:

import ai2thor.controller
from PIL import Image

controller = ai2thor.controller.Controller(
    start_unity=True,
    width=640,
    height=480,
    scene='FloorPlan_Train1_1')

event = controller.step(
    dict(action="Teleport", x=8.511058, y=0.9009997, z=-1.57475924)
)
event = controller.step(dict(action="Rotate", rotation=270))
img1 = Image.fromarray(event.frame)
img1.show()

event = controller.step(dict(action="Rotate", rotation=300))
img2 = Image.fromarray(event.frame)
img2.show()


Am I setting the parameters of ai2thor.controller.Controller correctly?

(BTW: is it possible to release some image samples of the real environment taken by the LocoBot?)

Any help is appreciated! Thanks in advance.

Teleporting to 60+ degree CameraHorizon

Hi there, I'm currently using RoboTHOR in one of my projects, and I've encountered an issue with the Teleport function that I'd like to share. When teleporting to positions with a Horizon value of approximately 60, the behavior becomes quite unpredictable. Even minor fluctuations in the Horizon value lead to unexpected changes in the agent's position and sometimes rotation. Unfortunately, this behavior is not documented, which leads me to suspect it might be a bug.

Here's a code snippet illustrating the issue:

from ai2thor.controller import Controller
import random

controller = Controller(
    agentMode="locobot",
    visibilityDistance=1.5,
    scene="FloorPlan_Train1_2",
    gridSize=0.25,
    movementGaussianSigma=0,
    rotateStepDegrees=90,
    rotateGaussianSigma=0,
    renderDepthImage=False,
    renderInstanceSegmentation=False,
    width=300,
    height=300,
    fieldOfView=60
)

position = random.choice(controller.step(
    action="GetReachablePositions"
    ).metadata["actionReturn"])

rotation = {"x": 0, "y": 270, "z": 0}
horizon = 60.00001525878906   # actual value returned from the controller

metadata = controller.step(
    action="Teleport",
    position=position,
    rotation=rotation,
    horizon=horizon
).metadata["agent"]

print("Horizon:", horizon)
print(metadata["position"], metadata["rotation"], metadata["cameraHorizon"])
print('='*10)

horizon=59.9999
metadata = controller.step(
    action="Teleport",
    position=position,
    rotation=rotation,
    horizon=horizon
).metadata["agent"]

print("Horizon:", horizon)
print(metadata["position"], metadata["rotation"], metadata["cameraHorizon"])

output:

Horizon: 60.00001525878906
{'x': 2.25, 'y': 0.9009996652603149, 'z': -3.0} {'x': -0.0, 'y': 269.9995422363281, 'z': 0.0} -0.0
==========
Horizon: 59.9999
{'x': 4.0, 'y': 0.9009996652603149, 'z': -4.0} {'x': -0.0, 'y': 270.0, 'z': 0.0} 59.999908447265625

When the Horizon value is strictly less than 60, no unexpected results occur. However, as soon as the value exceeds 60, even by a small increment that I would expect to be clipped to 60, the problematic behavior appears.
I'm using AI2-THOR version 5.0.0.

Top-down Map

Hi, is there a way to get a top-down map so we can visualize the trajectory of the agent from a bird's-eye view in RoboTHOR? Thanks!
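(Not an official answer from this repo, but one commonly used approach in ai2thor is the ToggleMapView action; a hedged sketch:)

from ai2thor.controller import Controller

controller = Controller(scene="FloorPlan_Train1_1")
# Assumption: the ToggleMapView action is available in the ai2thor build being used;
# it switches the rendered frame to an orthographic top-down view of the scene.
event = controller.step(action="ToggleMapView")
top_down = event.frame  # bird's-eye RGB image; overlay agent positions to plot a trajectory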

error when installing nvidia driver using scripts/build.sh

I get this error message when I do:
cd robothor-challenge
./scripts/build.sh

I tried with sudo and added my user to the docker group.
Are the links working?

Step 10/13 : RUN NVIDIA_VERSION=$NVIDIA_VERSION /app/install_nvidia.sh
 ---> Running in 
--2020-03-17 00:14:00--  http://us.download.nvidia.com/XFree86/Linux-x86_64/440.48.02/NVIDIA-Linux-x86_64-440.48.02.run
Resolving us.download.nvidia.com (us.download.nvidia.com)...  
Connecting to us.download.nvidia.com (us.download.nvidia.com) connected.
HTTP request sent, awaiting response... 404 Not Found
2020-03-17 00:14:00 ERROR 404: Not Found.

--2020-03-17 00:14:00--  http://ai2-vision-nvidia.s3-us-west-2.amazonaws.com/NVIDIA-Linux-x86_64-440.48.02.run
Resolving ai2-vision-nvidia.s3-us-west-2.amazonaws.com (ai2-vision-nvidia.s3-us-west-2.amazonaws.com)...
Connecting to ai2-vision-nvidia.s3-us-west-2.amazonaws.com (ai2-vision-nvidia.s3-us-west-2.amazonaws.com)... connected.
HTTP request sent, awaiting response... 403 Forbidden
2020-03-17 00:14:00 ERROR 403: Forbidden.

Evaluation of random agent dies

Run evaluation on random agent
./scripts/evaluate_train.sh

2020-02-16 01:06:18,857 [INFO] robothor_challenge - Agent action: MoveBack
2020-02-16 01:06:18,870 [INFO] robothor_challenge - Agent action: RotateRight
2020-02-16 01:06:18,883 [INFO] robothor_challenge - Agent action: Stop
2020-02-16 01:06:18,898 [INFO] robothor_challenge - Task Start id:420 scene:FloorPlan_Train1_2 target_object:Apple|+01.98|+00.77|-01.75 initial_position:{'x': 3.0, 'y': 0.910344243, 'z': -1.75} rotation:270
2020-02-16 01:06:19,135 [INFO] robothor_challenge - Agent action: Stop
Traceback (most recent call last):
  File "example_agent.py", line 17, in <module>
    r.inference()
  File "/app/robothor_challenge/__init__.py", line 86, in inference
    episode_result['success'] = simobj['visible']
TypeError: 'NoneType' object is not subscriptable


System info: Fresh GCP instance n1-standard-4 (4 vCPUs, 15 GB memory) 1 x NVIDIA Tesla P4, Ubuntu 18; nvidia drivers and nvidia-docker installed.

A problem about the data.

In the data files train.json and val.json, the "y" value in "shortest_path" seems incorrect compared with the "y" in "initial_position". For example, in "Train_1_1_Apple_0", the "y" in "initial_position" is 0.9009997, but the "y" values in "shortest_path" are all 0.0103442669. Does the value of "y" have some special meaning? Which action leads to that value?

Running without GPU or on headless system

Hi, I am wondering if it is possible to run the training scripts without a GPU on the machine. I know it could be slow when running on the CPU. I was also trying to run the training scripts on headless AWS machines, and after downloading the data, the script does not move forward. So I was wondering if it is possible to run them on a headless system, since some Unity components need to be installed and images are rendered during training.

Thanks.

The number of target object in the scene.

Excuse me, we want to know whether there is only one instance of the target object in every scene, or whether there can be several objects of the same target object type.

The success judgement.

In the RoboTHOR challenge, an episode is considered successful only if the agent is within 1 meter of the target object and the object is also visible to the agent. But in the code 'robothor-challenge/__init__.py', it looks like the distance between the agent and the target object is not considered when judging whether the episode is successful. Will this affect the evaluation in the challenge, or do we need to add the condition to the code ourselves and submit it?
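(For reference, a sketch of the check described in the challenge text, using the standard ai2thor object metadata fields objectType, visible and distance; the official evaluation code may differ:)

def episode_success(event, object_type):
    # Success as described in the challenge: an object of the target type is both
    # visible and within 1 meter of the agent when Stop is issued.
    return any(
        obj["objectType"] == object_type and obj["visible"] and obj["distance"] <= 1.0
        for obj in event.metadata["objects"]
    )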

About the submission.

Excuse me, we want to know what we need to run in submission.sh. Do we need to run a Python file, or just implement the model and the act method?

Issue with Success Evaluation

Even when the object_type is in the agent's view and the agent is less than 1 m away from the object, success is evaluated as FALSE.

To give an example, in scene=FloorPlan_Train1_1 with object_type=Apple, at agent location position={'x': 2.069062, 'y': 0.9009997, 'z': -2.241056}, rotation={'x': 0.0, 'y': 329.689972, 'z': 0.0}, the apple is completely visible and less than 1 m away, yet success is evaluated (using this) as FALSE. For reference, I have also attached the agent's RGB observation.


Bug Report: xserver quotes

Hi, I was struggling to get xserver working in the docker install due to a bug.

Comparing the xserver code in this repo vs in ai2thor-docker, I noticed the quotes were inverted. I think the inversion causes the above bug (so reverting will fix the issue), and am posting in case it helps others.

How to ensure the robot's horizon is in a valid range?

Hi guys, I was playing with RoboTHOR ObjectNav tasks recently and they were really interesting! Thanks for your great work!

I have a few questions about the actions of the robot:

  1. I checked the example_submission file and found that the "success" of a "Stop" action can be either true or false. Why can the simple "Stop" action produce both true and false outcomes? Does the success of the "Stop" action indicate the final success of the entire trajectory? I don't think so, because I also noticed that when "Stop"'s "success" is false, the final result is guaranteed to be false; however, when "Stop"'s "success" is true, the final result can still be false.

  2. How do I guarantee that the robot's horizon stays in the valid range? I am aware that the expected horizon for the locobot is {-30, 0, +30}, as stated in the documentation. But how do I ensure that when the horizon is 30, another "LookDown" will not be performed? Or are those actions deemed invalid and not executed? I realize that horizons of 60 or -60 are possible; I just want to know how to ensure they do not appear in submissions. Thanks in advance!
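(One way to enforce this on the agent side is to gate look actions on the current cameraHorizon; a sketch, not official challenge code:)

def safe_look(controller, action):
    # Skip LookUp / LookDown actions that would push the camera horizon outside [-30, 30].
    # In ai2thor, cameraHorizon increases when looking down.
    horizon = controller.last_event.metadata["agent"]["cameraHorizon"]
    if action == "LookDown" and horizon >= 30:
        return controller.last_event
    if action == "LookUp" and horizon <= -30:
        return controller.last_event
    return controller.step(action=action)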

How to collect such a large dataset ( 110k Episodes )?

Hi,

Thanks for your great work! I wonder how you collected such a large dataset (110k episodes). Was it by tele-operation by experts, or by some other highly efficient automatic method?

Thanks for your attention, and I look forward to your kind response!

Issue with docker installation

I have tried ./scripts/build.sh; however, it fails to install the NVIDIA driver. From the log, it looks like it is unable to download files from the links mentioned in install_nvidia.sh.

This is the error I received

Resolving us.download.nvidia.com (us.download.nvidia.com)... 192.229.211.70, 2606:2800:21f:3aa:dcf:37b:1ed6:1fb
Connecting to us.download.nvidia.com (us.download.nvidia.com)|192.229.211.70|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-05-22 16:35:39 ERROR 404: Not Found.

--2020-05-22 16:35:39--  http://ai2-vision-nvidia.s3-us-west-2.amazonaws.com/NVIDIA-Linux-x86_64-418.116.00.run
Resolving ai2-vision-nvidia.s3-us-west-2.amazonaws.com (ai2-vision-nvidia.s3-us-west-2.amazonaws.com)... 52.218.242.105
Connecting to ai2-vision-nvidia.s3-us-west-2.amazonaws.com (ai2-vision-nvidia.s3-us-west-2.amazonaws.com)|52.218.242.105|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2020-05-22 16:35:39 ERROR 403: Forbidden.

Possible Bug Report of Teleport in Ver. 2.3.4

Hi! Previously, I found a possible bug in the Teleport function in the iTHOR dataset, which I posted as an issue in the allenai/ai2thor repo.

Later, I also found the same bug in the RoboTHOR dataset:

from ai2thor.controller import Controller

def get_state_from_event(event):
    x=event.metadata["agent"]["position"]["x"]
    y=event.metadata["agent"]["position"]["y"]
    z=event.metadata["agent"]["position"]["z"]
    rotation=event.metadata["agent"]["rotation"]["y"]
    horizon=event.metadata["agent"]["cameraHorizon"]
    return (x, y, z, rotation, horizon)

controller = Controller(
    scene="FloorPlan_Train1_1", gridSize=0.25,
    fieldOfView=90, cameraY=1.00)

event = controller.step(action="Teleport", x=2.50, y=1.00, z=-1.25)
print("Initial state: (x, y, z, rotation, horizon)={}".format(get_state_from_evenet(event)))
event = controller.step(action="Teleport", x=2.50, y=1.00, z=-1.00)
print("State after Teleport: (x, y, z, rotation, horizon)={}".format(get_state_from_evenet(event)))
print("Last Action Success:{}; Error Message:{}".format(
    event.metadata["lastActionSuccess"],
    event.metadata["errorMessage"]))

The agent is expected to teleport to (x=2.50, y=1.00, z=-1.00), but the script outputs:

Initial state: (x, y, z, rotation, horizon)=(2.5, 0.9009992, -1.25, 269.999542, 0.0)
State after Teleport: (x, y, z, rotation, horizon)=(2.5, 0.9009996, -1.25, 269.999542, 0.0)
Last Action Success:True; Error Message:

which possibly means that the Teleport action fails while no error message is returned.

Can anyone help out? This really affects my progress in the challenge.

A million thanks!

Question about the hardware in the testing phase

Hi! Here are two questions about the evaluation.

  • We want to know how many graphics cards our uploaded model can use and how much memory each card has during the test phase.
  • Is there a time limit for making decisions for our model?

Thanks!

Questions about challenge configuration in Test-Dev phase

Hi! In the released code, Controller parameters are specified in the file ./robothor-challenge/dataset/challenge_config.yaml, including

initialize:
    rotateStepDegrees: 30
    visibilityDistance: 1.0
    gridSize: 0.25
    ...

Now I can train/test the model under this set of parameters or customize them, e.g.:

initialize:
    rotateStepDegrees: 45
    visibilityDistance: 2.0
    gridSize: 0.3
    ...

Yet I wonder what scope of code we should submit to the eval AI system once it is provided. Could I customize this configuration file in the Test-Dev phase? Or are these parameters simply fixed for this challenge, so that we only submit a model which outputs a choice of actions to the eval AI system?

Your answer really matters for my plans going forward. Thanks for your help!

The return event of the function 'controller.step(action)'

Excuse me, we have a question about the function 'controller.step()'. When the agent performs an action, what will this function return if the agent collides with the environment or other objects? Does the function return the event with the agent's state at the collision position, or does it return an invalid state?
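(In practice, the event's lastActionSuccess and errorMessage metadata fields, as in the Teleport example above, indicate whether a step was blocked; a sketch:)

event = controller.step(action="MoveAhead")
if not event.metadata["lastActionSuccess"]:
    # The step was blocked; the event still carries the agent's (unchanged) state.
    print("Blocked:", event.metadata["errorMessage"])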

In addition to the provided observations, can we use extra info in the challenge?

Our team is interested in participating in the RoboTHOR Challenge. At runtime, in addition to the provided observations, our method would also require the wall structure, which we can extract from the 3D scan. We are wondering whether it is permissible to encode this wall-structure prior in the Agent class and submit it to the challenge? Thanks in advance!

A key problem in train.json and val.json

It seems that the objectId of the target object is assumed to be the same across all scenes as in FloorPlan_Train1_1, but I found that the same objects in different scenes actually have different objectIds, which would make our code fail and influence the results of training. In fact, the script 'example_agent.py' already fails.
