peteanderson80 / Matterport3DSimulator
AI Research Platform for Reinforcement Learning from Real Panoramic Images.
License: Other
I'm trying to install the M3D Simulator on Ubuntu 16.04.5 LTS. I encountered this undefined reference error when running make:
Scanning dependencies of target MatterSim
[ 9%] Building CXX object CMakeFiles/MatterSim.dir/src/lib/MatterSim.cpp.o
[ 18%] Building CXX object CMakeFiles/MatterSim.dir/src/lib/Benchmark.cpp.o
[ 27%] Linking CXX shared library libMatterSim.so
[ 27%] Built target MatterSim
Scanning dependencies of target random_agent
[ 36%] Building CXX object CMakeFiles/random_agent.dir/src/driver/random_agent.cpp.o
[ 45%] Linking CXX executable random_agent
libMatterSim.so: undefined reference to `Json::Value::asString[abi:cxx11]() const'
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/random_agent.dir/build.make:131: random_agent] Error 1
make[1]: *** [CMakeFiles/Makefile2:68: CMakeFiles/random_agent.dir/all] Error 2
make: *** [Makefile:84: all] Error 2
I've installed a bunch of dependencies locally since I don't have access to root, and I'm using a conda environment to manage some packages. My understanding of the issue is that some packages were built against different versions of the libstdc++ C++11 ABI, and that reverting to the old ABI by setting _GLIBCXX_USE_CXX11_ABI=0 should resolve the issue (see here). But adding set(_GLIBCXX_USE_CXX11_ABI 0) to CMakeLists.txt doesn't help (presumably because set() only defines a CMake variable; the macro would need to reach the compiler, e.g. via add_definitions(-D_GLIBCXX_USE_CXX11_ABI=0)). I'm not sure how to proceed next.
Hi,
I am running your python demo as instructed.
python src/driver/driver.py
But the rendered image is pretty weird: it's all red, as below (no error reported). I have tested on Ubuntu 14.04 and macOS 10.13. Both have the same result. Can you please provide some hints on solving this issue? Did I miss something?
When I run the Python demo, I get "No module named MatterSim". How can I solve it?
Hi, if we want to use our own image extractor, how do we get images (all 36 of them) from the simulator for a given viewpoint?
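One way to approach this: the simulator discretizes each panorama into 12 headings at each of 3 elevations. A sketch of enumerating the 36 (heading, elevation) pairs you would then feed to the simulator, one render per pair (the 30-degree increments match the R2R convention; verify against your build):

```python
import math

def view_angles():
    """Enumerate the 36 discrete views of a panorama:
    12 headings in 30 degree steps at elevations -30, 0, +30 degrees."""
    angles = []
    for elevation in (-math.pi / 6, 0.0, math.pi / 6):
        for head_idx in range(12):
            heading = head_idx * math.pi / 6  # 30 degree increments
            angles.append((heading, elevation))
    return angles

# Each (heading, elevation) pair would then be passed to the simulator,
# e.g. via newEpisode(scanId, viewpointId, heading, elevation), and the
# rendered frame read back from the returned state's rgb field.
```

The loop only produces the angle grid; wiring it to the simulator's episode/state calls is left to the caller since API signatures differ between versions.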
Shaders are loaded using relative paths. This causes issues when the simulator is called from a different working directory; it was always rendering red frames because of this.
Matterport3DSimulator/src/lib/MatterSim.cpp
Line 176 in 229af69
I have a simple solution: add a shadersPath member to the MatterSim class, and define a setShadersPath() method to assign the variable. The shader paths would then be built as shadersPath + "/vertex.sh" and shadersPath + "/fragment.sh". I can send a pull request if this is acceptable.
Would be nice to have some visual indication of where we can jump to, much like the web version.
Maybe we could return an empty or all-zero image instead. But really, if you're doing this, just don't.
Hi, can you explain a bit how to load the Matterport3D mesh data for top-down visualization? I see an instruction at ronghanghu/speaker_follower#8; however, I can't find the mesh_name.json anywhere.
I was able to successfully perform a local installation of MatterSim without Docker. However, running driver.py threw the following error:
Traceback (most recent call last):
File "src/driver/driver.py", line 22, in <module>
sim.initialize()
RuntimeError: EGL error 0x3001 at eglInitialize
My system configurations are as follows:
Ubuntu 16.04.5 LTS
Nvidia-driver version: 384.111
Cuda 9.0
CUDNN v7.1
We don't yet restrict this list based on locations that are within the left/right bounds of the view.
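A sketch of the missing check, as a pure-Python restatement (all angles in radians; the wrap-to-[-π, π] step is the key detail): a location lies within the left/right bounds of the view when its relative heading falls inside ±HFOV/2.

```python
import math

def within_horizontal_fov(location_heading, camera_heading, hfov):
    """True if a navigable location lies within the camera's horizontal
    field of view. hfov is the full field-of-view angle in radians."""
    # Wrap the relative heading into [-pi, pi] before comparing.
    rel = (location_heading - camera_heading + math.pi) % (2 * math.pi) - math.pi
    return abs(rel) <= hfov / 2
```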
Right now it doesn't behave as the comments claim. For example, one can change the configuration after calling init(), and it will crash if newEpisode is not called before the first makeAction. :)
How do I get the 3D position in world coordinates?
In the spec these are deltas. As implemented, they're absolute.
Hi, downloading the Matterport3D dataset with their script directly is difficult for me due to repeated disconnections. Could you share the folder structure of all the matterport_skybox_images so that I can manually download every necessary zip file with a more robust download tool? Thank you so much!
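For what it's worth, the simulator's own skybox-loading error messages suggest the layout below (treat the exact pattern as an assumption and verify against one scan before scripting a bulk download):

```python
def skybox_paths(scan_id, viewpoint_id, root="data/v1/scans"):
    """Paths of the six skybox faces for one viewpoint, following the
    layout the simulator reports when files are missing:
    <root>/<scan>/matterport_skybox_images/<viewpoint>_skybox<i>_sami.jpg"""
    return [
        "{}/{}/matterport_skybox_images/{}_skybox{}_sami.jpg".format(
            root, scan_id, viewpoint_id, i)
        for i in range(6)
    ]
```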
Hi, access denied to download the R2R Dataset. could you please fix this?
--2019-02-24 19:23:10-- https://storage.googleapis.com/bringmeaspoon/R2Rdata/R2R_test.json
Resolving storage.googleapis.com (storage.googleapis.com)... 216.58.199.80, 2404:6800:4006:804::2010
Connecting to storage.googleapis.com (storage.googleapis.com)|216.58.199.80|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-02-24 19:23:11 ERROR 403: Forbidden.
Thanks!
Suggestion: it would aid reproducibility and adoption if there were a Dockerfile/container for this simulator.
Probably means a whole bunch of geometry calculations need to be fixed.
Hi, if we want to use our own image extractor, how do we get images from the simulator for a given viewpoint?
It seems that for scan 8WUmhLawc2A, instr_id 6825_2, the robot cannot reach the next viewpoint 01b439d39a8f412fa1837be7afb45254 from viewpoint 550d66ef28114bef8525d3a2d6db9cd2 by adjusting heading and elevation. The viewpoint 01b439d39a8f412fa1837be7afb45254 may appear in the navigableLocations lists of the 36 heading/elevation combinations, but its index is never 1. Hence it cannot be reached by adjusting the heading and elevation angles and then moving forward.
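For anyone debugging this kind of reachability issue, the heading needed to face a target can be derived from the relative (x, y) position; a sketch, assuming the clockwise-from-the-y-axis heading convention used in the R2R env code:

```python
import math

def heading_to_target(agent_xy, target_xy):
    """Heading (radians, measured clockwise from the y axis) that points
    the agent at the target, given 2D world positions."""
    dx = target_xy[0] - agent_xy[0]
    dy = target_xy[1] - agent_xy[1]
    heading = math.pi / 2 - math.atan2(dy, dx)
    return heading % (2 * math.pi)
```

Turning to this heading (and matching elevation) before moving forward is how the R2R environment makes a neighbouring viewpoint land at index 1 of navigableLocations.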
Hi,
I am running the driver.py file following your instructions. How can I adjust the camera heading or elevation, or navigate to another viewpoint from here? I did not see the arrows as expected. Also, where should I key in the navigation numbers to move to another viewpoint?
BTW, it would be great if you could provide some simple rendering code to help us play with the environment.
Thank you very much.
Zuo
Hi,
It is stated in the paper that the connectivity graph is constructed by ray tracing between viewpoints in the Matterport3D scene meshes. Could you elaborate on this?
Thank you.
When I try to build after
sudo apt-get install libopencv-dev python-opencv freeglut3 freeglut3-dev libglm-dev libjsoncpp-dev doxygen libosmesa6-dev libosmesa6
as given in the README, cmake and make complain about missing packages. After I add cmake libglew-dev libpython2.7-dev to the list, the build goes through, but I get the following error when testing:
> build/mattersim_main
(process:30174): Gtk-WARNING **: 23:19:46.291: Locale not supported by C library.
Using the fallback 'C' locale.
OpenCV Error: No OpenGL support (Library was built without OpenGL support) in cvNamedWindow, file /build/opencv-L2vuMj/\
opencv-3.2.0+dfsg/modules/highgui/src/window_gtk.cpp, line 1064
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/highgui/src/window_gtk.cpp:1064: error: (-218) Library was bu\
ilt without OpenGL support in function cvNamedWindow
Does this mean I have to build OpenCV from source, and that the libopencv-dev package from apt won't work? (I am using Ubuntu 18.04.1 LTS.)
Hi @peteanderson80,
This error "ValueError: MatterSim: Invalid action index: 1" occurs sometimes during training.
I suspect this is a bug in the C++ code of your simulator. Say the action is forward; then the corresponding env action is (1, 0, 0), whose index is 1. Combining this with your following C++ code:
void Simulator::makeAction(int index, double heading, double elevation) {
    totalTimer.Start();
    // move
    if (!initialized || index < 0 || index >= state->navigableLocations.size()) {
        std::stringstream msg;
        msg << "MatterSim: Invalid action index: " << index;
        throw std::domain_error( msg.str() );
    }
Then it basically means that the agent always chooses the navigable location whose index is 1. So if the size of navigableLocations is less than or equal to 1, the error is raised. This seems to be an edge case (the agent chooses to go forward, but there is no navigable location to go to).
Is my understanding correct? If so, how do you think the bug can be fixed? Thanks!
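A pure-Python restatement of the check above, plus the guard an agent can apply before issuing a forward action (this sketches the logic, not the simulator's actual code):

```python
def checked_action_index(index, navigable_locations):
    """Mirror of the simulator's bounds check: the index must address an
    entry in the state's navigableLocations list."""
    if index < 0 or index >= len(navigable_locations):
        raise ValueError("MatterSim: Invalid action index: %d" % index)
    return navigable_locations[index]

def safe_forward_index(navigable_locations):
    """Only emit index 1 ('go forward') when a forward location actually
    exists; otherwise stay in place (index 0, the current viewpoint)."""
    return 1 if len(navigable_locations) > 1 else 0
```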
It seems that the range of the camera elevation is [-PI/6, PI/6], not [-PI/2, PI/2] as mentioned in the paper. The value of state->elevation can only be -elevationIncrement, 0, or elevationIncrement.
https://github.com/peteanderson80/Matterport3DSimulator/blob/master/src/lib/MatterSim.cpp#L352-L361
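A sketch of the discretized behaviour described above, assuming a ±30 degree limit and one increment per step (this restates the observed behaviour, not the simulator's source):

```python
import math

ELEVATION_INCREMENT = math.pi / 6  # 30 degrees

def step_elevation(current, delta):
    """Elevation moves in fixed increments and is clamped, so it can
    only take the values -increment, 0 and +increment."""
    if delta > 0:
        current += ELEVATION_INCREMENT
    elif delta < 0:
        current -= ELEVATION_INCREMENT
    return max(-ELEVATION_INCREMENT, min(ELEVATION_INCREMENT, current))
```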
Primarily these are GLM deprecation warnings
We need to add some sort of config / build options to support OpenCV 2 and python 2.
Related to this, we should keep track of build instructions and dependencies in the main README.md.
At any given viewpoint, there seem to be 36 viewIndices (12 headings per elevation; 3 elevations). On the other hand, matterport_skybox_images provides 6 images per viewpoint (one top, one bottom and, mainly, 4 images which can be stitched to form a panorama). Can you please explain how the 12 headings at 0 elevation (viewIndices 12-23) correspond to the 4 skybox images for a given viewpoint, e.g. which of the 4 images corresponds to the agent looking straight ahead? Kindly help me understand the mapping.
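For reference, the viewIndex convention used by the R2R code lays the 36 views out row by row, as sketched below; which skybox face corresponds to "straight ahead" still has to be checked against the images themselves, since the skybox orientation is a property of the data, not of this indexing:

```python
import math

def view_index_to_angles(view_index):
    """viewIndex 0-11: elevation -30 deg; 12-23: 0 deg; 24-35: +30 deg.
    Within each row, heading advances in 30 degree steps."""
    heading = (view_index % 12) * math.pi / 6
    elevation = (view_index // 12 - 1) * math.pi / 6
    return heading, elevation
```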
Why is nextViewpointId sometimes not in the list of navigableLocations?
https://github.com/peteanderson80/Matterport3DSimulator/blob/master/tasks/R2R/env.py#L173
Help, can anyone tell me how to solve the problem below?
GLib-GIO-Message: 00:41:33.081: Using the 'memory' GSettings backend. Your settings will not be saved or shared with other applications.
OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in Mat, file /build/opencv-ys8xiq/opencv-2.4.9.1+dfsg/modules/core/src/matrix.cpp, line 323
Traceback (most recent call last):
File "src/driver/driver.py", line 25, in
sim.newRandomEpisode(['17DRP5sb8fy'])
RuntimeError: /build/opencv-ys8xiq/opencv-2.4.9.1+dfsg/modules/core/src/matrix.cpp:323: error: (-215) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows in function Mat
We should consider if there are parts of the code that we can unit test. Also, we can probably come up with a few acceptance tests, e.g. comparing generated images against a test bank output by the web version.
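For the acceptance tests, a tolerance-based image comparison is probably enough to absorb small driver/rasterizer differences; a minimal numpy sketch (the threshold value is a placeholder):

```python
import numpy as np

def images_match(rendered, reference, max_mean_abs_diff=2.0):
    """Compare a rendered frame against a reference image from the test
    bank, allowing a small average per-pixel tolerance."""
    if rendered.shape != reference.shape:
        return False
    diff = np.abs(rendered.astype(np.float64) - reference.astype(np.float64))
    return diff.mean() <= max_mean_abs_diff
```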
Yes this is research and not industry-strength software. On the other hand, if we are successful quite a few other groups will use this - and it's embarrassing if they find bugs. Particularly if they may invalidate our results / publications.
Can anyone explain how to generate panorama images directly from the skybox images, with viewIndex aligned?
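The core of this is mapping each equirectangular pixel to a direction, then to a cube face; a sketch of that geometry (the heading convention follows the simulator; which named face corresponds to which skybox index 0-5, and each face's in-image orientation, are assumptions that must be checked against the actual matterport_skybox_images):

```python
import math

def pano_direction(col, row, width, height):
    """Direction vector (z up) for an equirectangular pixel. Heading is
    measured clockwise from the y axis; columns span [0, 2*pi) and rows
    span [+pi/2, -pi/2] top to bottom."""
    heading = 2 * math.pi * col / width
    elevation = math.pi / 2 - math.pi * row / height
    return (math.sin(heading) * math.cos(elevation),
            math.cos(heading) * math.cos(elevation),
            math.sin(elevation))

def direction_to_face(d):
    """Pick the cube face a direction hits, with in-face (u, v) in
    [0, 1]. Face names here are placeholders for skybox indices."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if az >= ax and az >= ay:
        face, u, v = ('up' if z > 0 else 'down'), x / az, y / az
    elif ay >= ax:
        face, u, v = ('front' if y > 0 else 'back'), x / ay, z / ay
    else:
        face, u, v = ('right' if x > 0 else 'left'), y / ax, z / ax
    return face, (u + 1) / 2, (v + 1) / 2
```

Sampling every pano pixel through these two functions (with bilinear interpolation on the chosen face) yields an equirectangular panorama whose column 0 lines up with heading 0, and hence with the viewIndex grid.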
I reproduced code that uses the Matterport3DSimulator. I have downloaded the dataset and placed it in the appropriate folder as specified, but I still get the following error. Does anyone know how to fix it?
tests is a Catch v2.0.1 host application.
Run with -? for options
/home/cjh/code/regretful-agent/src/test/main.cpp:302
...............................................................................
/home/cjh/code/regretful-agent/src/test/main.cpp:324: FAILED:
REQUIRE_NOTHROW( sim.newEpisode(scanId, viewpointId, heading, elevation) )
due to unexpected exception with message:
MatterSim: Could not open skybox files at: ./data/v1/scans/17DRP5sb8fy/
matterport_skybox_images/85c23efeaecd4d43a7dcd5b90137179e_skybox*_sami.jpg
===============================================================================
test cases: 5 | 4 passed | 1 failed
assertions: 118337 | 118336 passed | 1 failed
I am interested in vision-and-language navigation and want to download the Matterport3D and R2R datasets. However, 1.7T of data is too large for me to download (around 10 days at a sustained 2MB/s).
I'm asking whether I can train/test a model under the R2R setting with a small part of the data, like MINOS, which uses only 6.3GB of the Matterport3D environment data.
OpenCV Error: Assertion failed (_src1.sameSize(_src2) && _src1.type() == _src2.type()) in norm, file /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/core/src/stat.cpp, line 3545
tests is a Catch v2.0.1 host application.
Run with -? for options
-------------------------------------------------------------------------------
RGB Image
-------------------------------------------------------------------------------
/root/mount/Matterport3DSimulator/src/test/main.cpp:342
...............................................................................
/root/mount/Matterport3DSimulator/src/test/main.cpp:342: FAILED:
{Unknown expression after the reported line}
due to unexpected exception with messages:
[
{
"elevation" : 0.0085573808395640535,
"heading" : 2.551961945320492,
"reference_image" : "17DRP5sb8fy_85c23efeaecd4d43a7dcd5b90137179e_2.
551961945320492_0.008557380839564054.png",
"scanId" : "17DRP5sb8fy",
"viewpointId" : "85c23efeaecd4d43a7dcd5b90137179e"
},
{
"elevation" : 0.00049218360228025842,
"heading" : 1.8699330579409539,
"reference_image" : "1LXtFkjw3qL_187589bb7d4644f2943079fb949c0be9_1.
8699330579409539_0.0004921836022802584.png",
"scanId" : "1LXtFkjw3qL",
"viewpointId" : "187589bb7d4644f2943079fb949c0be9"
},
{
"elevation" : -0.024443526143047459,
"heading" : 4.6263310475510773,
"reference_image" : "1pXnuDYAj8r_163d61ac7edb43fb958c5d9e69ae11ad_4.
626331047551077_-0.02444352614304746.png",
"scanId" : "1pXnuDYAj8r",
"viewpointId" : "163d61ac7edb43fb958c5d9e69ae11ad"
},
{
"elevation" : -0.00068389140394051673,
"heading" : 5.8441199099264436,
"reference_image" : "29hnd4uzFmX_1576d62e7bbb45e8a5ef9e7bb37b1839_5.
844119909926444_-0.0006838914039405167.png",
"scanId" : "29hnd4uzFmX",
"viewpointId" : "1576d62e7bbb45e8a5ef9e7bb37b1839"
}
]
/build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/core/src/stat.cpp:3545: error:
(-215) _src1.sameSize(_src2) && _src1.type() == _src2.type() in function norm
===============================================================================
test cases: 5 | 4 passed | 1 failed
assertions: 119189 | 119188 passed | 1 failed
Hi,
I followed the instructions and encountered this problem "ImportError: No module named MatterSim". Any ideas about this? Thanks!
Hi,
Thanks for sharing this amazing repo and R2R dataset.
Do you have any plan for supporting CPU parallelism? If not, I might be able to help with that.
I see your feature extraction code and have a question. The input image size is set to 480×640, which makes the output of the pool5 layer of ResNet-152 have shape B×2048×9×14. Why did you only slice the feature at a single spatial position [:, :, 0, 0] instead of averaging the features across all spatial positions?
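For concreteness, the averaging alternative the question suggests, as a numpy sketch (whether it helps downstream is an empirical question):

```python
import numpy as np

def spatial_average(features):
    """Average a B x C x H x W feature map over its spatial positions,
    rather than slicing a single position with features[:, :, 0, 0]."""
    return features.mean(axis=(2, 3))
```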
Why didn't the paper report teacher-forcing performance on test unseen (Table 1)?
I want to get the RGB image for each view angle. However, when I access the image, the program fails. The error is listed below:
im = state.rgb
TypeError: Unable to convert function return value to a Python type! The signature was
(self: MatterSim.SimState) -> object
I compiled the simulator with Python 3.6.9, numpy 1.13.3 and OpenCV 3.1.0.
It seems that on Ubuntu, libglew-dev is another necessary dependency to run cmake; it can be installed with sudo apt-get install libglew-dev.
Hi,
Great work.
It would be nice to have better documentation. Especially for the python API.
A simple document stepping through the capabilities of this framework (with the R2R PyTorch model, for example) would save a lot of time in figuring out how to use it.
- setVFOV()
- setElevationLimits(min, max). It is useful to restrict this to <90 degrees as there is nothing to see at the poles.
- Rather than a setScanId() function, scanId should become a parameter of newEpisode(), along with viewpointId. When starting a new episode we must be able to move to a different building. Refactoring this may need to wait till the threading / OSMESA questions are sorted.
Everything is fine until I run the unit test:
/root/mount/Matterport3DSimulator/src/test/main.cpp:350: FAILED:
REQUIRE_NOTHROW( sim.initialize() )
due to unexpected exception with message:
EGL error 0x300c at eglGetDisplay
===============================================================================
test cases: 5 | 4 passed | 1 failed
assertions: 119187 | 119186 passed | 1 failed
I don't know how to fix it. Please help!
https://github.com/peteanderson80/Matterport3DSimulator/blob/master/scripts/precompute_img_features.py#L76 should be sim.initialize()
Hi, when I run python tasks/R2R/eval.py, I can't see any difference whether or not I provide image features. Should there be a difference? Are the image features useful?
Is the path (list of viewpoints) in the dataset always the shortest path?
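One way to check this yourself is to compare each dataset path against a shortest path over the scan's connectivity graph. The sketch below uses BFS (fewest hops) on a toy adjacency dict; loading the real graphs from connectivity/<scanId>_connectivity.json is omitted, and since the dataset's notion of "shortest" is by metric distance, a Dijkstra weighted by edge lengths would be the faithful check.

```python
from collections import deque

def shortest_hops(graph, start, goal):
    """Fewest-hop path between viewpoints via BFS. graph maps each
    viewpoint id to the ids reachable from it (unobstructed edges)."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:  # walk the predecessor chain back
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph.get(node, ()):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None  # goal unreachable from start
```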
We need to run multiple simulators to support multiple agents (up to 50 - 100) learning in parallel, each with their own simulator (or at least, their own state). Each agent could be a different building, so we will be touching all the images. In total we have 18GB of matterport_skybox_images (compressed).
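With 50-100 agents touching all of the images, some form of per-process texture cache seems unavoidable; a bounded LRU sketch (the loader callback and size budget are placeholders, not anything the simulator currently exposes):

```python
from collections import OrderedDict

class SkyboxCache:
    """Keep at most max_items decoded skyboxes in memory, evicting the
    least recently used viewpoint first."""

    def __init__(self, loader, max_items=100):
        self.loader = loader        # e.g. reads the six face images
        self.max_items = max_items
        self._cache = OrderedDict()

    def get(self, scan_id, viewpoint_id):
        key = (scan_id, viewpoint_id)
        if key in self._cache:
            self._cache.move_to_end(key)  # mark as recently used
            return self._cache[key]
        value = self.loader(scan_id, viewpoint_id)
        self._cache[key] = value
        if len(self._cache) > self.max_items:
            self._cache.popitem(last=False)  # evict the oldest entry
        return value
```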