
[IROS 2021] BundleTrack: 6D Pose Tracking for Novel Objects without Instance or Category-Level 3D Models

License: Other

6dof 6d-pose-estimation 6dof-pose 6dof-tracking pose-estimation pose-tracking pose-graph-optimization tracking robotics manipulation

bundletrack's Introduction

This is the official implementation of our paper:

BundleTrack: 6D Pose Tracking for Novel Objects without Instance or Category-Level 3D Models

accepted to the International Conference on Intelligent Robots and Systems (IROS) 2021.

Abstract

Most prior work on 6D object pose tracking assumes that the target object's CAD model, at least at the category level, is available for offline training or during online template matching. This work proposes BundleTrack, a general framework for 6D pose tracking of novel objects which does not depend on 3D models at either the instance or the category level. It leverages the complementary attributes of recent advances in deep learning for segmentation and robust feature extraction, as well as memory-augmented pose graph optimization, for spatiotemporal consistency. This enables long-term, low-drift tracking under various challenging scenarios, including significant occlusions and object motions. Comprehensive experiments on two public benchmarks demonstrate that the proposed approach significantly outperforms state-of-the-art category-level 6D tracking and dynamic SLAM methods. When compared against state-of-the-art methods that rely on an object-instance CAD model, comparable performance is achieved despite the proposed method's reduced information requirements. An efficient implementation in CUDA provides real-time performance of 10 Hz for the entire framework.

This repo can be readily applied to 6D pose tracking for novel unknown objects. For CAD model-based 6D pose tracking, please check out my other repository, se(3)-TrackNet

Bibtex

@inproceedings{wen2021bundletrack,
  title={BundleTrack: 6D Pose Tracking for Novel Objects without Instance or Category-Level 3D Models},
  author={Wen, Bowen and Bekris, Kostas E},
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year={2021}
}

Supplementary Video

Click to watch

IROS 2021 Presentation

Click to watch

Results

Benchmark Output Results

For convenience of benchmarking and making plots, the pose output results can be downloaded below

Setup

For the environment setup, it's strongly recommended to use our provided docker environment (setting up from scratch is complicated and not supported in this repo). You don't have to know how docker works; only some basic commands are needed, and they are provided in the steps below.

  • Install docker (https://docs.docker.com/get-docker/).

  • Run

    docker pull wenbowen123/bundletrack:latest
    docker pull wenbowen123/lf-net-release-env:latest
    
  • Edit docker/run_container.sh and update the paths of BUNDLETRACK_DIR, NOCS_DIR and YCBINEOAT_DIR

  • Run bash docker/run_container.sh

  • cd [PATH_TO_BUNDLETRACK]

  • rm -rf build && mkdir build && cd build && cmake .. && make

Data

Depending on what you want to run, download the data that is necessary.

Run predictions on NOCS

  • Open a separate terminal and run

    bash lf-net-release/docker/run_container.sh
    cd [PATH_TO_BUNDLETRACK]
    cd lf-net-release && python run_server.py
    
  • Go back to the terminal where you launched the bundletrack docker above and run the command below. The output will be saved to the debug_dir specified in the config file; by default it's /tmp/BundleTrack/. For more detailed logs, change LOG to 2 or higher in config_nocs.yml.

    python scripts/run_nocs.py --nocs_dir [PATH_TO_NOCS] --scene_id 1 --port 5555 --model_name can_arizona_tea_norm
    
  • Finally, the results will be saved in /tmp/BundleTrack/
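
    Each saved pose is a 4x4 homogeneous transformation matrix stored as one plain-text file per frame (see the pose/*.txt discussion in the issues below). A minimal loading sketch with numpy, assuming the default /tmp/BundleTrack/ output location and a poses subfolder; adjust the folder name to whatever your run actually produced:

    # Sketch: load BundleTrack per-frame pose outputs (4x4 matrices in .txt files).
    # The "poses" subfolder name is an assumption; check what your debug_dir contains.
    import glob
    import os
    import numpy as np

    def load_poses(result_dir='/tmp/BundleTrack'):
        poses = {}
        for path in sorted(glob.glob(os.path.join(result_dir, 'poses', '*.txt'))):
            frame_id = os.path.splitext(os.path.basename(path))[0]
            poses[frame_id] = np.loadtxt(path).reshape(4, 4)
        return poses

    print(len(load_poses()), 'poses loaded')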

  • For evaluating on the entire NOCS Dataset, run the command below. (NOTE that this adds noise to perturb the initial ground-truth pose for evaluation, as explained in the paper; if you want to see how BundleTrack actually performs, run the section above.)

    python scripts/eval_nocs.py --nocs_dir [PATH TO NOCS] --results_dir [PATH TO THE RUNNING OUTPUTS]
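
    If you want a quick sanity check of predicted poses against your own ground truth outside of eval_nocs.py, the usual rotation/translation error between two 4x4 poses can be computed as below. This is a generic metric sketch, not necessarily the metric the evaluation script reports:

    # Sketch: generic rotation (degrees) and translation errors between 4x4 poses.
    # Not necessarily the metric used by eval_nocs.py.
    import numpy as np

    def pose_error(T_pred, T_gt):
        R_delta = T_pred[:3, :3] @ T_gt[:3, :3].T
        cos_angle = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)  # clip for numerical safety
        rot_err_deg = np.degrees(np.arccos(cos_angle))
        trans_err = np.linalg.norm(T_pred[:3, 3] - T_gt[:3, 3])
        return rot_err_deg, trans_err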
    

Run predictions on YCBInEOAT

  • Change model_name and model_dir in config_ycbineoat.yml, where model_dir points to the object's .obj file (e.g. for the folder bleach0, model_name is 021_bleach_cleanser and model_dir is [Your path to YCB Objects]/021_bleach_cleanser/textured_simple.obj)
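
    If you run many sequences, you can script this edit instead of doing it by hand. A sketch assuming config_ycbineoat.yml is plain YAML with top-level model_name and model_dir keys as described above (note that round-tripping with PyYAML drops any comments in the file):

    # Sketch: set model_name/model_dir in config_ycbineoat.yml programmatically.
    # Assumes top-level model_name/model_dir keys; the paths below are placeholders.
    import yaml

    cfg_path = 'config_ycbineoat.yml'
    with open(cfg_path) as f:
        cfg = yaml.safe_load(f)
    cfg['model_name'] = '021_bleach_cleanser'
    cfg['model_dir'] = '/path/to/YCB_Objects/021_bleach_cleanser/textured_simple.obj'
    with open(cfg_path, 'w') as f:
        yaml.safe_dump(cfg, f)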

  • Open a separate terminal and run

    bash lf-net-release/docker/run_container.sh
    cd [PATH_TO_BUNDLETRACK]
    cd lf-net-release && python run_server.py
    
  • Go back to the terminal where you launched the bundletrack docker above, and run the command below. The output will be saved to the debug_dir specified in the config file; by default it's /tmp/BundleTrack/

    python scripts/run_ycbineoat.py --data_dir [PATH_TO_YCBInEOAT] --port 5555 --model_name [The YCB object's name, e.g. 021_bleach_cleanser]
    
  • Finally, the results will be saved in /tmp/BundleTrack/. For more detailed logs, change LOG to 2 or higher in config_ycbineoat.yml.

  • For evaluating on the entire YCBInEOAT Dataset, run

    python scripts/eval_nocs.py --ycbineoat_dir [PATH TO YCBINEOAT] --ycb_model_dir [YCB MODELS FOLDER] --results_dir [PATH TO THE RUN OUTPUTS]
    

Run predictions on your own RGBD data

  • Download YCBInEOAT, if you haven't already done so above.

  • Open a separate terminal and run

    bash lf-net-release/docker/run_container.sh
    cd [PATH_TO_BUNDLETRACK]
    cd lf-net-release && python run_server.py
    
  • Prepare segmentation masks. For the YCBInEOAT Dataset, we computed masks from the robotic arm's forward kinematics. If your scene is not too complicated (similar to the NOCS Dataset), you can run the video segmentation network to get masks as below:

    • First, prepare an initial mask (a grayscale image where 0 means background and anything else means foreground).

    • python transductive-vos.pytorch/run_video.py --img_dir [THE PATH TO COLOR FILES] --init_mask_file [THE INITIAL MASK FILE YOU PREPARED ABOVE] --mask_save_dir [WHERE TO SAVE]

    • Prepare your folder with the same structure as any sequence folder (e.g. "mustard_easy_00_02") in the YCBInEOAT Dataset, and put it under the same directory in YCBInEOAT, i.e. next to "mustard_easy_00_02". Then edit config_ycbineoat.yml to make sure the paths at the top are right; a sanity-check sketch follows the structure below.

      Structure:

      mustard_easy_00_02
      ├── rgb
      ├── masks
      ├── depth
      └── cam_K.txt
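
      Before launching a run on your own sequence, it can save time to verify the folder: matching frame counts across rgb/depth/masks, a 3x3 cam_K.txt, and masks whose background value is 0 as described above. A hypothetical checker sketch (the file-layout assumptions follow the structure shown):

    # Sketch: sanity-check a custom sequence folder laid out like YCBInEOAT.
    # Folder and file names follow the structure above; adjust if yours differ.
    import os
    import cv2
    import numpy as np

    def check_sequence(seq_dir):
        counts = {d: len(os.listdir(os.path.join(seq_dir, d)))
                  for d in ('rgb', 'depth', 'masks')}
        assert len(set(counts.values())) == 1, f'frame counts differ: {counts}'
        K = np.loadtxt(os.path.join(seq_dir, 'cam_K.txt'))
        assert K.shape == (3, 3), f'cam_K.txt should hold a 3x3 matrix, got {K.shape}'
        first = sorted(os.listdir(os.path.join(seq_dir, 'masks')))[0]
        mask = cv2.imread(os.path.join(seq_dir, 'masks', first), cv2.IMREAD_GRAYSCALE)
        assert mask is not None and (mask > 0).any(), 'first mask unreadable or all background'
        print('folder looks consistent:', counts)

    check_sequence('mustard_easy_00_02')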
      
  • Go back to the terminal where you launched the bundletrack docker and run the command below. The output will be saved to the debug_dir specified in the config file; by default it's /tmp/BundleTrack/

    python scripts/run_ycbineoat.py --data_dir [PATH TO YOUR FOLDER ABOVE] --port 5555
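
    Since the repo does not ship the bounding-box visualization used in the gifs (one of the issues below asks about it), a quick way to eyeball a saved pose is to project the object axes through cam_K onto an RGB frame. A hedged sketch, assuming the saved 4x4 pose maps object-frame points into the camera frame and that the hypothetical file names below line up with your data:

    # Sketch: overlay object axes on an RGB frame from a saved 4x4 pose and cam_K.
    # The file names and the object-to-camera convention are assumptions.
    import cv2
    import numpy as np

    def draw_axes(img, T, K, length=0.1):
        pts = np.float32([[0, 0, 0], [length, 0, 0], [0, length, 0], [0, 0, length]])
        cam = (T[:3, :3] @ pts.T + T[:3, 3:4]).T           # object -> camera frame
        uv = (K @ cam.T).T
        uv = (uv[:, :2] / uv[:, 2:3]).astype(int)          # perspective divide
        origin = tuple(map(int, uv[0]))
        for end, color in zip(uv[1:], [(0, 0, 255), (0, 255, 0), (255, 0, 0)]):
            cv2.line(img, origin, tuple(map(int, end)), color, 2)
        return img

    img = cv2.imread('mustard_easy_00_02/rgb/0000.png')
    K = np.loadtxt('mustard_easy_00_02/cam_K.txt')
    T = np.loadtxt('/tmp/BundleTrack/poses/0000.txt').reshape(4, 4)
    cv2.imwrite('pose_viz.png', draw_axes(img, T, K))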
    


bundletrack's Issues

Running transductive-vos.pytorch in the docker container returns torch not found, and a mask question

Running python transductive-vos.pytorch/run_video.py --img_dir [THE PATH TO COLOR FILES] --init_mask_file [THE INITIAL MASK FILE YOU PREPARED ABOVE] --mask_save_dir [WHERE TO SAVE] to generate masks returns a "torch not found" error in the container. If I try to install torch, it breaks the numpy version, and because of that I am unable to proceed.

I also wanted to ask what these masks represent. I thought that the BundleTrack code only requires a mask at time t=0 (initially), but when I looked at the dataset I found plenty of masks, one per image; could you help me understand this? Thanks

Can't run predictions on NOCS

Running the command
python scripts/run_nocs.py --nocs_dir [PATH_TO_NOCS] --scene_id 1 --port 5555 --model_name can_arizona_tea_norm
results in the following error:
[screenshot]
I couldn't find these files in the NOCS real_test directory. Could you share them? Any idea how to go about this?
Thanks

Unable to install bundletrack

Hello,

Thanks for providing the implementation of BundleTrack; your work is amazing. While trying out this project, I'm repeatedly facing an issue with opencv. I've already compiled opencv 4.5 from source but the issue persists.
[screenshot]

Can you please advise which version of opencv works with BundleTrack? Also, it would be great if you could include a requirements.txt file in the project.

Thanks,
Have a great day!,
Ali

Segmentation fault

Hi Bowen, I ran into some problems when setting up the environment with your provided instructions. I checked that the file is not empty. Can you please help me figure out this problem?
[screenshot]

Running prediction on own dataset

Hey Bowen,
When I try to run predictions on my own data, I get a "Segmentation fault (core dumped)" error.
[screenshot]
Do you happen to know if anyone else has faced something similar? I tried making the directory as similar as possible to bleach0 (which worked really well, so I doubt anything is wrong with the installation).

No visualization output on YCB and custom data

Hi wenbowen:
Thanks for your great job on bundletrack!
I have a problem when running prediction on YCB data: there is no visualization output in my /tmp/BundleTrack folder, but I do get the pose output txt files.
[screenshots]

And when I try to predict on my own rgbd data, it shows:

[screenshot]

I don't understand why it needs an obj file or annotation file.

error: token ""__CUDACC_VER__ is no longer supported.

Hi, @wenbowen123. I was able to run the project before. Recently, I made some modifications to the project and recompiled it, but this issue occurred:

` #warning "crt/math_functions.hpp is an internal header file and must not be used directly. Please use cuda_runtime_api.h or cuda_runtime.h instead."
In file included from /usr/local/cuda-10.0/include/cuda_runtime.h:120:0,
                 from <command-line>:0:
/usr/local/cuda-10.0/include/crt/common_functions.h:74:24: error: token ""__CUDACC_VER__ is no longer supported. Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."" is not valid in preprocessor expressions
#define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported. Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
/usr/local/cuda-10.0/include/crt/common_functions.h:74:24: note: in definition of macro '__CUDACC_VER__'
(the same error and note repeat several more times)
In file included from /home/ckq/project/new_BudnleTrack/BundleTrack-master/src/cuda/Solver/../Solver/ICPUtil.h:8:0,
                 from /home/ckq/project/new_BudnleTrack/BundleTrack-master/src/cuda/CUDACache.cu:6:
/usr/local/cuda-10.0/include/device_functions.h:54:2: warning: #warning "device_functions.h is an internal header file and must not be used directly. This file will be removed in a future CUDA release. Please use cuda_runtime_api.h or cuda_runtime.h instead." [-Wcpp]
In file included from /home/ckq/project/new_BudnleTrack/BundleTrack-master/src/cuda/Solver/ICPUtil.h:8:0,
                 from /home/ckq/project/new_BudnleTrack/BundleTrack-master/src/cuda/Solver/SolverBundlingEquationsLie.h:22,
                 from /home/ckq/project/new_BudnleTrack/BundleTrack-master/src/cuda/Solver/SolverBundling.cu:7:
/usr/local/cuda-10.0/include/device_functions.h:54:2: warning: #warning "device_functions.h is an internal header file and must not be used directly. This file will be removed in a future CUDA release. Please use cuda_runtime_api.h or cuda_runtime.h instead." [-Wcpp]

/usr/local/cuda-10.0/include/cuda.h:2984:36: note: declared here
__CUDA_DEPRECATED CUresult CUDAAPI cuDeviceComputeCapability(int *major, int *minor, CUdevice dev);
CMakeFiles/Makefile2:141: recipe for target 'CMakeFiles/BundleTrack.dir/all' failed
make[1]: *** [CMakeFiles/BundleTrack.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2`

Running Predictions on own RGBD Data

Hi, thanks for releasing code for this paper.

When I try to use BundleTrack to run predictions on my own RGBD Data, there are some issues:

  1. I prepared my directory similar to the ones in YCBInEOAT:
mustard_easy_00_02
├── rgb
├── masks
├── depth
└── cam_K.txt

However, when I run the code, I get the following error:

[screenshot]

I was wondering what the role of the poses in the annotated_poses folder is, and how I can generate such poses for my data?

  2. When I populate the annotated_poses folder with arbitrary poses, I am able to generate keypoints; however, I get an error along the lines of

frame # and # findNN too few match, # status marked as FAIL
In saveNewframeResult

This error is printed for every frame in the sequence. It may be connected with the annotated_poses problem, but I'm not sure how to generate poses that make the keypoint correspondence algorithm work.

Thanks!

Segmentation on YCBInEOAT

Hi, I noticed that the ground-truth segmentation in the YCBInEOAT dataset does not seem good, and the segmentation tracked by VOS is even worse. For example, if we look at "bleach0", the object is partially segmented with only a few points. How does BundleTrack handle this scenario?

Could not find *.mtl files in NOCS/obj_models/real_test

Hi Wenbo,

Thank you for sharing the code on GitHub. I followed your steps to install/download everything, and when I ran the code (run_nocs.py), it reported: "Could not find file NOCS/object_models/real_test/*.mtl". When I checked the NOCS dataset website, I didn't find any *.mtl files in "object_model".

Could you please let me know where I can get *.mtl files?

Many thanks

open3d not found

I can't run the eval_*.py scripts, as there seems to be no open3d in the environment.
After pip install open3d, I got another error when importing open3d:

OSError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27' not found (required by /opt/conda/lib/python3.6/site-packages/open3d/cpu/pybind.cpython-36m-x86_64-linux-gnu.so)

How do I fix that? And by the way, should I run 'eval_nocs' when evaluating the YCBInEOAT dataset, just as mentioned in the README?

About /pose/*.txt

Hi Bowen, it's me again ^ ^.
I have successfully run the BundleTrack pipeline on my own data. I input the initial pose (world coordinate system) and got the output pose sequence (/pose/000-999.txt); the contents of the txt files are 4x4 transformation matrices.
Here is one thing I want to know: which coordinate system are those transformation matrices in? My ground-truth labels are in the world coordinate system, so if these transformations are in another system, could you tell me how to transfer them into the world coordinate system? That would be very helpful to me!
Thanks again for your great work and patience!

OpenCV Error no kernel image when running predictions on YCBInEOAT

[screenshot]

Everything is currently running inside docker. pip list shows these packages, which are aligned with the ones in lf-net-release-env:

  1. opencv-python 3.4.0.12
  2. tensorflow-gpu 1.4.0
  3. Cuda 10.1.243

The top bash is running python run_server.py, which runs properly.

The bottom bash is running python scripts/run_ycbineoat.py --data_dir /home/jrojas/YCBInEOAT/tomato_soup_can_yalehand0 --port 5555 --model_name 005_tomato_soup_can --model_dir /home/jrojas/YCBInEOAT/YCB_Video_Dataset/models/005_tomato_soup_can/textured_simple.obj, which launches
/home/jrojas/BundleTrack/scripts/../build/bundle_track_ycbineoat /tmp/config_YCBInEOAT.yml

I downloaded https://archive.cs.rutgers.edu/archive/a/2020/pracsys/Bowen/iros2020/YCBInEOAT/ and https://drive.google.com/file/d/1gmcDD-5bkJfcMKLZb3zGgH_HUFbulQWu/view and put them in /home/usrname/YCBInEOAT. However, I noticed that when I run predictions, it looks for a missing masks folder, so I downloaded https://archive.cs.rutgers.edu/archive/a/2021/pracsys/2021_iros_bundletrack/masks.tar.gz and put it inside /home/usrname/YCBInEOAT/tomato_soup_can_yalehand0/

Is there any way to run without docker?

We want to combine this method with other algorithms and run it on a server that cannot use docker; is there any tutorial for configuring it from scratch?

run error

Hi Bowen:
When I run scripts/run_ycbineoat.py, it reports an error; it seems to be unable to find a file:
[pcl::OBJReader::readHeader] Could not find file '/media/bowen/e25c9489-2f57-42dd-b076-021c59369fec/DATASET/YCB_Video_Dataset/CADmodels/021_bleach_cleanser/textured.obj'.
[pcl::OBJReader::read] Problem reading header!
(the two lines above repeat)
How can I fix this? May I ask whether this file has not been provided? Thanks!

Segmentation fault (core dumped)

Hi bowen,
After I configured the environment per your provided instructions, I started to run predictions on YCBInEOAT, but an error occurs as follows:
[screenshot]

real-time pose tracking

Hello, I am studying problems related to the recognition and grasping of objects by robotic arms, so I want to know whether this project can achieve real-time pose tracking. If not, is there any way to modify it to achieve real-time pose tracking?

No files present in /tmp/Bundletrack

With reference to #16, I went to the location where the results should be, /tmp/BundleTrack, after running the script run_nocs.py as specified in the instructions, and found that the directory itself is empty. Is this okay?
[screenshot]
I thought that we would be getting the 6D poses of the objects whose masks were provided in the results. Did I get it wrong? Could someone help with this, thanks

Originally posted by @kausiksivakumar in #16 (comment)

Structure of files for YCBINEOAT data directory

Hi Bowen,
Could you let me know how the files should be stored in the YCBINEOAT directory for prediction, like how you showed it for NOCS?
[screenshot]
Also, could you provide an example of what model_name and model_dir should be changed to in the same.
Thanks much!

some questions about mask file

Hi, thanks for the great job on bundletrack.
I have some questions about the YCB data.
I noticed that you mentioned :

[Download YCBInEOAT] to make it like

YCBInEOAT
├── bleach0
| ├──annotated_poses
| ├──depth
| ├──depth_filled
| ├──gt_mask
| ├──masks
| ├──masks_vis
| ├──rgb
| ├──cam_K.txt
| ├──......
└── ....

But there is no masks folder in the tar file. How can I get it?

Setup: rm -rf build && mkdir build && cd build && cmake .. && make

[ 4%] Building NVCC (Device) object CMakeFiles/cuda_compile_1.dir/src/cuda/cuda_compile_1_generated_cuda_ransac.cu.o
nvcc fatal : A single input file is required for a non-link phase when an output file is specified
CMake Error at cuda_compile_1_generated_cuda_ransac.cu.o.Release.cmake:220 (message):
Error generating
/data/Jieie/BundleTrack/build/CMakeFiles/cuda_compile_1.dir/src/cuda/./cuda_compile_1_generated_cuda_ransac.cu.o

CMakeFiles/BundleTrack.dir/build.make:125: recipe for target 'CMakeFiles/cuda_compile_1.dir/src/cuda/cuda_compile_1_generated_cuda_ransac.cu.o' failed
make[2]: *** [CMakeFiles/cuda_compile_1.dir/src/cuda/cuda_compile_1_generated_cuda_ransac.cu.o] Error 1
CMakeFiles/Makefile2:99: recipe for target 'CMakeFiles/BundleTrack.dir/all' failed
make[1]: *** [CMakeFiles/BundleTrack.dir/all] Error 2
Makefile:103: recipe for target 'all' failed
make: *** [all] Error 2

Cannot run server

I built BundleTrack according to the instructions, but when I try to run the server, I am met with this error:

(base) root@nixie:/homes/corcodel/BundleTrack/BundleTrack# python lf-net-release/run_server.py 
2021-12-20 19:47:03.615220: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2021-12-20 19:47:03.615250: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
  File "lf-net-release/run_server.py", line 6, in <module>
    import tensorflow as tf
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/__init__.py", line 41, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python.feature_column import feature_column_lib as feature_column
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_lib.py", line 22, in <module>
    from tensorflow.python.feature_column.feature_column import *
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column.py", line 147, in <module>
    from tensorflow.python.layers import base
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 20, in <module>
    from tensorflow.python.keras.legacy_tf_layers import base
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/__init__.py", line 25, in <module>
    from tensorflow.python.keras import models
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/models/__init__.py", line 21, in <module>
    from tensorflow.python.keras._impl.keras.models import load_model
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/__init__.py", line 21, in <module>
    from tensorflow.python.keras._impl.keras import activations
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/activations.py", line 24, in <module>
    from tensorflow.python.keras._impl.keras.engine import Layer
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/engine/__init__.py", line 21, in <module>
    from tensorflow.python.keras._impl.keras.engine.topology import get_source_inputs
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/engine/topology.py", line 51, in <module>
    InputSpec = tf_base_layers.InputSpec
AttributeError: module 'tensorflow.python.layers.base' has no attribute 'InputSpec'

If I do pip list | grep tensorflow I get:

tensorflow                      2.6.2    
tensorflow-estimator            2.6.0    
tensorflow-gpu                  1.4.0    
tensorflow-object-detection-api 0.1.1    
tensorflow-tensorboard          0.4.0

Any suggestion on what I could do? Please ignore the first error where it complains about not finding the NVIDIA libraries; TF can run without a problem. The issue could be the version of Tensorflow, according to this: https://stackoverflow.com/questions/51186116/attributeerror-module-tensorflow-python-layers-layers-has-no-attribute-layer

Thank you,
Radu

EDIT:

OK, so I uninstalled TF and reinstalled an earlier version with pip install tensorflow==1.15.5. The server now runs.
I tried to run the first example on NOCS and it runs, although it gets a segfault at the end; but I think that's alright.

Now for evaluating on the entire NOCS Dataset:

I tried installing an earlier version of Open3D with pip install open3d==0.11.0.0, as well as the newest version of Open3D; both give the same error when run:

ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27' not found (required by /opt/conda/lib/python3.6/site-packages/open3d/open3d_pybind.cpython-36m-x86_64-linux-gnu.so)

I ran strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBC to see what's installed in the container and, sure enough, 2.27 is not there; the highest is 2.4. So it looks like the most recent version of Open3D that works is pip install open3d==0.9.0.0. Then there were other problems, like ModuleNotFoundError: No module named 'transformations'; that's simple to fix with pip install transformations. There is also a directory-structure mismatch in the instructions which causes:

File "/homes/corcodel/BundleTrack/BundleTrack/scripts/benchmark.py", line 192, in main
    assert len(model_list)>0
AssertionError

But that was easy to fix once I echoed the path it searches. What I can't figure out now is which directory I need to point to when running python scripts/eval_nocs.py --nocs_dir [PATH TO NOCS] --results_dir [PATH TO THE RUNNING OUTPUTS]. Which one exactly is --results_dir?

Thank you.
Radu,

Running BundleTrack online

Hey Bowen, in case I want to run BundleTrack online, how should I go about it? Would you have any thoughts?

How can I get these files?

It seems that some files mentioned in DataLoader.cpp are not provided:
std::smatch what;
std::regex_search(data_dir, what, std::regex("NOCS/.*"));
_model_dir = data_dir;
boost::replace_all(_model_dir, std::string(what[0]), "NOCS/obj_models/real_test/"+_model_name+".obj");
_gt_dir = data_dir;
boost::replace_all(_gt_dir, std::string(what[0]), "NOCS/gts/real_test_text/scene_" + std::to_string(_scene_id) + "/model_" + _model_name + "/");
_scale_dir = data_dir;
boost::replace_all(_scale_dir, std::string(what[0]), "NOCS/NOCS-REAL275-additional/model_scales/"+_model_name+".txt");

Without these files, I fail to run the code.

source code

Hi bowen,

Awesome job!
When will you upload your code?

Best,
Min

Run predictions on own RGBD data

python transductive-vos.pytorch/run_video.py --img_dir /data/Jieie/BundleTrackData/YCBInEOAT/0311_9/depth --init_mask_file /data/Jieie/BundleTrackData/YCBInEOAT/0311_9/one_mask.png --mask_save_dir /data/Jieie/BundleTrackData

RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

A possible solution to "findNN too few match"

I am running on an older GPU, a GTX Titan X (Maxwell), for the YCB objects.

I was having a similar issue of "findNN too few match" as previous issues: #9 #15

I tried to compile OpenCV from source but it didn't help.
Then I tried to add -gencode arch=compute_52,code=sm_52 to CMakeLists.txt:

set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -Xcompiler;-fPIC;-gencode arch=compute_30,code=sm_30;-gencode arch=compute_61,code=sm_61;-gencode arch=compute_75,code=sm_75;-O3;-std=c++11;-use_fast_math;--default-stream per-thread)

It works now after rebuilding BundleTrack; I tried it for ycbineoat/bleach0. The images in color_viz show the mask and also the yellow points for every frame.

I imagine it may also run on a newer GPU (Ampere) if you add something like -gencode arch=compute_86,code=sm_86.

Lastly, Bowen, do you have visualization code for plotting bounding boxes like in your gifs that could be released?
Thanks!

A weird error when compiling

Hi, bowen!

Thanks for your great job!

I'm trying to reproduce your results, but when compiling I got a weird error:

[ 81%] Linking CXX shared library libBundleTrack.so
/usr/bin/ld: CMakeFiles/cuda_compile_1.dir/src/cuda/cuda_compile_1_generated_CUDAImageUtil.cu.o: relocation R_X86_64_PC32 against symbol `_ZN13CUDAImageUtil39adaptiveBilateralFilterIntensity_KernelEPfPKfS2_fffjj' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: bad value
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/BundleTrack.dir/build.make:459: libBundleTrack.so] Error 1
make[1]: *** [CMakeFiles/Makefile2:134: CMakeFiles/BundleTrack.dir/all] Error 2
make: *** [Makefile:84: all] Error 2

This was happening when running

mkdir build && cd build && cmake .. && make

Any ideas why this happens?

SyntaxError: invalid syntax

Python version: 3.8.8
sudo python scripts/run_nocs.py --nocs_dir /data/Jieie/NOCS --scene_id 1 --port 5555 --model_name can_arizona_tea_norm

File "scripts/run_nocs.py", line 58
cur_out_dir = f'/tmp/BundleTrack/nocs/{model_name}_{scene_id}'
^
SyntaxError: invalid syntax

Segmentation fault

Hi,

I was trying to run the pretrained model on NOCS with the command "python run_nocs.py --scene_id 1 --port 5555 --model_name can_arizona_tea_norm", but a segmentation fault popped up. Details are below:

`(base) root@Friday:/home/chao/GitHub_repo/BundleTrack/scripts# sh run_nocs.sh
K
591.013 0 322.525
0 590.168 244.111
0 0 1

Segmentation fault (core dumped)
`

The error message is a bit ambiguous. Any idea how to solve this?

Thanks in advance!

illegal instruction (core dumped)

Thanks for the great work. I was trying to run predictions on YCBInEOAT by following your guide, and I ran the command:

python scripts/run_ycbineoat.py --data_dir ycb_dir/bleach0 --port 5555 --model_name 021_bleach_cleanser

it gave this error:

/home/airlab/enes/bundle/BundleTrack/scripts/../build/bundle_track_ycbineoat /tmp/config_ycb_dir.yml
illegal instruction (core dumped)

At first I suspected the tensorflow version, because old CPUs do not support the AVX instructions used by newer tensorflow versions; but while the lfnet container works without an error, this happened in the main container, and afaik there is no tensorflow in there.

Here is my lscpu | grep Flags output to compare with yours:

flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt aes lahf_lm pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d

What could cause this, and how can I solve it?
Thanks in advance.

Which checkpoint to use for transductive-vos?

Hi, I downloaded the transductive-vos models according to the README instructions. There are three checkpoints in the folder transductive-vos.pytorch: davis_train.pth.tar, davis_trainval.pth.tar and youtube_train.pth.tar. Which one should I use to reproduce your mask results?

environment Settings

Your work is excellent!
May I ask which Ubuntu and CUDA versions you used?
Thank you!

run the command "python lf-net-release/run_server.py" erro"ModuleNotFoundError: No module named 'mso_resnet_detector'".why?

Traceback (most recent call last):
  File "lf-net-release/run_server.py", line 113, in <module>
    ops = build_networks(config, photo_ph, is_training)
  File "lf-net-release/run_server.py", line 30, in build_networks
    DET = importlib.import_module(config.detector)
  File "/opt/conda/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'mso_resnet_detector'

segmentation question

Hi, I have a question.
I want to test in real time by connecting a camera after training.
But I think VOS segmentation is meant to be used only offline. Is there another way, e.g. a segmenter that communicates with bundletrack the way lfnet does?

Core dump problem

Hi, after setting up the environment, I am hitting a segmentation fault.
The problem happens when textured.obj is loaded into a PolygonMesh object by pcl::io::loadOBJFile (DataLoader.cpp line 318). The terminal output is "Segmentation fault (core dumped)". I am wondering if textured.obj is too big for the CPU to load.
I notice that you are using textured_simple.obj rather than textured.obj; could you provide the .obj files you have tested (e.g. 021_bleach_cleanser)?

frame %s and %s findNN too few match, %s status marked as FAIL

I have tried to run the code on both datasets, but neither produces the correct result. The error message comes from SiftManager::findCorresbyNN; it seems that it can't find any matching points between any two frames, and the size of _matches[{frameA,frameB}] is always 0. The poses finally produced are all the same as the initial pose. Could you please help me figure out the problem? @wenbowen123

Segmentation fault

(base) root@hung-Alienware-Aurora-R9:/home/hung/Desktop/BundleTrack# python scripts/run_nocs.py --nocs_dir /home/hung/Desktop/BundleTrack/NOCS --scene_id 1 --port 5555 --model_name can_arizona_tea_norm
/home/hung/Desktop/BundleTrack/scripts/../build/bundle_track_nocs /tmp/config_model_can_arizona_tea_norm_scene1.yml
K
591.013 0 322.525
0 590.168 244.111
0 0 1

Reading directory failed: /home/hung/Desktop/BundleTrack/NOCS/gts/real_test_text/scene_1/model_can_arizona_tea_norm/
Segmentation fault (core dumped)

Could you please upload NOCS/gts/real_test_text/scene_1/model_can_arizona_tea_norm/?

Can't find NOCS visualizations and path to the running outputs

In the instructions for running evaluations on NOCS, what is the directory for [PATH TO THE RUNNING OUTPUTS]?
Also, while running predictions on NOCS, I followed #15's instructions and updated Bundler.cpp accordingly. Even after doing that, I could not find the NOCS visualizations stored in the results directory. I could only find a directory called poses that contains all the homogeneous transforms in txt files; the directory color_viz does not contain any files. Could you help me with it? Thanks

I did three things:

  1. set LOG = 3 (in config_nocs.yml)
  2. added _fm->vizKeyPoints(frame); in Bundler.cpp
  3. I did not quite understand what you were going for here: #15 (comment). Could you clarify how to save the image?

This is the output I received at the terminal while running predictions on NOCS:

New frame 0280
zmq start waiting for reply
zmq got reply
finding corres between 0280(id=1) and 0000(id=0)
frame 0280 and 0000 findNN too few match, 0280 status marked as FAIL
color file: ./NOCS/real_test/scene_1//../../real_test/scene_1/0281_color.png

The above message appears for all the frames.
Could you help me with this? Thanks

The link returns an error 404

The YCBInEOAT dataset link provided returns a 404 error, as do the links to download the pretrained weights and masks. Could you update them?

Thanks

Compilation question

Hello, should the project be compiled inside the docker environment or in a normal environment? I ask because when I compile inside the docker environment, the CMakeLists.txt file is missing.

How to run BundleTrack faster

Hi Bowen,

We were trying to run BundleTrack online. So far we have only been able to reach around 2.5 fps. I saw in your post that you were able to make it run at 10 fps online. Could you let me know which settings in config_ycb.yml trade off accuracy for faster inference? Thanks

Create a segmentation mask with vos

I am trying to generate masks with run_video.py from transductive-vos.pytorch.
However, after a few iterations, a CUDA out-of-memory error occurs.
How do I solve it?

My GPU is an nvidia GTX Titan and it has about 24 GB of memory.
Error message:
CUDA out of memory. Tried to allocate 6.95 GiB (GPU 0; 23.65 GiB total capacity; 9.53 GiB already allocated; 4.29 GiB free; 17.11 GiB reserved in total by PyTorch)
