
HybridPose: 6D Object Pose Estimation under Hybrid Representations

This repository contains the authors' implementation of HybridPose: 6D Object Pose Estimation under Hybrid Representations. Our implementation is based on PVNet. We warmly welcome any discussion related to our implementation and our paper; please feel free to open an issue.

News (October 16, 2020): We have updated our experiments using the conventional data split on Linemod/Occlusion Linemod. Following baseline works, we use around 15% of Linemod examples for training. The rest of the Linemod examples, as well as the entire Occlusion Linemod dataset, are used for testing. Both this GitHub repository and the arXiv paper have been updated. HybridPose achieves an ADD(-S) score of 0.9125577238 on Linemod, and 0.4754330537 on Occlusion Linemod. We sincerely appreciate the readers who pointed out this issue to us, including but not limited to Shun Iwase and hiyyg.

Introduction

HybridPose consists of intermediate representation prediction networks and a pose regression module. The prediction networks take an image as input and output predicted keypoints, edge vectors, and symmetry correspondences. The pose regression module consists of an initialization sub-module and a refinement sub-module. The initialization sub-module solves a linear system with the predicted intermediate representations to obtain an initial pose. The refinement sub-module uses the GM robust norm to obtain the final pose prediction.

[Figure: Approach overview]
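
For intuition, here is a minimal sketch of how a GM-style robust norm bounds the influence of outlier residuals during refinement. The function name and sigma value are illustrative and not taken from the HybridPose code:

# Illustrative sketch: a GM (Geman-McClure-style) robust cost that down-weights
# outlier residuals. Names and the sigma value are hypothetical, not from the
# HybridPose implementation.
import numpy as np

def gm_cost(residuals, sigma=1.0):
    """Robust cost rho(r) = r^2 / (sigma^2 + r^2), bounded in [0, 1)."""
    r2 = residuals ** 2
    return r2 / (sigma ** 2 + r2)

# Small residuals contribute almost quadratically; large (outlier) residuals
# saturate near 1 instead of dominating the objective.
print(gm_cost(np.array([0.1, 0.5, 5.0, 50.0])))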

Download

git clone --recurse-submodules git@github.com:chensong1995/HybridPose.git

Environment set-up

Please install Anaconda first and execute the following commands:

conda create -y --name hybridpose python==3.7.4
conda install -y -q --name hybridpose -c pytorch -c anaconda -c conda-forge -c pypi --file requirements.txt
conda activate hybridpose
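
After the environment is created, a quick check such as the one below (illustrative only) confirms that the installed PyTorch build can see the GPU before compiling the CUDA extensions:

# Sanity check (illustrative): verify the PyTorch install and CUDA visibility.
import torch
print(torch.__version__)
print(torch.cuda.is_available())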

Compile the Ransac Voting Layer

The Ransac Voting Layer is used to generate keypoint coordinates from vector fields. Please execute the following commands (copied from PVNet):

cd lib/ransac_voting_gpu_layer
python setup.py build_ext --inplace
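
If the build succeeds, a compiled extension should appear inside lib/ransac_voting_gpu_layer. A quick way to check (illustrative only, assuming a Linux .so suffix):

# Sanity check (illustrative): look for the compiled extension produced by
# build_ext --inplace. The .so suffix assumes a Linux build.
import glob
built = glob.glob('lib/ransac_voting_gpu_layer/**/*.so', recursive=True)
print(built or 'extension not found -- check the build output for errors')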

Compile the pose regressor

The pose regressor is written in C++ and has a Python wrapper. Please execute the following commands:

cd lib/regressor
make
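
The Makefile should produce a shared library under lib/regressor. As a rough check (illustrative; the exact library filename depends on the Makefile), the binary can be loaded with ctypes:

# Sanity check (illustrative): load the regressor shared library with ctypes.
# The filename pattern is an assumption; adjust to whatever `make` produced.
# If loading fails with a missing dependency, make sure lib/regressor is on
# LD_LIBRARY_PATH (see the Training section).
import ctypes, glob
libs = glob.glob('lib/regressor/*.so')
print(libs)
if libs:
    ctypes.CDLL(libs[0])  # raises OSError if the library cannot be loaded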

Dataset set-up

We experiment with HybridPose on Linemod and Occlusion Linemod. Let us first download the original datasets using the following commands:

python data/download_linemod.py
python data/download_occlusion.py

Let us then download our augmented labels for these two datasets. Our augmented labels include:

  • Keypoints: both 2D and 3D coordinates. These labels are generated using FPS (farthest point sampling).
  • Symmetry: symmetry correspondences in 2D and the normal of the symmetry plane in 3D. These labels are generated using SymSeg.
  • Segmentation masks: on Linemod, we create segmentation masks by projecting the 3D models.

They are uploaded here:

The following commands unzip these labels to the correct directory:

unzip data/temp/linemod_labels.zip -d data/linemod
unzip data/temp/occlusion_labels.zip -d data/occlusion_linemod

We also use the synthetic data from PVNet. Please generate the blender renderings and fuse data using their code. After data generation, please place the blender data in data/blender_linemod and the fuse data in data/fuse_linemod. The directory structure should look like this:

data
  |-- blender_linemod
  |         |---------- ape
  |         |---------- benchviseblue
  |         |---------- cam
  |         |---------- ... (other objects)
  |-- fuse_linemod
  |         |---------- fuse
  |         |            |---------- 0_info.pkl
  |         |            |---------- 0_mask.png
  |         |            |---------- 0_rgb.jpg
  |         |            |---------- 1_info.pkl
  |         |            |---------- 1_mask.png
  |         |            |---------- 1_rgb.jpg
  |         |            |---------- ... (other examples)
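
Before labeling, it can help to spot-check one fuse example. The snippet below is illustrative only; the paths come from the directory layout above, and the exact contents of the info pickle are not documented here, so it simply prints whatever the file holds:

# Illustrative spot check of one fuse example; adjust paths if your data
# lives elsewhere. The structure of 0_info.pkl is not documented here.
import pickle
from PIL import Image

with open('data/fuse_linemod/fuse/0_info.pkl', 'rb') as f:
    info = pickle.load(f)
print(type(info))

rgb = Image.open('data/fuse_linemod/fuse/0_rgb.jpg')
mask = Image.open('data/fuse_linemod/fuse/0_mask.png')
print(rgb.size, mask.size)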

After that, please use data/label.py and data/label_fuse.py to create intermediate representation labels for the blender and fuse data, respectively.

mv data/label.py data/blender_linemod/label.py
cd data/blender_linemod
python label.py
cd ../..  # return to the repository root before labeling the fuse data
mv data/label_fuse.py data/fuse_linemod/label_fuse.py
cd data/fuse_linemod
python label_fuse.py

One of the arguments taken by the labeling scripts is --pvnet_linemod_path. This is the data directory used by the PVNet renderer. The structure of this directory looks like this:

pvnet_linemod
 |-- ape
 |    |--- amodal_mask
 |    |--- contours
 |    |--- JPEGImages
 |    |--- labels
 |    |--- labels_occlusion
 |    |--- mask
 |    |--- nosiy_contours
 |    |--- pose
 |    |--- ape.ply
 |    |--- ... (several other .txt and .pkl files)
 |-- benchviseblue
 |    |--- ...
 |-- cam
 |    |--- ...
 |-- ... (other objects)

Training

Please set the arguments in src/train_core.py and execute the following command (note that we need to set LD_LIBRARY_PATH for the pose regressor):

# on bash shell
LD_LIBRARY_PATH=lib/regressor:$LD_LIBRARY_PATH python src/train_core.py
# on fish shell
env LD_LIBRARY_PATH="lib/regressor:$LD_LIBRARY_PATH" python src/train_core.py

If you use a shell other than bash or fish, prepend "lib/regressor" to LD_LIBRARY_PATH and run python src/train_core.py.
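
A shell-agnostic alternative (illustrative only) is to set the variable from Python and launch the training script as a child process:

# Illustrative: set LD_LIBRARY_PATH for a child process and launch training.
import os
import subprocess

env = os.environ.copy()
prev = env.get('LD_LIBRARY_PATH')
env['LD_LIBRARY_PATH'] = 'lib/regressor' + (':' + prev if prev else '')
subprocess.run(['python', 'src/train_core.py'], env=env, check=True)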

Pre-trained weights

You can download our pre-trained weights below. We train one set of weights on Linemod, and test on both Linemod and Occlusion Linemod:

We have configured random seeds in src/train_core.py so that, in principle, running our training script reproduces the same weights. However, completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results need not be reproducible between CPU and GPU executions, even when using identical seeds. The randomness in the PVNet synthetic data generation also introduces some variation in the training outcome. Our training uses two graphics cards with a batch size of 10.
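
For reference, seeding in PyTorch projects typically looks like the snippet below; the exact calls and seed value in src/train_core.py may differ:

# Typical seeding recipe (illustrative; the exact calls in src/train_core.py
# may differ). Even with all of these set, results can vary across PyTorch
# versions, platforms, and CPU/GPU execution.
import random
import numpy as np
import torch

seed = 0  # hypothetical value
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)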

After you download the pre-trained weights, unzip them somewhere and configure --load_dir in src/train_core.py to the unzipped weights (e.g. saved_weights/linemod/ape/checkpoints/0.001/199).

Running src/train_core.py now will save both ground truth and predicted poses to a directory called output.

Evaluation

To evaluate the ADD(-S) accuracy of the predicted poses, please set the arguments in src/evaluate.py and run

python src/evaluate.py
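
For reference, here is a minimal sketch of the ADD metric behind ADD(-S); the actual implementation in src/evaluate.py may differ in details such as the correctness threshold:

# Illustrative sketch of ADD for a non-symmetric object: the mean distance
# between model points transformed by the ground-truth pose and by the
# predicted pose. A pose is commonly counted as correct when ADD is below
# 10% of the model diameter. ADD-S (for symmetric objects) instead uses the
# distance to the closest transformed point and is not shown here.
import numpy as np

def add_metric(points, R_gt, t_gt, R_pred, t_pred):
    gt = points @ R_gt.T + t_gt
    pred = points @ R_pred.T + t_pred
    return np.linalg.norm(gt - pred, axis=1).mean()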

Citation

If you find our work useful in your research, please cite it using:

@misc{song2020hybridpose,
    title={HybridPose: 6D Object Pose Estimation under Hybrid Representations},
    author={Chen Song and Jiaru Song and Qixing Huang},
    year={2020},
    eprint={2001.01869},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Contributors

chensong1995, grem-lin
