License: GNU General Public License v3.0


Self-Supervised Learning of Non-Rigid Residual Flow and Ego-Motion, 3DV 2020

This is the code for our 3DV 2020 paper "Self-Supervised Learning of Non-Rigid Residual Flow and Ego-Motion", a method capable of supervised, hybrid, and self-supervised learning of total scene flow from a pair of point clouds. The code is developed and maintained by Ivan Tishchenko.

[ArXiv] [Video]

Citation

If you use this code for your research, please cite our paper:

Self-Supervised Learning of Non-Rigid Residual Flow and Ego-Motion, Ivan Tishchenko, Sandro Lombardi, Martin R. Oswald, Marc Pollefeys, International Conference on 3D Vision (3DV) 2020

@article{tishchenko2020self,
  title={Self-Supervised Learning of Non-Rigid Residual Flow and Ego-Motion},
  author={Tishchenko, Ivan and Lombardi, Sandro and Oswald, Martin R and Pollefeys, Marc},
  journal={arXiv preprint arXiv:2009.10467},
  year={2020}
}

Prerequisites

Our model is trained and tested under:

  • Ubuntu 18.04
  • Conda
  • Python 3.6.4
  • NVIDIA GPUs, CUDA 10.2, CuDNN 7.6
  • PyTorch 1.5
  • Numba 0.48
  • You may need to install cffi.
  • Mayavi for visualization.

We provide our environment in environment.yml. After installing conda, run the following commands to reproduce our environment:

conda env create -f environment.yml
conda activate hplfn

Data preprocess

Our method works with 3 datasets:

FlyingThings3D

Download and unzip the "Disparity", "Disparity Occlusions", "Disparity change", "Optical flow", and "Flow Occlusions" subsets of the DispNet/FlowNet2.0 dataset from the FlyingThings3D website (we used the paths from this file; torrent downloads have since been added). They will be unzipped into the same directory, RAW_DATA_PATH. Then run the following script for 3D reconstruction:

python data_preprocess/process_flyingthings3d_subset.py --raw_data_path RAW_DATA_PATH --save_path SAVE_PATH/FlyingThings3D_subset_processed_35m --only_save_near_pts

Next, you need to match the camera poses from the full dataset to the DispNet/FlowNet2.0 subset. Download the "Camera Data" for the full dataset from the FlyingThings3D website, then execute the following:

tar -xvf flyingthings3d__camera_data.tar
# TAR_EXTRACT_PATH - directory where you extracted flyingthings3d__camera_data.tar
python data_preprocess/process_flyingthings3d_subset.py --poses TAR_EXTRACT_PATH --output SAVE_PATH/FlyingThings3D_subset_processed_35m 

WARNING: some frames in the full dataset are missing the corresponding camera poses. For the list of invalid frames, refer to POSE.txt. Our scripts discard these frames during pre-processing.
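The "3D reconstruction" performed by the preprocessing scripts amounts to back-projecting each disparity map into a point cloud via the stereo relation depth = focal_length * baseline / disparity. A minimal, self-contained sketch of that back-projection (the function name and camera parameters here are illustrative, not taken from the repository's code):

```python
def disparity_to_points(disparity, fx, fy, cx, cy, baseline):
    """Back-project a stereo disparity map (list of pixel rows) to 3D points.

    Uses depth z = fx * baseline / d, then the inverse pinhole model
    x = (u - cx) * z / fx, y = (v - cy) * z / fy for pixel (u, v).
    Pixels with non-positive disparity (occluded/invalid) are skipped.
    """
    points = []
    for v, row in enumerate(disparity):
        for u, d in enumerate(row):
            if d <= 0:
                continue  # invalid or occluded pixel
            z = fx * baseline / d
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Toy example: a 2x2 map with constant disparity of 10 px.
pts = disparity_to_points([[10.0, 10.0], [10.0, 10.0]],
                          fx=100.0, fy=100.0, cx=0.5, cy=0.5, baseline=1.0)
print(len(pts), pts[0][2])  # 4 points, each at depth 10.0
```

The --only_save_near_pts flag and the _35m suffix in the save path suggest that, in the actual scripts, points beyond a distance threshold are additionally discarded.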

KITTI Scene Flow 2015

Download and unzip KITTI Scene Flow Evaluation 2015 to directory RAW_DATA_PATH. Run the following script for 3D reconstruction:

python data_preprocess/process_kitti.py RAW_DATA_PATH SAVE_PATH/KITTI_processed_occ_final

RefRESH

Download ZIPs of all scenes from RefRESH Google doc. Unzip all of the scenes into the same directory, RAW_DATA_PATH. Then run the following script for 3D reconstruction:

python data_preprocess/process_refresh_rigidity.py --raw_data_path RAW_DATA_PATH --save_path SAVE_PATH/REFRESH_pc --only_save_near_pts

Get started

Setup:

cd models; python build_khash_cffi.py; cd ..

Train

Set data_root in the configuration file to the SAVE_PATH from the Data preprocess section. Then run

python main.py configs/train_xxx.yaml

Test

Set data_root in the configuration file to the SAVE_PATH from the Data preprocess section. Set resume to the path of your trained model, or to one of our trained models in trained_models. Then run

python main.py configs/test_xxx.yaml

The current implementation only supports batch_size=1.
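The exact schema is defined by the YAML files shipped in configs/; a minimal sketch of just the settings mentioned in this section (the paths and any keys beyond data_root and resume are hypothetical, check the shipped configs) might look like:

```yaml
# Hypothetical fragment of configs/test_xxx.yaml
data_root: SAVE_PATH/FlyingThings3D_subset_processed_35m
resume: trained_models/model_best.pth.tar   # path to a trained checkpoint
batch_size: 1                               # only 1 is currently supported
```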

Visualization

If you set TOTAL_NUM_SAMPLES in evaluation_bnn.py to a value larger than 0, sampled results will be saved in a subdirectory of your checkpoint directory, VISU_DIR.

Use the following script to visualize:

python visualization.py -d VISU_DIR --relax

Acknowledgments

The codebase is a fork of the excellent HPLFlowNet by Xiuye Gu.
