
3D-CODED : 3D Correspondences by Deep Deformation πŸ“ƒ

This repository contains the source code for the paper 3D-CODED : 3D Correspondences by Deep Deformation. The task is to put two meshes in point-wise correspondence. Below, given two human scans with holes, the reconstructions are in correspondence (indicated by color).

Citing this work

If you find this work useful in your research, please consider citing:

@inproceedings{groueix2018b,
  title = {3D-CODED : 3D Correspondences by Deep Deformation},
  author = {Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
  booktitle = {ECCV},
  year = {2018}
}

Project Page

The project page is available at http://imagine.enpc.fr/~groueixt/3D-CODED/

Install πŸ‘·

Piece of advice

You'll have to compile PyTorch v0.4 from source, and you'll probably face compatibility issues with gcc. It's very easy to set up update-alternatives for gcc; I recommend being able to switch between gcc-4.8, gcc-5 and gcc-6. You can look here for a quick tutorial on how to set things up on Ubuntu.
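For example, on Ubuntu, something like the following registers the compilers with update-alternatives so you can switch between them interactively (a sketch; adjust package names and priorities to your system):

sudo apt-get install gcc-4.8 g++-4.8 gcc-5 g++-5 gcc-6 g++-6
# Register each gcc together with the matching g++
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 10 --slave /usr/bin/g++ g++ /usr/bin/g++-4.8
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 20 --slave /usr/bin/g++ g++ /usr/bin/g++-5
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-6 30 --slave /usr/bin/g++ g++ /usr/bin/g++-6
# Choose the active version interactively
sudo update-alternatives --config gcc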

Clone the repo

## Download the repository
git clone git@github.com:ThibaultGROUEIX/3D-CODED.git
## Create python env with relevant packages
conda env create -f auxiliary/pytorch-sources.yml
source activate pytorch-sources

This implementation uses PyTorch. Please note that the Chamfer distance code doesn't work on all versions of pytorch because of a weird error with the batch norm layers. It has been tested on v0.1.12, v0.3.1 and a specific commit of v0.4.

Pytorch compatibility

| Python / Pytorch | v0.1.12 | v0.2 | v0.3.1 | 0.4.0a0+ea02833 | 0.4.x (latest) |
|------------------|---------|------|--------|-----------------|----------------|
| 2.7 | βœ”οΈ πŸ‘ πŸ˜ƒ | 🚫 πŸ‘Ž 😞 | 🚫 πŸ‘Ž 😞 | βœ”οΈ πŸ‘ πŸ˜ƒ | 🚫 πŸ‘Ž 😞 |
| 3.6 | βœ”οΈ πŸ‘ πŸ˜ƒ | ? | ? | 🚫 πŸ‘Ž 😞 | 🚫 πŸ‘Ž 😞 |

Recommended: Python 2.7, Pytorch 0.4.0a0+ea02833

Install v0.4 from the pytorch repo

source activate pytorch-sources
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch ; git reset --hard ea02833 # Go to this specific commit that works fine for the chamfer distance

# Then follow the pytorch install instructions as usual
export CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" # [anaconda root directory]

# Install basic dependencies
conda install numpy pyyaml mkl mkl-include setuptools cmake cffi typing
conda install -c mingfeima mkldnn

# Add LAPACK support for the GPU
conda install -c pytorch magma-cuda80 # or magma-cuda90 if CUDA 9 or magma-cuda91 if CUDA 9.1

python setup.py install # I needed to use gcc-4.8

# Also install torchvision from source in this case
git clone https://github.com/pytorch/vision.git
cd vision
python setup.py install

The whole codebase is developed in Python 2.7, so it might need a few adjustments for Python 3.6.

Build chamfer distance

# use gcc-5 or higher (doesn't build with gcc-4.8)
cd AtlasNet/nndistance/src
nvcc -c -o nnd_cuda.cu.o nnd_cuda.cu -x cu -Xcompiler -fPIC -arch=sm_52
cd ..
python build.py
python test.py

Last advice

  • Validate your install by running the demo below, and make sure your output matches the expected one.

Using the Trained models πŸš†

The trained models and some corresponding results are also available online:

On the demo meshes

Requires 3 GB of GPU memory and about 17 seconds to run (on a Titan X Pascal).

python inference/correspondences.py

This script takes two meshes from data as input and computes correspondences, which are saved in results. The reconstructions are saved in data.

It should look like this:

  • Initial guesses for example0 and example1:

  • Final reconstruction for example0 and example1:

On your own meshes

You need to make sure your meshes are preprocessed correctly:

  • The meshes are loaded with Trimesh, which should support a bunch of formats, but I only tested .ply files. Good converters include Assimp and Pymesh.

  • The trunk axis is the Y axis (visualize your mesh against the mesh in data to make sure they are normalized in the same way).

  • The scale should be about 1.7 for a standing human (meaning the unit for the point cloud is the metre). You can scale your meshes automatically with the flag --scale 1; see the sketch below.
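As a quick sanity check before running inference, here is a minimal sketch using Trimesh (the file names are placeholders; the 1.7 target height is the rough value mentioned above, and the manual rescale is only an approximation of what --scale 1 does):

import trimesh

mesh = trimesh.load('mymesh.ply')        # placeholder path to your mesh
height = mesh.extents[1]                 # extent along the trunk (Y) axis
print('height along Y:', height)         # should be roughly 1.7 for a standing human
if abs(height - 1.7) > 0.2:
    mesh.apply_scale(1.7 / height)       # rough manual rescale; --scale 1 matches the template volume instead
    mesh.export('mymesh_scaled.ply')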

Options

'--HR', type=int, default=1, help='Use high Resolution template for better precision in the nearest neighbor step ?'
'--nepoch', type=int, default=3000, help='number of epochs to train for during the regression step'
'--model', type=str, default = 'trained_models/sup_human_network_last.pth',  help='your path to the trained model'
'--inputA', type=str, default =  "data/example_0.ply",  help='your path to mesh 0'
'--inputB', type=str, default =  "data/example_1.ply",  help='your path to mesh 1'
'--num_points', type=int, default = 6890,  help='number of points fed to pointnet'
'--num_angles', type=int, default = 100,  help='number of angles in the search for the optimal reconstruction. Set to 1 if your mesh is already facing the canonical direction, as in data/example_1.ply'
'--env', type=str, default="CODED", help='visdom environment'
'--clean', type=int, default=0, help='if 1, remove points that dont belong to any edges'
'--scale', type=int, default=0, help='if 1, scale input mesh to have same volume as the template'
'--project_on_target', type=int, default=0, help='if 1, projects predicted correspondences point on target mesh'
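For example, to put the provided data/example_0.ply in correspondence with your own mesh (hypothetical path mymesh.ply), rescaling it automatically and skipping the angle search because it already faces the canonical direction:

python inference/correspondences.py --inputA data/example_0.ply --inputB mymesh.ply --scale 1 --num_angles 1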

Failure modes ⚠️

  • Sometimes the reconstruction is flipped, which breaks the correspondences. In the easiest case, where your meshes are registered in the same orientation, you can simply fix this angle in reconstruct.py line 86 to avoid the flipping problem. Also note from this line that the angle search only looks in [-90°,+90°]; see the sketch after this list.

  • Check for the presence of isolated outliers that break the Pointnet encoder. You can try to remove them with the --clean flag.
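For intuition, here is a minimal sketch of that angle search (illustrative only, not the actual reconstruct.py code; reconstruction_error stands in for one evaluation of how well the deformed template fits the rotated input):

import numpy as np

def best_rotation(points, reconstruction_error, num_angles=100):
    # Try num_angles rotations about the trunk (Y) axis in [-90, +90] degrees
    # and keep the angle whose reconstruction fits the input best.
    best_theta, best_err = 0.0, float('inf')
    for theta in np.linspace(-np.pi / 2, np.pi / 2, num_angles):
        c, s = np.cos(theta), np.sin(theta)
        rot_y = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
        err = reconstruction_error(points @ rot_y.T)
        if err < best_err:
            best_theta, best_err = theta, err
    return best_theta

Fixing the angle (e.g. always using 0) avoids the flips when your meshes share a single orientation, and rotations outside ±90° are simply never tested.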

Last comments

  • If you want to use inference/correspondences.py to process a whole dataset, like the FAUST test set, make sure you don't reload the network every time you compute correspondences between two meshes (which is what happens with the naive approach of calling inference/correspondences.py iteratively on all pairs). An example of this bad practice is in ./auxiliary/script.sh, for the FAUST inter challenge; a sketch of the load-once pattern follows. Good luck :-)
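A minimal sketch of the load-once pattern (load_network and compute_correspondences are hypothetical helpers you would factor out of inference/correspondences.py; the dataset path is a placeholder):

import glob
import itertools

network = load_network('trained_models/sup_human_network_last.pth')  # build the model and load weights once
meshes = sorted(glob.glob('FAUST/test/scans/*.ply'))                 # placeholder dataset layout
for path_a, path_b in itertools.combinations(meshes, 2):
    compute_correspondences(network, path_a, path_b)                 # reuse the same network for every pair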

Training the autoencoder TODO

Data

The dataset can't be shared because of copyright issues. Since the generation process for the dataset is quite heavy, it has its own README in data/README.md. Brace yourself :-)

Install Pymesh

Follow the instructions in the Pymesh repo, here.

Pymesh is my favorite geometry processing library for Python; it's developed by an Adobe researcher, Qingnan Zhou. It can be tricky to set up. Trimesh is a good alternative but requires a few code edits in this case.

Options

'--batchSize', type=int, default=32, help='input batch size'
'--workers', type=int, help='number of data loading workers', default=8
'--nepoch', type=int, default=75, help='number of epochs to train for'
'--model', type=str, default='', help='optional reload model path'
'--env', type=str, default="unsup-symcorrect-ratio", help='visdom environment'
'--laplace', type=int, default=0, help='regularize towards 0 curvature, or the template curvature'

Now you can start training

  • First launch a visdom server:
python -m visdom.server -p 8888
  • Launch the training. Check out all the options in ./training/train_sup.py.
export CUDA_VISIBLE_DEVICES=0 #whichever you want
source activate pytorch-sources # the conda env created during install
git pull
env=3D-CODED
python ./training/train_sup.py --env $env  |& tee ${env}.txt

(visdom screenshot)

Acknowledgement

License

MIT
