regenerator / dpf-nets

Flow-based generative model for 3D point clouds.

deep-learning generative-modeling normalizing-flows point-clouds autoencoder reconstruction

dpf-nets's Introduction

Discrete Point Flow Networks

Roman Klokov, Edmond Boyer, Jakob Verbeek

This repository contains the code for the paper "Discrete Point Flow Networks for Efficient Point Cloud Generation", accepted to the 16th European Conference on Computer Vision (ECCV 2020). It includes:

  • preprocessing scripts for ShapeNetCore55 and ShapeNetAll13 datasets,
  • implementation and training scripts for generative, autoencoding, and single-view reconstruction models presented in the paper.

Environment

The code requires Python 3.6 and the following packages (it was run with the listed versions; a quick import check is sketched after the list):

  • yaml-0.1.7
  • numpy-1.17.2
  • scipy-1.3.1
  • pandas-0.25.3
  • h5py-2.7.1
  • opencv3-3.1.0
  • pytorch-1.4.0
  • torchvision-0.5.0
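
A minimal sketch to confirm that the installed versions match the list above, under the assumption that opencv3 is importable as cv2 and yaml is provided by PyYAML:

# Sketch: print the versions of the main dependencies.
# Assumption: opencv3 imports as cv2, yaml is provided by PyYAML.
import numpy, scipy, pandas, h5py, cv2, yaml, torch, torchvision

for module in (numpy, scipy, pandas, h5py, cv2, yaml, torch, torchvision):
    print(module.__name__, getattr(module, "__version__", "unknown"))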

Data preparation

Our point cloud sampler relies on the data being stored in HDF5 format, so the data must first be converted to it.

ShapeNetCore55

The data can be prepared for use with:

python preprocess_ShapeNetCore.py data_dir save_dir

Here data_dir should be the path to the directory with the unpacked ShapeNetCore55.v2 dataset. The preprocessing script also relies on the official split file all.csv and on the data being organized in that directory as follows:

- data_dir
  - shapes
    - synsetId0
      - modelId0
        - models
          - model_normalized.obj
      - modelId1
      - ...
    - synsetId1
    - ...
  - all.csv

save_dir is the path to the directory where the repacked data is saved. The official split file and the dataset contain mistakes, such as missing shape directories and .obj files; the corresponding shapes are skipped during preprocessing and are not included in the repacked version of the dataset.
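
Before running the script, it can help to count how many entries of all.csv are missing their model_normalized.obj, since those are the shapes that will be skipped. A minimal sketch of such a check (a hypothetical helper, not part of the repository, assuming all.csv has synsetId and modelId columns as in the official split file):

# Hypothetical sanity check: count entries from all.csv whose
# model_normalized.obj is missing in the unpacked dataset.
import os
import sys
import pandas as pd

data_dir = sys.argv[1]                       # same data_dir as above
# Assumption: all.csv has 'synsetId' and 'modelId' columns.
split = pd.read_csv(os.path.join(data_dir, 'all.csv'), dtype=str)

missing = 0
for synset_id, model_id in zip(split['synsetId'], split['modelId']):
    obj_path = os.path.join(data_dir, 'shapes', synset_id, model_id,
                            'models', 'model_normalized.obj')
    if not os.path.isfile(obj_path):
        missing += 1

print('{} of {} shapes are missing and will be skipped'.format(missing, len(split)))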

For reasons discussed in the paper, we also randomly resplit the data into train/val/test sets with a separate script:

python resample_ShapeNetCore.py data_path

where data_path is the path to the .h5 file produced by the previous script. The resampling script creates a separate *_resampled.h5 file in the same directory.
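
The exact dataset names inside the repacked files depend on the preprocessing scripts, so the sketch below (a hypothetical helper) only walks a repacked file and prints the datasets it contains:

# Sketch: list every dataset in a repacked .h5 file together with its
# shape and dtype, without assuming particular key names.
import sys
import h5py

def show(name, obj):
    if isinstance(obj, h5py.Dataset):
        print(name, obj.shape, obj.dtype)

with h5py.File(sys.argv[1], 'r') as f:   # e.g. the *_resampled.h5 file
    f.visititems(show)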

ShapeNetAll13

The images for this data can be found here. Instead of using voxel grids for these images, we use the original meshes from ShapeNetCore55.v1. The data for SVR is prepared with:

python preprocess_ShapeNetAll.py shapenetcore.v1_data_dir shapenetall13_data_dir save_dir

where shapenetcore.v1_data_dir is structured as:

- shapenetcore.v1_data_dir
  - synsetId0
    - modelId0
      - model.obj
    - modelId1
    ...
  - synsetId1
  - ...

shapenetall13_data_dir is structured as:

- shapenetall13_data_dir
  - ShapeNetRendering
    - synsetId0
      - modelId0
        - rendering
          - 00.png
          - 01.png
          - ...
      - modelId1
      - ...
    - synsetId1
    - ...

and save_dir is the path to the directory where the repacked data is saved. The script first copies the meshes from ShapeNetCore.v1 that correspond to the images in ShapeNetAll13 and then repacks both the images and the meshes into two separate HDF5 files.
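
As for ShapeNetCore55, a quick check for missing meshes can save a long repacking run. A minimal sketch (a hypothetical helper, not part of the repository) that lists models with renderings in shapenetall13_data_dir but no model.obj in shapenetcore.v1_data_dir:

# Hypothetical helper: report models that have renderings but no mesh.
import os
import sys

v1_dir, all13_dir = sys.argv[1], sys.argv[2]
render_root = os.path.join(all13_dir, 'ShapeNetRendering')

for synset_id in sorted(os.listdir(render_root)):
    synset_dir = os.path.join(render_root, synset_id)
    if not os.path.isdir(synset_dir):
        continue
    for model_id in sorted(os.listdir(synset_dir)):
        mesh = os.path.join(v1_dir, synset_id, model_id, 'model.obj')
        if not os.path.isfile(mesh):
            print('missing mesh for {}/{}'.format(synset_id, model_id))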

Model usage

Training

Each task and data setup has a separate config file in configs, storing all the optional parameters of the model. To use the model, modify these configs by setting the path2data field to the directory that stores the repacked .h5 data files and the path2save field to the directory that will store checkpoints; a sketch of this setup follows the layout below. path2save uses the following structure:

- path2save
  - models
    - DPFNets
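
A minimal sketch of this setup, assuming the configs are flat YAML files with top-level path2data and path2save keys (e.g. configs/generation/airplane.yml); adjust the keys if the actual config layout is nested differently:

# Hypothetical setup helper: create the checkpoint directory layout and
# point a config at the repacked data. Assumes top-level 'path2data'
# and 'path2save' keys in the YAML config.
import os
import yaml

path2data = '/path/to/repacked_h5_files'
path2save = '/path/to/checkpoints'
config_file = 'configs/generation/airplane.yml'

os.makedirs(os.path.join(path2save, 'models', 'DPFNets'), exist_ok=True)

with open(config_file, 'r') as f:
    config = yaml.safe_load(f)
config['path2data'] = path2data
config['path2save'] = path2save
with open(config_file, 'w') as f:
    yaml.safe_dump(config, f)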

For class-conditional generative models use:

./scripts/train_airplane_gen.sh
./scripts/train_car_gen.sh
./scripts/train_chair_gen.sh

For the generative models trained on the whole dataset for autoencoding use:

./scripts/train_all_original_gen.sh
./scripts/train_all_scaled_gen.sh

For single-view reconstruction:

./scripts/train_all_svr.sh

Evaluation

TBD

Citation

@InProceedings{klokov20eccv,
  author    = {R. Klokov and E. Boyer and J. Verbeek},
  title     = {Discrete Point Flow Networks for Efficient Point Cloud Generation},
  booktitle = {Proceedings of the 16th European Conference on Computer Vision (ECCV)},
  year      = {2020}
}


dpf-nets's Issues

Pretrained model

Providing a pretrained model would be very helpful for comparison and for a quick start in further research.

Evaluation code

Hello, thanks for your excellent work. I was wondering whether you could release the evaluation code, especially the part related to Table 4. Did you follow Pix3D to calculate EMD and CD? Although you didn't compare your work with 3D-LMNet, I've tried to evaluate their official pretrained model and found that the results lie on a 10^-2 scale instead of the 10^-3 scale shown in Table 4 of your paper. I guess you may have removed the square root when calculating CD, right?

preprocessing data

Thanks for sharing your nice work!
I have a question: could you please indicate how long the preprocessing took? In my case, preprocessing ShapeNetAll took more than two days with n_processes=12 and batch_size=1200, and it finally shut down due to some errors. How should n_processes and batch_size be set?

Error on processing ShapeNetCore

I tried to apply preprocess_ShapeNetCore.py to ShapeNetCore v2, but encountered the following error:

02958343/d92a10c4db3974e14e88eef43f41dc4/models/ does not exist!
02958343/6885092b0d1fbb6a8db35146c0a9b3fb/models/ does not exist!
02958343/92d0fa7147696cf5ba531e418cb6cd7d/models/ does not exist!
02958343/c7bf88ef123ed4221694f51f0d69b70d/models/ does not exist!
02958343/bd8d7b8ad35df2d52470de2774d6099/models/ does not exist!
02958343/8843d862a7545d0d96db382b382d7132/models/ does not exist!
02958343/8b68f086176443b8128fe65339f3ddb2/models/ does not exist!
02958343/9e75756bd1bb8ebaafe1d4530f4c6e24/models/ does not exist!
02958343/dc0601024a535f5c51894d116f1c652/models/ does not exist!
02958343/79bf4c4574bc9c4552470de2774d6099/models/ does not exist!
02958343/6d619fcbceaa0327104b57ee967e352c/models/ does not exist!
02958343/e3c1213d68656ffb065f1478754df44/models/ does not exist!
02958343/5aa136c67d0a2a2852470de2774d6099/models/ does not exist!
02958343/4253a9aac998848f664839bbd828e448/models/ does not exist!
02958343/a81cb450ce415d45bdb32c3dfd2f01b5/models/ does not exist!
02958343/917de64538fb9f3afe1d4530f4c6e24/models/ does not exist!
02958343/c099c763ee6e485052470de2774d6099/models/ does not exist!
02958343/f4e25d681ad736eb52470de2774d6099/models/ does not exist!
02958343/64998426e6d48ae358dbdf2b5c6acfca/models/ does not exist!
02958343/29a4e6ae1f9cecab52470de2774d6099/models/ does not exist!
02958343/c9991032ff77fe8552470de2774d6099/models/ does not exist!
02958343/de1800e9ce6da9af52470de2774d6099/models/ does not exist!
02958343/a18fd5cb2a9d01c4158fe40320a23c2/models/ does not exist!
02958343/a8dde04ca72c5bdd6ca2b6e5474aad11/models/ does not exist!
02958343/f59a474f2ec175eb7cdba8f50ac8d46c/models/ does not exist!
02958343/7ee6884bb0bbf9e352470de2774d6099/models/ does not exist!
02958343/79e32e66bbf04191afe1d4530f4c6e24/models/ does not exist!
02958343/642b3dcc3e34ae3bafe1d4530f4c6e24/models/ does not exist!
02958343/5b04b836924fe955dab8f5f5224d1d8a/models/ does not exist!
04379243/619a795a84e2566ac22e965981351403/models/ does not exist!
04379243/3dadf67ebe6c29a3d291861d5bc3e7c8/models/ does not exist!
04379243/19e2321df1141bf3b76e29c9c43bc7aa/models/ does not exist!
04379243/4ebb653961d95dd075c67b3b1e763fcf/models/ does not exist!
04379243/2ab4b8a3fe51d2ba1b17743c18fb63dc/models/ does not exist!
04379243/6a977967aedbb60048b9747b6b395fc5/models/ does not exist!
04379243/2b9b2ece245dffbaaa11adad6b2a69c/models/ does not exist!
04379243/5a935225ccc57f09e6a4ada36a392e0/models/ does not exist!
04379243/5a60822959b28856920de219c00d1c3b/models/ does not exist!
04379243/7cb09d16e07e1d757e1dc03b595bd36c/models/ does not exist!
04379243/1a46011ef7d2230785b479b317175b55/models/ does not exist!
04379243/7206545c3f0a3070e8058cf23f6382c1/models/ does not exist!
04379243/a5b0aa0232ecc6dbd2f1945599cd5176/models/ does not exist!
Packing train meshes: [1/2233]
Packing train meshes: [2/2233]
Packing train meshes: [3/2233]
...

and

Packing train meshes: [513/2233]
Packing train meshes: [514/2233]
Packing train meshes: [515/2233]
Packing train meshes: [516/2233]
Packing train meshes: [517/2233]
Packing train meshes: [518/2233]
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "preprocess_ShapeNetCore.py", line 26, in process_obj_file
    sample_obj = ObjMesh(sample)
  File "/workspace/Experiment/dpf-nets/lib/meshes/objmesh.py", line 12, in __init__
    with open(self.obj_filename, 'r', encoding='utf-8') as objf:
FileNotFoundError: [Errno 2] No such file or directory: '../../Datasets/ShapeNetCore.v2/shapes/02958343/7aa9619e89baaec6d9b8dfa78596b717/models/model_normalized.obj'
"""

It seems that this .obj does not exist in the original dataset, see here.
Is there some action I have to take before processing the ShapeNetCore v2 dataset?

ETA on code update

I recently read your work and think it is very inspiring. I was wondering when the code will be released.

Thank you

Reproducing results in Table 2.

Hi,

thanks for your great work. I am having trouble reproducing the results in Table 2 using the code in this repository. My results seem to be consistently a bit worse, in particular the JSD compared to the paper (on airplane: paper -> 0.94 +- 0.11; me -> 1.16 +- 0.04).

Comparing the default configs (e.g. configs/generation/airplane.yml) to the experimental setup gives me the impression that a much larger network was used for the experiments in the paper.

Default: prior coupling layers -> 7 | decoder coupling layers -> 21
Experimental setup in the paper: prior coupling layers -> 14 | decoder coupling layers -> 63

Is it correct that the default config is not the same as the one used for the results in the paper, and, if so, is there any chance I could receive the config that was actually used, to make my life a little easier :)?

Best,
Janis

About the loss.

Thank you very much for releasing the code of DPF-Net. I read it carefully and have a question about the loss. Here are the details:

  1. In lib/networks/losses.py, the class PointFlowNLL is used to compute the negative log-likelihood loss for point cloud flow.

    return 0.5 * torch.add(
            torch.sum(sum(logvars) + ((samples[0] - mus[0]) ** 2 / torch.exp(logvars[0]))) / samples[0].shape[0],
            np.log(2.0 * np.pi) * samples[0].shape[1] * samples[0].shape[2]
    )

    Why is the division by samples[0].shape[0] needed, and why is np.log(2.0 * np.pi) multiplied by samples[0].shape[1] * samples[0].shape[2]? The same question applies to the class GaussianFlowNLL.

  2. In the class GaussianEntropy, you wrote (1.0 + np.log(2.0 * np.pi)). Why is 1.0 added here?
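
For context, the quoted expression has the form of the standard negative log-likelihood of a diagonal Gaussian, summed over all coordinates of the point cloud and averaged over the batch, which is where the division by samples[0].shape[0] and the factor samples[0].shape[1] * samples[0].shape[2] on np.log(2.0 * np.pi) come from; the differential entropy of a diagonal Gaussian is 0.5 * (1 + log(2π) + logvar) per dimension. A minimal sketch of these standard formulas in the same shape convention (not the repository's implementation):

# Reference sketch of the standard formulas (not the repo's code):
# per-coordinate Gaussian NLL summed over all points, averaged over the batch.
import numpy as np
import torch

LOG_2PI = float(np.log(2.0 * np.pi))

def gaussian_nll(x, mu, logvar):
    # x, mu, logvar: tensors of shape (batch, n_points, 3)
    per_coord = 0.5 * (LOG_2PI + logvar + (x - mu) ** 2 / torch.exp(logvar))
    return per_coord.sum() / x.shape[0]     # log(2*pi) appears once per coordinate

def gaussian_entropy(logvar):
    # differential entropy of a diagonal Gaussian: 0.5 * (1 + log(2*pi) + logvar) per dim
    return 0.5 * (1.0 + LOG_2PI + logvar).sum() / logvar.shape[0]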
