
kpconv_torch

Intro figure

Created by Hugues THOMAS

Introduction

This repository contains the implementation of Kernel Point Convolution (KPConv) in PyTorch.

Another implementation of KPConv is available in PyTorch-Points-3D


KPConv is a point convolution operator presented in Hugues Thomas's ICCV 2019 paper (arXiv). If you use this work, consider citing:

@article{thomas2019KPConv,
    Author = {Thomas, Hugues and Qi, Charles R. and Deschaud, Jean-Emmanuel and Marcotegui, Beatriz and Goulette, Fran{\c{c}}ois and Guibas, Leonidas J.},
    Title = {KPConv: Flexible and Deformable Convolution for Point Clouds},
    Journal = {Proceedings of the IEEE International Conference on Computer Vision},
    Year = {2019}
}


Installation

This implementation has been tested on Ubuntu 18.04 and Windows 10. Details are provided in INSTALL.md.

Experiments

Scripts for three experiments are provided (ModelNet40, S3DIS and SemanticKITTI). The instructions to run these experiments are in the doc folder.

โš ๏ธ Disclaimer: in this repo version, we only maintain the S3DIS material regarding Scene Segmentation. Instructions to train KP-FCNN on a scene segmentation task (S3DIS) can be found in the doc.

As a bonus, a visualization script is provided to display the kernel deformations.

Acknowledgment

As a tribute to Hugues Thomas's initial work: this repo is a fork of the KPConv-PyTorch repo.

The code uses the nanoflann library.

License

The code is released under MIT License (see LICENSE file for details).

kpconv-torch's People

Contributors

delhomer, huguesthomas, aubousquet, yarroudh, luchayward

kpconv-torch's Issues

[Inference] output file destination

When doing inference, the outputs are written next to the dataset folder (given by -d), whereas they could be written next to the input file (given by -f), or anywhere else.

Additional point: decoupling the output folder from the dataset folder makes the dataset parameter (-d) useless in the inference command (related issue: #9).

Dataset management

The most recent contributions focus on S3DIS. However, the library intends to support other datasets as well; we should take care of them and generalize the changes made on S3DIS.

[CLI] Check the required parameters

The CLI defines kpconv preprocess, kpconv train, kpconv test, kpconv visualize and kpconv plotconv. These commands share parameters, some of which are required only for a subset of the commands. One should double-check the parameter list and decide, for each command, whether each parameter should be required.
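One way to express per-command requirements, sketched below with stdlib argparse (the command and option names follow the CLI described above, but the helper is hypothetical, not the actual kpconv CLI code): shared options are registered by a single function, and each subcommand states whether it actually requires them.

```python
import argparse

def add_common_options(subparser, dataset_required):
    """Register options shared by several kpconv subcommands; each
    subcommand decides whether -d is actually required (hypothetical
    sketch, not the real kpconv CLI code)."""
    subparser.add_argument("-d", "--datapath", required=dataset_required,
                           help="dataset folder")

parser = argparse.ArgumentParser(prog="kpconv")
sub = parser.add_subparsers(dest="command", required=True)

# train genuinely needs the dataset folder...
add_common_options(sub.add_parser("train"), dataset_required=True)

# ...while test could work from the input file alone (cf. the issue above)
test = sub.add_parser("test")
add_common_options(test, dataset_required=False)
test.add_argument("-f", "--filepath", help="input point cloud")

args = parser.parse_args(["test", "-f", "cloud.ply"])
```

With this layout, `kpconv test -f cloud.ply` parses without -d, while `kpconv train` without -d fails at argument-parsing time instead of deep inside the pipeline.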

Initialize a CI

This basic CI should contain at least:

  • syntactic checks: flake8, isort, ...
  • unit tests (cf #7 )
  • ...

Refactor the dataset configuration

Define the dataset configurations in dedicated files instead of hardcoding them in modules.

The current hardcoded values could be stored as sample config files.
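A minimal sketch of what that could look like, assuming JSON config files (the key names and defaults below are illustrative, not the actual kpconv_torch values):

```python
import json
from pathlib import Path

# Illustrative defaults; the real hardcoded values live in the
# kpconv_torch dataset modules.
DEFAULT_S3DIS = {"num_classes": 13, "first_subsampling_dl": 0.03}

def load_dataset_config(path):
    """Merge a JSON config file over the shipped defaults, so datasets
    are described by files rather than by hardcoded module constants."""
    p = Path(path)
    if p.exists():
        return {**DEFAULT_S3DIS, **json.loads(p.read_text())}
    return dict(DEFAULT_S3DIS)
```

Shipping the current hardcoded values as sample config files then amounts to serializing the defaults dict once per dataset.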

Enhance the code quality

Generalize flake8/black and any other useful linter (pylint? pyflakes? ...) across the source code in order to raise the overall quality level.

Clean the import mess

Some imports are done in the wrong module, so we end up doing "from wrongmodule import myfunction" instead of "from thegoodthirdpartylib import myfunction". If "myfunction" is not used in "wrongmodule", the import should simply be done in the right place.

Example:

  • in kpconv_torch/utils/mayavi_visu.py : from sklearn.neighbors import KDTree
  • in kpconv_torch/datasets/S3DIS.py : from kpconv_torch.utils.mayavi_visu import KDTree

Problem: when cleaning up the flake8 alerts (cf. #16), the first import was dropped (as KDTree is not used in mayavi_visu.py), which broke the second one.

This issue denotes the need for an additional cleaning process...

Make "kpconv test" run on a CPU

Currently, running the inference on a CPU is impossible; one gets:

Done in 1.2s
Traceback (most recent call last):
  File "/home/rdelhome/.virtualenvs/kpconv/bin/kpconv", line 33, in <module>
    sys.exit(load_entry_point('kpconv-torch', 'console_scripts', 'kpconv')())
  File "/home/rdelhome/Documents/projects/kpconv-torch/kpconv_torch/cli/__main__.py", line 132, in main
    args.func(args)
  File "/home/rdelhome/Documents/projects/kpconv-torch/kpconv_torch/test.py", line 231, in main
    tester.cloud_segmentation_test(net, test_loader, config, output_path)
  File "/home/rdelhome/Documents/projects/kpconv-torch/kpconv_torch/utils/tester.py", line 249, in cloud_segmentation_test
    torch.cuda.synchronize()
  File "/home/rdelhome/.virtualenvs/kpconv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 564, in synchronize
    _lazy_init()
  File "/home/rdelhome/.virtualenvs/kpconv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 229, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

We should handle torch.device properly and check the calls to the torch API.
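The usual device-agnostic pattern (a sketch assuming only that torch is installed; this is not the actual kpconv-torch code) is to pick the device once and guard the CUDA-only calls:

```python
import torch

# Select the GPU only when one is actually available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = torch.nn.Linear(4, 2).to(device)
points = torch.randn(8, 4, device=device)
logits = net(points)

# torch.cuda.synchronize() crashes on CPU-only machines (see the
# traceback above), so only call it when we are actually on a GPU.
if device.type == "cuda":
    torch.cuda.synchronize()
```

Applied to tester.py, this means wrapping the bare torch.cuda.synchronize() call at line 249 in the same device-type guard.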

Refactor the inference result writing process

The current version of the program hits a hard RAM limit while the kpconv test command runs.

The limit comes from the construction of the resulting probabilities, which may produce very large Python structures (number of points times number of labels).

Result: we cannot run the inference process on point clouds larger than ~80M points (with 32 GB of RAM).

Solution: refactor the writing process, by batching the output production and write result files in append mode.

Additional question: the .ply format is not always relevant (do we really want to read .ply when we only care about the output probabilities for each label?). Could we move towards .csv? (There is an impact in terms of disk space.)
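A stdlib-only sketch of the batched-append idea (the function name and file layout are hypothetical): probabilities are consumed from an iterator and flushed in fixed-size chunks, so the full n_points × n_labels structure never has to exist in memory.

```python
import csv
import itertools

def write_probs_batched(prob_rows, out_path, labels, batch_size=100_000):
    """Stream per-point label probabilities to a CSV file chunk by
    chunk instead of materializing one huge in-memory structure."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(labels)  # header: one column per label
        while True:
            # Pull at most batch_size rows from the iterator...
            batch = list(itertools.islice(prob_rows, batch_size))
            if not batch:
                break
            writer.writerows(batch)  # ...and flush them immediately
```

Writing per batch keeps peak RAM proportional to batch_size rather than to the point-cloud size; the same loop structure would work for .ply output too, if a streaming writer is available.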
