
APP-Net: Auxiliary-point-based Push and Pull Operations for Efficient Point Cloud Classification

arXiv: https://arxiv.org/abs/2205.00847

Tao Lu, Chunxu Liu, Youxin Chen, Gangshan Wu, Limin Wang
Multimedia Computing Group, Nanjing University

Introduction

APP-Net is a fast and memory-efficient backbone for point cloud recognition. Its efficiency comes from the fact that the total computational complexity is linear in the number of input points. To achieve this, we abandon the FPS+kNN (or Ball Query) paradigm and propose a RandomSample+1NN aggregator. To the best of our knowledge, APP-Net is the first pure-cluster-based backbone for point cloud processing.
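The RandomSample+1NN idea can be illustrated with a minimal NumPy sketch (the repo itself uses PyTorch and custom CUDA kernels; the function name and shapes below are illustrative assumptions, not code from this project):

```python
import numpy as np

def random_sample_1nn(points, n_aux, seed=0):
    """Pick n_aux auxiliary points at random, then assign every input
    point to its nearest auxiliary point (1NN).
    Returns (aux_points, assignment), where assignment[i] is the index
    of the auxiliary point closest to points[i]."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=n_aux, replace=False)
    aux = points[idx]                                   # (n_aux, 3)
    # Pairwise squared distances between all points and auxiliary points.
    d2 = ((points[:, None, :] - aux[None, :, :]) ** 2).sum(-1)  # (N, n_aux)
    assignment = d2.argmin(axis=1)                      # (N,)
    return aux, assignment

pts = np.random.rand(1024, 3).astype(np.float32)
aux, assign = random_sample_1nn(pts, n_aux=64)
# Each of the 1024 points now belongs to exactly one of 64 clusters.
```

Random sampling is O(M) and the 1NN assignment is O(N·M), so the aggregation stays linear in the number of input points N, unlike FPS, whose cost grows with both N and the number of sampled centers.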

Setup

The versions of the dependencies below can be adjusted to match your machine. The setup was tested on Ubuntu 16.04.

conda create -n APPNet python=3.7
conda activate APPNet
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.12.1+cu113.html
pip install -e pointnet2_ops_lib/
pip install -r requirements.txt
pip install python-pcl

Datasets

ModelNet40:

Download it from the official website and unzip it into the classification/datasets folder:

mkdir -p classification/datasets
cd classification/datasets
wget -c https://shapenet.cs.stanford.edu/media/modelnet40_normal_resampled.zip --no-check-certificate
unzip modelnet40_normal_resampled.zip
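Files in modelnet40_normal_resampled store one point per line as comma-separated values (xyz coordinates followed by a normal). A minimal loader sketch, under that assumption (the function is illustrative, not part of this repo):

```python
import numpy as np
from io import StringIO

def load_modelnet_txt(fobj, n_points=1024):
    """Load a modelnet40_normal_resampled point file and keep the
    first n_points rows. Each row is assumed to be x,y,z,nx,ny,nz."""
    pts = np.loadtxt(fobj, delimiter=",").astype(np.float32)
    return pts[:n_points]          # (n_points, 6): xyz + normals

# Tiny in-memory example standing in for a real .txt file:
sample = StringIO("0.1,0.2,0.3,0.0,0.0,1.0\n0.4,0.5,0.6,1.0,0.0,0.0\n")
print(load_modelnet_txt(sample, n_points=2).shape)  # (2, 6)
```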

ScanObjectNN

cd classification/datasets

Download the data from Google Drive, the official source, or BaiduYun (code: cdn4).

unzip scanobjectnn.zip

S3DIS

We support two versions of S3DIS, with slight differences in how the scenes are split. The first is the one adopted by PointNet2_PyTorch.

mkdir -p semseg/dataset
cd semseg/dataset
wget -c https://shapenet.cs.stanford.edu/media/indoor3d_sem_seg_hdf5_data.zip --no-check-certificate
unzip indoor3d_sem_seg_hdf5_data.zip
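In this HDF5 release, all_files.txt lists the .h5 shards and room_filelist.txt maps every block to its room, so a held-out test area (commonly Area_5) can be selected by name. A sketch of that split, where the helper function is an assumption based on the file listing, not code from this repo:

```python
def split_by_area(room_filelist, test_area="Area_5"):
    """Return (train_idx, test_idx) block indices given the lines of
    room_filelist.txt, holding out one area for testing."""
    rooms = [line.strip() for line in room_filelist]
    train = [i for i, r in enumerate(rooms) if test_area not in r]
    test = [i for i, r in enumerate(rooms) if test_area in r]
    return train, test

# Tiny example standing in for the real room_filelist.txt:
rooms = ["Area_1_office_1", "Area_5_office_3", "Area_5_hallway_1"]
train_idx, test_idx = split_by_area(rooms)
print(train_idx, test_idx)  # [0] [1, 2]
```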

The second is the one adopted by PointCNN and PVCNN. You can follow them to pre-process the data. We also provide a processed copy on BaiduYun (code: 5t2n), which can be uncompressed with

cat pointcnn.tar.gz* | tar zx

The dataset file structure should look like this:

classification/
    datasets/
        modelnet40_normal_resampled_cache/
            train/
            test/
        scanobjectnn/
            h5_files/
                main_split/
                    training_objectdataset_augmentedrot_scale75.h5
                    test_objectdataset_augmentedrot_scale75.h5

semseg/
    dataset/
        indoor3d_sem_seg_hdf5_data/
            all_files.txt
            room_filelist.txt
            ply_data_all_0.h5
            *.h5
        pointcnn/
            Area_1/
            Area_2/
            Area_3/
            Area_4/
            Area_5/
            Area_6/

Usage

We provide scripts to simplify getting started. To train a classification model, run

sh cls_train.sh

To train a segmentation model (coming soon), run

sh segmseg_train.sh

To change the experiment's dataset and backbone, modify the relevant keywords in

{task}/config/config_{task}.yaml

{task} refers to classification or semseg.
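The configs are YAML files, so they can also be edited programmatically. A minimal sketch — the actual keys in config_{task}.yaml are not shown in this README, so the 'dataset' and 'backbone' keys below are assumptions used only to illustrate the workflow:

```python
import yaml  # PyYAML, installed via requirements in most setups

# Stand-in for the contents of config_classification.yaml:
example = """
dataset: modelnet40
backbone: appnet
"""
cfg = yaml.safe_load(example)
cfg["dataset"] = "scanobjectnn"   # switch the experiment dataset
print(yaml.safe_dump(cfg))
```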

Citation

@misc{lu2022appnet,
    title={APP-Net: Auxiliary-point-based Push and Pull Operations for Efficient Point Cloud Classification},
    author={Tao Lu and Chunxu Liu and Youxin Chen and Gangshan Wu and Limin Wang},
    year={2022},
    eprint={2205.00847},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Acknowledgement

This project builds on the following repos; sincere thanks for their efforts:

Pointnet2_PyTorch, pointMLP-pytorch, pvcnn.
