
spconv-plus's Introduction

spconv-plus

This project is based on the original spconv. We integrate several new sparse convolution types and operators into this library that may be useful.

1. Operators

Focals Conv

This is introduced in our CVPR 2022 (oral) paper. In this paper, we introduce a new type of sparse convolution that makes feature sparsity learnable with position-wise importance prediction.

The source code for this operator in this library is Focals Conv. An example of using this operator is shown in its repo; a minimal usage sketch also follows the figure caption below.

(left - submanifold sparse conv, mid - regular sparse conv, right - focal sparse conv)
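
Below is a minimal construction sketch, not taken from the repo: it assumes FocalsConv3d is exported from spconv.pytorch alongside the other conv classes and that its constructor arguments match the class definition quoted in the issues section further down. The importance-prediction branch and its loss live in the Focals Conv repo and are not reproduced here.

import spconv.pytorch as spconv

# Hypothetical layer setup; argument names follow the FocalsConv3d definition
# quoted later on this page.
focal_conv = spconv.FocalsConv3d(in_channels=32,
                                 out_channels=64,
                                 kernel_size=3,
                                 stride=1,
                                 padding=1,
                                 indice_key='focal1')
# x is assumed to be a spconv.SparseConvTensor built by earlier layers; the extra
# forward arguments (mask, ori_feat_num) follow the SparseConvolution.forward
# signature quoted in the issues section.
# out = focal_conv(x, mask)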

Spatial Pruned Conv

This is introduced in our NeurIPS 2022 paper. In this paper, we propose two new convolution operators, spatial pruned submanifold sparse convolution (SPSS-Conv) and spatial pruned regular sparse convolution (SPRS-Conv), both of which are based on the idea of dynamically determining crucial areas for redundancy reduction.

The source code for these two operators in this library is provided as SPSSConv3d and SPRSConv3d. Examples for them can be found in this file and in its repo; a hedged construction sketch is also shown below.
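
A minimal construction sketch, not taken from the repo: it assumes SPSSConv3d and SPRSConv3d are exported from spconv.pytorch and take the usual sparse-conv constructor arguments (SPSS mirroring SubMConv3d, SPRS mirroring SparseConv3d). The forward call with a mask follows the net(x, mask) usage visible in the examples.py traceback quoted in the issues below; how the importance mask is built is defined in examples.py and the paper's repo.

import spconv.pytorch as spconv

# Hypothetical constructor arguments, assumed to mirror SubMConv3d / SparseConv3d.
spss = spconv.SPSSConv3d(32, 32, kernel_size=3, padding=1, indice_key='spss1')
sprs = spconv.SPRSConv3d(32, 64, kernel_size=3, stride=2, padding=1, indice_key='sprs1')

# x: spconv.SparseConvTensor; mask: per-voxel importance used to decide which
# positions are computed or pruned (its construction is shown in examples.py).
# out = spss(x, mask)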

Spatial-wise Group Conv

This is introduced in our arXiv paper. In this paper, we introduce spatial-wise group (partition) convolution, which enables an efficient way to implement large 3D kernels.

The source code for this operator in this library is shown in SpatialGroupConv3d. An example for it is shown in this file; a hedged construction sketch follows.
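
A minimal construction sketch, not taken from the repo: it assumes SpatialGroupConv3d is exported from spconv.pytorch and accepts the standard sparse-conv arguments with a large kernel_size; any partition-specific arguments (for instance the group_map seen in the SparseConvolution.forward signature quoted in the issues below) are prepared in the referenced example file and are not shown here.

import spconv.pytorch as spconv

# Hypothetical arguments for a spatial-wise group convolution with a 7x7x7 kernel.
large_conv = spconv.SpatialGroupConv3d(32, 64, kernel_size=7, padding=3,
                                       indice_key='sgc1')
# out = large_conv(x)   # x: spconv.SparseConvTensor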

Channel-wise Group Conv

This is the commonly used group convolution, which we also implement in this library. You can directly set "groups" in SparseConvolution, as sketched below.
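
For example (a small sketch following the statement above; the layer choice and channel counts are illustrative):

import spconv.pytorch as spconv

# groups=4 splits the 64 input and 64 output channels into 4 independent groups;
# as in dense group convolution, both channel counts must be divisible by groups.
conv = spconv.SubMConv3d(64, 64, kernel_size=3, padding=1, groups=4,
                         indice_key='subm_group')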

Submanifold Sparse Max Pooling

We enable the submanifold version of sparse max pooling in this library. You can directly set "subm=True" when using SparseMaxPool3d. For example,

spconv.SparseMaxPool3d(3, 1, 1, subm=True, algo=ConvAlgo.Native, indice_key='max_pool')
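
For context, here is a minimal end-to-end sketch around the constructor above (shapes and values are illustrative; it assumes a CUDA build and that ConvAlgo is re-exported from spconv.pytorch):

import torch
import spconv.pytorch as spconv

# Toy input: 3 active voxels in a single-sample batch.
features = torch.randn(3, 16).cuda()                    # N x C voxel features
indices = torch.tensor([[0, 0, 0, 0],
                        [0, 1, 2, 3],
                        [0, 4, 5, 6]], dtype=torch.int32).cuda()  # batch, z, y, x
x = spconv.SparseConvTensor(features, indices, spatial_shape=[8, 8, 8], batch_size=1)

pool = spconv.SparseMaxPool3d(3, 1, 1, subm=True,
                              algo=spconv.ConvAlgo.Native,
                              indice_key='max_pool')
out = pool(x)  # with subm=True the output keeps the input's sparsity pattern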

2. Installation

This repo is based on cumm==0.2.8 and pccm==0.3.4, and should be built from source. Following the README of the spconv library:

  • install build-essential, install CUDA
  • run export SPCONV_DISABLE_JIT="1"
  • run pip install pccm cumm wheel
  • run python setup.py bdist_wheel, then pip install dists/xxx.whl

3. Citation

Please consider citing our papers if this repo is helpful.

@inproceedings{focalsconv-chen,
  title={Focal Sparse Convolutional Networks for 3D Object Detection},
  author={Chen, Yukang and Li, Yanwei and Zhang, Xiangyu and Sun, Jian and Jia, Jiaya},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2022}
}
@inproceedings{liu2022spatial,
  title={Spatial Pruned Sparse Convolution for Efficient 3D Object Detection},
  author={Liu, Jianhui and Chen, Yukang and Ye, Xiaoqing and Tian, Zhuotao and Tan, Xiao and Qi, Xiaojuan},
  booktitle={Advances in Neural Information Processing Systems},
  year={2022}
}
@article{largekernel3d-chen,
  author    = {Chen, Yukang and Liu, Jianhui and Qi, Xiaojuan and Zhang, Xiangyu and Sun, Jian and Jia, Jiaya},
  title     = {Scaling up Kernels in 3D CNNs},
  journal   = {arXiv},
  year      = {2022},
}

spconv-plus's People

Contributors

yukang2017


spconv-plus's Issues

ValueError: /io/include/tensorview/tensor.h(171) don't compiled with cuda

Dear authors, I installed spconv-plus and ran the script examples.py, but something went wrong.
Here is my package info:
cuda=11.3
torch=1.10.0
torch.version.cuda=11.3

python examples.py
[Exception|indice_conv]feat=torch.Size([240000, 32]),w=torch.Size([3, 3, 3, 32, 32]),pair=torch.Size([2, 27, 240000]),pairnum=tensor([ 5527, 6017, 5605, 5576, 6206, 5602, 5575, 6049, 5587,
5673, 5994, 5628, 5615, 120000, 5518, 5550, 6006, 5685,
5542, 5851, 5650, 5649, 6037, 5648, 5643, 5995, 5489],
device='cuda:0', dtype=torch.int32),act=240000,algo=ConvAlgo.Native
SPCONV_DEBUG_SAVE_PATH not found, you can specify SPCONV_DEBUG_SAVE_PATH as debug data save path to save debug data which can be attached in a issue.
Traceback (most recent call last):
File "examples.py", line 313, in
main_spss(algo=spconv.ConvAlgo.Native, dtype=torch.float32, pruning_ratio=0.5)
File "examples.py", line 194, in main_spss
out = net(features_t, indices_t, bs, pruning_ratio=pruning_ratio)
File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "examples.py", line 59, in forward
return self.net(x, mask) # .dense()
File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in call_impl
return forward_call(*input, **kwargs)
File "/usr/local/miniconda3/lib/python3.8/site-packages/spconv/pytorch/conv.py", line 372, in forward
out_features = conv_func(
File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 94, in decorate_fwd
return fwd(*args, **kwargs)
File "/usr/local/miniconda3/lib/python3.8/site-packages/spconv/pytorch/functional.py", line 84, in forward
raise e
File "/usr/local/miniconda3/lib/python3.8/site-packages/spconv/pytorch/functional.py", line 69, in forward
return conv_func(features,
File "/usr/local/miniconda3/lib/python3.8/site-packages/spconv/pytorch/ops.py", line 767, in indice_conv
tuned_res, min_time = GEMM.tune_and_cache(
File "/usr/local/miniconda3/lib/python3.8/site-packages/spconv/algo.py", line 324, in tune_and_cache
c_ = c.clone()
ValueError: /io/include/tensorview/tensor.h(171) don't compiled with cuda

Dear developer, there is an error that appears when I try to run the command python setup.py bdist_wheel.

Is there something wrong with the version of setuptools? Here is the traceback of the error:

File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/setuptools/init.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 973, in run_commands
self.run_command(cmd)
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 992, in run_command
cmd_obj.run()
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 325, in run
self.run_command("build")
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
self.distribution.run_command(command)
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 992, in run_command
cmd_obj.run()
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/setuptools/_distutils/command/build.py", line 132, in run
self.run_command(cmd_name)
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
self.distribution.run_command(command)
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 992, in run_command
cmd_obj.run()
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/pccm/extension.py", line 59, in run
self.build_extension(ext)
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/pccm/extension.py", line 78, in build_extension
lib_path = pccm.builder.build_pybind(ext.cus,
File "/home/lyd/miniconda3/envs/pcr/lib/python3.8/site-packages/pccm/builder/pybind.py", line 120, in build_pybind
return ccimport.ccimport(
TypeError: ccimport() got multiple values for argument 'std'

usage of spconv-plus

Hello, I noticed some issues while using spconv-plus. For example, in the example code, I could not find corresponding files in the corresponding folders for many of the referenced libraries.

Thank you very much for your reply

Please help me solve this problem

cuda 11.3,
python 3.8,
pytorch 1.11.0,
spconv-plus 2.1.21

import spconv.pytorch
Traceback (most recent call last):
File "", line 1, in
File "/home/suwei/anaconda3/envs/max_voxelnext/lib/python3.8/site-packages/spconv/pytorch/init.py", line 6, in
from spconv.pytorch.core import SparseConvTensor
File "/home/suwei/anaconda3/envs/max_voxelnext/lib/python3.8/site-packages/spconv/pytorch/core.py", line 21, in
from spconv.tools import CUDAKernelTimer
File "/home/suwei/anaconda3/envs/max_voxelnext/lib/python3.8/site-packages/spconv/tools.py", line 16, in
from spconv.cppconstants import CPU_ONLY_BUILD
File "/home/suwei/anaconda3/envs/max_voxelnext/lib/python3.8/site-packages/spconv/cppconstants.py", line 15, in
import spconv.core_cc as _ext
ImportError: arg(): could not convert default argument 'timer: tv::CUDAKernelTimer' in method '<class 'spconv.core_cc.cumm.gemm.main.GemmParams'>.__init__' into a Python object (type not registered yet?)

FocalsConv3d usage

Hello, it is amazing to use your Focals Conv backbone as the point cloud feature extractor! But I found that the backbone occupies too much GPU memory while training on an RTX 3090 and also slows down the training.

I am very glad to find that you implemented FocalsConv with CUDA; I also followed your instructions and looked at the examples in the file focal_sparse_conv_cuda.py, but I found no use of FocalsConv3d there, so I looked up the source code of FocalsConv3d in the spconv package. I wonder how to use it correctly in the file spconv_backbone_focal.py. Following the usage of spconv.SubMConv3d in the VoxelBackBone8xFocal class, I replaced the special_spconv_fn (which is for FocalSparseConv) with a FocalsConv3d block. But I found that its forward function has a parameter ori_feat_num that confuses me; could you tell me what this param stands for?

the source code is as follows:

class FocalsConv3d(SparseConvolution):
    def __init__(self,
                 in_channels,
                 out_channels,
                 kernel_size,
                 stride=1,
                 padding=0,
                 dilation=1,
                 groups=1,
                 bias=True,
                 indice_key=None,
                 algo: Optional[ConvAlgo] = None,
                 fp32_accum: Optional[bool] = None,
                 name=None):
        super(FocalsConv3d, self).__init__(3,
                                         in_channels,
                                         out_channels,
                                         kernel_size,
                                         stride,
                                         padding,
                                         dilation,
                                         groups,
                                         bias,
                                         subm=False,
                                         focal=True,
                                         indice_key=indice_key,
                                         algo=algo,
                                         fp32_accum=fp32_accum,
                                         name=name)


forward function in SparseConvolution:
def forward(self, input: SparseConvTensor, mask=None, ori_feat_num=-1, group_map=None):
If FocalsConv3d is called, ori_feat_num should be > 0. For the KITTI dataset, should the param be 1 or 3? Does it mean the number of orientation features?

Another question: is it right to replace special_spconv_fn with spconv.FocalsConv3d without changing any parameters other than kernel_size, in/out_channels, and stride?

I would appreciate your reply. Thank you again.
