
itailang / samplenet

356 stars · 9 watching · 39 forks · 1.06 MB

Differentiable Point Cloud Sampling (CVPR 2020 Oral)

Home Page: https://arxiv.org/abs/1912.03663

License: Other

Shell 1.24% Python 78.43% C++ 11.12% Cuda 9.10% Dockerfile 0.12%
point-cloud sampling neural-network deep-learning geometry-processing tensorflow pytorch cvpr2020

samplenet's Introduction

SampleNet: Differentiable Point Cloud Sampling

Created by Itai Lang, Asaf Manor, and Shai Avidan from Tel Aviv University.

[teaser figure]

Introduction

This work is based on our arXiv tech report. Please read it for more information. You are also welcome to watch the oral talk from CVPR 2020.

There is a growing number of tasks that work directly on point clouds. As the size of the point cloud grows, so do the computational demands of these tasks. A possible solution is to sample the point cloud first. Classic sampling approaches, such as farthest point sampling (FPS), do not consider the downstream task. A recent work showed that learning a task-specific sampling can improve results significantly. However, the proposed technique did not deal with the non-differentiability of the sampling operation and offered a workaround instead.

We introduce a novel differentiable relaxation for point cloud sampling. Our approach employs a soft projection operation that approximates sampled points as a mixture of points in the primary input cloud. The approximation is controlled by a temperature parameter and converges to regular sampling when the temperature goes to zero. During training, we use a projection loss that encourages the temperature to drop, thereby driving every sample point to be close to one of the input points.
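
To make the soft projection concrete, here is a minimal, self-contained PyTorch sketch of the idea. The helper name soft_project and its signature are illustrative assumptions, not the repository's API (the actual implementation lives in soft_projection.py under registration/src/ and uses a learnable temperature and CUDA-accelerated k-NN):

import torch

def soft_project(query_points, input_cloud, temperature, group_size=8):
    """Soft projection sketch: each query point becomes a softmax-weighted
    mixture of its group_size nearest neighbors in the input cloud.

    query_points: (B, M, 3) simplified points produced by the network.
    input_cloud:  (B, N, 3) original point cloud.
    temperature:  scalar t; the weights approach one-hot as t -> 0.
    """
    # Pairwise squared distances between query and input points: (B, M, N)
    dist = torch.cdist(query_points, input_cloud) ** 2
    # group_size nearest input points per query point: (B, M, k)
    knn_dist, knn_idx = dist.topk(group_size, dim=-1, largest=False)
    # Gather neighbor coordinates: (B, M, k, 3)
    idx = knn_idx.unsqueeze(-1).expand(-1, -1, -1, 3)
    neighbors = input_cloud.unsqueeze(1).expand(
        -1, query_points.shape[1], -1, -1
    ).gather(2, idx)
    # Mixture weights from scaled negative distances: (B, M, k)
    weights = torch.softmax(-knn_dist / temperature ** 2, dim=-1)
    # Each projected point is a convex combination of its neighbors: (B, M, 3)
    return (weights.unsqueeze(-1) * neighbors).sum(dim=2)

As the temperature drops, the weights become nearly one-hot, so each projected point snaps to its nearest input point, recovering the regular-sampling limit described above.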

This approximation scheme leads to consistently good results on various applications such as classification, retrieval, and geometric reconstruction. We also show that the proposed sampling network can be used as a front to a point cloud registration network. This is a challenging task since sampling must be consistent across two different point clouds. In all cases, our method works better than existing non-learned and learned sampling alternatives.

Citation

If you find our work useful in your research, please consider citing:

@InProceedings{lang2020samplenet,
  author = {Lang, Itai and Manor, Asaf and Avidan, Shai},
  title = {{SampleNet: Differentiable Point Cloud Sampling}},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages = {7578--7588},
  year = {2020}
}

Installation and usage

This project contains three sub-directories, each of which is a stand-alone project with its own instructions. Please see the README.md in the classification, registration, and reconstruction directories.

Usage of SampleNet in another project

The classification and reconstruction projects are implemented in TensorFlow; the registration project is implemented in PyTorch. Below is an example code snippet showing how to use the PyTorch implementation of SampleNet with any task. The SampleNet source files for this example are in the registration/src/ folder.
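
The total training objective assembled in the snippet combines the task loss with SampleNet's two regularization terms. In the spirit of Equation (1) referenced in the code (α and λ correspond to args.alpha and args.lmbda; the subscript names paraphrase the code's variable names rather than quoting the paper's exact notation):

L_{\text{total}} = L_{\text{task}} + \alpha \cdot L_{\text{simplification}} + \lambda \cdot L_{\text{projection}}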

import torch
from src import SampleNet, sputils

"""This code bit assumes a defined and pretrained task_model(),
data, and an optimizer."""

"""Get SampleNet parsing options and add your own."""
parser = sputils.get_parser()
args = parser.parse_args()

"""Create a data loader."""
trainloader = torch.utils.data.DataLoader(
    DATA, batch_size=32, shuffle=True
)

"""Create a SampleNet sampler instance."""
sampler = SampleNet(
    num_out_points=args.num_out_points,
    bottleneck_size=args.bottleneck_size,
    group_size=args.projection_group_size,
    initial_temperature=1.0,
    input_shape="bnc",
    output_shape="bnc",
)

"""For inference time behavior, set sampler.training = False."""
sampler.training = True

"""Training routine."""
for epoch in range(EPOCHS):  # EPOCHS: number of training epochs (placeholder)
    for pc in trainloader:
        # Sample and predict
        simp_pc, proj_pc = sampler(pc)
        pred = task_model(proj_pc)

        # Compute losses
        simplification_loss = sampler.get_simplification_loss(
                pc, simp_pc, args.num_out_points
        )
        projection_loss = sampler.get_projection_loss()
        samplenet_loss = args.alpha * simplification_loss + args.lmbda * projection_loss

        task_loss = task_model.loss(pred)

        # Equation (1) in SampleNet paper
        loss = task_loss + samplenet_loss

        # Backward + Optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
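
For completeness, here is a hedged sketch of inference-time usage following the training loop above. It assumes the forward signature is unchanged in evaluation mode and that a testloader exists; both are assumptions, not part of the original snippet:

"""Inference routine (sketch). Setting sampler.training = False switches
SampleNet to its inference behavior; per the paper, the soft projection
converges to regular sampling, so the returned points are assumed here
to come from the input cloud itself."""
sampler.training = False

with torch.no_grad():
    for pc in testloader:  # testloader: assumed DataLoader over test data
        simp_pc, proj_pc = sampler(pc)
        pred = task_model(proj_pc)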

License

This project is licensed under the terms of the MIT license (see LICENSE for details).

samplenet's People

Contributors

asafmanor, itailang


samplenet's Issues

Question about reflectivity

Hello, your work has helped me a lot, but I ran into a problem while using it. If I want to use the reflectivity information from the LiDAR, how can I preserve this feature during SampleNet downsampling?

Visualization

Hi!
First of all, your ideas are very good, and thank you very much for sharing the code. I have a question: how should the registration results be visualized? I hope you can help me! Thanks.

registration result visualization

Hi!
Thank you very much for sharing the code. How should the registration results be visualized with the point cloud and CAD model (like Fig. 10 in your paper)?
Thank you again!

Improve README for registration task

I tried to simply run the Docker container, following the instructions in the README, but an error occurred telling me that the file /workspace does not exist.
Then a series of errors occurred, so I had to manually clone the Pointnet2_Pytorch repo.
However, after I cloned it and placed it in what I suppose is the correct path, ./registration/workspace/Pointnet2_Pytorch, another error occurred.

Some improvements to the install instructions in the README would be helpful.

Can't run compile_ops.sh in classification folder and GPU usage

Hi Itailang, I'm trying to reproduce your paper, but I ran into some problems. To provide some context: I have successfully downloaded the Docker image and entered the container. However, when I tried to run the script sh compile_ops.sh, I got an error:

: not found.sh: 2: compile_ops.sh: 
compile_ops.sh: 3: cd: can't cd to ./grouping
: not found.sh: 4: compile_ops.sh: 
sh: 0: Can't open tf_grouping_compile.sh
: not found.sh: 7: compile_ops.sh: 
compile_ops.sh: 8: cd: can't cd to ../structural_losses/
: not found.sh: 9: compile_ops.sh: 
sh: 0: Can't open tf_nndistance_compile.sh
: not found.sh: 12: compile_ops.sh: 
sh: 0: Can't open tf_approxmatch_compile.sh

I used a test.sh to check whether anything was wrong:

test.sh

cd ./grouping

My shell file works normally, but I don't know why compile_ops.sh doesn't.

So I decided to compile the ops separately. I went to the grouping directory and tried to run tf_grouping_compile.sh, but it failed again. I have ensured that the CUDA path is correct. Now the error is:

: not found_compile.sh: 2: tf_grouping_compile.sh: 
: not found_compile.sh: 5: tf_grouping_compile.sh: 
’; did you mean ‘-fPIC’?
: not found_compile.sh: 7: tf_grouping_compile.sh: 
g++: error: tf_grouping_g.cu.o: No such file or directory

For better background, tf_grouping_compile.sh is:

#!/usr/bin/env bash

TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')

/usr/local/cuda-10.0/bin/nvcc tf_grouping_g.cu -o tf_grouping_g.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC

# TF1.13
g++ -std=c++11 tf_grouping.cpp tf_grouping_g.cu.o -o tf_grouping_so.so -shared -fPIC -I $TF_INC -I /usr/local/cuda-10.0/include -I $TF_INC/external/nsync/public -lcudart -L /usr/local/cuda-10.0/lib64/ -L$TF_LIB -ltensorflow_framework -O2 -D_GLIBCXX_USE_CXX11_ABI=0

I kindly request your assistance in troubleshooting the issue. I would appreciate any guidance or suggestions you can provide to help resolve the problem. Looking forward to your reply!

Best regards,
Zhirong

Reconstruction Use other data

Hi!
I ran your reconstruction network, and the results are very good. But I want to use a point cloud model of my own for reconstruction sampling; that is to say, I want to use your trained network to downsample a point cloud of my own. What should I do?

Classification Accuracy

Could you please show me the specific classification data when sampling with SampleNet? I found that SampleNet64 and SampleNet32 have the same classification accuracy.

Is the reported classification accuracy the eval accuracy during training, or the eval accuracy from evaluate_samplenet.py?

I also found that the eval accuracy during training is 3% higher than the eval accuracy from evaluate_samplenet.py.

The paper says that after training the S-NET or SampleNet model, the trained model parameters are used to train the classification network, but the code does not do this. Could you please give me some advice?

I'm looking forward to your reply! @itailang @asafmanor

Failure when downloading the ShapeNet Data

When I was trying to start the reconstruction task, the automatic download seemed to fail.
I downloaded the zip file from Dropbox manually and it works. Is it due to a network issue, or is something wrong with the code?

Cannot train "SampleNet.py" in registration.

When I run "git checkout 5ff4382f56a8cbed2b5edd3572f97436271aba89", the error is as below:
fatal: reference is not a tree: 5ff4382f56a8cbed2b5edd3572f97436271aba89

However, when I skip the "git checkout 5ff4382f56a8cbed2b5edd3572f97436271aba89", the error is as below:
CUDA kernel failed: no kernel image is available for execution on the device
void group_points_kernel_wrapper(int, int, int, int, int, const float *, const int *, float *) at L:38 in pointnet2/_ext-src/src/group_points_gpu.cu

Look forward to your help! Thank you very much!

How to process my point cloud file with the network

Hi,
I have trained and evaluated SampleNet for registration, and the result seems good.
But I don't know how to process my own point cloud file (.txt or .ply) with the network, nor do I know how to output and save the result. Could you tell me what I should do? How should I modify the code?
This will be very helpful to me, thanks a lot!

problem with dataloader

What does 16 mean here (line 141)? Why 16? Shouldn't the num_points for each point cloud file fed in be 1024?
Thank you for your answer.

Have you applied this work to PointNet++?

In this paper, I notice that you first sample points and then apply PointNet for classification.
Can we replace the FPS stage of PointNet++ with the sampling strategy in this work?

Thanks.
Jiaheng.

classification

How can I use the code for a classification task? That is, I have an existing model, optimizer, etc., but I just want to change farthest point sampling to SampleNet.

problem with knn_cuda torch

Thank you for sharing this great work. I wonder why this issue occurs.

soft_projection.py


    def _get_distances(self, grouped_points, query_cloud):
        deltas = grouped_points - query_cloud.unsqueeze(-1).expand_as(grouped_points)
        dist = torch.sum(deltas ** 2, dim=_axis_to_dim(3), keepdim=True) / self.sigma()
        return dist

Traceback (most recent call last):
  File "soft_projection.py", line 262, in <module>
    projected_points = propagator.project(query_cloud_pl, point_cloud_pl)
  File "soft_projection.py", line 141, in project
    dist = self._get_distances(grouped_points, query_cloud)
  File "soft_projection.py", line 95, in _get_distances
    dist = torch.sum(deltas ** 2, dim=_axis_to_dim(3), keepdim=True) / self.sigma()
TypeError: 'Tensor' object is not callable

Please help me! Thank you.

Compare with your first paper

Thanks for your great contributions on point cloud downsampling. I'd like to ask some questions if you don't mind.

In this paper, when training SampleNet, you use R (the sub-point cloud after soft projection) as the input to the task network.
However, in your first paper, you use the generated point cloud, not the "hard projection" point cloud after nn_matching, as the input to the task network.
Why not unify the two algorithms?

Thank you for your reply.

Some problems with converting Tensorflow framework to PyTorch framework

Question: Is there any difference in the network or the loss between the registration task and the classification task?

Description: I want to port the classification task of SampleNet's code to the PyTorch framework (like the example code snippet in README.md). The PointNet classifier works normally (I get a pretrained model with 90% eval accuracy). But I got a terrible training result with SampleNet: train accuracy (at the last epoch, 500) of 23% and eval accuracy of 61%. I checked my code and didn't find an error in the accuracy calculation. I don't know whether there is any difference in the network or the loss between the registration task and the classification task, because I haven't used TensorFlow in the past.

Using the SampleNet for LiDAR pointcloud

Hi!
Thank you for this impressive work. I'm interested in using SampleNet to sample task-aware points from LiDAR point clouds. As we know, LiDAR point clouds are sparse, non-uniform, and complex. Do you think this is feasible, or do you have any suggestions for me?

Train two SampleNet simultaneously

Hi!
Thanks for sharing the great work. I am wondering whether two SampleNets can be trained on the same point cloud simultaneously.
Say I have one object point cloud containing two shapes with very different features; the first SampleNet should only sample points from shape1 and the second SampleNet should sample points from shape2, and the task can be trained with some contrastive loss. Does that make any sense? I have tried a toy example, but both SampleNets just sample the same points. Any comments are very welcome!

class FCN_sampler(nn.Module):
    def __init__(self, shape1_num_out_points=512, shape2_num_out_points=512):
        super(FCN_sampler, self).__init__()
        # One SampleNet instance per shape
        self.sampler1 = SampleNet(
            num_out_points=shape1_num_out_points,
            bottleneck_size=128,
            group_size=8,
            initial_temperature=1.0,
            input_shape="bnc",
            output_shape="bnc")

        self.sampler2 = SampleNet(
            num_out_points=shape2_num_out_points,
            bottleneck_size=128,
            group_size=8,
            initial_temperature=1.0,
            input_shape="bnc",
            output_shape="bnc")

    def forward(self, x, shape1=True):
        if shape1:
            simp_pc, proj_pc = self.sampler1(x)
        else:
            simp_pc, proj_pc = self.sampler2(x)
        return simp_pc, proj_pc

## Sample points
sampler = FCN_sampler()
simp_pc1, coord1 = sampler(coord)
simp_pc2, coord2 = sampler(coord, shape1=False)

# Compute losses
simplification_loss = sampler.sampler1.get_simplification_loss(
        coord, simp_pc1, 512
)
projection_loss = sampler.sampler1.get_projection_loss()
loss1 = 0.01 * simplification_loss + 0.01* projection_loss

simplification_loss = sampler.sampler2.get_simplification_loss(
        coord, simp_pc2, 512
)
projection_loss = sampler.sampler2.get_projection_loss()

loss2 = 0.01 * simplification_loss + 0.01* projection_loss

samplenet_loss = loss1 + loss2 
