
ChamferDistancePytorch's Introduction

pip install torch ninja

Pytorch Chamfer Distance.

Includes a CUDA version, and a PYTHON version with pytorch standard operations. NB: in this repo, dist1 and dist2 are squared point-cloud Euclidean distances, so you should adapt thresholds accordingly.

  • F-score

CUDA VERSION

  • JIT compilation
  • Supports multi-GPU
  • 2D point clouds.
  • 3D point clouds.
  • 5D point clouds.
  • Contiguous() safe.

Python Version

  • Supports any dimension (see the sketch below)
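
For intuition, here is a minimal dimension-agnostic sketch in plain PyTorch; it illustrates the idea and is not the repo's chamfer_python module:

import torch

def naive_chamfer(p1, p2):
    # p1: (B, N, D), p2: (B, M, D) -- works for any dimension D.
    d = torch.cdist(p1, p2) ** 2   # pairwise squared Euclidean distances, (B, N, M)
    dist1, idx1 = d.min(dim=2)     # p1 -> p2 nearest neighbors, (B, N)
    dist2, idx2 = d.min(dim=1)     # p2 -> p1 nearest neighbors, (B, M)
    return dist1, dist2, idx1, idx2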

Usage

import torch, chamfer3D.dist_chamfer_3D, fscore
chamLoss = chamfer3D.dist_chamfer_3D.chamfer_3DDist()
# Batched point clouds of shape (batch, n_points, 3); the two clouds may differ in size.
points1 = torch.rand(32, 1000, 3).cuda()
points2 = torch.rand(32, 2000, 3, requires_grad=True).cuda()
# dist1/dist2: squared distance from each point to its nearest neighbor in the other cloud.
# idx1/idx2: indices of those nearest neighbors.
dist1, dist2, idx1, idx2 = chamLoss(points1, points2)
f_score, precision, recall = fscore.fscore(dist1, dist2)
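
Since dist1 and dist2 are squared distances (see the note above), thresholds and losses usually need adapting; a minimal sketch of two common patterns:

# Squared distances -> actual Euclidean distances.
euclidean1 = dist1.sqrt()
# A typical symmetric chamfer loss over the batch.
chamfer_loss = dist1.mean() + dist2.mean()
# An f-score threshold of 0.01 in Euclidean terms corresponds to 0.0001
# on the squared distances returned here.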

Add it to your project as a submodule

git submodule add https://github.com/ThibaultGROUEIX/ChamferDistancePytorch
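
After cloning, the submodule directory must be on your Python path before the imports in the Usage section work; a sketch, assuming the submodule sits next to your script:

import sys
sys.path.append("ChamferDistancePytorch")  # submodule checkout directory
import chamfer3D.dist_chamfer_3D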

Benchmark: [forward + backward] pass

  • CUDA 10.1, NVIDIA driver 435, PyTorch 1.4
  • p1 : 32 x 2000 x dim
  • p2 : 32 x 1000 x dim
Timing (ms)       2D     3D     5D
CUDA compiled     1.2    1.4    1.8
CUDA JIT          1.3    1.4    1.5
Python            37     37     37

Memory (MB)       2D     3D     5D
CUDA compiled     529    529    549
CUDA JIT          520    529    549
Python            2495   2495   2495
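
A sketch of how such numbers can be reproduced (the warm-up and iteration counts below are illustrative choices, not the script behind the table):

import time
import torch
import chamfer3D.dist_chamfer_3D

chamLoss = chamfer3D.dist_chamfer_3D.chamfer_3DDist()
p1 = torch.rand(32, 2000, 3, device="cuda", requires_grad=True)
p2 = torch.rand(32, 1000, 3, device="cuda")

def bench(iters=100, warmup=10):
    for i in range(warmup + iters):
        if i == warmup:  # exclude JIT compilation and CUDA warm-up from the timing
            torch.cuda.synchronize()
            start = time.time()
        dist1, dist2, _, _ = chamLoss(p1, p2)
        (dist1.mean() + dist2.mean()).backward()
        p1.grad = None
    torch.cuda.synchronize()
    return (time.time() - start) / iters * 1000  # ms per forward+backward

print(bench())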

What is the chamfer distance?

Stanford course on 3D deep Learning
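
In short, for two point sets the chamfer distance sums, over each point, the (here squared) Euclidean distance to its nearest neighbor in the other set. One common averaged form, in LaTeX notation, with dist1 and dist2 from the Usage section being the two directed terms before reduction:

d_{\mathrm{CD}}(S_1, S_2)
  = \frac{1}{|S_1|} \sum_{x \in S_1} \min_{y \in S_2} \lVert x - y \rVert_2^2
  + \frac{1}{|S_2|} \sum_{y \in S_2} \min_{x \in S_1} \lVert x - y \rVert_2^2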

Acknowledgments

Original backbone from Fei Xia.

JIT cool trick from Christian Diller

Troubleshoot

  • Undefined symbol: Zxxxxxxxxxxxxxxxxx

--> Fix: make sure to import torch before you import chamfer, and use PyTorch >= 1.1.0.

If JIT compilation fails because ninja is missing or outdated, install it manually:

wget https://github.com/ninja-build/ninja/releases/download/v1.8.2/ninja-linux.zip
sudo unzip ninja-linux.zip -d /usr/local/bin/
sudo update-alternatives --install /usr/bin/ninja ninja /usr/local/bin/ninja 1 --force

TODO:

  • Discuss the behaviour of torch.min() and tensor.min(), which causes issues in some PyTorch versions (see the sketch below)
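
For context, a minimal sketch of the two call forms; both return values and indices when a dimension is given, though the exact return type has varied across PyTorch releases:

import torch

x = torch.rand(4, 5)
vals_a, idx_a = torch.min(x, 1)   # functional form
vals_b, idx_b = x.min(dim=1)      # method form
assert torch.equal(vals_a, vals_b) and torch.equal(idx_a, idx_b)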

ChamferDistancePytorch's People

Contributors

chrdiller, thibaultgroueix


ChamferDistancePytorch's Issues

How to deal with the different dimension requirement of network and loss

Thanks for your work~ I have one question.
For pytorch code, the input shape of a network is (B,C,N): B for batch size, C for channels (for point-cloud coordinates, C=3, just like here) and N for the number of points.
So the output of the network should be (B,C,N) too. But a (B,N,C)-shaped input is required for ChamferLoss. What should I do to unify the dimensions of these two requirements?
For now, I do as follows:

input = input.transpose(2,1) # My input/gt is (B,N,C). so (B,N,C) => (B,C,N) and C = 3
output = net(input) # (B,C,N)
loss = ChamferLoss(output.transpose(2,1), groundtruth) # output (B,C,N) => (B,N,C)

I hope I described my confusion clearly.
Looking forward to your reply.
Best regards.

chamfer4D

Can you support 4D in this project?

Since the KITTI Semantic dataset is 4D (xyz + brightness), I tried to imitate a 4D version of chamfer. Can you help me inspect whether my code is correct?

Hi @ThibaultGROUEIX, thanks for your work! My 4D version, written by imitation, is below.

#include <stdio.h>
#include <ATen/ATen.h>

#include <cuda.h>
#include <cuda_runtime.h>

#include <vector>



__global__ void NmDistanceKernel(int b,int n,const float * xyz,int m,const float * xyz2,float * result,int * result_i){
	const int batch=2048;
	__shared__ float buf[batch*4];
	for (int i=blockIdx.x;i<b;i+=gridDim.x){
		for (int k2=0;k2<m;k2+=batch){
			int end_k=min(m,k2+batch)-k2;
			for (int j=threadIdx.x;j<end_k*4;j+=blockDim.x){
				buf[j]=xyz2[(i*m+k2)*4+j];
			}
			__syncthreads();
			for (int j=threadIdx.x+blockIdx.y*blockDim.x;j<n;j+=blockDim.x*gridDim.y){
				float x1=xyz[(i*n+j)*4+0];
				float y1=xyz[(i*n+j)*4+1];
				float r1=xyz[(i*n+j)*4+2];
				float g1=xyz[(i*n+j)*4+3];
				int best_i=0;
				float best=0;
				int end_ka=end_k-(end_k&3); // round end_k down to a multiple of 4 for the unrolled loop
				if (end_ka==batch){
					for (int k=0;k<batch;k+=4){
						{
							float x2=buf[k*4+0]-x1;
							float y2=buf[k*4+1]-y1;
							float r2=buf[k*4+2]-r1;
							float g2=buf[k*4+3]-g1;
							float d=x2*x2+y2*y2+r2*r2+g2*g2;
							if (k==0 || d<best){
								best=d;
								best_i=k+k2;
							}
						}
						{
							float x2=buf[k*4+4]-x1;
							float y2=buf[k*4+5]-y1;
							float r2=buf[k*4+6]-r1;
							float g2=buf[k*4+7]-g1;
							float d=x2*x2+y2*y2+r2*r2+g2*g2;
							if (d<best){
								best=d;
								best_i=k+k2+1;
							}
						}
						{
							float x2=buf[k*4+8]-x1;
							float y2=buf[k*4+9]-y1;
							float r2=buf[k*4+10]-r1;
							float g2=buf[k*4+11]-g1;
							float d=x2*x2+y2*y2+r2*r2+g2*g2;
							if (d<best){
								best=d;
								best_i=k+k2+2;
							}
						}
						{
							float x2=buf[k*4+12]-x1;
							float y2=buf[k*4+13]-y1;
							float r2=buf[k*4+14]-r1;
							float g2=buf[k*4+15]-g1;
							float d=x2*x2+y2*y2+r2*r2+g2*g2;
							if (d<best){
								best=d;
								best_i=k+k2+3;
							}
						}
					}
				}else{
					for (int k=0;k<end_ka;k+=4){
						{
							float x2=buf[k*4+0]-x1;
							float y2=buf[k*4+1]-y1;
							float r2=buf[k*4+2]-r1;
							float g2=buf[k*4+3]-g1;
							float d=x2*x2+y2*y2+r2*r2+g2*g2;
							if (k==0 || d<best){
								best=d;
								best_i=k+k2;
							}
						}
						{
							float x2=buf[k*4+4]-x1;
							float y2=buf[k*4+5]-y1;
							float r2=buf[k*4+6]-r1;
							float g2=buf[k*4+7]-g1;
							float d=x2*x2+y2*y2+r2*r2+g2*g2;
							if (d<best){
								best=d;
								best_i=k+k2+1;
							}
						}
						{
							float x2=buf[k*4+8]-x1;
							float y2=buf[k*4+9]-y1;
							float r2=buf[k*4+10]-r1;
							float g2=buf[k*4+11]-g1;
							float d=x2*x2+y2*y2+r2*r2+g2*g2;
							if (d<best){
								best=d;
								best_i=k+k2+2;
							}
						}
						{
							float x2=buf[k*4+12]-x1;
							float y2=buf[k*4+13]-y1;
							float r2=buf[k*4+14]-r1;
							float g2=buf[k*4+15]-g1;
							float d=x2*x2+y2*y2+r2*r2+g2*g2;
							if (d<best){
								best=d;
								best_i=k+k2+3;
							}
						}
					}
				}
				for (int k=end_ka;k<end_k;k++){
					float x2=buf[k*4+0]-x1;
					float y2=buf[k*4+1]-y1;
					float r2=buf[k*4+2]-r1;
					float g2=buf[k*4+3]-g1;
					float d=x2*x2+y2*y2+r2*r2+g2*g2;
					if (k==0 || d<best){
						best=d;
						best_i=k+k2;
					}
				}
				if (k2==0 || result[(i*n+j)]>best){
					result[(i*n+j)]=best;
					result_i[(i*n+j)]=best_i;
				}
			}
			__syncthreads();
		}
	}
}
// int chamfer_cuda_forward(int b,int n,const float * xyz,int m,const float * xyz2,float * result,int * result_i,float * result2,int * result2_i, cudaStream_t stream){
int chamfer_cuda_forward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor dist1, at::Tensor dist2, at::Tensor idx1, at::Tensor idx2){

	const auto batch_size = xyz1.size(0);
	const auto n = xyz1.size(1); //num_points point cloud A
	const auto m = xyz2.size(1); //num_points point cloud B

	NmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, n, xyz1.data<float>(), m, xyz2.data<float>(), dist1.data<float>(), idx1.data<int>());
	NmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, m, xyz2.data<float>(), n, xyz1.data<float>(), dist2.data<float>(), idx2.data<int>());

	cudaError_t err = cudaGetLastError();
	  if (err != cudaSuccess) {
	    printf("error in nnd updateOutput: %s\n", cudaGetErrorString(err));
	    //THError("aborting");
	    return 0;
	  }
	  return 1;


}
__global__ void NmDistanceGradKernel(int b,int n,const float * xyz1,int m,const float * xyz2,const float * grad_dist1,const int * idx1,float * grad_xyz1,float * grad_xyz2){
	for (int i=blockIdx.x;i<b;i+=gridDim.x){
		for (int j=threadIdx.x+blockIdx.y*blockDim.x;j<n;j+=blockDim.x*gridDim.y){
			float x1=xyz1[(i*n+j)*4+0];
			float y1=xyz1[(i*n+j)*4+1];
			float r1=xyz1[(i*n+j)*4+2];
			float g1=xyz1[(i*n+j)*4+3];
			int j2=idx1[i*n+j];
			float x2=xyz2[(i*m+j2)*4+0];
			float y2=xyz2[(i*m+j2)*4+1];
			float r2=xyz2[(i*m+j2)*4+2];
			float g2=xyz2[(i*m+j2)*4+3];
			float g=grad_dist1[i*n+j]*2;
			atomicAdd(&(grad_xyz1[(i*n+j)*4+0]),g*(x1-x2));
			atomicAdd(&(grad_xyz1[(i*n+j)*4+1]),g*(y1-y2));
			atomicAdd(&(grad_xyz1[(i*n+j)*4+2]),g*(r1-r2));
			atomicAdd(&(grad_xyz1[(i*n+j)*4+3]),g*(g1-g2));
			atomicAdd(&(grad_xyz2[(i*m+j2)*4+0]),-(g*(x1-x2)));
			atomicAdd(&(grad_xyz2[(i*m+j2)*4+1]),-(g*(y1-y2)));
			atomicAdd(&(grad_xyz2[(i*m+j2)*4+2]),-(g*(r1-r2)));
			atomicAdd(&(grad_xyz2[(i*m+j2)*4+3]),-(g*(g1-g2)));
		}
	}
}
// int chamfer_cuda_backward(int b,int n,const float * xyz1,int m,const float * xyz2,const float * grad_dist1,const int * idx1,const float * grad_dist2,const int * idx2,float * grad_xyz1,float * grad_xyz2, cudaStream_t stream){
int chamfer_cuda_backward(at::Tensor xyz1, at::Tensor xyz2, at::Tensor gradxyz1, at::Tensor gradxyz2, at::Tensor graddist1, at::Tensor graddist2, at::Tensor idx1, at::Tensor idx2){
	// cudaMemset(grad_xyz1,0,b*n*3*4);
	// cudaMemset(grad_xyz2,0,b*m*3*4);

	const auto batch_size = xyz1.size(0);
	const auto n = xyz1.size(1); //num_points point cloud A
	const auto m = xyz2.size(1); //num_points point cloud B

	NmDistanceGradKernel<<<dim3(1,16,1),256>>>(batch_size,n,xyz1.data<float>(),m,xyz2.data<float>(),graddist1.data<float>(),idx1.data<int>(),gradxyz1.data<float>(),gradxyz2.data<float>());
	NmDistanceGradKernel<<<dim3(1,16,1),256>>>(batch_size,m,xyz2.data<float>(),n,xyz1.data<float>(),graddist2.data<float>(),idx2.data<int>(),gradxyz2.data<float>(),gradxyz1.data<float>());

	cudaError_t err = cudaGetLastError();
	  if (err != cudaSuccess) {
	    printf("error in nnd get grad: %s\n", cudaGetErrorString(err));
	    //THError("aborting");
	    return 0;
	  }
	  return 1;

}
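
One way to check a kernel like this against a reference: the repo's pure-Python version supports any dimension, so its output on random 4D clouds should match. A sketch, where dist_chamfer_4D / chamfer_4DDist are hypothetical names assuming the extension above is bound the same way as chamfer3D:

import torch
import chamfer_python
import dist_chamfer_4D  # hypothetical binding of the 4D kernel above

distChamfer4D = dist_chamfer_4D.chamfer_4DDist()
p1 = torch.rand(2, 1024, 4).cuda()
p2 = torch.rand(2, 2048, 4).cuda()
dist1, dist2, idx1, idx2 = distChamfer4D(p1, p2)
mydist1, mydist2, _, _ = chamfer_python.distChamfer(p1, p2)
# Distances should agree up to float rounding (indices may differ on ties).
assert torch.allclose(dist1, mydist1, atol=1e-6)
assert torch.allclose(dist2, mydist2, atol=1e-6)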

Another question: is there any good tutorial or book on how to write CUDA ops? Thanks a lot!

Error when running forward

Hi, my code is like this:

from chamfer3D.dist_chamfer_3D import chamfer_3DDist
import torch
nnd = chamfer_3DDist()
pc1 = torch.rand(4, 2048, 3).cuda()
pc2 = torch.rand(4, 2048, 3).cuda()
dist1, dist2, _, _ = nnd(pc1, pc2)

Then I get:

error in nnd updateOutput: no kernel image is available for execution on the device

I'm using PyTorch 1.3.1, CUDA 10.0 and cuDNN 7.6.0.

How can I solve this?
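
"no kernel image is available" typically means the extension was compiled for a different GPU architecture than the one it runs on. A hedged workaround is to set TORCH_CUDA_ARCH_LIST to your card's compute capability before the extension is (re)built; the 7.5 below is only an example value for a Turing GPU, and any previously built extension cache may need clearing first:

import os
# Must be set before the extension is (JIT-)compiled on import.
os.environ["TORCH_CUDA_ARCH_LIST"] = "7.5"  # example; use your GPU's compute capability

import torch
import chamfer3D.dist_chamfer_3D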

Problem about installation

Thanks for your great work.
A g++ error occurred when I ran python setup.py install:

running install
running bdist_egg
running egg_info
writing chamfer_3D.egg-info/PKG-INFO
writing dependency_links to chamfer_3D.egg-info/dependency_links.txt
writing top-level names to chamfer_3D.egg-info/top_level.txt
listing git files failed - pretending there aren't any
reading manifest file 'chamfer_3D.egg-info/SOURCES.txt'
writing manifest file 'chamfer_3D.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'chamfer_3D' extension
creating /home/lq/New_p/VRCNet-main/chamfer3D/build
creating /home/lq/New_p/VRCNet-main/chamfer3D/build/temp.linux-x86_64-3.7
Emitting ninja build file /home/lq/New_p/VRCNet-main/chamfer3D/build/temp.linux-x86_64-3.7/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
1.10.2.git.kitware.jobserver-1
creating build/lib.linux-x86_64-3.7
g++ -pthread -shared -B /home/lq/anaconda3/envs/mvp/compiler_compat -L/home/lq/anaconda3/envs/mvp/lib -Wl,-rpath=/home/lq/anaconda3/envs/mvp/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/lq/New_p/VRCNet-main/chamfer3D/build/temp.linux-x86_64-3.7/chamfer_cuda.o /home/lq/New_p/VRCNet-main/chamfer3D/build/temp.linux-x86_64-3.7/chamfer3D.o -L/home/lq/anaconda3/envs/mvp/lib/python3.7/site-packages/torch/lib -L/usr/local/cuda-10.1/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.7/chamfer_3D.cpython-37m-x86_64-linux-gnu.so
g++: error: /home/lq/New_p/VRCNet-main/chamfer3D/build/temp.linux-x86_64-3.7/chamfer_cuda.o: No such file or directory
g++: error: /home/lq/New_p/VRCNet-main/chamfer3D/build/temp.linux-x86_64-3.7/chamfer3D.o: No such file or directory
error: command 'g++' failed with exit status 1

"error in nnd updateOutput: invalid device function"

I use ubuntu 18.04, 2080ti and pytorch==1.5.0 CUDA==10.1

In [1]: import torch                                                                                                                                                                                      
In [2]: import externals.ChamferDistancePytorch.chamfer3D.dist_chamfer_3D as dist_chamfer_3D                                                                                                              
Loaded compiled 3D CUDA chamfer distance
In [3]: chamLoss = dist_chamfer_3D.chamfer_3DDist()                                                                                                                                                       
In [4]: points1 = torch.rand(32, 1000, 3).cuda()
In [5]: points2 = torch.rand(32, 1000, 3).cuda()                                                                                                                                                          
In [6]: dist1, dist2, idx1, idx2 = chamLoss(points1, points2)                                                                                                                                             
error in nnd updateOutput: invalid device function

'Segmentation fault (core dumped)' when moving a tensor to a device

Hi! Thanks for your work.

I have run python setup.py install in chamfer2D with no error.
However, when using this package in Python 3.6, I get the following problem:

>>> import torch
>>> import chamfer_2D
>>> torch.tensor(1,device='cuda')
Segmentation fault (core dumped)

If I don't import chamfer_2D, everything works fine.

I am using CUDA 10.1 and PyTorch 1.4.0

Do you have any idea on how to solve this problem?

Best regards

The results of the two runs are different

Hello,
Thank you for your work! When I use it, I find that although I fixed the random seeds of torch and numpy, the chamfer distance still gives inconsistent results every time I run. Do you know how to deal with this situation?
Best,
diaojunqi

[Errno 2] No such file or directory: 'chamfer_cuda.cpp'

After cloning the repo at the same level as a Jupyter notebook, I can't seem to use this repo because of the error in the title.

My code is:
sys.path.append("ChamferDistancePytorch")

import torch, chamfer3D.dist_chamfer_3D, fscore
chamLoss = chamfer3D.dist_chamfer_3D.chamfer_3DDist()
points1 = torch.rand(32, 1000, 3).cuda()
points2 = torch.rand(32, 2000, 3, requires_grad=True).cuda()
dist1, dist2, idx1, idx2 = chamLoss(points1, points2)
f_score, precision, recall = fscore.fscore(dist1, dist2)

ModuleNotFoundError: No module named 'chamfer3D.dist_chamfer_3D'; 'chamfer3D' is not a package

Hi, thanks for your wonderful work, but I have a problem when installing chamfer3D.

I tried to install it with pip install -U . and python setup.py install --user, but when I try to run the example code in your README it raises the "ModuleNotFoundError".
Also, I tried it with pytorch=1.7.1, which shows the same error.
Do you have any idea about it? Thanks!

test chamfer failed

pc1.npy.zip
pc2.npy.zip

The test fails when using the above two point clouds, whose shape is [1, 8192, 3]. I found that the min index differs between ext.chamferDist() and chamfer_python.distChamfer.

The test passes when using https://github.com/ThibaultGROUEIX/chamfer_pytorch/blob/master/test_chamfer.py

pytorch version : 1.2

test code:

import torch
import dist_chamfer_idx as ext
import chamfer_python
distChamfer = ext.chamferDist()
from torch.autograd import Variable
import time
import numpy as np


def test_chamfer():
    distChamfer = ext.chamferDist()
    p1 = torch.from_numpy(np.load("pc1.npy")).cuda()
    p2 = torch.from_numpy(np.load("pc2.npy")).cuda()
    points1 = Variable(p1, requires_grad=True)
    points2 = Variable(p2)
    dist1, dist2, idx1, idx2= distChamfer(points1, points2)

    loss = torch.sum(dist1)
    print(loss)
    loss.backward()
    print(points1.grad, points2.grad)

    mydist1, mydist2, myidx1, myidx2 = chamfer_python.distChamfer(points1, points2)
    d1 = (dist1 - mydist1) ** 2
    d2 = (dist2 - mydist2) ** 2
    xd1 = idx1 - myidx1
    xd2 = idx2 - myidx2
    print("d1 = \n", d1)
    print("d2 = \n", d2)
    print("diff sum = \n", torch.sum(d1) + torch.sum(d2))
    print("xd1 = \n", xd1)
    print("xd2 = \n", xd2)
    print("xdiff norm sum = \n", torch.norm(xd1.float()) + torch.norm(xd2.float()))

    assert (
        torch.sum(d1) + torch.sum(d2) < 0.00000001
    ), "distance : chamfer cuda and chamfer normal are not giving the same results"
    assert (
            torch.norm(xd1.float()) + torch.norm(xd2.float()) == 0
    ), "index : chamfer cuda and chamfer normal are not giving the same results"

test_chamfer()
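
If the distance sums agree but the index assertion fires, a possible explanation is that the two implementations break ties between equidistant nearest neighbors differently; a hedged tie-tolerant check compares the distances the indices induce rather than the indices themselves:

def indices_equivalent(points1, points2, idx_a, idx_b, atol=1e-6):
    # Index sets are equivalent if they induce the same nearest-neighbor
    # distances, even when ties make the argmin itself differ.
    d = torch.cdist(points1, points2) ** 2                       # (B, N, M)
    d_a = torch.gather(d, 2, idx_a.long().unsqueeze(2)).squeeze(2)
    d_b = torch.gather(d, 2, idx_b.long().unsqueeze(2)).squeeze(2)
    return torch.allclose(d_a, d_b, atol=atol)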

@ThibaultGROUEIX

Compile problem

Hi,
I am using CUDA 10.2 and PyTorch 1.5.0
when I run python3 unit_test.py I get an error like:

Traceback (most recent call last):
File "/home/jiaming/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1400, in _run_ninja_build
check=True)
File "/usr/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "unit_test.py", line 2, in <module>
import chamfer2D.dist_chamfer_2D
File "/home/jiaming/ChamferDistancePytorch/chamfer2D/dist_chamfer_2D.py", line 15, in <module>
"/".join(os.path.abspath(__file__).split('/')[:-1] + ["chamfer2D.cu"]),
File "/home/jiaming/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 898, in load
is_python_module)
File "/home/jiaming/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1086, in _jit_compile
with_cuda=with_cuda)
File "/home/jiaming/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1186, in _write_ninja_file_and_build_library
error_prefix="Error building extension '{}'".format(name))
File "/home/jiaming/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1413, in _run_ninja_build
raise RuntimeError(message)
RuntimeError: Error building extension 'chamfer_2D': [1/3] c++ -MMD -MF chamfer_cuda.o.d -DTORCH_EXTENSION_NAME=chamfer_2D -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/jiaming/.local/lib/python3.6/site-packages/torch/include -isystem /home/jiaming/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /home/jiaming/.local/lib/python3.6/site-packages/torch/include/TH -isystem /home/jiaming/.local/lib/python3.6/site-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer_cuda.cpp -o chamfer_cuda.o
[2/3] /usr/bin/nvcc -DTORCH_EXTENSION_NAME=chamfer_2D -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/jiaming/.local/lib/python3.6/site-packages/torch/include -isystem /home/jiaming/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /home/jiaming/.local/lib/python3.6/site-packages/torch/include/TH -isystem /home/jiaming/.local/lib/python3.6/site-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=sm_61 --compiler-options '-fPIC' -std=c++14 -c /home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu -o chamfer2D.cuda.o
FAILED: chamfer2D.cuda.o
/usr/bin/nvcc -DTORCH_EXTENSION_NAME=chamfer_2D -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/jiaming/.local/lib/python3.6/site-packages/torch/include -isystem /home/jiaming/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /home/jiaming/.local/lib/python3.6/site-packages/torch/include/TH -isystem /home/jiaming/.local/lib/python3.6/site-packages/torch/include/THC -isystem /usr/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=sm_61 --compiler-options '-fPIC' -std=c++14 -c /home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu -o chamfer2D.cuda.o
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorMethods.h:5293:139: required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
return _and<is_constructible<_Elements, _UElements&&>...>::value;
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
return _and<is_convertible<_UElements&&, _Elements>...>::value;
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
return _and<_not<is_same<tuple<_Elements...>,
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
struct is_convertible

[the same ‘mismatched argument pack lengths’ and ‘wrong number of template arguments’ errors from /usr/include/c++/6/tuple repeat for every std::tuple instantiated from the ATen headers: tuples of three, four, and five at::Tensor, and tuples mixing at::Tensor with std::vector<at::Tensor>, double, and long int; the log is truncated here]
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/Functions.h:5162:99: required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
return _and<is_constructible<_Elements, _UElements&&>...>::value;
^~~~~
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, double, long int>}; bool = true; _Elements = {at::Tensor, at::Tensor, double, long int}]’:
/usr/include/c++/6/tuple:626:362: required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, double, long int>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, double, long int>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, double, long int>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type > constexpr std::tuple< >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, double, long int>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, double, long int>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, double, long int>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, double, long int>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type = ]’
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/Functions.h:5162:99: required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
return _and<is_convertible<_UElements&&, _Elements>...>::value;
^~~~~
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, double, long int>&; bool = true; _Elements = {at::Tensor, at::Tensor, double, long int}]’:
/usr/include/c++/6/tuple:662:419: required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, double, long int>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, double, long int>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, double, long int>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type > constexpr std::tuple< >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor, at::Tensor, double, long int}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, double, long int>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, double, long int>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, double, long int>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type = ]’
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/Functions.h:5162:99: required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2)
return _and<_not<is_same<tuple<_Elements...>,
^
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
struct is_convertible
^~~~~~~~~~~~~~
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, double, long int>&&; bool = true; _Elements = {at::Tensor, at::Tensor, double, long int}]’:
/usr/include/c++/6/tuple:686:422: required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, double, long int>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, double, long int>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, double, long int>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type > constexpr std::tuple< >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor, at::Tensor, double, long int}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, double, long int>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, double, long int>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, double, long int>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type = ]’
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/Functions.h:5162:99: required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2)
return _and<_not<is_same<tuple<_Elements...>,
^
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
struct is_convertible
^~~~~~~~~~~~~~
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:626:248: required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type > constexpr std::tuple< >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type = ]’
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/Functions.h:6418:303: required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
return _and<is_constructible<_Elements, _UElements&&>...>::value;
^~~~~
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:626:362: required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type > constexpr std::tuple< >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type = ]’
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/Functions.h:6418:303: required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
return _and<is_convertible<_UElements&&, _Elements>...>::value;
^~~~~
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&; bool = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:662:419: required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type > constexpr std::tuple< >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor&, at::Tensor&, at::Tensor&}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type = ]’
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/Functions.h:6418:303: required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
return _and<_not<is_same<tuple<_Elements...>,
^
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
struct is_convertible
^~~~~~~~~~~~~~
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&&; bool = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:686:422: required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type > constexpr std::tuple< >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor&, at::Tensor&, at::Tensor&}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type = ]’
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/Functions.h:6418:303: required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
return _and<_not<is_same<tuple<_Elements...>,
^
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
struct is_convertible
^~~~~~~~~~~~~~
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu: In function ‘int chamfer_cuda_forward(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor)’:
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu:132:106: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
NmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, n, xyz1.data(), m, xyz2.data(), dist1.data(), idx1.data());
^
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:1: note: declared here
T * data() const {
^ ~~
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu:132:131: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
NmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, n, xyz1.data(), m, xyz2.data(), dist1.data(), idx1.data());
^
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:1: note: declared here
T * data() const {
^ ~~
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu:132:154: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
NmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, n, xyz1.data(), m, xyz2.data(), dist1.data(), idx1.data());
^
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:1: note: declared here
T * data() const {
^ ~~
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu:132:174: warning: ‘T* at::Tensor::data() const [with T = int]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
NmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, n, xyz1.data(), m, xyz2.data(), dist1.data(), idx1.data());
^
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:1: note: declared here
T * data() const {
^ ~~
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu:133:106: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
NmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, m, xyz2.data(), n, xyz1.data(), dist2.data(), idx2.data());
^
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:1: note: declared here
T * data() const {
^ ~~
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu:133:131: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
NmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, m, xyz2.data(), n, xyz1.data(), dist2.data(), idx2.data());
^
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:1: note: declared here
T * data() const {
^ ~~
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu:133:154: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
NmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, m, xyz2.data(), n, xyz1.data(), dist2.data(), idx2.data());
^
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:1: note: declared here
T * data() const {
^ ~~
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu:133:174: warning: ‘T* at::Tensor::data() const [with T = int]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
NmDistanceKernel<<<dim3(32,16,1),512>>>(batch_size, m, xyz2.data(), n, xyz1.data(), dist2.data(), idx2.data());
^
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:1: note: declared here
T * data() const {
^ ~~
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu: In function ‘int chamfer_cuda_backward(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor)’:
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu:170:109: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
NmDistanceGradKernel<<<dim3(1,16,1),256>>>(batch_size,n,xyz1.data(),m,xyz2.data(),graddist1.data(),idx1.data(),gradxyz1.data(),gradxyz2.data());
^
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:1: note: declared here
T * data() const {
^ ~~
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu:170:134: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
NmDistanceGradKernel<<<dim3(1,16,1),256>>>(batch_size,n,xyz1.data(),m,xyz2.data(),graddist1.data(),idx1.data(),gradxyz1.data(),gradxyz2.data());
^
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:1: note: declared here
T * data() const {
^ ~~
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu:170:161: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
NmDistanceGradKernel<<<dim3(1,16,1),256>>>(batch_size,n,xyz1.data(),m,xyz2.data(),graddist1.data(),idx1.data(),gradxyz1.data(),gradxyz2.data());
^
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:1: note: declared here
T * data() const {
^ ~~
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu:170:181: warning: ‘T* at::Tensor::data() const [with T = int]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
NmDistanceGradKernel<<<dim3(1,16,1),256>>>(batch_size,n,xyz1.data(),m,xyz2.data(),graddist1.data(),idx1.data(),gradxyz1.data(),gradxyz2.data());
^
/home/jiaming/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:1: note: declared here
T * data() const {
^ ~~
/home/jiaming/ChamferDistancePytorch/chamfer2D/chamfer2D.cu:170:207: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data() is deprecated. Please use Tensor.data_ptr() instead. [-Wdeprecated-declarations]
NmDistanceGradKernel<<<dim3(1,16,1),256>>>(batch_size,n,xyz1.data(),m,xyz2.data(),graddist1.data

larger 3D point clouds

Hello, I'm curious about 5D point clouds, 6D point clouds, and so on. What do the extra dimensions represent? Are there axes other than x, y, z?
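
For reference, one common reading (an assumption on our part, not the maintainer's answer) is that the extra dimensions are simply additional per-point features such as color or normals, not extra spatial axes; the kernel just computes squared euclidean distances in that higher-dimensional space. A minimal sketch, assuming the 5D module mirrors the 3D API:

import torch, chamfer5D.dist_chamfer_5D

chamLoss = chamfer5D.dist_chamfer_5D.chamfer_5DDist()
xyz1 = torch.rand(32, 1000, 3).cuda()      # spatial coordinates of cloud 1
feat1 = torch.rand(32, 1000, 2).cuda()     # two extra per-point features (hypothetical, e.g. a 2D color code)
xyz2 = torch.rand(32, 2000, 3).cuda()
feat2 = torch.rand(32, 2000, 2).cuda()
points1 = torch.cat([xyz1, feat1], dim=2)  # (B, N, 5)
points2 = torch.cat([xyz2, feat2], dim=2)  # (B, M, 5)
dist1, dist2, idx1, idx2 = chamLoss(points1, points2)  # squared distances in 5D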

How to calculate the loss during training.

I am not sure this question has been asked before. I searched the issues by keyword and could not find anything. I want to use this chamfer distance as the loss to train a network (more specifically, a pointnet-like autoencoder).
Currently, I am using it like this (based on the python version):

import dist_chamfer_2D
loss_chamfer = dist_chamfer_2D.chamfer_2DDist()
dist1, dist2, idx1, idx2 = loss_chamfer(x.permute(0, 2, 1), dec_y.permute(0, 2, 1))
loss = (dist1.min(dim=0)[0].mean()) + (dist2.min(dim=0)[0].mean())

Is this the correct way of using it?

However, the reconstructed result does not look good. I also tried to define the loss as:

loss = torch.sum(dist1) + torch.sum(dist2)

This had a better overall qualitative result, but still not what I expected.
The problem should not be hard: I am trying to learn a representation for simple 2D/3D point clouds (composed of squares and circles).

Example below (blue is the original, red is the decoder output):

image
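
For reference, a sketch of the formulation most papers use (an assumption, not an official answer): dist1 and dist2 already hold, for every point, the squared distance to its nearest neighbour in the other cloud, so the usual loss is just the mean of each. Note that min(dim=0) in the snippet above reduces over the batch dimension, which is probably not what was intended.

import torch, chamfer2D.dist_chamfer_2D

chamLoss = chamfer2D.dist_chamfer_2D.chamfer_2DDist()
x = torch.rand(32, 1000, 2).cuda()      # ground-truth cloud, (B, N, 2)
dec_y = torch.rand(32, 1000, 2).cuda()  # decoder output, (B, N, 2)
dist1, dist2, idx1, idx2 = chamLoss(x, dec_y)
loss = dist1.mean() + dist2.mean()      # symmetric chamfer loss, a single scalar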

error: Command '['ninja', '-v']' returned non-zero exit status 1.

Dear maintainer, what can I do to solve this problem?
Could you recommend a working combination of Python + torch + CUDA versions?

running install
running bdist_egg
running egg_info
writing chamfer_3D.egg-info/PKG-INFO
writing dependency_links to chamfer_3D.egg-info/dependency_links.txt
writing top-level names to chamfer_3D.egg-info/top_level.txt
reading manifest file 'chamfer_3D.egg-info/SOURCES.txt'
writing manifest file 'chamfer_3D.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'chamfer_3D' extension
gcc -pthread -B /root/miniconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/root/miniconda3/lib/python3.8/site-packages/torch/include -I/root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/lib/python3.8/site-packages/torch/include/TH -I/root/miniconda3/lib/python3.8/site-packages/torch/include/THC -I:/usr/local/cuda-11.0/include -I/root/miniconda3/include/python3.8 -c chamfer_cuda.cpp -o build/temp.linux-x86_64-3.8/chamfer_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=chamfer_3D -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:149:0,
from /root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
from /root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
from chamfer_cuda.cpp:1:
/root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)

:/usr/local/cuda-11.0/bin/nvcc -I/root/miniconda3/lib/python3.8/site-packages/torch/include -I/root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/lib/python3.8/site-packages/torch/include/TH -I/root/miniconda3/lib/python3.8/site-packages/torch/include/THC -I:/usr/local/cuda-11.0/include -I/root/miniconda3/include/python3.8 -c chamfer3D.cu -o build/temp.linux-x86_64-3.8/chamfer3D.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=chamfer_3D -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
unable to execute ':/usr/local/cuda-11.0/bin/nvcc': No such file or directory
error: command ':/usr/local/cuda-11.0/bin/nvcc' failed with exit status 1
(base) root@55987af7de94:~/chamfer3D# python setup.py install
running install
running bdist_egg
running egg_info
writing chamfer_3D.egg-info/PKG-INFO
writing dependency_links to chamfer_3D.egg-info/dependency_links.txt
writing top-level names to chamfer_3D.egg-info/top_level.txt
reading manifest file 'chamfer_3D.egg-info/SOURCES.txt'
writing manifest file 'chamfer_3D.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'chamfer_3D' extension
Emitting ninja build file /root/chamfer3D/build/temp.linux-x86_64-3.8/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] :/usr/local/cuda-11.0/bin/nvcc -I/root/miniconda3/lib/python3.8/site-packages/torch/include -I/root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/lib/python3.8/site-packages/torch/include/TH -I/root/miniconda3/lib/python3.8/site-packages/torch/include/THC -I:/usr/local/cuda-11.0/include -I/root/miniconda3/include/python3.8 -c -c /root/chamfer3D/chamfer3D.cu -o /root/chamfer3D/build/temp.linux-x86_64-3.8/chamfer3D.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=chamfer_3D -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
FAILED: /root/chamfer3D/build/temp.linux-x86_64-3.8/chamfer3D.o
:/usr/local/cuda-11.0/bin/nvcc -I/root/miniconda3/lib/python3.8/site-packages/torch/include -I/root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/lib/python3.8/site-packages/torch/include/TH -I/root/miniconda3/lib/python3.8/site-packages/torch/include/THC -I:/usr/local/cuda-11.0/include -I/root/miniconda3/include/python3.8 -c -c /root/chamfer3D/chamfer3D.cu -o /root/chamfer3D/build/temp.linux-x86_64-3.8/chamfer3D.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=chamfer_3D -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
/bin/sh: 1: :/usr/local/cuda-11.0/bin/nvcc: not found
[2/2] c++ -MMD -MF /root/chamfer3D/build/temp.linux-x86_64-3.8/chamfer_cuda.o.d -pthread -B /root/miniconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/root/miniconda3/lib/python3.8/site-packages/torch/include -I/root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/lib/python3.8/site-packages/torch/include/TH -I/root/miniconda3/lib/python3.8/site-packages/torch/include/THC -I:/usr/local/cuda-11.0/include -I/root/miniconda3/include/python3.8 -c -c /root/chamfer3D/chamfer_cuda.cpp -o /root/chamfer3D/build/temp.linux-x86_64-3.8/chamfer_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=chamfer_3D -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:149:0,
from /root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
from /root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
from /root/chamfer3D/chamfer_cuda.cpp:1:
/root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)

ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1533, in _run_ninja_build
subprocess.run(
File "/root/miniconda3/lib/python3.8/subprocess.py", line 512, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "setup.py", line 4, in
setup(
File "/root/miniconda3/lib/python3.8/site-packages/setuptools/init.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/root/miniconda3/lib/python3.8/distutils/core.py", line 148, in setup
dist.run_commands()
File "/root/miniconda3/lib/python3.8/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/root/miniconda3/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/install.py", line 67, in run
self.do_egg_install()
File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/install.py", line 109, in do_egg_install
self.run_command('bdist_egg')
File "/root/miniconda3/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/root/miniconda3/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 167, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 153, in call_command
self.run_command(cmdname)
File "/root/miniconda3/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/root/miniconda3/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/install_lib.py", line 11, in run
self.build()
File "/root/miniconda3/lib/python3.8/distutils/command/install_lib.py", line 107, in build
self.run_command('build_ext')
File "/root/miniconda3/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/root/miniconda3/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/root/miniconda3/lib/python3.8/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 670, in build_extensions
build_ext.build_extensions(self)
File "/root/miniconda3/lib/python3.8/distutils/command/build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "/root/miniconda3/lib/python3.8/distutils/command/build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "/root/miniconda3/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
_build_ext.build_extension(self, ext)
File "/root/miniconda3/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension
objects = self.compiler.compile(sources,
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 491, in unix_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1250, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1555, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
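
The telling detail in this log is the leading colon in ':/usr/local/cuda-11.0/bin/nvcc': CUDA_HOME apparently picked up a stray ':' (for example from export CUDA_HOME=$CUDA_HOME:/usr/local/cuda-11.0 while CUDA_HOME was still empty). A fix sketch under that assumption; the variable has to be corrected before torch.utils.cpp_extension is first imported (for setup.py builds, fix the shell environment the same way):

import os
os.environ["CUDA_HOME"] = "/usr/local/cuda-11.0"  # plain path, no leading colon

import torch
import chamfer3D.dist_chamfer_3D  # the JIT build now invokes /usr/local/cuda-11.0/bin/nvcc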

How to build with CUDA11+?

Thank you so much for your good work!
I have an A100 GPU, which only works with CUDA 11+. How can I compile the extension under CUDA 11+? I tried adding export TORCH_CUDA_ARCH_LIST="7.5" to my .bashrc; with that it compiles, but it does not work at runtime.

CUDA ERROR

" no kernel image is available for execution on the device !"
but when I train other net ,this did not happen. so I think if there is a problem with the installation.
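
"no kernel image is available for execution on the device" fits the arch flag above: an A100 is compute capability 8.0 (sm_80), so a binary built with TORCH_CUDA_ARCH_LIST="7.5" contains no kernel the card can run. A sketch, assuming the JIT build path:

import os
os.environ["TORCH_CUDA_ARCH_LIST"] = "8.0"  # A100 = sm_80; must be set before the extension is built

import torch
import chamfer3D.dist_chamfer_3D  # JIT-compiles with -gencode arch=compute_80,code=sm_80

If the extension was already built, it may also be necessary to clear the cached build (by default under ~/.cache/torch_extensions) so it is recompiled with the new flag.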

Compile error

Hi, when I tried to run the unit_test, it returned the error shown below. Have you ever met this problem?
I am using CUDA 10.1, PyTorch 1.3.1
Thank you!

Traceback (most recent call last):
  File "/home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1030, in _build_extension_module
    check=True)
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "unit_test.py", line 1, in <module>
    import torch, chamfer3D.dist_chamfer_3D, fscore
  File "/home/zihao/Nutstore/4DCompletion/code/complete_pc/ChamferDistancePytorch/chamfer3D/dist_chamfer_3D.py", line 13, in <module>
    "/".join(os.path.abspath(__file__).split('/')[:-1] + ["chamfer3D.cu"]),
  File "/home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 661, in load
    is_python_module)
  File "/home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 830, in _jit_compile
    with_cuda=with_cuda)
  File "/home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 883, in _write_ninja_file_and_build
    _build_extension_module(name, build_directory, verbose)
  File "/home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1043, in _build_extension_module
    raise RuntimeError(message)
RuntimeError: Error building extension 'chamfer_3D': [1/3] :/usr/local/cuda-10.1:/usr/local/cuda-10.1/bin/nvcc -DTORCH_EXTENSION_NAME=chamfer_3D -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/TH -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/THC -isystem :/usr/local/cuda-10.1:/usr/local/cuda-10.1/include -isystem /home/zihao/Nutstore/4DCompletion/venv/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=sm_61 --compiler-options '-fPIC' -std=c++11 -c /home/zihao/Nutstore/4DCompletion/code/complete_pc/ChamferDistancePytorch/chamfer3D/chamfer3D.cu -o chamfer3D.cuda.o
FAILED: chamfer3D.cuda.o
:/usr/local/cuda-10.1:/usr/local/cuda-10.1/bin/nvcc -DTORCH_EXTENSION_NAME=chamfer_3D -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/TH -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/THC -isystem :/usr/local/cuda-10.1:/usr/local/cuda-10.1/include -isystem /home/zihao/Nutstore/4DCompletion/venv/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=sm_61 --compiler-options '-fPIC' -std=c++11 -c /home/zihao/Nutstore/4DCompletion/code/complete_pc/ChamferDistancePytorch/chamfer3D/chamfer3D.cu -o chamfer3D.cuda.o
/bin/sh: 1: :/usr/local/cuda-10.1:/usr/local/cuda-10.1/bin/nvcc: not found
[2/3] c++ -MMD -MF chamfer_cuda.o.d -DTORCH_EXTENSION_NAME=chamfer_3D -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/TH -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/THC -isystem :/usr/local/cuda-10.1:/usr/local/cuda-10.1/include -isystem /home/zihao/Nutstore/4DCompletion/venv/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -c /home/zihao/Nutstore/4DCompletion/code/complete_pc/ChamferDistancePytorch/chamfer3D/chamfer_cuda.cpp -o chamfer_cuda.o
FAILED: chamfer_cuda.o
c++ -MMD -MF chamfer_cuda.o.d -DTORCH_EXTENSION_NAME=chamfer_3D -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/TH -isystem /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/THC -isystem :/usr/local/cuda-10.1:/usr/local/cuda-10.1/include -isystem /home/zihao/Nutstore/4DCompletion/venv/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -c /home/zihao/Nutstore/4DCompletion/code/complete_pc/ChamferDistancePytorch/chamfer3D/chamfer_cuda.cpp -o chamfer_cuda.o
In file included from /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/torch/csrc/Device.h:3:0,
                 from /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/python.h:8,
                 from /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/torch/extension.h:6,
                 from /home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:6,
                 from /home/zihao/Nutstore/4DCompletion/code/complete_pc/ChamferDistancePytorch/chamfer3D/chamfer_cuda.cpp:1:
/home/zihao/Nutstore/4DCompletion/venv/lib/python3.6/site-packages/torch/include/torch/csrc/python_headers.h:9:20: fatal error: Python.h: No such file or directory
compilation terminated.
ninja: build stopped: subcommand failed.
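
Two separate failures seem to be visible here: nvcc is invoked through a malformed path (the leading ':/usr/local/cuda-10.1:' again points to stray colons in CUDA_HOME, as in the issue above), and the C++ step stops on "fatal error: Python.h: No such file or directory", i.e. the interpreter's development headers are missing (on Debian/Ubuntu they typically come from the matching python3.x-dev package). A small sketch to check for the headers:

import os, sysconfig

inc = sysconfig.get_paths()["include"]                     # directory that should contain Python.h
print(inc, os.path.exists(os.path.join(inc, "Python.h")))  # False means the dev headers are missing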

error in nnd updateOutput: invalid device function

I met a problem while trying to run unit_test.py.

If I do not compile manually, all tests pass normally, and the output is below:

Jitting Chamfer 2D
Loaded JIT 2D CUDA chamfer distance
Jitting Chamfer 3D
Loaded JIT 3D CUDA chamfer distance
Jitting Chamfer 5D
Loaded JIT 5D CUDA chamfer distance
testing Chamfer 2D
fscore : (tensor([0.3527, 0.3912, 0.3908, 0.3707], device='cuda:0'), tensor([0.4500, 0.5300, 0.4900, 0.5300], device='cuda:0'), tensor([0.2900, 0.3100, 0.3250, 0.2850], device='cuda:0'))
Unit test passed
Timings : Start CUDA version
Ellapsed time forward backward is 0.1624271082878113 seconds.
Timings : Start Pythonic version
Ellapsed time  forward backward  is 0.026266088485717775 seconds.
testing Chamfer 3D
fscore : (tensor([0.0200, 0.0267, 0.0133, 0.0133], device='cuda:0'), tensor([0.0300, 0.0400, 0.0200, 0.0200], device='cuda:0'), tensor([0.0150, 0.0200, 0.0100, 0.0100], device='cuda:0'))
Unit test passed
Timings : Start CUDA version
Ellapsed time forward backward is 0.15791781902313232 seconds.
Timings : Start Pythonic version
Ellapsed time  forward backward  is 0.0261559534072876 seconds.
testing Chamfer 5D
fscore : (tensor([0., 0., 0., 0.], device='cuda:0'), tensor([0., 0., 0., 0.], device='cuda:0'), tensor([0., 0., 0., 0.], device='cuda:0'))
Unit test passed
Timings : Start CUDA version
Ellapsed time forward backward is 0.16233629941940309 seconds.
Timings : Start Pythonic version
Ellapsed time  forward backward  is 0.026321511268615722 seconds.

However, if I compile Chamfer3D manually (running python chamfer3D/setup.py install) and then run unit_test.py, the test cannot pass. The output is below:

Jitting Chamfer 2D
Loaded JIT 2D CUDA chamfer distance
Loaded compiled 3D CUDA chamfer distance
Jitting Chamfer 5D
Loaded JIT 5D CUDA chamfer distance
testing Chamfer 2D
fscore : (tensor([0.3377, 0.3523, 0.3442, 0.3389], device='cuda:0'), tensor([0.5000, 0.4900, 0.4600, 0.4700], device='cuda:0'), tensor([0.2550, 0.2750, 0.2750, 0.2650], device='cuda:0'))
Unit test passed
Timings : Start CUDA version
Ellapsed time forward backward is 0.16943942308425902 seconds.
Timings : Start Pythonic version
Ellapsed time  forward backward  is 0.026366772651672362 seconds.
testing Chamfer 3D
error in nnd updateOutput: invalid device function
error in nnd get grad: invalid device function
Traceback (most recent call last):
  File "unit_test.py", line 68, in <module>
    test_chamfer(cham, dims[i])
  File "unit_test.py", line 27, in test_chamfer
    ), "chamfer cuda and chamfer normal are not giving the same results"
AssertionError: chamfer cuda and chamfer normal are not giving the same results
Segmentation fault (core dumped)

I don't know how to solve this problem. It seems something went wrong when compiling the extension manually. I found the same issue among the closed issues, but I checked my CUDA and there is no problem with my CUDA settings.
python version: 3.6
pytorch version: 1.6.0
cuda version: 10.0
os: linux
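
"invalid device function" usually means the manually compiled extension targets a different compute capability than the GPU it runs on, which would also explain why the JIT path works: it compiles for the detected card. A sketch to find the architecture a manual build should target:

import torch

major, minor = torch.cuda.get_device_capability(0)         # capability of GPU 0
print(f"build with TORCH_CUDA_ARCH_LIST={major}.{minor}")  # e.g. 6.1; then rerun setup.py install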

fp16 compatibility

Thanks for the great repo!
Can you point me to what needs to be done to enable fp16 compatibility? I suppose it will need some rewriting in the CUDA kernels. Is it just a matter of changing the types in the function declarations, or does more work need to go into it?
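
Judging from the build logs above, the kernels are written for float (the .data() calls are instantiated with T = float), so full fp16 support would likely mean templating the kernels over the scalar type, not just changing the declarations. In the meantime, a workaround sketch, assuming mixed-precision training: run the chamfer computation in fp32 and cast back.

import torch, chamfer3D.dist_chamfer_3D

chamLoss = chamfer3D.dist_chamfer_3D.chamfer_3DDist()
points1 = torch.rand(32, 1000, 3).cuda().half()
points2 = torch.rand(32, 2000, 3).cuda().half()
dist1, dist2, idx1, idx2 = chamLoss(points1.float(), points2.float())  # kernel runs in fp32
loss = (dist1.mean() + dist2.mean()).half()  # cast back for the fp16 side of the graph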

Unit Test

  1. In each of the folders (chamfer2D, chamfer3D, chamfer5D, and chamfer6D) I ran the command below:

python setup install

  2. Then I tried to run unit_test.py.

And I got the errors below; can you help with this?

image

error in nnd updateOutput: invalid device function ; error in nnd get grad: invalid device function

After I compile Chamfer 2D, 3D, and 5D, running "unit_test.py" shows me this error:

Loaded compiled 2D CUDA chamfer distance
Loaded compiled 3D CUDA chamfer distance
Loaded compiled 5D CUDA chamfer distance
testing Chamfer 2D
error in nnd updateOutput: invalid device function
error in nnd get grad: invalid device function

AssertionError: chamfer cuda and chamfer normal are not giving the same results

My environment: Python 3.7 + CUDA 10.1 + PyTorch 1.6.

Can you help me? Thanks!

undefined symbol for Pytorch 1.0.1

Hi, thanks for sharing.
I am using PyTorch 1.0.1 and Python 3.6.8.
I tried this:

import torch
import dist_chamfer_idx

But I got:
undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationERKSs

Does your code support PyTorch 1.0.1?

Doesn't work with cpuonly pytorch install

File "/home/***/miniconda3/envs/***/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1626, in _get_cuda_arch_flags
    arch_list[-1] += '+PTX'
IndexError: list index out of range

PyTorch is at 1.11. This happens when trying to JIT-compile the kernel:

https://github.com/ThibaultGROUEIX/ChamferDistancePytorch/blob/master/chamfer3D/dist_chamfer_3D.py#L15

Calling load() with with_cuda=False causes the build to fail:

Traceback (most recent call last):
  File "/home/***/miniconda3/envs/***/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1746, in _run_ninja_build
    env=env)
  File "/home/***/miniconda3/envs/***/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/***/miniconda3/envs/***/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/***/miniconda3/envs/***/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/***/lib/lightning.py", line 20, in <module>
    from lib.ChamferDistancePytorch.chamfer3D import dist_chamfer_3D
  File "/home/***/lib/ChamferDistancePytorch/chamfer3D/dist_chamfer_3D.py", line 19, in <module>
    ], build_directory=build_path, with_cuda=False)
  File "/home/***/miniconda3/envs/***/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1156, in load
    keep_intermediates=keep_intermediates)
  File "/home/***/miniconda3/envs/***/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1367, in _jit_compile
    is_standalone=is_standalone)
  File "/home/***/miniconda3/envs/***/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1472, in _write_ninja_file_and_build_library
    error_prefix=f"Error building extension '{name}'")
  File "/home/***/miniconda3/envs/***/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1756, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'chamfer_3D': [1/2] c++ -MMD -MF chamfer3D.o.d -DTORCH_EXTENSION_NAME=chamfer_3D -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/***/miniconda3/envs/***/lib/python3.7/site-packages/torch/include -isystem /home/***/miniconda3/envs/***/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/***/miniconda3/envs/***/lib/python3.7/site-packages/torch/include/TH -isystem /home/***/miniconda3/envs/***/lib/python3.7/site-packages/torch/include/THC -isystem /home/***/miniconda3/envs/***/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/***/lib/ChamferDistancePytorch/chamfer3D/chamfer3D.cu -o chamfer3D.o 
c++: warning: /home/***/lib/ChamferDistancePytorch/chamfer3D/chamfer3D.cu: linker input file unused because linking not done
[2/2] c++ chamfer_cuda.o chamfer3D.o -shared -L/home/***/miniconda3/envs/***/lib/python3.7/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o chamfer_3D.so
FAILED: chamfer_3D.so 
c++ chamfer_cuda.o chamfer3D.o -shared -L/home/***/miniconda3/envs/***/lib/python3.7/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o chamfer_3D.so
c++: error: chamfer3D.o: No such file or directory
ninja: build stopped: subcommand failed.

I suppose this repo can only be used with CUDA?
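
The CUDA extension indeed cannot be built against a cpuonly install, but the distance itself does not need CUDA: the repo also ships a pure-python version, and a minimal pure-PyTorch equivalent runs fine on CPU. A sketch using torch.cdist, keeping this repo's squared-distance convention (chamfer_naive is a hypothetical name, not the repo's module):

import torch

def chamfer_naive(a, b):
    # a: (B, N, D) and b: (B, M, D) point clouds, any dimension D
    d = torch.cdist(a, b) ** 2  # (B, N, M) squared euclidean distances
    dist1, idx1 = d.min(dim=2)  # nearest neighbour in b for each point of a
    dist2, idx2 = d.min(dim=1)  # nearest neighbour in a for each point of b
    return dist1, dist2, idx1, idx2

a = torch.rand(4, 1000, 3)  # CPU tensors
b = torch.rand(4, 2000, 3)
dist1, dist2, idx1, idx2 = chamfer_naive(a, b)
loss = dist1.mean() + dist2.mean()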
