
thibaultgroueix / atlasnet


This repository contains the source code for the paper "AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation". The network synthesizes a mesh (point cloud + connectivity) from a low-resolution point cloud or from an image.

Home Page: http://imagine.enpc.fr/~groueixt/atlasnet/

License: MIT License

Python 96.50% GLSL 1.16% Shell 2.34%
3d 3d-deep-learning computer-vision geometry-processing cvpr2018 pytorch

atlasnet's People

Contributors

andrescolognesi, thibaultgroueix


atlasnet's Issues

Memory Leak

I found that the unused self.dist1 and self.dist2 in nndistance/functions/nnd.py cause a memory leak in my environment (Python 3.5.2 with PyTorch 0.4.0).

from torch.autograd import Function

class NNDFunction(Function):
    def forward(self, xyz1, xyz2):
        # placeholder name for the call into the compiled nndistance CUDA kernel
        dist1, dist2 = cuda_compute_from(xyz1, xyz2)
        # the following two lines cause the memory leak: the tensors stay
        # attached to the Function object and are never freed between iterations
        self.dist1 = dist1
        self.dist2 = dist2
        return dist1, dist2

    def backward(self, graddist1, graddist2):
        # placeholder name for the backward CUDA kernel
        gradxyz1, gradxyz2 = grad_cuda_compute_from(graddist1, graddist2)
        return gradxyz1, gradxyz2
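
For reference, on PyTorch >= 0.4 the usual way to avoid this is to stop stashing tensors on self and use the static-method Function API with ctx.save_for_backward, which releases the saved tensors once backward has run. A minimal sketch of that pattern (cuda_compute_from / grad_cuda_compute_from remain placeholders for the actual CUDA kernels, as in the excerpt above):

from torch.autograd import Function

class NNDFunction(Function):
    @staticmethod
    def forward(ctx, xyz1, xyz2):
        dist1, dist2 = cuda_compute_from(xyz1, xyz2)   # placeholder kernel call
        # tensors saved this way are released after backward, so nothing lingers
        ctx.save_for_backward(xyz1, xyz2, dist1, dist2)
        return dist1, dist2

    @staticmethod
    def backward(ctx, graddist1, graddist2):
        xyz1, xyz2, dist1, dist2 = ctx.saved_tensors
        gradxyz1, gradxyz2 = grad_cuda_compute_from(graddist1, graddist2)  # placeholder kernel call
        return gradxyz1, gradxyz2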

Mapping 2d to 3d

Hello,
I was wondering if there is any way to map part of a 2D image onto the generated 3D mesh. For example, for the airplane, can I automatically map the left wing in the image to the left wing of the mesh?

Any help would be appreciated, thanks.

Results

I ran the train_AE_AtlasNet script followed by run_AE_AtlasNet. The results do not seem quite right.

AtlasNet output:
[image]

Ground truth:
[image]

RuntimeError: cuda runtime error (2) : out of memory

I tried to run the script ./training/train_AE_AtlasNet.py, but I get RuntimeError: cuda runtime error (2) : out of memory. It seems that my GPU runs out of memory, so I tried setting the batchSize argument lower, but it still hits this error (after more epochs, though). I then used network = nn.DataParallel(network) to try to work around the problem, but got another error: Expected more than 1 value per channel when training, got input size [1, 1024].
My device is a GeForce GTX 1080 and I have 8 GPUs, so I don't think memory should be the problem.
Note: I am new to deep learning and PyTorch; I hope this isn't a bother.
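
For what it's worth, the second error ("Expected more than 1 value per channel") usually means a BatchNorm layer received a batch of size 1: nn.DataParallel splits each batch across the GPUs, and a batch size that does not divide evenly (or a trailing incomplete batch) can leave one replica with a single sample. A minimal sketch of a setup that avoids this, assuming network and dataset stand for the script's model and dataset objects:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader

n_gpus = torch.cuda.device_count()
batch_size = 8 * n_gpus                     # keep every per-GPU slice larger than 1 for BatchNorm
network = nn.DataParallel(network).cuda()   # replicate the model over all visible GPUs

dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True,
                        num_workers=4, drop_last=True)  # drop the last incomplete batch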

Vertices from shapenet models

ShapeNet models have a large number of vertices (e.g., ~26,000 for car models), but your model generates far fewer. Are the ground-truth meshes converted to a lower resolution before computing the chamfer distance?
Also, some of the ground-truth vertices correspond to the interior of the car, like below:
[image: car interior]
Do you just ignore those vertices, or leave them in for the chamfer distance calculation?
Thanks
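
For reference, a common way to obtain a fixed-size ground-truth point set is to sample the mesh surface uniformly by area before computing the chamfer distance, which decouples the loss from the original vertex count. A minimal sketch using trimesh; the tooling and file path are assumptions, not the repo's actual preprocessing:

import numpy as np
import trimesh

mesh = trimesh.load("model.obj", force="mesh")               # a ShapeNet model (illustrative path)
points, _ = trimesh.sample.sample_surface(mesh, count=2500)  # 2500 points sampled uniformly by face area
points = np.asarray(points, dtype=np.float32)                # (2500, 3), ready for a chamfer loss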

Trouble downloading shapenet points data

Hi,
Is anyone else having trouble downloading the custom ShapeNet point data?
I keep getting a network error in my attempts to download it.
I suspect it's because the file is too large. Could you upload it in parts?
Best,

Cannot clone on Windows

When trying to clone the repo to Windows I get the following error:

fatal: cannot create directory at 'aux': Invalid argument
warning: Clone succeeded, but checkout failed.

This is because aux is a reserved name on Windows.
I don't know of a solution to this problem (other than renaming the directory).
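
One workaround that can help in similar cases is a sparse checkout that skips the offending directory; this is only a sketch, assuming the directory is named aux and the default branch is master (running it from Git Bash, or simply cloning inside WSL, also sidesteps the reserved-name restriction):

git clone --no-checkout https://github.com/thibaultgroueix/atlasnet.git
cd atlasnet
git config core.sparseCheckout true
# include everything except the 'aux' directory (gitignore-style patterns)
printf '/*\n!aux/\n' > .git/info/sparse-checkout
git checkout master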

How is metro being used?

Hi,
Based on what I read on the metro website, it seems to be Windows software, and a command-line version for Linux is not available. Did you use it that way, or did you find a workaround to run it on Linux from the command line?

Thanks

trouble about downloading trainer_models

Hi,
I downloaded the code and ran python train.py --demo, but I ran into some unknown trouble and could not download the model successfully. Could you provide a direct link to the models?
Best wishes!

Using my own data

I would like to train the network on my own data, but I am not finding many hints on how to do it. If I were to do this (assuming it's possible), would each training example need all of the following: point clouds + normals, normalized meshes, and rendered views, as mentioned for ShapeNet in the README? Or only some of these? I would appreciate it if you could tell me exactly what I need to train on my own data, along with any tips.
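
In case it helps, here is a minimal sketch of what a custom autoencoder dataset could look like when you only have point clouds stored as .ply files. The directory layout, class name, and use of plyfile are my assumptions, not the repo's actual loader; for the single-view setting you would additionally need rendered views, and as far as I understand the normalized meshes are only needed for the metro evaluation:

import os
import numpy as np
import torch
from torch.utils.data import Dataset
from plyfile import PlyData

class MyPointClouds(Dataset):
    """Loads .ply point clouds from a flat directory and returns npoints per sample."""
    def __init__(self, root, npoints=2500):
        self.files = sorted(os.path.join(root, f) for f in os.listdir(root) if f.endswith(".ply"))
        self.npoints = npoints

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        v = PlyData.read(self.files[idx])["vertex"]
        pts = np.stack([v["x"], v["y"], v["z"]], axis=1).astype(np.float32)
        choice = np.random.choice(len(pts), self.npoints, replace=True)  # resample to a fixed size
        return torch.from_numpy(pts[choice])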

Dataset Split used for AtlasNet

Hi,

I am wondering which dataset split was used to train the AtlasNet model presented in the original paper. As I understand it, the paper claims to use the split from the 3D-R2N2 paper, but I am not able to find the split file anywhere in this repository. I could only find the code here, which seems to generate its own split, and I do not believe it matches 3D-R2N2's. Additionally, the authors of Occupancy Networks mention in their paper that they could only evaluate the pretrained AtlasNet model on part of the 3D-R2N2 dataset. I have checked the overlap between the splits used by MeshRCNN and Occupancy Networks (both of which use the 3D-R2N2 split and are consistent with each other) and the one generated here, and they do not overlap significantly. I am therefore not sure which dataset split AtlasNet used. Would it be possible to share a JSON file of the split?

Thanks.

Sharing training data on public storage?

It seems that the training data is stored on mega.nz, which has a download quota of a few GB, while the training data is a 12 GB zip file, so it's impossible to download it in one go. Is there any chance the data could be put on public storage that can be downloaded freely?

Where is chamfer ?

I saw import chamfer in extension/dist_chamfer.py, but I can't find where chamfer is defined. Could you help me?
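
chamfer is not a pip package; it is the compiled CUDA extension that lives next to dist_chamfer.py. A minimal sketch of building it, assuming your checkout ships a setup.py inside extension/ (newer versions of the repo JIT-compile the kernel on first import instead):

cd extension
python setup.py install        # builds the CUDA sources into an importable 'chamfer' module
python -c "import chamfer"     # sanity check: should import cleanly once the build succeeds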

Cannot build.py in nndistance

When I run python nndistance/build.py, I get the following error:
error: /home/user/AtlasNet/nndistance/src/nnd_cuda.cu.o: No such file or directory

Full log:
Including CUDA code.
/home/user/AtlasNet/nndistance
generating /tmp/tmpvnh57h5f/_my_lib.c
setting the current directory to '/tmp/tmpvnh57h5f'
running build_ext
building '_my_lib' extension
creating home
creating home/user
creating home/user/AtlasNet
creating home/user/AtlasNet/nndistance
creating home/user/AtlasNet/nndistance/src
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include -I/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c _my_lib.c -o ./_my_lib.o
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include -I/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c /home/user/AtlasNet/nndistance/src/my_lib.c -o ./home/user/AtlasNet/nndistance/src/my_lib.o
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include -I/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include/TH -I/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/usr/include/python3.5m -c /home/user/AtlasNet/nndistance/src/my_lib_cuda.c -o ./home/user/AtlasNet/nndistance/src/my_lib_cuda.o
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 ./_my_lib.o ./home/user/AtlasNet/nndistance/src/my_lib.o ./home/user/AtlasNet/nndistance/src/my_lib_cuda.o /home/user/AtlasNet/nndistance/src/nnd_cuda.cu.o -o ./_my_lib.so
x86_64-linux-gnu-gcc: error: /home/user/AtlasNet/nndistance/src/nnd_cuda.cu.o: No such file or directory
Traceback (most recent call last):
  File "/usr/lib/python3.5/distutils/unixccompiler.py", line 207, in link
    self.spawn(linker + ld_args)
  File "/usr/lib/python3.5/distutils/ccompiler.py", line 909, in spawn
    spawn(cmd, dry_run=self.dry_run)
  File "/usr/lib/python3.5/distutils/spawn.py", line 36, in spawn
    _spawn_posix(cmd, search_path, dry_run=dry_run)
  File "/usr/lib/python3.5/distutils/spawn.py", line 159, in _spawn_posix
    % (cmd, exit_status))
distutils.errors.DistutilsExecError: command 'x86_64-linux-gnu-gcc' failed with exit status 1
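
The link step fails because nnd_cuda.cu.o was never produced: the old torch.utils.ffi build path only compiles the C sources, so the CUDA kernel object has to be built with nvcc beforehand. A typical sequence for this kind of extension (a sketch; the -arch flag must match your GPU):

cd nndistance/src
nvcc -c -o nnd_cuda.cu.o nnd_cuda.cu -x cu -Xcompiler -fPIC -arch=sm_61   # sm_61 is only an example
cd ..
python build.py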

To be honest, the latest code is very hard to understand

I have compared our method with AtlasNet several times, and I need to edit the source code each time.
However, the latest code is very hard to understand because of its high level of abstraction.
It took me an hour to work out the relationships between the modules.

train AtlasNet on own dataset

Hi,

Is it possible to train AtlasNet on our own dataset? If so, what data should I generate if I have 2D RGB images and corresponding 3D objects? I only want to synthesize a mesh from a single image. Thanks

Stuck after launching visdom server

I ran the demo successfully, but after I launch the visdom server with

python -m visdom.server -p 8888

I am stuck: I can't type any further commands in my Anaconda window. How do I continue? Thanks!
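
visdom.server runs in the foreground, so the blocked terminal is expected. A minimal workaround is to start it in the background (or in a separate terminal/tmux pane) and keep using the original shell for training:

# start the visdom server in the background, logging to a file
nohup python -m visdom.server -p 8888 > visdom.log 2>&1 &
# the shell is free again; launch training as usual
python train.py --demo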

normalized mesh

Hi Thibault,
another question regarding your data.
In the normalized mesh zip, there are only very few meshes available. Are these just for demonstration purposes?
Did you do any preprocessing on the original ShapeNet meshes before sampling the points?

No such file or directory: './data/customShapeNet/02958343/ply/48caec6417d9303af96de5ee9a21bf30.points.ply'

I downloaded data from the provided URL.

When I run train_AE_AtlasNet.py, I got an error message saying No such file or directory: './data/customShapeNet/02958343/ply/48caec6417d9303af96de5ee9a21bf30.points.ply'.

Besides, here's the output of the data loader:

{'plane': '02691156', 'bench': '02828884', 'cabinet': '02933112', 'car': '02958343', 'chair': '03001627', 'monitor': '03211117', 'lamp': '03636649', 'speaker': '03691459', 'firearm': '04090263', 'couch': '04256520', 'table': '04379243', 'cellphone': '04401088', 'watercraft': '04530566'}
category  02691156 files 4044 0.999752781211372 %
category  02828884 files 1813 0.9983480176211453 %
category  02933112 files 1571 0.9993638676844784 %
category  02958343 files 3514 0.46872082166199813 %
category  03001627 files 6778 1.0 %
category  03211117 files 1093 0.9981735159817352 %
category  03636649 files 2309 0.9961173425366695 %
category  03691459 files 1597 0.9870210135970334 %
category  04090263 files 2373 1.0 %
category  04256520 files 3173 1.0 %
category  04379243 files 8436 0.9914208485133388 %
category  04401088 files 1050 0.9980988593155894 %
category  04530566 files 1939 1.0 %
{'plane': '02691156', 'bench': '02828884', 'cabinet': '02933112', 'car': '02958343', 'chair': '03001627', 'monitor': '03211117', 'lamp': '03636649', 'speaker': '03691459', 'firearm': '04090263', 'couch': '04256520', 'table': '04379243', 'cellphone': '04401088', 'watercraft': '04530566'}
category  02691156 files 4044 0.999752781211372 %
category  02828884 files 1813 0.9983480176211453 %
category  02933112 files 1571 0.9993638676844784 %
category  02958343 files 3514 0.46872082166199813 %
category  03001627 files 6778 1.0 %
category  03211117 files 1093 0.9981735159817352 %
category  03636649 files 2309 0.9961173425366695 %
category  03691459 files 1597 0.9870210135970334 %
category  04090263 files 2373 1.0 %
category  04256520 files 3173 1.0 %
category  04379243 files 8436 0.9914208485133388 %
category  04401088 files 1050 0.9980988593155894 %
category  04530566 files 1939 1.0 %
training set 30643
testing set 8770

It looks like a lot of the car samples are missing.
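
Until the missing car files are restored, a pragmatic workaround is to filter out entries whose .ply file does not exist before building the DataLoader. A minimal sketch, assuming the dataset keeps its samples in a list of tuples whose second element is the .ply path (as the fn[1] usage in dataset.py suggests):

import os

# drop samples whose point-cloud file is missing from disk
datapath = [item for item in datapath if os.path.exists(item[1])]
print("kept %d samples after removing missing .ply files" % len(datapath))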

training with a single gpu did not work initially

I was training with the following command:

python train.py --class_choice=car --dir_name log/atlasnet_singleview_25_squares_tmp --nb_primitives 25 --template_type SQUARE --SVR --reload_decoder_path training/trained_models/atlasnet_autoencoder_25_squares/network.pth --train_only_encoder --multi_gpu 0

I got a map_location error when loading the weights. I had to make the following change to model/trainer_model.py, line 49:

Before:
    network.module.load_state_dict(torch.load(opt.reload_decoder_path))

After:
    network.module.load_state_dict(torch.load(opt.reload_decoder_path, map_location='cuda:0'))

Evaluate RGB image with pretrained model

Hi, I am actually trying to evaluate the pretrained SVR AtlasNet model on an RGB image (a chair). My parameters are really similar to the demo, but I get weird results (viewed in the Chrome 3D viewer).
I used the demo grid generation.
When I run your demo plane.jpg through my network, I get good results in the 3D viewer.
[images: demo plane, weird result 1, weird result 2, weird result 3]
Can you please tell me how to evaluate an RGB image?

jacobian_regularization branch: Demo fails, many bugs.

python train.py --demo --demo_input_path /media/vrlab/out/aimi/AtlasNet/demo/Chair-PNG-Transparent.png --reload_model_path /media/vrlab/out/aimi/AtlasNet/demo/trained_models/atlasnet_jacobian_noregul/
Jitting Chamfer 3D
Loaded JIT 3D CUDA chamfer distance
Launching new HTTP instance in port 8891
TMUX=0 tmux new-session -d -s http_server ; send-keys "/home/vrlab/.conda/envs/atlasnet/bin/python -m http.server -p 8891 > /dev/null 2>&1" Enter
duplicate session: http_server
Setting up a new session...
Traceback (most recent call last):
  File "train.py", line 18, in <module>
    trainer = trainer.Trainer(opt)
  File "/media/vrlab/out/aimi/AtlasNet/training/trainer.py", line 29, in __init__
    os.mkdir(self.opt.training_media_path)
FileNotFoundError: [Errno 2] No such file or directory: 'log/02020-05-14T20:30:33.630591/training_media'

opt.dir_path is "log/02020-05-14T20:30:33.630591".
[screenshot]
Although changing it to opt.reload_model_path helps, there are other similar problems.
Does anyone have the same problem?

Test set used as validation to choose best model

In train_AE_Atlasnet.py, the test set is used as the validation set to choose the best model. The test set should never be used during training, and especially not to choose the best model, as this biases the results. If there was no validation set, it would probably be more appropriate to report the results from the last training epoch.
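
If the goal is to keep the test set untouched, a small validation split carved out of the training data is enough for model selection. A minimal sketch with torch.utils.data.random_split (dataset stands for the existing training dataset object; the variable names are assumptions):

import torch
from torch.utils.data import DataLoader, random_split

n_val = int(0.1 * len(dataset))                  # hold out 10% of the training data
train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)
val_loader = DataLoader(val_set, batch_size=32, shuffle=False, num_workers=4)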

UnboundLocalError: Caught UnboundLocalError in DataLoader worker process 0.

for i in range(15):  # this for loop is because of some weird error that happens sometimes during loading; I didn't track it down and brute-forced the solution like this.
    try:
        mystring = my_get_n_random_lines(fn[1], n=self.npoints)
        point_set = np.loadtxt(mystring).astype(np.float32)
        break

UnboundLocalError: Caught UnboundLocalError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/brando/anaconda3/envs/pytorch-atlasnet/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/brando/anaconda3/envs/pytorch-atlasnet/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/brando/anaconda3/envs/pytorch-atlasnet/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "./auxiliary/dataset.py", line 260, in __getitem__
    point_set = point_set[:, 0:3]
UnboundLocalError: local variable 'point_set' referenced before assignment

So what is this "weird error"? I tried moving the point_set assignment out of the for loop, but it did not work.
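
The UnboundLocalError only masks the real failure: if all 15 attempts raise, point_set is never assigned, and the crash surfaces later at point_set[:, 0:3]. A more defensive sketch that reports the underlying error instead (the helper names mirror the excerpt above):

point_set = None
last_err = None
for _ in range(15):
    try:
        mystring = my_get_n_random_lines(fn[1], n=self.npoints)
        point_set = np.loadtxt(mystring).astype(np.float32)
        break
    except ValueError as e:   # np.loadtxt fails on a malformed chunk of lines
        last_err = e
if point_set is None:
    raise RuntimeError("could not parse points from %s" % fn[1]) from last_err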

The corresponding normalized mesh

I downloaded the corresponding normalized meshes (only 58 MB) from the link you provided, and found that the number of meshes is much smaller than the number of corresponding point clouds.
Could you please provide the full dataset of corresponding normalized meshes? Thank you!

about validation loss

[screenshot]
I trained your code on Python 3.7 / torch 0.4.1 and ran into this problem: the training loss goes down, but the validation loss increases. Could this be a problem with my torch version?

RuntimeError: CUDA error: out of memory

Thank you for the great work!
I get the error below when I run ./training/train_AE_AtlasNet.py.

I checked two other similar issues, but this one looks different. Any idea how to solve it? Any help is appreciated!

  File "./training/train_AE_AtlasNet.py", line 151, in <module>
    dist1, dist2 = distChamfer(points.transpose(2,1).contiguous(), pointsReconstructed) #loss function
  File "./training/train_AE_AtlasNet.py", line 64, in distChamfer
    P = (rx.transpose(2,1) + ry - 2*zz)
RuntimeError: CUDA error: out of memory

I run PyTorch 0.4.1 / Ubuntu 18.04.

Full log:

(pytorch-atlasnet) user@user-FB-22866-One-Computer-Core-i5-46:~/AtlasNet$ python ./training/train_AE_AtlasNet.py --env $env --nb_primitives $nb_primitives |& tee ${env}.txt
Setting up a new session...
Namespace(accelerated_chamfer=0, batchSize=32, env='AE_AtlasNet', model='', nb_primitives=25, nepoch=120, num_points=2500, super_points=2500, workers=12)
Random Seed: 314
{'plane': '02691156', 'bench': '02828884', 'cabinet': '02933112', 'car': '02958343', 'chair': '03001627', 'monitor': '03211117', 'lamp': '03636649', 'speaker': '03691459', 'firearm': '04090263', 'couch': '04256520', 'table': '04379243', 'cellphone': '04401088', 'watercraft': '04530566'}
category 02691156 files 4044 0.999752781211372 %
category 02828884 files 1813 0.9983480176211453 %
category 02933112 files 1571 0.9993638676844784 %
category 02958343 files 3514 0.46878335112059766 %
category 03001627 files 6778 1.0 %
category 03211117 files 1093 0.9981735159817352 %
category 03636649 files 2309 0.9961173425366695 %
category 03691459 files 1597 0.9870210135970334 %
category 04090263 files 2373 1.0 %
category 04256520 files 3173 1.0 %
category 04379243 files 8436 0.9914208485133388 %
category 04401088 files 1050 0.9980988593155894 %
category 04530566 files 1939 1.0 %
(the same dictionary and category listing are printed a second time when the test split is loaded)
training set 31747
testing set 7943
Traceback (most recent call last):
  File "./training/train_AE_AtlasNet.py", line 151, in <module>
    dist1, dist2 = distChamfer(points.transpose(2,1).contiguous(), pointsReconstructed) #loss function
  File "./training/train_AE_AtlasNet.py", line 64, in distChamfer
    P = (rx.transpose(2,1) + ry - 2*zz)
RuntimeError: CUDA error: out of memory
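
For what it's worth, the OOM happens because this distChamfer materializes the full B x N x M pairwise-distance matrix at once (with batchSize=32 and 2500 points per cloud that is roughly 800 MB per temporary). A sketch of a chunked variant that caps the matrix at B x chunk x M; this illustrates the general trick and is not the repo's own code:

import torch

def nearest_sq_dist_chunked(x, y, chunk=512):
    # x: (B, N, 3), y: (B, M, 3); returns (B, N) squared distances from each x-point
    # to its nearest neighbour in y, processing x in chunks to cap peak memory.
    yy = (y * y).sum(dim=2)                             # (B, M)
    mins = []
    for x_blk in torch.split(x, chunk, dim=1):          # (B, c, 3)
        xx = (x_blk * x_blk).sum(dim=2)                 # (B, c)
        xy = torch.bmm(x_blk, y.transpose(1, 2))        # (B, c, M)
        d = xx.unsqueeze(2) + yy.unsqueeze(1) - 2 * xy  # ||x - y||^2, one chunk at a time
        mins.append(d.min(dim=2)[0])
    return torch.cat(mins, dim=1)

def distChamfer_chunked(points, recon):
    return nearest_sq_dist_chunked(points, recon), nearest_sq_dist_chunked(recon, points)

Lowering batchSize or num_points has the same effect with no code change, at the cost of slower or coarser training.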

RuntimeError: cuda runtime error (77) : an illegal memory access was encountered

My CUDA is 9.0 and PyTorch is 1.0.0, and I get an illegal memory access when using the chamfer distance.

import sys
import torch

sys.path.append("./extension/")
import dist_chamfer as ext

distChamfer = ext.chamferDist()

x = torch.rand(32, 2500, 3)
y = torch.rand(32, 2500, 3)
x.cuda()
y.cuda()

dis1, dis2 = distChamfer(x, y)
# dis1 = x - y

print(dis1)
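
One thing to note in the snippet above: Tensor.cuda() returns a copy and does not move the tensor in place, so x and y are still CPU tensors when they reach the CUDA kernel, which is a plausible cause of the illegal memory access. Reassigning fixes that:

x = torch.rand(32, 2500, 3).cuda()
y = torch.rand(32, 2500, 3).cuda()
dis1, dis2 = distChamfer(x, y)   # both inputs now live on the GPU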

About the point cloud dataset

I found that some of the provided point cloud data are missing.
Could you provide the full point cloud dataset, or tell me how to generate it? Thank you!

poor performance

I trained the autoencoder with the command 'python train.py --shapenet13' without changing anything in your code, and got an F-score of 0.819. In addition, I did not train the single-view example. I attach an AtlasNet reconstruction [attachment]. I obtained this .ply file with the command 'python train.py --demo', changing the path of 'network.pth' to my own.
I think I may have done a step wrong.
Looking forward to your reply. Thanks.

validation loss explodes

[screenshot]
I directly ran the script train_AE_Atlasnet.py without any modification.
As you can see above, the performance is good on the training set but quite poor on the validation set.
The validation loss increases quickly and does not decrease.

poor performance in my own trained model

Hi, thank you for your great work; I benefited a lot from reading your paper. When I ran your code and trained on the ShapeNet dataset, I got the same F-score as you, but the models look poor compared to yours. I did not change your parameters or any other code. Can you tell me what is wrong with my procedure? I sincerely hope you can give me some help.
Best wishes!

Paper visualizations

In Figure 3 of the paper, what is the difference between (c) and (e/f/g/h)? Do you use PSR for e/f/g/h or not? If not, how are those meshes obtained?

Change The Encoder

Hello:
I have changed the encoder from PointNet to PointNet++. The F-score is higher than in the original paper, but the 3D mesh generated by AtlasNet is poorer. I wonder if you have tried PointNet++ before.
Any help would be appreciated, thanks.

[BUG] Chamfer Distance is not Correct

I tried to debug chamfer.cu by printing the values of the tensors.
I created two point clouds containing 3 and 5 points, respectively. Their values are shown below.

(1,.,.) = 
 0.01 *
  0.0000  0.0000  0.0000
  -20.4838  4.4935  6.1395
  -3.7283 -0.7629  1.7736

(2,.,.) = 
 0.01 *
  0.0000  0.0000  0.0000
  -17.4992  4.4902  5.0518
  -1.6003 -1.2430  0.8040
[ Variable[CUDAType]{2,3,3} ]
(1,.,.) = 
  0.0051  0.1850  0.0004
  0.0051  0.1850  0.0093
  0.0096  0.1850  0.0081
  0.0096  0.1850  0.0016
  0.0075  0.1850  0.0004

(2,.,.) = 
 -0.1486 -0.0932 -0.0014
 -0.0406 -0.0932 -0.0017
 -0.2057 -0.0932 -0.0001
 -0.0915 -0.0932 -0.0001
  0.0103 -0.0932 -0.0001
[ Variable[CUDAType]{2,5,3} ]

I also add print statements in CUDA functions, and I got the following output.

2i = 0, n = 3, j = 0, k = 0, d = 0.03425420, x = (0.00000000 0.00000000 0.00000000) y = (0.00511124 0.18500790 0.00038808)
2i = 0, n = 3, j = 1, k = 0, d = 0.06742091, x = (-0.20483765 0.04493479 0.06139540) y = (0.00511124 0.18500790 0.00038808)
2i = 0, n = 3, j = 2, k = 0, d = 0.03920735, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00511124 0.18500790 0.00038808)
2i = 1, n = 3, j = 0, k = 0, d = 0.03573948, x = (-0.08192606 0.01907521 0.02376382) y = (0.00749534 0.18500790 0.00928491)
2i = 1, n = 3, j = 1, k = 0, d = 0.03405631, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00749534 0.18500790 0.00928491)
2i = 1, n = 3, j = 2, k = 0, d = 0.03437031, x = (0.00000000 0.00000000 0.00000000) y = (0.00749534 0.18500790 0.00928491)
2i = 0, n = 3, j = 0, k = 1, d = 0.03434026, x = (0.00000000 0.00000000 0.00000000) y = (0.00511124 0.18500790 0.00928491)
2i = 0, n = 3, j = 1, k = 1, d = 0.06641452, x = (-0.20483765 0.04493479 0.06139540) y = (0.00511124 0.18500790 0.00928491)
2i = 0, n = 3, j = 2, k = 1, d = 0.03897782, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00511124 0.18500790 0.00928491)
2i = 1, n = 3, j = 0, k = 1, d = 0.03490656, x = (-0.08192606 0.01907521 0.02376382) y = (0.00231482 0.18500790 0.00713918)
2i = 1, n = 3, j = 1, k = 1, d = 0.03394968, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00231482 0.18500790 0.00713918)
2i = 1, n = 3, j = 2, k = 1, d = 0.03428425, x = (0.00000000 0.00000000 0.00000000) y = (0.00231482 0.18500790 0.00713918)
2i = 0, n = 3, j = 0, k = 2, d = 0.03438481, x = (0.00000000 0.00000000 0.00000000) y = (0.00955979 0.18500790 0.00809300)
2i = 0, n = 3, j = 1, k = 2, d = 0.06842789, x = (-0.20483765 0.04493479 0.06139540) y = (0.00955979 0.18500790 0.00809300)
2i = 0, n = 3, j = 2, k = 2, d = 0.03939636, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00955979 0.18500790 0.00809300)
2i = 1, n = 3, j = 0, k = 2, d = 0.03508088, x = (-0.08192606 0.01907521 0.02376382) y = (0.00231482 0.18500790 0.00253408)
2i = 1, n = 3, j = 1, k = 2, d = 0.03389502, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00231482 0.18500790 0.00253408)
2i = 1, n = 3, j = 2, k = 2, d = 0.03423970, x = (0.00000000 0.00000000 0.00000000) y = (0.00231482 0.18500790 0.00253408)
2i = 0, n = 3, j = 0, k = 3, d = 0.03432181, x = (0.00000000 0.00000000 0.00000000) y = (0.00955979 0.18500790 0.00158027)
2i = 0, n = 3, j = 1, k = 3, d = 0.06916460, x = (-0.20483765 0.04493479 0.06139540) y = (0.00955979 0.18500790 0.00158027)
2i = 0, n = 3, j = 2, k = 3, d = 0.03956439, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00955979 0.18500790 0.00158027)
2i = 1, n = 3, j = 0, k = 3, d = 0.03652760, x = (-0.08192606 0.01907521 0.02376382) y = (0.01075170 0.18500790 0.00364473)
2i = 1, n = 3, j = 1, k = 3, d = 0.03404036, x = (-0.00152916 0.00097788 -0.00109852) y = (0.01075170 0.18500790 0.00364473)
2i = 1, n = 3, j = 2, k = 3, d = 0.03435681, x = (0.00000000 0.00000000 0.00000000) y = (0.01075170 0.18500790 0.00364473)
3i = 0, n = 3, j = 0, k = 4, d = 0.03428425, x = (0.00000000 0.00000000 0.00000000) y = (0.00749534 0.18500790 0.00038808)
3i = 0, n = 3, j = 1, k = 4, d = 0.06842767, x = (-0.20483765 0.04493479 0.06139540) y = (0.00749534 0.18500790 0.00038808)
3i = 0, n = 3, j = 2, k = 4, d = 0.03941518, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00749534 0.18500790 0.00038808)
3i = 1, n = 3, j = 0, k = 4, d = 0.03643737, x = (-0.08192606 0.01907521 0.02376382) y = (0.01075170 0.18500790 0.00602855)
3i = 1, n = 3, j = 1, k = 4, d = 0.03406866, x = (-0.00152916 0.00097788 -0.00109852) y = (0.01075170 0.18500790 0.00602855)
3i = 1, n = 3, j = 2, k = 4, d = 0.03437987, x = (0.00000000 0.00000000 0.00000000) y = (0.01075170 0.18500790 0.00602855)
i = 0, n = 3, j = 0, best = 0.03425420, best_i = 0
i = 0, n = 3, j = 1, best = 0.06641452, best_i = 1
i = 0, n = 3, j = 2, best = 0.03897782, best_i = 1
i = 1, n = 3, j = 0, best = 0.03490656, best_i = 1
i = 1, n = 3, j = 1, best = 0.03389502, best_i = 2
i = 1, n = 3, j = 2, best = 0.03423970, best_i = 2

For batch 0 (i = 0), everything seems correct. However, for batch 1 (i = 1), the y values printed by the kernel do not appear anywhere in the input tensors.
Is there something wrong with the code?

Other trained models are not loading successfully

Thank you very much for posting this repo.
I used demo.py with a different input image and it generates meshes as expected.
However, I could only use svr_atlas_25.pth, which is the default. When I tried other trained models, such as ae_atlasnet_25.pth, they could not be loaded successfully.
It raises the following error:

Traceback (most recent call last):
  File "inference/demo.py", line 43, in <module>
    network.load_state_dict(torch.load(opt.model))
  File "/home/wowexp/anaconda3/envs/kaolin/lib/python3.6/site-packages/torch/nn/modules/module.py", line 839, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for SVR_AtlasNet:
Missing key(s) in state_dict: "encoder.conv1.weight", "encoder.bn1.weight", "encoder.bn1.bias", "encoder.bn1.running_mean", "encoder.bn1.running_var", "encoder.layer1.0.conv1.weight", "encoder.layer1.0.bn1.weight", "encoder.layer1.0.bn1.bias", "encoder.layer1.0.bn1.running_mean", "encoder.layer1.0.bn1.running_var", "encoder.layer1.0.conv2.weight", "encoder.layer1.0.bn2.weight", "encoder.layer1.0.bn2.bias", "encoder.layer1.0.bn2.running_mean", "encoder.layer1.0.bn2.running_var", "encoder.layer1.1.conv1.weight", "encoder.layer1.1.bn1.weight", "encoder.layer1.1.bn1.bias", "encoder.layer1.1.bn1.running_mean", "encoder.layer1.1.bn1.running_var", "encoder.layer1.1.conv2.weight", "encoder.layer1.1.bn2.weight", "encoder.layer1.1.bn2.bias", "encoder.layer1.1.bn2.running_mean", "encoder.layer1.1.bn2.running_var", "encoder.layer2.0.conv1.weight", "encoder.layer2.0.bn1.weight", "encoder.layer2.0.bn1.bias", "encoder.layer2.0.bn1.running_mean", "encoder.layer2.0.bn1.running_var", "encoder.layer2.0.conv2.weight", "encoder.layer2.0.bn2.weight", "encoder.layer2.0.bn2.bias", "encoder.layer2.0.bn2.running_mean", "encoder.layer2.0.bn2.running_var", "encoder.layer2.0.downsample.0.weight", "encoder.layer2.0.downsample.1.weight", "encoder.layer2.0.downsample.1.bias", "encoder.layer2.0.downsample.1.running_mean", "encoder.layer2.0.downsample.1.running_var", "encoder.layer2.1.conv1.weight", "encoder.layer2.1.bn1.weight", "encoder.layer2.1.bn1.bias", "encoder.layer2.1.bn1.running_mean", "encoder.layer2.1.bn1.running_var", "encoder.layer2.1.conv2.weight", "encoder.layer2.1.bn2.weight", "encoder.layer2.1.bn2.bias", "encoder.layer2.1.bn2.running_mean", "encoder.layer2.1.bn2.running_var", "encoder.layer3.0.conv1.weight", "encoder.layer3.0.bn1.weight", "encoder.layer3.0.bn1.bias", "encoder.layer3.0.bn1.running_mean", "encoder.layer3.0.bn1.running_var", "encoder.layer3.0.conv2.weight", "encoder.layer3.0.bn2.weight", "encoder.layer3.0.bn2.bias", "encoder.layer3.0.bn2.running_mean", "encoder.layer3.0.bn2.running_var", "encoder.layer3.0.downsample.0.weight", "encoder.layer3.0.downsample.1.weight", "encoder.layer3.0.downsample.1.bias", "encoder.layer3.0.downsample.1.running_mean", "encoder.layer3.0.downsample.1.running_var", "encoder.layer3.1.conv1.weight", "encoder.layer3.1.bn1.weight", "encoder.layer3.1.bn1.bias", "encoder.layer3.1.bn1.running_mean", "encoder.layer3.1.bn1.running_var", "encoder.layer3.1.conv2.weight", "encoder.layer3.1.bn2.weight", "encoder.layer3.1.bn2.bias", "encoder.layer3.1.bn2.running_mean", "encoder.layer3.1.bn2.running_var", "encoder.layer4.0.conv1.weight", "encoder.layer4.0.bn1.weight", "encoder.layer4.0.bn1.bias", "encoder.layer4.0.bn1.running_mean", "encoder.layer4.0.bn1.running_var", "encoder.layer4.0.conv2.weight", "encoder.layer4.0.bn2.weight", "encoder.layer4.0.bn2.bias", "encoder.layer4.0.bn2.running_mean", "encoder.layer4.0.bn2.running_var", "encoder.layer4.0.downsample.0.weight", "encoder.layer4.0.downsample.1.weight", "encoder.layer4.0.downsample.1.bias", "encoder.layer4.0.downsample.1.running_mean", "encoder.layer4.0.downsample.1.running_var", "encoder.layer4.1.conv1.weight", "encoder.layer4.1.bn1.weight", "encoder.layer4.1.bn1.bias", "encoder.layer4.1.bn1.running_mean", "encoder.layer4.1.bn1.running_var", "encoder.layer4.1.conv2.weight", "encoder.layer4.1.bn2.weight", "encoder.layer4.1.bn2.bias", "encoder.layer4.1.bn2.running_mean", "encoder.layer4.1.bn2.running_var", "encoder.fc.weight", "encoder.fc.bias".
Unexpected key(s) in state_dict: "encoder.0.stn.conv1.weight", "encoder.0.stn.conv1.bias", "encoder.0.stn.conv2.weight", "encoder.0.stn.conv2.bias", "encoder.0.stn.conv3.weight", "encoder.0.stn.conv3.bias", "encoder.0.stn.fc1.weight", "encoder.0.stn.fc1.bias", "encoder.0.stn.fc2.weight", "encoder.0.stn.fc2.bias", "encoder.0.stn.fc3.weight", "encoder.0.stn.fc3.bias", "encoder.0.conv1.weight", "encoder.0.conv1.bias", "encoder.0.conv2.weight", "encoder.0.conv2.bias", "encoder.0.conv3.weight", "encoder.0.conv3.bias", "encoder.0.bn1.weight", "encoder.0.bn1.bias", "encoder.0.bn1.running_mean", "encoder.0.bn1.running_var", "encoder.0.bn2.weight", "encoder.0.bn2.bias", "encoder.0.bn2.running_mean", "encoder.0.bn2.running_var", "encoder.0.bn3.weight", "encoder.0.bn3.bias", "encoder.0.bn3.running_mean", "encoder.0.bn3.running_var", "encoder.1.weight", "encoder.1.bias", "encoder.2.weight", "encoder.2.bias", "encoder.2.running_mean", "encoder.2.running_var".
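
The missing keys (encoder.conv1, encoder.layer1, ...) belong to a ResNet image encoder, while the unexpected keys (encoder.0.stn, ...) belong to a PointNet encoder, so the checkpoint is the point-cloud autoencoder being loaded into the single-view SVR_AtlasNet network. A sketch of loading it into the matching class instead; the class name, module path, and constructor arguments are assumptions on my side, so check them against the repo's model definitions:

import sys
import torch

sys.path.append("./auxiliary/")
from model import AE_AtlasNet                        # assumed autoencoder class

network = AE_AtlasNet(num_points=2500, nb_primitives=25)
network.load_state_dict(torch.load("ae_atlasnet_25.pth", map_location="cpu"))
network.eval()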

compile chamfer

Hi, this is an awesome project.
My setup is Ubuntu 16.04, Python 3.5, PyTorch 1.0, CUDA 9.2, gcc 5.4.
But I get an error when I compile chamfer:

import torch; import chamfer;
ImportError: /usr/local/lib/python3.5/dist-packages/chamfer-0.0.0-
py3.5-linux-x86-64.egg/chamfer.cpython-35m-x86_64-linux-gnu.so:
undefined symbol: _ZN2at5ErrorCLENs_14SourceLocationERKSs

Thanks in advance!
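
An undefined at::Error symbol like this usually means the compiled chamfer .so was built against a different PyTorch version than the one now installed (an ABI mismatch). A recovery sketch, assuming the extension is built from a setup.py inside extension/:

pip uninstall chamfer               # remove the stale egg from site-packages
cd extension
rm -rf build/ dist/ *.egg-info      # clear old build artifacts
python setup.py install             # rebuild against the currently installed PyTorch
python -c "import torch, chamfer"   # quick sanity check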

When I compile the nndistane

I used Python 2.7 and PyTorch 0.1.12,
but when I compile build.py in the nndistance folder
I get a strange error:

/lib/python2.7/site-packages/torch/utils/ffi/../../lib/include/THC/THCGenerateHalfType.h:9:14:
error: unknown type name 'half'
#define real half

I want to know if anyone has met the same problem.

Trying to create a VAE version

I'm trying to create a modified version of this model as a variational autoencoder so that I can perform random sampling with the learned weights, using the sum of the chamfer distance and the KL divergence as the loss function. I've been having trouble getting good reconstructions with my implementation when training on 1000 of the ShapeNet plane models: all results tend to look the same no matter what input I give. I'm still trying to understand all the nuances of VAEs, as well as parts of the AtlasNet architecture, but I just wanted to ask: can you think of any reason why converting the architecture to a VAE wouldn't work? Thanks.
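
Not an answer, but for concreteness, here is a minimal sketch of the two VAE pieces being described: a reparameterized latent head and a chamfer + KLD objective. The feature/latent sizes and the way dist1/dist2 are produced are placeholders, not AtlasNet's actual modules:

import torch
import torch.nn as nn

class VAEHead(nn.Module):
    """Maps an encoder feature to a sampled latent code via the reparameterization trick."""
    def __init__(self, feat_dim=1024, latent_dim=1024):
        super().__init__()
        self.fc_mu = nn.Linear(feat_dim, latent_dim)
        self.fc_logvar = nn.Linear(feat_dim, latent_dim)

    def forward(self, feat):
        mu, logvar = self.fc_mu(feat), self.fc_logvar(feat)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # z ~ N(mu, sigma^2)
        return z, mu, logvar

def vae_loss(dist1, dist2, mu, logvar, kld_weight=1e-3):
    # chamfer term from the two nearest-neighbour distance maps, plus KL divergence to N(0, I)
    chamfer = dist1.mean() + dist2.mean()
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return chamfer + kld_weight * kld

If every input collapses to the same output, one common culprit is the KLD term dominating the chamfer term early in training; annealing kld_weight up from zero often helps.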
