
pf-net-point-fractal-network's People

Contributors

zztianzz


pf-net-point-fractal-network's Issues

show_FPNet.py does not run

The import on line 27,
from model_recon_3layers_robustness import _netlocalD,_netG
has no corresponding source file; the error is
ModuleNotFoundError: No module named 'model_recon_3layers_robustness'
Please take a look.

Colab example

Hello, I was wondering if you could provide a Colab example where I supply two perspectives of the same mesh (point cloud) and this algorithm merges them to provide some metric of how complete the object is?

Questions About Extracting Point Cloud Features Using 2D Convolution in the Code

In the network, using 2D convolution to extract latent features from point clouds seems to differ from the common MLP (Multi-Layer Perceptron) method of dimensionality expansion for single-point features. In MLP dimensionality expansion, each expansion step considers all the features of a single point. However, it appears that using 2D convolution only considers values within a single channel at a time and does not take into account the features across C channels after dimensionality expansion. I hope you can clarify this confusion. Thank you very much.
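For what it's worth, a 1×1 2D convolution applied to features laid out as channels × points is numerically identical to a shared per-point MLP, and it does mix all C channels at once. A framework-free sketch (the C × N layout is my assumption about the repo's tensors, not something confirmed by the code):

```python
# Sketch (pure Python, no framework): a "2D convolution" with a 1x1 kernel
# over a point cloud stored as C channels x N points is exactly a per-point
# linear layer that mixes ALL input channels -- the same computation as an
# MLP applied point-wise. The C x N layout is an assumption for illustration.

def conv1x1(x, weight):
    """x: C_in x N feature map; weight: C_out x C_in bank of 1x1 kernels."""
    c_in, n = len(x), len(x[0])
    return [[sum(weight[o][i] * x[i][p] for i in range(c_in))
             for p in range(n)] for o in range(len(weight))]

def pointwise_mlp(x, weight):
    """Apply the same linear map to each point's full C_in-dim feature."""
    n = len(x[0])
    points = [[x[i][p] for i in range(len(x))] for p in range(n)]   # N x C_in
    out = [[sum(w_o[i] * pt[i] for i in range(len(pt))) for w_o in weight]
           for pt in points]                                        # N x C_out
    return [[out[p][o] for p in range(n)] for o in range(len(weight))]

x = [[1.0, 2.0], [3.0, 4.0]]    # 2 channels, 2 points
w = [[0.5, -1.0], [2.0, 0.25]]  # 2 output channels
print(conv1x1(x, w) == pointwise_mlp(x, w))  # True: identical outputs
```

So a 1×1 convolution does see the features across all C channels for each point; only kernels wider than 1 along the point axis would change that.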

Additionally, I have noticed that many people are searching for trained models. Below is a model that I have trained.
Link: https://pan.baidu.com/s/1YiLQCd_0IOe4CIBGZF4Nlw?pwd=5rb8 Extraction code: 5rb8

Here are the trained weight files (modelG.pth)

First of all, thanks to the authors for their outstanding contribution.
Here is a model file I trained with the existing code, for anyone interested in this application direction to test.
Link: https://pan.baidu.com/s/1PhqNv3pLKBNWJRPQR70XDQ
Extraction code: d6fk
You can optimize the data loading to increase speed by about 5x (single process), and further with multiple processes; a night of training then takes about 15 hours. Hardware: Intel i7 CPU + SSD + RTX 3060 (12 GB).

Questions about dense regions in the predictions

Hi, thank you for sharing your work.

I am using the architecture on a different dataset for a research project and I noticed that there is sometimes a dense region in my final predictions after 60 epochs. The points in the dense region are mainly coming from the low level prediction (Y_primary).
Q1: Is this expected behavior when training the model?
Q2: Does it disappear with more epochs?

I saw that you change the alpha weights during training by increasing them.
Q3: Did you try starting with higher weights and decreasing them over time?
Q4: Could that solve the problem mentioned above?

Example of the problem:
[image]

Thanks in advance.
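Regarding Q3/Q4, a decreasing schedule is cheap to try. A hypothetical sketch of a step schedule (the epoch thresholds and weight values below are illustrative, not taken from the repo):

```python
def alpha_schedule(epoch, increasing=True):
    """Hypothetical step schedule for the auxiliary (low-resolution) loss
    weights. Thresholds and values are illustrative, not the repo's exact
    numbers; increasing=False tries the start-high-then-decay variant."""
    steps = [0.01, 0.05, 0.1]   # weights used in successive training phases
    if not increasing:
        steps = steps[::-1]     # start high, decay over time
    if epoch < 30:
        return steps[0]
    elif epoch < 80:
        return steps[1]
    return steps[2]

print(alpha_schedule(10), alpha_schedule(50), alpha_schedule(100))
# 0.01 0.05 0.1
print(alpha_schedule(10, increasing=False))
# 0.1
```

Flipping `increasing` lets both directions be compared with a one-line change in the training loop.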

FileNotFoundError: [Errno 2] No such file or directory: 'Trained_Model/point_netG0.pth'

Traceback (most recent call last):
File "Train_FPNet.py", line 340, in
'Trained_Model/point_netG'+str(epoch)+'.pth' )
File "/home/ouc/.conda/envs/PF-Net/lib/python3.7/site-packages/torch/serialization.py", line 218, in save
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "/home/ouc/.conda/envs/PF-Net/lib/python3.7/site-packages/torch/serialization.py", line 141, in _with_file_like
f = open(f, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'Trained_Model/point_netG0.pth'

I can't find the Trained_Model directory; could you please tell me how to solve this problem?
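A likely cause, though unconfirmed by the author: torch.save does not create missing parent directories, so the Trained_Model directory has to exist before the script saves a checkpoint. A minimal sketch:

```python
import os

# The traceback above comes from a save call being handed a path whose
# parent directory does not exist. Creating the directory first fixes it.
# ('Trained_Model' matches the path used in the traceback; sketch only.)
os.makedirs('Trained_Model', exist_ok=True)

epoch = 0
path = 'Trained_Model/point_netG' + str(epoch) + '.pth'
# torch.save(point_netG.state_dict(), path) would now succeed;
# here we only verify the directory exists.
print(os.path.isdir('Trained_Model'))  # True
```

Adding the `os.makedirs` line near the top of Train_FPNet.py should make the checkpoint save succeed.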

Robustness Test issue

First of all, thank you for the code!

I got a question while running the code to check the Robustness test results.
I understand that I have to set the missing ratio through crop_point_num for training.
So I wonder whether crop_point_num was set to 512, 1024, and 1536, respectively, for the robustness test.
In other words, did you run the robustness test after training separately with each missing ratio?
If not, I would like to know whether there is another way to set the missing ratio.

I'd appreciate it if you could answer.

Second bug in Train_PFNet.py

/PF-Net-Point-Fractal-Network/shapenet_part_loader.py", line 24, in __init__
with open(self.catfile, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: './dataset/shapenetcore_partanno_segmentation_benchmark_v0/synsetoffset2category.txt'

solution: change './dataset/shapenetcore_partanno_segmentation_benchmark_v0/synsetoffset2category.txt' to 'dataset/shapenet_part/shapenetcore_partanno_segmentation_benchmark_v0/synsetoffset2category.txt'

Question for the paper.

​Hi, thanks for your work. I am really interested in it.

Besides, I have another question for the paper. Is this work category-specific or a single model for all the categories? I didn't get the point.

Thanks in advance.

About Chamfer Distance.

I trained a model only on the category 'Car' and tested the Chamfer Distance with your script 'show_CD'.
It reported an average CD of 0.003873.
Is that right? It seems an unreasonable result to me, since some state-of-the-art works (e.g. PCN, TopNet) report a CD of around 0.0008 on the 3D completion task.
I didn't see any quantitative results in your paper so I asked here.
Looking for your reply :)

Ablation study

Hello, I would like to ask by what factor the numbers in Table 4 were scaled, and why the Table and Chair results for PF-Net (vanilla) differ from those in Table 2. Please let me know, thank you.

On "unsupervised" in Section 4.2 of the paper

Hello, first of all thank you for your work. One point in the paper is unclear to me: since the training process already uses the ground truth for supervision, what is the "unsupervised" description in Section 4.2 meant to convey?

Unable to access Model weights.

Hi,

I am not able to access the model weights from pan.baidu. Could you kindly share them via some other channel? That would be of immense help to me.

Thank you

Test_csv infile

Hello! When restoring the complete point cloud with Test_csv, why is the reconstruction of my customized partial point cloud so poor? (I chose a fixed viewpoint and removed the 512 points of the 2048-point cloud closest to it.) Are there any requirements on the Test_csv input?

There is no file 'shapenet_part_overallid_to_catid_partid.json'

Hello, when I run
bash download_shapenet_part16_catagories.sh
I get the error that there is no file 'shapenet_part_overallid_to_catid_partid.json'.
I can't find this file anywhere. Could you please tell me how to get it?

download dataset, there is no json file

Step 1: the .sh script that downloads the dataset contains the line

mv shapenet_part_overallid_to_catid_partid.json shapenet_part/shapenetcore_partanno_segmentation_benchmark_v0/

but there is no such file 'shapenet_part_overallid_to_catid_partid.json'. How can this file be obtained?

Are the inputs of the MRE incomplete?

Thank you for your open-source code. Could you answer a question for me?
I notice that the MRE takes inputs at 3 scales. For a model in ShapeNet Parts the number of points is 2048, yet one of your scales is also 2048 points, i.e. the complete point cloud. It looks as if complete point cloud models are used to accomplish the completion task. How should this be explained or understood? Thanks again!

question about the dimension

Traceback (most recent call last):
File "Train_PFNet.py", line 221, in
fake_center1,fake_center2,fake =point_netG(input_cropped)
File "/home/ouc/.conda/envs/PF-Net/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/ouc/.conda/envs/PF-Net/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 143, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/ouc/.conda/envs/PF-Net/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 153, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/ouc/.conda/envs/PF-Net/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
raise output
File "/home/ouc/.conda/envs/PF-Net/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
output = module(*input, **kwargs)
File "/home/ouc/.conda/envs/PF-Net/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/ouc/l-dataset/PF-Net-Point-Fractal-Network-master/model_PFNet.py", line 161, in forward
pc3_xyz = pc2_xyz_expand+pc3_xyz
RuntimeError: The size of tensor a (12) must match the size of tensor b (20) at non-singleton dimension 0

When I run Train_PFNet.py I hit this problem, although I only changed the number of epochs. Could you please help me find the cause?
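One common cause of this kind of dim-0 mismatch, offered as an assumption rather than a confirmed diagnosis: when the dataset size is not divisible by the batch size, the final batch is smaller, and code that hard-codes the batch dimension (or nn.DataParallel's replica splitting) then sees inconsistent sizes. PyTorch's `DataLoader(..., drop_last=True)` discards that ragged batch; its effect is equivalent to this pure-Python sketch:

```python
# Hypothetical sketch: a DataLoader yields a final ragged batch when
# len(dataset) % batch_size != 0, and code assuming a fixed batch
# dimension can then fail with a size mismatch like
# "tensor a (12) must match ... tensor b (20)".

def batch_sizes(n_samples, batch_size, drop_last=False):
    """Return the sequence of batch sizes a DataLoader would produce."""
    full, rem = divmod(n_samples, batch_size)
    sizes = [batch_size] * full
    if rem and not drop_last:
        sizes.append(rem)  # the ragged final batch that triggers the error
    return sizes

print(batch_sizes(100, 24))                  # [24, 24, 24, 24, 4]
print(batch_sizes(100, 24, drop_last=True))  # [24, 24, 24, 24]
```

If this is the cause here, passing `drop_last=True` to the training DataLoader (or choosing a batch size that divides the dataset size) should make the error disappear.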

How many epochs does it take to reach peak performance?

Hello!

Just curious: do I have to go through all 200 training epochs to reproduce the reported metric? Each epoch takes about half an hour for me.

Besides, may I know why the Chamfer distance used in training is divided by 2?

Thanks for your help!

def chamfer_distance_numpy(array1, array2):
    batch_size, num_point, num_features = array1.shape
    dist = 0
    for i in range(batch_size):
        av_dist1 = array2samples_distance(array1[i], array2[i])
        av_dist2 = array2samples_distance(array2[i], array1[i])
        dist = dist + (0.5*av_dist1+0.5*av_dist2)/batch_size # why the chamfer distance used in training is divided by 2
    return dist*100

def chamfer_distance_numpy_test(array1, array2):
    batch_size, num_point, num_features = array1.shape
    dist_all = 0
    dist1 = 0
    dist2 = 0
    for i in range(batch_size):
        av_dist1 = array2samples_distance(array1[i], array2[i])
        av_dist2 = array2samples_distance(array2[i], array1[i])
        dist_all = dist_all + (av_dist1+av_dist2)/batch_size
        dist1 = dist1+av_dist1/batch_size
        dist2 = dist2+av_dist2/batch_size
    return dist_all, dist1, dist2
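A plausible reading, not confirmed by the author: the 0.5 factors make the training loss the mean of the two directional average distances, while the test function reports their sum, so the two differ by exactly a factor of 2. A toy sketch with a simplified 1-D stand-in for array2samples_distance:

```python
def array2samples_distance(a, b):
    """Average squared distance from each point in a to its nearest point
    in b. Simplified 1-D stand-in for the repo's function, for illustration."""
    return sum(min((x - y) ** 2 for y in b) for x in a) / len(a)

# Toy single-sample 'point clouds' (hypothetical data).
a = [0.0, 1.0, 2.0]
b = [0.1, 0.9, 2.2]

d1 = array2samples_distance(a, b)
d2 = array2samples_distance(b, a)

train_cd = 0.5 * d1 + 0.5 * d2  # mean of the two directions (training loss)
test_cd = d1 + d2               # sum of the two directions (test metric)

print(test_cd == 2 * train_cd)  # True: the factor of 2 is just mean vs. sum
```

Since the factor is constant, it rescales the loss but does not change the optimum, which may be why the author could use either convention during training.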

input

hello!
I would like to ask a question: after training is completed, should both the incomplete and the complete point clouds be used as input?

Question

Hi, first just to say great result. Super interesting.

I have a question about the robustness test with many holes. Is it the same number of points as in the cropped case (i.e. 25%)? And why do you think the model is robust to many holes even though it was not trained on that?

Could this work on Windows or just only fit Linux?

Thanks a lot for your great work! Something is confusing me: can I run your code on Windows, or does it only work on Linux (e.g. Ubuntu)? I found that the '.sh' files are for Linux; is there a way to convert them or work around them? Using Ubuntu isn't an option for me right now. (I know this may be a basic question, but I'm really a novice. Thanks a lot!)

Robustness Test Info

Hi! First of all, thanks for the awesome work.
I was wondering which is the training configuration used for the robustness test on the Airplanes (Paper, Figure 9).
From my understanding of the code, at training time you choose one viewpoint and crop points around it to get the input_cropped1 shape and the corresponding completion GT.
Do I have to change the training procedure to be robust also to multiple holes at random positions?
If I have understood correctly, in Fig. 9 you select two viewpoints and crop around both instead of only one, so I imagine you also have to re-train the model, constraining the network to output the correct number of missing points (crop_point_num * 2).
Last question, for this experiment did you train jointly on all classes or just on 'Airplane'?
Thank you very much in advance :)

FPS RAN parameter

Hi!
When you call the utils.farthest_point_sample function, you pass a boolean parameter named RAN. What does it mean and what is its purpose? Thank you!
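One common pattern, offered here as an assumption about the repo's utils rather than its documented behavior: RAN selects whether the first seed point of farthest point sampling is random (stochastic sampling) or a fixed index (repeatable sampling). A minimal FPS sketch under that assumption:

```python
import random

def farthest_point_sample(points, npoint, RAN=True):
    """Minimal farthest-point-sampling sketch. RAN is assumed to control
    whether the first seed index is random (True) or deterministic (False);
    this is a guess at the repo's parameter, not its confirmed meaning."""
    n = len(points)
    first = random.randrange(n) if RAN else 0
    selected = [first]
    # squared distance from every point to the nearest selected point so far
    dist = [sum((points[i][k] - points[first][k]) ** 2 for k in range(3))
            for i in range(n)]
    while len(selected) < npoint:
        far = max(range(n), key=lambda i: dist[i])  # farthest remaining point
        selected.append(far)
        for i in range(n):
            d = sum((points[i][k] - points[far][k]) ** 2 for k in range(3))
            dist[i] = min(dist[i], d)
    return selected

pts = [[float(i), 0.0, 0.0] for i in range(10)]  # 10 points on a line
print(farthest_point_sample(pts, 3, RAN=False))  # [0, 9, 4]
```

With RAN=False the result is reproducible across runs; with RAN=True each call can return a different (but still well-spread) subset.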

PCN Implementation

Hi!
Which PCN repo did you use for comparison in your paper? I'm having trouble finding a reliable PyTorch implementation to compare with. Can you share the PCN model code you used? I see that an older version of this repo had a 'comparison-test' folder with the training script, but not the model code.
Thank you very much,
Antonio

training

Hello, when you trained the model, did you train all classes together or separately?

Pred~gt

Hello, please tell me how to obtain the 'pred~gt' data?

There is no file 'shapenet_part_overallid_to_catid_partid.json'

Hello, when I run 'bash download_shapenet_part16_catagories.sh',
I get the error "there is no file 'shapenet_part_overallid_to_catid_partid.json'".
I can't find this file anywhere.
Could you please tell me how to get it? Thanks a lot.

First bug in Train_PFNet.py

Traceback (most recent call last):
File "Train_PFNet.py", line 17, in
from model_FPNet import _netlocalD,_netG
ModuleNotFoundError: No module named 'model_FPNet'

solution: change 'model_FPNet.py' to 'model_PFNet.py'

Training with data captured from real scenarios

Hi, thanks for your contribution.
As presented in your paper and code, the cropped data were generated by randomly removing points (just setting them to zero). In your code, the missing part of each sample is stored as 'real_center', which is also used during data preparation. I wonder whether the model will be robust to incomplete data from real scenarios, such as construction members, where the missing points have no XYZ coordinates at all.

About the accuracy

I use the code provided to train and evaluate without changing anything.
The max epoch is 200, and I use show_CD.py to evaluate.

During evaluation, I get ~2.6e-3 pred2gt loss, which is close enough to the numbers reported.
However, I also get ~2.6e-3 gt2pred loss, obviously higher than the numbers reported.

Is this normal?

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Hi,

I met a problem when I train the net:
[0/201][1516/1518] Loss_D: 1.2541 Loss_G: 0.7839 / 0.8329 / 0.8305/ 0.7787
Traceback (most recent call last):
File "Train_FPNet.py", line 231, in
output = point_netD(real_center)
File "/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/opt/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/scratch/cmpt_743/PF-Net-Point-Fractal-Network-master/model_FPNet.py", line 196, in forward
x = torch.cat(Layers,1)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Is that a data problem?

When I prepared the data using bash download_shapenet_part16_catagories.sh, I got:
mv: cannot stat 'shapenet_part_overallid_to_catid_partid.json': No such file or directory

Thanks in advance!

A dataset problem?

At line 12 of download_shapenet_part16_catagories.sh there is a file shapenet_part_overallid_to_catid_partid.json, but according to the Ubuntu terminal it cannot be found anywhere. Is the dataset I downloaded incomplete, or is this file simply missing?

Test Dataset

Could you provide more test datasets?
