mprm's Issues

The ScanNet dataset

Hello, nice job!

Sorry to bother you about the ScanNet dataset. I would like to know more about it: which version (v1 or v2) did you use? How much memory does it consume? And are there other supporting documents, such as the train/val split?

Best wishes!

Questions about Subcloud-level Annotation

Thank you for sharing the code!

However, I have several questions about the sub-cloud-level annotation. Could you explain how the file subcloud_label.tar.gz is generated, or how to annotate sub-clouds in detail?

Thank you!

The error

Traceback (most recent call last):
  File "/home/data2/mprm-master/training_mprm.py", line 201, in <module>
    model = KernelPointFCNN(dataset.flat_inputs, config)
  File "/data2/mprm-master/models/KPFCNN_mprm.py", line 89, in __init__
    self.inputs['last_batch_ind'] = flat_inputs[ind]
IndexError: tuple index out of range
Creating Model


I got this error when I ran your code. Could you give me some advice?
Best wishes!
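One way to narrow down an error like this: the traceback shows `flat_inputs[ind]` being indexed past the end of the tuple, which usually means the dataset's input layout and the model's expected layout are out of sync. A minimal, generic diagnostic sketch (the function name and message are illustrative, not part of the repo):

```python
# Quick diagnostic for a "tuple index out of range" on flat_inputs:
# check how many tensors the tuple actually holds before indexing it.
def check_flat_inputs(flat_inputs, expected_ind):
    n = len(flat_inputs)
    if expected_ind >= n:
        raise ValueError(
            f"flat_inputs has {n} entries but index {expected_ind} is "
            f"requested; the dataset and model input layouts are probably "
            f"out of sync")
    return flat_inputs[expected_ind]
```

Comparing the printed length against the indices used in KPFCNN_mprm.py should show which input tensor is missing.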

Loading ScanNet: KeyError

Hello!
I want to ask for some help with loading .ply data.
I tried to load the .ply data with the "read_ply" function and it gave me a KeyError:

<ipython-input-17-c43e6f65c260> in parse_header(plyfile, ext)
     40             line = line.split()
     41             print(line)
---> 42             properties.append((line[2].decode(), ext + ply_dtypes[line[1]]))
     43 
     44     return num_points, properties

KeyError: b'list'

Any help on this? Thanks a lot!
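The `KeyError: b'list'` suggests the header contains a `property list ...` line (typically face indices), which the `parse_header` in the traceback looks up in `ply_dtypes` as if it were a scalar type. A sketch of a more tolerant header parser that simply skips list properties (the dtype map and structure are illustrative, not the repo's exact code):

```python
# Illustrative dtype map; real read_ply implementations have a fuller table.
ply_dtypes = {b'float': 'f4', b'double': 'f8', b'int': 'i4',
              b'uint': 'u4', b'uchar': 'u1', b'short': 'i2', b'ushort': 'u2'}

def parse_header(header_lines, ext='<'):
    """Parse PLY header lines (as bytes), skipping 'property list' entries
    instead of crashing on them."""
    num_points = 0
    properties = []
    for raw in header_lines:
        line = raw.split()
        if not line:
            continue
        if line[0] == b'element' and line[1] == b'vertex':
            num_points = int(line[2].decode())
        elif line[0] == b'property':
            if line[1] == b'list':
                continue  # variable-length list property (e.g. face indices)
            properties.append((line[2].decode(), ext + ply_dtypes[line[1]]))
    return num_points, properties
```

Alternatively, re-exporting the .ply without face elements (points only) avoids the `property list` line entirely.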

Some issues (errors) about this work

Thanks for bringing the idea of attention to WSSS on point clouds!
I ran into two issues with this work.

In https://github.com/plusmultiply/mprm/blob/master/datasets/Scannet_subcloud.py#L798
you make the center of the sub-cloud different every time. Does this mean we still need point-level labels to get the sub-cloud-level labels, since the sub-clouds generated from the same seeds might be slightly different?

I think your attention submodules are implemented incorrectly.
In short, it should be stacked_length = inputs['stacked_length_out'] in
https://github.com/plusmultiply/mprm/blob/master/models/network_blocks_mprm.py#L968
https://github.com/plusmultiply/mprm/blob/master/models/network_blocks_mprm.py#L1026

Fluctuated loss with channel attention head

Dear plusmultiply,

Thank you very much for sharing the code.
During training, the PCAM, SA, and PSA heads work well and the loss steadily decreases.
However, when I use the channel attention head, the loss always fluctuates, resulting in low accuracy. I have tried it both on its own and combined with other heads; both give unstable losses.
Have you met the same problem, or could you give any possible explanation for this?

I'm looking forward to your reply.

About training_segmentation.py and Scannet_on_pseudo_label.py

Through training, I was able to obtain the model and produce the .ply result files with generate_pseudo_label.py (these .ply files can be refined with crf_postprocess.py). But when I tried to use training_segmentation.py to train a segmentation model on the pseudo labels, I found some errors in the code. First, there is no KPFCNN_model_original.py file under the models folder. Second, Scannet_on_pseudo_label.py reads the pseudo labels into the network in .npy format, but only the .ply files are produced.
Because of these two problems, a new segmentation model cannot be retrained. I hope you can fix it. Thank you very much~

Question about train_mb.py and tester_cam.py

Thanks for sharing your code! But I have a question. When training and testing the code (train_mb.py and tester_cam.py), I found that dataset.input_labels[] does not exist. It turns out that self.input_labels is commented out in Scannet_subcloud.py; only self.subcloud_labels exists. After replacing dataset.input_labels in these two files with dataset.subcloud_labels, the following error appears (in the validation phase). What should I do?

  File "/home/chn/Downloads/mprm-master/datasets/Scannet_subcloud.py", line 718, in spatially_regular_gen
    cloud_labels = self.subcloud_labels[data_split][cloud_ind][point_ind][1:]
IndexError: index 2173 is out of bounds for axis 0 with size 30

I think the code in trainer_mb.py and tester_cam.py does not match the code in Scannet_subcloud.py. Please fix this bug.

Questions about 6.2. pseudo label evaluation

I have a question about how the pseudo labels on the validation set in Table 3 are evaluated in the code. I observed that the "validation" data in Scannet_subcloud.py is actually the 1201 training scenes, so generate_pseudo_label.py generates and evaluates pseudo labels for those 1201 training scenes (i.e., "Training" in Table 3). What do I need to comment out or change to evaluate on the actual validation set?
In Scannet_subcloud.py, I tried commenting out the following lines (lines 414-418):
# Get number of clouds
self.input_trees['validation'] = self.input_trees['training']
self.input_colors['validation'] = self.input_colors['training']
self.input_vert_inds['validation'] = self.input_colors['training']
self.input_labels['validation'] = self.input_labels['training']
I also changed the following line (line 447, removing the "not"):
if (not self.load_test) and 'train' in cloud_folder and cloud_name not in self.validation_clouds:
In addition, in generate_pseudo_label.py, I changed validation_size to 312. After making these changes, the following error occurs when running generate_pseudo_label.py:
Traceback (most recent call last):
  File "generate_pseudo_label.py", line 183, in <module>
    test_caller(chosen_log, chosen_snapshot, on_val)
  File "generate_pseudo_label.py", line 131, in test_caller
    tester.test_cloud_segmentation_on_val(model, dataset)
  File "/home/chn/Downloads/Weakly Supervised/mprm-master/utils/tester_cam.py", line 543, in test_cloud_segmentation_on_val
    probs = self.test_probs[i_val][dataset.validation_proj[i_val], :]
IndexError: list index out of range
