
Comments (8)

daerduoCarey avatar daerduoCarey commented on July 17, 2024

Hi, Dmytro,

Thank you for your note. For the training procedure, missing a few data points is okay since we do a random shuffle at the beginning of every epoch. As for the testing procedure in our code, we only use it on the validation set to tune the model hyperparameters. We use a separate program to compute the accuracy across all data points in the testing set and report that number in the paper.

I think this is not a big issue because of the random shuffle we do at the beginning of every training epoch.

@charlesq34 , you can comment on this. What do you think?

Thank you very much!

Bests,
Kaichun

from pointnet.
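To illustrate Kaichun's point, here is a minimal sketch (not the authors' code) of an epoch loop that reshuffles the indices every epoch and drops the incomplete final batch; the examples skipped in one epoch differ from those skipped in the next because of the reshuffle:

```python
import numpy as np

def epoch_batches(num_examples, batch_size, rng):
    """Yield index batches for one epoch: shuffle first, then drop
    the incomplete final batch (the data points "missed" in one
    epoch change every epoch thanks to the fresh shuffle)."""
    order = rng.permutation(num_examples)
    num_batches = num_examples // batch_size  # incomplete tail is dropped
    for b in range(num_batches):
        yield order[b * batch_size:(b + 1) * batch_size]

rng = np.random.default_rng(0)
batches = list(epoch_batches(1900, 1000, rng))
# With 1,900 examples and batch size 1,000, only one full batch fits,
# so 900 indices are skipped this epoch; a different 900 next epoch.
```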

DBobkov avatar DBobkov commented on July 17, 2024

Hi Kaichun,

I see. I am wondering, what batch size did you use for training? For example, in my case, when training PointNet on the Stanford dataset with 1,900 objects and a batch size of 1,000, 900 objects are not seen during training in one epoch (~47% of the dataset). Or am I using batches that are too large? According to Keskar et al., large batch sizes can lead to poor performance.

While we are on the topic of Stanford, did you also observe low classification accuracy of PointNet on noisy, occluded datasets (e.g. 60% at best for Stanford objects, where parts of the objects are often missing)?

Best
Dmytro

from pointnet.

daerduoCarey avatar daerduoCarey commented on July 17, 2024

We used 64, 128, and 256 for the 3D CAD model experiments. As for the scene semantic segmentation task, I defer to @charlesq34.

In your case, I guess you can use 1,024 as the batch size; I think it is fine as long as there is no memory issue for you. If you really have 1,900 data points and use a batch size of 1,000, you are basically randomly sampling 1,000 of the 1,900 data points at every batch, so the concepts of epoch and batch coincide here.

For the Stanford dataset, which dataset do you mean? Are you talking about the building parser dataset?

Thanks.
Kaichun

from pointnet.
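The equivalence Kaichun describes, where each batch over such a small dataset is effectively an independent random sample, can be sketched as follows (a hypothetical helper assuming NumPy, not code from the repository):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_batch(num_examples=1900, batch_size=1000, rng=rng):
    """Draw one batch as a random sample without replacement.
    With 1,900 examples and batch size 1,000, a shuffled-epoch batch
    and this direct random sample behave essentially the same."""
    return rng.choice(num_examples, size=batch_size, replace=False)

idx = random_batch()  # 1,000 distinct indices in [0, 1900)
```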

DBobkov avatar DBobkov commented on July 17, 2024

Dear Kaichun,

got it, thank you.

Yes, the building parser dataset with real point cloud data of indoor objects such as chairs, tables, doors, etc.

Best,
Dmytro

from pointnet.

daerduoCarey avatar daerduoCarey commented on July 17, 2024

We did one experiment using Blensor to simulate partial Kinect-style scans from ShapeNet 3D CAD models. Often the models are ~30-50% occluded, and Kinect-style noise is added. The experiment shows that PointNet still works quite well for both the object classification and part segmentation tasks. The details can be found in the paper.

Thanks.
Kaichun

from pointnet.

DBobkov avatar DBobkov commented on July 17, 2024

Kaichun,

yes, but:

  1. you do not provide any quantitative results on the Blensor-simulated data in Fig. 3, especially for object classification.

  2. it is unclear how exactly you generated the data for Fig. 8 in the supplementary material. Does "one view of the point cloud" refer to the Blensor simulations? How exactly are the points dropped? At random from a uniform distribution? If so, this represents subsampling rather than realistic occlusion.

  3. you do not provide any quantitative results for object classification on the Stanford building parser dataset. My training gives PointNet around 55-60% accuracy, and I was wondering whether this is reasonable.

Because the discussion goes beyond the issue topic, I will close the issue after your answer.

from pointnet.

daerduoCarey avatar daerduoCarey commented on July 17, 2024
  1. I think we provided the quantitative comparison for the part segmentation task; please check page 6, the "3D Object Part Segmentation" section, last paragraph: "Results show that we lose only 5.3% mean IoU." We did not include numbers for the classification task, but I remember one experiment showing that the performance does not drop much, maybe by 3-5% classification accuracy.
  2. Sorry for the confusion, but Fig. 8 is a totally different story from partial point cloud data. Fig. 8 uses the full point cloud, but with points randomly and uniformly dropped out of the input. "One view point cloud" means the full point cloud (after the randomly selected points are dropped) with no rotation applied. By contrast, 12-view means we rotate the point cloud 12 times, by 30 degrees each time.
  3. Yes, you are right. We did not use the Stanford dataset for object classification; we used it mainly for the scene semantic parsing task. I cannot say much about the performance since I did not try it personally, but I guess 60% sounds reasonable. Please make sure that you have pre-processed the data correctly and normalized all objects into unit cubes. It is also quite important to re-train the network on the partial data.

Thanks.

Bests,
Kaichun

from pointnet.
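The two operations discussed above, random uniform point dropout (as in the Fig. 8 robustness test) and normalizing each object into a unit cube before training, can be sketched roughly as follows (hypothetical helpers assuming NumPy, not the PointNet codebase):

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.standard_normal((2048, 3)).astype(np.float32)  # toy point cloud

def random_dropout(points, keep_ratio, rng):
    """Drop points at random from a uniform distribution over indices,
    keeping a fixed fraction of the cloud (the Fig. 8 style test)."""
    n_keep = int(len(points) * keep_ratio)
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

def normalize_unit_cube(points):
    """Center the cloud and scale it to fit inside the unit cube,
    the kind of pre-processing recommended before (re-)training."""
    centered = points - points.mean(axis=0)
    scale = np.abs(centered).max()
    return centered / (2.0 * scale) + 0.5  # coordinates now in [0, 1]

sparse = random_dropout(points, keep_ratio=0.5, rng=rng)  # 1,024 points kept
cube = normalize_unit_cube(sparse)
```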

DBobkov avatar DBobkov commented on July 17, 2024

Kaichun,

thank you, this was helpful!

Best,
Dmytro

from pointnet.
