Comments (45)

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Thanks a lot! Now I have encountered another difficulty. When I ran train.py on my own dataset, I obtained a model and ran demo.py; the result is as follows: (screenshot, 2023-06-27 15-12-51)
But in fact, the actual matching between the two point cloud frames is as follows:
(screenshot, 2023-06-27 15-22-44)
After modifying the following parameters, the program runs normally:
batch_size: 2 -> 1, first_subsampling_dl: 0.025 -> 0.2, base_lr: 0.0001 -> 0.00005, epoch: 60, num_workers: 0
But the matching quality is not satisfactory. Could you give me some advice?
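In code form, the overrides boil down to something like this (just a summary sketch; how these keys map onto the actual RegTR config files is my assumption):

```python
# Hypothetical summary of the settings changed relative to the defaults;
# key names mirror the options listed above, not necessarily the real config structure.
overrides = {
    'batch_size': 1,              # was 2
    'first_subsampling_dl': 0.2,  # was 0.025 (voxel size at the finest level)
    'base_lr': 5e-5,              # was 1e-4
    'epoch': 60,
    'num_workers': 0,
}
```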

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

It’s hard to tell from this image alone. Does it happen on the training point clouds? If not, it might be an overfitting issue due to lack of data.
Also, the hierarchical KPConv used in this work requires some amount of tuning. You might want to make sure that ample points fall within each ball neighborhood. In addition, REGTR works well when the number of keypoints at the coarsest level is around 500.
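One rough way to sanity-check this, assuming the subsampling voxel roughly doubles at each backbone level (an assumption, not the repository's own subsampling code), is to voxel-grid downsample your cloud and count the surviving points per level:

```python
import numpy as np

def count_after_grid_subsampling(points: np.ndarray, voxel_size: float) -> int:
    """Number of points left after voxel-grid subsampling (one point per occupied voxel)."""
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(voxel_idx, axis=0).shape[0]

points = np.loadtxt('my_scan.xyz')   # placeholder: (N, 3) array of one of your scans
first_subsampling_dl = 0.025
num_levels = 4                        # assumed number of downsampling stages; adjust to your config
for level in range(num_levels):
    dl = first_subsampling_dl * (2 ** level)
    n = count_after_grid_subsampling(points, dl)
    print(f"level {level}: voxel = {dl:.3f} -> {n} points")
# Aim for roughly 500 points at the coarsest level by adjusting first_subsampling_dl.
```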

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Thank you! Yes, this graph visualizes the alignment on the test dataset using the model obtained from the training dataset (i.e. by running the demo.py code). At a minimum, how many pairs of point clouds do I need in order to use this algorithm on my own dataset?

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

So does the number of point cloud pairs in the dataset need to be on the order of ten thousand?

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Hello, I noticed that when I delete the gt.log and gt-info files in ../RegTR-main/src/datasets/3dmatch/benchmarks/3DMatch/7-scenes-redkitchen, running the test.py program reports an error, so I guess the gt.log and gt-info files are needed. But I don't know what they are for.

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

Hi, you need the groundtruth files when you're evaluating the algorithm.

For example, here the groundtruth poses are loaded so that you can compute the errors in the following lines.
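For reference, gt.log follows the standard 3DMatch/Redwood trajectory format: each entry is a header line with the two fragment indices (plus the total number of fragments), followed by the 4x4 groundtruth transformation on the next four lines. A minimal parser sketch (not the repository's own loader):

```python
import numpy as np

def read_log(path):
    """Parse a 3DMatch-style .log file into {(src_idx, tgt_idx): 4x4 transform}."""
    with open(path) as f:
        lines = [l.strip() for l in f if l.strip()]
    poses = {}
    for k in range(0, len(lines), 5):
        i, j, _ = map(int, lines[k].split())  # fragment indices + total fragment count
        mat = np.array([[float(x) for x in lines[k + r].split()] for r in range(1, 5)])
        poses[(i, j)] = mat                   # 4x4 homogeneous groundtruth transform
    return poses
```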

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Hello, I found that when running test.py, the terminal output format is defined by benchmark_prepator.py, which requires the gt.log and gt-info files.
(screenshot)

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Hello, the gt.log file stores the transformation matrix (i.e. the groundtruth) between two point clouds, but when I visualized the 3DMatch.pkl file, I found that its transformation matrix for the same pair of point cloud keyframes is not the same as the one in the gt.log file. Shouldn't they be the same?

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

The following are the values of the rotation matrix for the same pair of point clouds in the same scene, taken from the gt.log file and the test_3DMatch_Info.pkl file respectively:
(two screenshots)

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

Good observation. I don't have an answer to this since I took the files from Predator's repository.
However, this will not affect the 3DMatch benchmark results since the groundtruth poses are read from the gt.log files, as you noted above.

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Thank you for your answer. I want to know how to generate a gt.log file if I use my own dataset. Do I just need to know the transformation matrix between the two point clouds? Is the transformation matrix in the gt.log file the groundtruth?

That is to say, if I want to evaluate this algorithm, all I need is a gt.log file, as used in benchmark_3dmatch.py, and I need to modify the test.py file based on my own benchmark.py program.
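If it helps, writing such a file for your own pairs is just the reverse of reading it; a sketch assuming you already have the groundtruth 4x4 transform for each pair (the file layout follows the standard 3DMatch .log convention):

```python
import numpy as np

def write_log(path, pairs, num_fragments):
    """Write {(src_idx, tgt_idx): 4x4 groundtruth transform} in the 3DMatch .log format."""
    with open(path, 'w') as f:
        for (i, j), transform in pairs.items():
            f.write(f"{i}\t{j}\t{num_fragments}\n")
            for row in np.asarray(transform):
                f.write("\t".join(f"{v:.8f}" for v in row) + "\n")

# Example with a single pair whose groundtruth happens to be the identity.
write_log("gt.log", {(0, 1): np.eye(4)}, num_fragments=2)
```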

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

And I have another question: how can I get the est.log file?
Is the est.log file generated from the trained model?

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Hello, I would like to ask: how do the point cloud density and scale of the KITTI dataset compare with those of the 3DMatch dataset?

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

But shouldn't some parameters in kernel point convolutional networks be changed based on the density and scale of the point cloud? For example: conv_radius, deform_radius, KP_extent, neighborhood_limits, first_subsampling_dl and overlap_radius.

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Thanks a lot! But I have another question: how should I change the following parameters to fit my dataset, and what are the criteria for changing them? For example, r_p and r_n.
(screenshot)

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

As stated in the comments, setting r_p and r_n to 1.0x and 2.0x of the voxel size at the coarsest level works well. feature_loss_on works well enough when we set it at the coarsest level alone. The training weights wt_feature and wt_feature_un work well at their current settings, so there's usually no need to tweak them.

For the KPConv parameters, I recommend reading its paper to get an intuition for how to set them. Nevertheless, RegTR works well when there are around 500 points at the coarsest level, where attention is applied.
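To make the r_p / r_n suggestion concrete, a small worked example, assuming the voxel size doubles at each downsampling stage of the backbone (the number of stages below is an assumption; adjust it to your config):

```python
first_subsampling_dl = 0.025   # finest voxel size (3DMatch default)
num_stages = 4                 # assumed number of downsampling stages in the backbone

coarsest_dl = first_subsampling_dl * (2 ** (num_stages - 1))  # 0.2
r_p = 1.0 * coarsest_dl        # positive radius ~ 1.0x coarsest voxel size -> 0.2
r_n = 2.0 * coarsest_dl        # negative radius ~ 2.0x coarsest voxel size -> 0.4
print(coarsest_dl, r_p, r_n)
```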

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Hi, how can I check the quality of the model during the training process?

During training, should I check the changes in the loss values and these metrics? For this paper, I only need to pay attention to total, reg_success_final, rot_err_deg_final and trans_err_final, right?
(screenshot)

The higher 'total', the better. The higher 'reg_success_final', the better. The smaller 'rot_err_deg_final' and 'trans_err_final', the better. I have a question: do these metrics (rot_err_deg_final and trans_err_final) of the best model during training need to be less than reg_success_thresh_rot and reg_success_thresh_trans?
Which of these metrics and loss values should I prioritize?
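For context, registration success is commonly defined by thresholding the rotation and translation errors against the groundtruth, which is presumably what reg_success_thresh_rot and reg_success_thresh_trans control; a sketch of that criterion (the threshold values below are assumed, not taken from the repository):

```python
import numpy as np

def registration_errors(T_est, T_gt):
    """Rotation error (degrees) and translation error between two 4x4 transforms."""
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]
    rot_err_deg = np.degrees(np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)))
    trans_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    return rot_err_deg, trans_err

# A pair counts as successfully registered when both errors fall below the thresholds.
rot_err, trans_err = registration_errors(np.eye(4), np.eye(4))
success = (rot_err < 10.0) and (trans_err < 0.1)   # e.g. 10 degrees, 0.1 m (assumed values)
print(success)
```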

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Hello, I have another question: can this algorithm only be applied to two point clouds of exactly the same size? Is it feasible if the sizes of the two point clouds are not exactly the same? Looking forward to your reply.

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Thanks a lot! Is it feasible to replace the KPConv backbone network with a PointNet network? Looking forward to your reply.

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

I didn’t try that out, but shouldn’t be a problem. Of course you’ll have to retrain the network in this case.
from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Hello, I found that during training the only evaluation metric for this algorithm is the loss function value, but during validation there are both loss function values and other evaluation metrics, such as reg_success_final, rot_err_deg_final and trans_err_final. Why is this?

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

"I didn’t try that out, but shouldn’t be a problem. Of course you’ll have to retrain the network in this case."

Thanks! But I'm worried that the matching results may not be good, because theoretically PointNet does not perform as well as kernel point convolution, and PointNet only learns global features, not local features. Perhaps the PointNet++ network will perform better?

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Yes, I would like to run a PointNet network on both the source and target point clouds to extract point cloud features, in place of the kernel point convolution part of your algorithm. In this case, are PointNet and PointNet++ similar? Could I use either of them?
In addition, I have found that kernel point convolutional networks not only extract point cloud features but also downsample the point clouds. Therefore, do I need to use a PointNet or PointNet++ network with pooling layers in place of the kernel point convolutional network?

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

OK! Thanks a lot!

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Hello, I found that when I load the trained model, I can output the transformation matrix between two point clouds. However, when the model is given two point clouds that do not overlap, a transformation matrix is still output. So, how can I determine whether two point clouds overlap simply by looking at the output transformation matrix?

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Hello, I have a question: in this algorithm, the role of kernel point convolution is to downsample the point cloud and extract its local features. Is the role of the transformer to extract the global features of the point cloud? Looking forward to your reply.

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Could traditional convolutional networks not enhance the individual point features by letting them interact with each other? Or is the performance of traditional convolutions slightly inferior to that of transformers?

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Sorry to bother you again, but I would like to ask why this algorithm cannot directly use a transformer to extract features, and instead adds a kernel point convolution.

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Thanks a lot! That is to say, do we only need a network that downsamples the point cloud before using the transformer? Or does it need to be a network that performs both point cloud downsampling and feature extraction?

from regtr.

yewzijian avatar yewzijian commented on August 21, 2024

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Thanks a lot!

from regtr.

lcxiha avatar lcxiha commented on August 21, 2024

Hello, I have found that the 3DMatch dataset scenes are approximately 3 m x 2 m. If the dataset is 200 m x 100 m or even larger, is this algorithm framework still applicable? Looking forward to your reply.

from regtr.
