
regtr's People

Contributors

st235, yewzijian


regtr's Issues

How to get the 3DLoMatch dataset?

I searched on the Internet and only saw the 3DMatch dataset, but did not find 3DLoMatch. Where can I find the 3DLoMatch dataset?
Thanks in advance.

Docker image available?

Hey everyone!

Has anyone by any chance created a Docker Image to run RegTR? I have major issues setting up the environment and even getting the demo to run.

Thank you.

Setting parameter values for training of custom dataset

Hi @yewzijian! Thanks for sharing the codebase for your work. I am trying to train the network on custom data. Going through the configuration, I found that for the feature loss I need to set (r_p, r_n), which according to the paper are (m, 2m), where m is the "voxel distance used in the final downsampling layer in the KPConv backbone". How do I figure out m for my dataset?
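For reference, in KPConv-style backbones the voxel size typically doubles at every downsampling layer, so m can usually be estimated from the base voxel size and the number of downsamplings. The sketch below is illustrative only; the function and field names are assumptions, not this repo's actual config keys.

    # Hypothetical sketch: estimate the final-layer voxel distance m, assuming
    # the voxel size doubles at each downsampling layer (names are illustrative).
    def final_voxel_size(first_subsampling_dl: float, num_downsamples: int) -> float:
        return first_subsampling_dl * (2 ** num_downsamples)

    m = final_voxel_size(0.025, 3)   # e.g. 2.5 cm base voxel, 3 downsamplings -> 0.2 m
    r_p, r_n = m, 2 * m              # (r_p, r_n) = (m, 2m) as in the paper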

About some output data issues

Hello, the dataset I am using is ModelNet40. May I ask whether such a significant difference between the output metrics is expected, or is it a configuration issue? The RRE is too high, but the RTE is too low.

[screenshot of the evaluation output]
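For context, RRE and RTE are usually computed as the geodesic rotation error and the translation difference between the predicted and ground-truth transforms. A minimal sketch of that common definition (the repo's evaluation code may differ in details):

    import numpy as np

    def rre_rte(T_pred: np.ndarray, T_gt: np.ndarray):
        """Relative rotation error in degrees and relative translation error,
        for 4x4 homogeneous transforms (a common definition)."""
        R_pred, t_pred = T_pred[:3, :3], T_pred[:3, 3]
        R_gt, t_gt = T_gt[:3, :3], T_gt[:3, 3]
        cos_theta = np.clip((np.trace(R_gt.T @ R_pred) - 1.0) / 2.0, -1.0, 1.0)
        rre_deg = np.degrees(np.arccos(cos_theta))
        rte = np.linalg.norm(t_pred - t_gt)
        return rre_deg, rte

Note that RRE is in degrees while RTE is in the dataset's coordinate units; ModelNet clouds are usually normalized to roughly unit scale, so a small RTE alongside a large RRE is not necessarily inconsistent.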

About bugs in the reproduction process

Hello, the environment I use is:
Python 3.8.8
PyTorch 1.9.1 with torchvision 0.10.1 (CUDA 11.1)
PyTorch3D 0.6.0
MinkowskiEngine 0.5.4

During reproduction I hit the following error when importing MinkowskiEngine:

    from MinkowskiEngineBackend._C import (
    ImportError: /home/deep/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/MinkowskiEngineBackend/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZNK2at10TensorBase8data_ptrIdEEPT_v
How should I solve this problem?
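An undefined-symbol error like this usually means the compiled MinkowskiEngine extension does not match the PyTorch build it is being loaded against, so reinstalling/rebuilding MinkowskiEngine after the final PyTorch install is the usual remedy. A quick diagnostic (sketch only) to confirm which versions are actually active in the environment:

    # Diagnostic only: confirm that the PyTorch and CUDA versions in the active
    # environment are the ones MinkowskiEngine was built against.
    import torch
    print("torch:", torch.__version__, "cuda:", torch.version.cuda)

    import MinkowskiEngine as ME   # this is the import that currently fails
    print("MinkowskiEngine:", ME.__version__)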

Extension of algorithm

Hello, is this algorithm suitable for underwater point cloud map registration? If so, how should the dataset be processed? And is the dataset unlabeled (i.e. suitable for unsupervised learning)?

compute overlap ratio from source point cloud to target point cloud

Hello. Does the 'overlap' in the train_info/val_info/test_3DLoMatch_info/test_3DMatch_info.pkl files refer to the overlap score from the source point cloud to the target point cloud? Is the 'overlap' needed during training, or is this score only used to decide whether a pair is a low-overlap pair? I am very confused and looking forward to your reply.
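For reference, the overlap of a pair is commonly defined as the fraction of source points that have a target point within a small radius after applying the ground-truth transform. A hedged sketch of that definition (this may not be exactly how the .pkl files were generated, and the threshold is illustrative):

    import numpy as np
    from scipy.spatial import cKDTree

    def overlap_ratio(src, tgt, T_src_to_tgt, thresh=0.0375):
        """Fraction of source points with a target neighbour within `thresh`
        after applying the ground-truth source-to-target transform."""
        src_t = src @ T_src_to_tgt[:3, :3].T + T_src_to_tgt[:3, 3]
        dists, _ = cKDTree(tgt).query(src_t, k=1)
        return float(np.mean(dists < thresh))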

Question about calculating the overlapping regions of source cloud and target cloud

Your work is great. I have recently been debugging your code to compare algorithms, but I noticed what seems to be an ambiguity. In pointcloud.py, when you calculate the overlap region between the source and target point clouds, you first fill tgt_corr and src_corr with -1, and then compute the correspondences: src_corr_is_mutual = np.logical_and(tgt_corr[src_corr] == np.arange(len(src_corr)), src_corr > 0). Shouldn't "src_corr > 0" be "src_corr >= 0"? Indices start from 0, so a source point whose corresponding target point has index 0 is a valid match.
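A toy illustration of the concern (made-up indices, not the repo's data): with the "> 0" test, a valid mutual match at target index 0 is dropped, while the -1 fill value is still what actually needs to be excluded.

    import numpy as np

    src_corr = np.array([0, 2, -1])   # source i -> target index (-1 means no match)
    tgt_corr = np.array([0, -1, 1])   # target j -> source index (-1 means no match)

    mutual_gt  = np.logical_and(tgt_corr[src_corr] == np.arange(len(src_corr)), src_corr > 0)
    mutual_geq = np.logical_and(tgt_corr[src_corr] == np.arange(len(src_corr)), src_corr >= 0)
    print(mutual_gt)   # [False  True False] -- the valid 0 <-> 0 match is lost
    print(mutual_geq)  # [ True  True False]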

A sparse tensor bug

Ubuntu 18.04
RTX 3090
CUDA 11.1
MinkowskiEngine 0.5.4

The following error occurred when I tried to run your model.

(RegTR) ➜ src git:(main) ✗ python test.py --dev --resume ../trained_models/3dmatch/ckpt/model-best.pth --benchmark 3DMatch

/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/__init__.py:36: UserWarning: The environment variable OMP_NUM_THREADS not set. MinkowskiEngine will automatically set OMP_NUM_THREADS=16. If you want to set OMP_NUM_THREADS manually, please export it on the command line before running a python script. e.g. export OMP_NUM_THREADS=12; python your_program.py. It is recommended to set it below 24.
warnings.warn(
/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/_distutils_hack/__init__.py:30: UserWarning: Setuptools is replacing distutils.
warnings.warn("Setuptools is replacing distutils.")
04/23 20:06:22 [INFO] root - Output and logs will be saved to ../logdev
04/23 20:06:22 [INFO] cvhelpers.misc - Command: test.py --dev --resume ../trained_models/3dmatch/ckpt/model-best.pth --benchmark 3DMatch
04/23 20:06:22 [INFO] cvhelpers.misc - Source is from Commit 64e5b3f (2022-03-28): Fixed minor typo in Readme.md and demo.py
04/23 20:06:22 [INFO] cvhelpers.misc - Arguments: benchmark: 3DMatch, config: None, logdir: ../logs, dev: True, name: None, num_workers: 0, resume: ../trained_models/3dmatch/ckpt/model-best.pth
04/23 20:06:22 [INFO] root - Using config file from checkpoint directory: ../trained_models/3dmatch/config.yaml
04/23 20:06:22 [INFO] data_loaders.threedmatch - Loading data from ../data/indoor
04/23 20:06:22 [INFO] RegTR - Instantiating model RegTR
04/23 20:06:22 [INFO] RegTR - Loss weighting: {'overlap_5': 1.0, 'feature_5': 0.1, 'corr_5': 1.0, 'feature_un': 0.0}
04/23 20:06:22 [INFO] RegTR - Config: d_embed:256, nheads:8, pre_norm:True, use_pos_emb:True, sa_val_has_pos_emb:True, ca_val_has_pos_emb:True
04/23 20:06:25 [INFO] CheckPointManager - Loaded models from ../trained_models/3dmatch/ckpt/model-best.pth
0%| | 0/1623 [00:00<?, ?it/s] ** On entry to cusparseSpMM_bufferSize() parameter number 1 (handle) had an illegal value: bad initialization or already destroyed

Traceback (most recent call last):
File "test.py", line 75, in
main()
File "test.py", line 71, in main
trainer.test(model, test_loader)
File "/home/lileixin/work/Point_Registration/RegTR/src/trainer.py", line 204, in test
test_out = model.test_step(test_batch, test_batch_idx)
File "/home/lileixin/work/Point_Registration/RegTR/src/models/generic_reg_model.py", line 132, in test_step
pred = self.forward(batch)
File "/home/lileixin/work/Point_Registration/RegTR/src/models/regtr.py", line 117, in forward
kpconv_meta = self.preprocessor(batch['src_xyz'] + batch['tgt_xyz'])
File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/lileixin/work/Point_Registration/RegTR/src/models/backbone_kpconv/kpconv.py", line 489, in forward
pool_p, pool_b = batch_grid_subsampling_kpconv_gpu(
File "/home/lileixin/work/Point_Registration/RegTR/src/models/backbone_kpconv/kpconv.py", line 232, in batch_grid_subsampling_kpconv_gpu
sparse_tensor = ME.SparseTensor(
File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/MinkowskiSparseTensor.py", line 275, in init
coordinates, features, coordinate_map_key = self.initialize_coordinates(
File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/MinkowskiSparseTensor.py", line 338, in initialize_coordinates
features = spmm_avg.apply(self.inverse_mapping, cols, size, features)
File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/sparse_matrix_functions.py", line 183, in forward
result, COO, vals = spmm_average(
File "/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/sparse_matrix_functions.py", line 93, in spmm_average
result, COO, vals = MEB.coo_spmm_average_int32(
RuntimeError: CUSPARSE_STATUS_INVALID_VALUE at /home/lileixin/MinkowskiEngine/src/spmm.cu:590
(RegTR) ➜ src git:(main) ✗ python test.py --dev --resume ../trained_models/3dmatch/ckpt/model-best.pth --benchmark 3DMatch

/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/MinkowskiEngine-0.5.4-py3.8-linux-x86_64.egg/MinkowskiEngine/__init__.py:36: UserWarning: The environment variable OMP_NUM_THREADS not set. MinkowskiEngine will automatically set OMP_NUM_THREADS=16. If you want to set OMP_NUM_THREADS manually, please export it on the command line before running a python script. e.g. export OMP_NUM_THREADS=12; python your_program.py. It is recommended to set it below 24.
warnings.warn(
/home/lileixin/anaconda3/envs/RegTR/lib/python3.8/site-packages/_distutils_hack/__init__.py:30: UserWarning: Setuptools is replacing distutils.
warnings.warn("Setuptools is replacing distutils.")

But when I comment out the quantization_mode line in this call, the program runs:

    sparse_tensor = ME.SparseTensor(
        features=points,
        coordinates=coord_batched,
        # quantization_mode=ME.SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE
    )
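For anyone hitting the same cuSPARSE error: the per-voxel averaging that UNWEIGHTED_AVERAGE performs can also be reproduced in plain PyTorch as a stopgap (note that simply dropping quantization_mode falls back to MinkowskiEngine's default quantization, which changes the subsampling behaviour slightly). The sketch below is simplified to a single unbatched cloud and uses an illustrative function name; it is not a drop-in replacement for batch_grid_subsampling_kpconv_gpu, which also returns batch lengths.

    import torch

    def grid_subsample_mean(points: torch.Tensor, voxel_size: float) -> torch.Tensor:
        """Average all points falling into the same voxel, (N, 3) -> (M, 3).
        Ignores batching; illustrative only."""
        coords = torch.floor(points / voxel_size).long()
        _, inv, counts = torch.unique(coords, dim=0, return_inverse=True, return_counts=True)
        out = torch.zeros(counts.numel(), points.shape[1],
                          device=points.device, dtype=points.dtype)
        out.index_add_(0, inv, points)
        return out / counts.unsqueeze(1).to(points.dtype)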

Training for custom dataset

Hi @yewzijian,

Thanks for sharing your work. Could you elaborate on how someone could train the model on a custom dataset?

Thanks.

A CUDA Error

Dear Yew & other friends:
I have run the code on (just like in the README):
Python 3.8.8
PyTorch 1.9.1 with torchvision 0.10.1 (CUDA 11.1)
PyTorch3D 0.6.0
MinkowskiEngine 0.5.4
RTX 3090

    But I got the following error:

    Traceback (most recent call last):
      File "train.py", line 88, in <module>
        main()
      File "train.py", line 84, in main
        trainer.fit(model, train_loader, val_loader)
      File "/home/***/codes/RegTR-main/src/trainer.py", line 119, in fit
        losses['total'].backward()
      File "/home/***/enter/envs/regtr/lib/python3.8/site-packages/torch/_tensor.py", line 255, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/***/enter/envs/regtr/lib/python3.8/site-packages/torch/autograd/__init__.py", line 147, in backward
        Variable._execution_engine.run_backward(
    RuntimeError: merge_sort: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered
    


    I have already tried to set os.environ['CUDA_LAUNCH_BLOCKING'] = '1', but it did not work.
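One general CUDA note (not specific to this repo): CUDA_LAUNCH_BLOCKING only takes effect if it is set before the CUDA context is created, so assigning it inside the script after torch has already touched the GPU has no effect, and even when it works it only makes the reported stack trace accurate rather than fixing the illegal access itself. A minimal sketch:

    # The environment variable must be in place before CUDA is initialised,
    # so set it before importing torch (or export it in the shell instead).
    import os
    os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

    import torch  # imported only after the variable is set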

Train BUG, please help me

When I execute the following command:
python train.py --config conf/modelnet.yaml
I got the following error:


Traceback (most recent call last):
  File "train.py", line 85, in <module>
    main()
  File "train.py", line 81, in main
    trainer.fit(model, train_loader, val_loader)
  File "/home/zsy/Code/RegTR-main/src/trainer.py", line 79, in fit
    self._run_validation(model, val_loader, step=global_step,
  File "/home/zsy/Code/RegTR-main/src/trainer.py", line 249, in _run_validation
    val_out = model.validation_step(val_batch, val_batch_idx)
  File "/home/zsy/Code/RegTR-main/src/models/generic_reg_model.py", line 83, in validation_step
    pred = self.forward(batch)
  File "/home/zsy/Code/RegTR-main/src/models/regtr.py", line 117, in forward
    kpconv_meta = self.preprocessor(batch['src_xyz'] + batch['tgt_xyz'])
  File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zsy/Code/RegTR-main/src/models/backbone_kpconv/kpconv.py", line 489, in forward
    pool_p, pool_b = batch_grid_subsampling_kpconv_gpu(
  File "/home/zsy/Code/RegTR-main/src/models/backbone_kpconv/kpconv.py", line 232, in batch_grid_subsampling_kpconv_gpu
    sparse_tensor = ME.SparseTensor(
  File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/MinkowskiEngine/MinkowskiSparseTensor.py", line 275, in __init__
    coordinates, features, coordinate_map_key = self.initialize_coordinates(
  File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/MinkowskiEngine/MinkowskiSparseTensor.py", line 338, in initialize_coordinates
    features = spmm_avg.apply(self.inverse_mapping, cols, size, features)
  File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/MinkowskiEngine/sparse_matrix_functions.py", line 183, in forward
    result, COO, vals = spmm_average(
  File "/home/zsy/anaconda3/envs/REG/lib/python3.8/site-packages/MinkowskiEngine/sparse_matrix_functions.py", line 93, in spmm_average
    result, COO, vals = MEB.coo_spmm_average_int32(
RuntimeError: CUSPARSE_STATUS_INVALID_VALUE at /tmp/pip-req-build-h0w4jzhp/src/spmm.cu:591

My environment is configured as required.
I think the problem might be with the code below:

        sparse_tensor = ME.SparseTensor(
            features=points,
            coordinates=coord_batched,
            quantization_mode=ME.SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE
        )

I can't solve it. Please help me, thanks!
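A minimal repro, independent of RegTR, could help narrow this down (assumed sketch, not from the repo): if the snippet below also raises CUSPARSE_STATUS_INVALID_VALUE, the problem lies in the MinkowskiEngine build / CUDA / GPU combination rather than in the training code.

    import torch
    import MinkowskiEngine as ME

    pts = torch.rand(1000, 3, device='cuda')
    coords = torch.floor(pts / 0.05).int()
    coords_batched = torch.cat(
        [torch.zeros(len(coords), 1, dtype=torch.int32, device='cuda'), coords], dim=1)

    # Same quantization mode as the failing call in kpconv.py
    st = ME.SparseTensor(
        features=pts,
        coordinates=coords_batched,
        quantization_mode=ME.SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE,
    )
    print(st.F.shape)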

gt.info, gt.log, and gt_overlap.log in benchmark

Hello,
1. I would like to know what the 3DMatch and 3DLoMatch folders in src/datasets/3dmatch/benchmarks are for. Are they used for testing (in test.py)?
2. What is the role of gt.info, gt.log, and gt_overlap.log in the 3DMatch and 3DLoMatch benchmark folders (e.g. ../RegTR-main/src/datasets/3dmatch/benchmarks/3DLoMatch/7-scenes-redkitchen)? How do I obtain them when I build my own dataset?

I want to build my own dataset to use with RegTR, but I am stumped by this difficulty. Could you help me? Thanks a lot!
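For reference, gt.log follows the standard 3DMatch/Redwood geometric-registration format: a header line with the two fragment indices and the total number of fragments, followed by the 4x4 transform written over four lines; gt.info typically stores a 6x6 information matrix per pair in a similar layout, and gt_overlap.log lists the overlap ratio per pair. A hedged sketch for writing a gt.log for a custom benchmark (verify the index order and transform direction against one of the provided files before reusing):

    import numpy as np

    def write_gt_log(path: str, pairs, n_fragments: int) -> None:
        """pairs: iterable of (i, j, T) where T is a 4x4 numpy array giving the
        ground-truth transform between fragments i and j (check the direction
        convention against an existing benchmark file)."""
        with open(path, 'w') as f:
            for i, j, T in pairs:
                f.write(f"{i}\t{j}\t{n_fragments}\n")
                for row in T:
                    f.write("\t".join(f"{v:.8f}" for v in row) + "\n")

    # Example: one pair with an identity ground-truth transform
    write_gt_log("gt.log", [(0, 1, np.eye(4))], n_fragments=2)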

Questions about test results

Hello, I would like to commend you on your excellent work!
However, I am writing because I encountered some errors when running tests using the checkpoint 'trained_models/3dmatch/ckpt/model-best.pth' provided here.

[screenshot of the error]

I was wondering if you could kindly provide some assistance in resolving this matter?

Thank you in advance for your help.

Best regards

About the influence of the weak data augmentation

Thanks for the great work. I notice that RegTR adopts a much weaker augmentation than the commonly used augmentation in [1, 2, 3]. How does this affect the convergence of RegTR? And will the weak augmentation affect the robustness to large transformation perturbation? Thank you.

[1] Bai, X., Luo, Z., Zhou, L., Fu, H., Quan, L., & Tai, C. L. (2020). D3feat: Joint learning of dense detection and description of 3d local features. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 6359-6367).
[2] Huang, S., Gojcic, Z., Usvyatsov, M., Wieser, A., & Schindler, K. (2021). Predator: Registration of 3d point clouds with low overlap. In Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition (pp. 4267-4276).
[3] Yu, H., Li, F., Saleh, M., Busam, B., & Ilic, S. (2021). Cofinet: Reliable coarse-to-fine correspondences for robust pointcloud registration. Advances in Neural Information Processing Systems, 34, 23872-23884.
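For concreteness (illustrative only; the exact augmentation schemes in [1-3] and in RegTR differ in their details), the strength of rotation augmentation is usually controlled by the maximum random rotation angle applied to each cloud:

    import numpy as np

    def random_rotation(max_angle_deg: float) -> np.ndarray:
        """Random rotation about a random axis, with angle up to max_angle_deg
        (Rodrigues' formula); larger angles = stronger augmentation."""
        axis = np.random.randn(3)
        axis /= np.linalg.norm(axis)
        theta = np.deg2rad(np.random.uniform(0.0, max_angle_deg))
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

    strong_R = random_rotation(360.0)  # full random rotation (a "strong" setting)
    weak_R = random_rotation(45.0)     # small perturbation (a "weak" setting)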

Simulation data

Hello,

I am trying to use your work to register simulated data.
I started with a simple example. I am working with Webots and ROS.
After getting a point cloud from a LiDAR, I convert it to an Open3D PointCloud object and save it to a .ply file.
I then use the demo file (and the pretrained models that you provided) to register the cloud to itself, expecting an identity transformation.
However, I get this result:

[screenshot of the registration result]

Do you have any idea why this is happening?
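For anyone reproducing this, a minimal sketch (assumed workflow, not the author's code) of writing the converted scan to a .ply file that the demo can read; note also that the pretrained 3DMatch weights were trained on indoor, metre-scale scans, so the scale and density of a simulated LiDAR scan may matter:

    import numpy as np
    import open3d as o3d

    # xyz: (N, 3) float array extracted from the ROS PointCloud2 message
    xyz = np.random.rand(2048, 3).astype(np.float64)   # placeholder for the real scan

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    o3d.io.write_point_cloud("scan.ply", pcd)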

I want to test my own data: how many points can be input at most each time?

I am now using the pre-trained model to predict the transformation.

My two point clouds have 8000 and 7000 points respectively, but when I run the model for inference it reports that memory is exceeded. How should I solve this problem, either in the code or in the data?

My graphics card is an RTX 3090; changing the graphics card is too expensive for me.
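One practical option (not from the repo) is to downsample both clouds before inference so fewer points reach the attention layers. A sketch using Open3D, where the file name and voxel size are placeholders that must be adapted to your data's scale:

    import open3d as o3d

    pcd = o3d.io.read_point_cloud("source.ply")        # hypothetical input file
    pcd_down = pcd.voxel_down_sample(voxel_size=0.03)  # tune to your data's scale
    print(len(pcd.points), "->", len(pcd_down.points))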

CUDA out of memory

Hi. When using the model you trained to validate my dataset in demo.py, and also when training my own dataset, the same error occurred: CUDA out of memory. Tried to allocate 874.00 MiB (GPU 0; 7.79 GiB total capacity; 3.85 GiB already allocated; 695.06 MiB free; 5.04 GiB reserved in total by PyTorch). My dataset only has 17 pairs of point clouds, and I have reduced the batch_size and base_lr, but the error persists. Is there any other way besides replacing the graphics card? Looking forward to your reply.

Is it possible to remove Minkowski Engine?

The last MinkowskiEngine release was in May 2021, and its dependencies might not be easily met with newer software and hardware. I found it impossible to train RegTR on my machine because of a CUDA memory problem to which I found no solution. Without Minkowski, I would have more freedom in choosing the versions of PyTorch and everything else, and therefore a better chance of solving this problem.
I am a SLAM/C++ veteran and a deep learning/Python newbie (I started learning deep learning two weeks ago), so it is hard for me to modify it myself for now. I was wondering if you could be so kind as to release a version of RegTR without Minkowski.
