
nerf-loam's People

Contributors

chen-xieyuanli, junyuandeng


nerf-loam's Issues

Sharing CUDA tensors

Hi,
Thanks for your excellent work. When I run "python demo/run.py configs/kitti/kitti_00.yaml" I hit the issue below. I set "end_frame = 20" in configs/kitti/kitti_00.yaml. My PyTorch version is 1.10 and my CUDA version is 11.1. Could you tell me how to solve this issue?

PatchWorkpp::PatchWorkpp() - INITIALIZATION COMPLETE
Decoder(
  (pe): Same()
  (pts_linears): ModuleList(
    (0): Linear(in_features=16, out_features=256, bias=True)
    (1): Linear(in_features=256, out_features=256, bias=True)
  )
  (sdf_out): Linear(in_features=256, out_features=1, bias=True)
)
******* initializing first_frame: 0
initializing the first frame ...
mapping process started!!!!!!!!!
frame id 1
trans  tensor([0., 0., 0.], device='cuda:0')
insert keyframe
PatchWorkpp::PatchWorkpp() - INITIALIZATION COMPLETE
******* tracking process started! *******
tracking frame:   5%|█▍                          | 1/20 [00:10<03:18, 10.47s/it]/home/jyzhang/anaconda3/envs/nerfloam/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /opt/conda/conda-bld/pytorch_1634272128894/work/aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
(1, 4, 4)
tracking frame:  10%|██▊                         | 2/20 [00:12<01:40,  5.59s/it]frame id 2
trans  tensor([0.6877, 0.0027, 0.0078], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  15%|████▏                       | 3/20 [00:14<01:05,  3.86s/it]frame id 3
trans  tensor([1.3820, 0.0112, 0.0105], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  20%|█████▌                      | 4/20 [00:16<00:49,  3.09s/it]frame id 4
trans  tensor([2.1094, 0.0249, 0.0096], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  25%|███████                     | 5/20 [00:18<00:40,  2.73s/it]frame id 5
trans  tensor([2.8489, 0.0413, 0.0128], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  30%|████████▍                   | 6/20 [00:20<00:35,  2.53s/it]frame id 6
trans  tensor([3.5876, 0.0570, 0.0211], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  35%|█████████▊                  | 7/20 [00:22<00:31,  2.40s/it]frame id 7
trans  tensor([4.3582, 0.0895, 0.0347], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  40%|███████████▏                | 8/20 [00:24<00:28,  2.35s/it]frame id 8
trans  tensor([5.1436, 0.1161, 0.0417], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  45%|████████████▌               | 9/20 [00:27<00:25,  2.34s/it]frame id 9
trans  tensor([5.9294, 0.1443, 0.0319], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  50%|█████████████▌             | 10/20 [00:29<00:23,  2.35s/it]frame id 10
trans  tensor([6.7445, 0.1803, 0.0405], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  55%|██████████████▊            | 11/20 [00:31<00:20,  2.33s/it]frame id 11
trans  tensor([7.5590, 0.2162, 0.0498], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  60%|████████████████▏          | 12/20 [00:34<00:19,  2.38s/it]frame id 12
trans  tensor([8.3817, 0.2561, 0.0583], device='cuda:0', grad_fn=<SubBackward0>)
insert keyframe
********** current num kfs: 2 **********
tracking frame:  65%|█████████████████▌         | 13/20 [00:36<00:16,  2.39s/it]frame id 13
trans  tensor([9.2186, 0.2980, 0.0541], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  70%|██████████████████▉        | 14/20 [00:39<00:14,  2.41s/it]frame id 14
trans  tensor([10.0634,  0.3389,  0.0636], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  75%|████████████████████▎      | 15/20 [00:41<00:12,  2.42s/it]frame id 15
trans  tensor([10.9102,  0.3870,  0.0693], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  80%|█████████████████████▌     | 16/20 [00:44<00:09,  2.47s/it]frame id 16
trans  tensor([11.7786,  0.4310,  0.0806], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  85%|██████████████████████▉    | 17/20 [00:46<00:07,  2.53s/it]frame id 17
trans  tensor([12.6559,  0.4845,  0.0818], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  90%|████████████████████████▎  | 18/20 [00:49<00:05,  2.57s/it]frame id 18
trans  tensor([13.5310,  0.5446,  0.0908], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame:  95%|█████████████████████████▋ | 19/20 [00:52<00:02,  2.58s/it]frame id 19
trans  tensor([14.4132,  0.6029,  0.0900], device='cuda:0', grad_fn=<SubBackward0>)
tracking frame: 100%|███████████████████████████| 20/20 [00:54<00:00,  2.75s/it]
========== stop_mapping set ==========
******* tracking process died *******
frame id 20
trans  tensor([15.3066,  0.6636,  0.1000], device='cuda:0', grad_fn=<SubBackward0>)
frame id 21
trans  tensor([16.2136,  0.7190,  0.1097], device='cuda:0', grad_fn=<SubBackward0>)
******* extracting mesh without replay *******
********** post-processing steps **********
 post-processing steps: 100%|█████████████████████| 3/3 [00:06<00:00,  2.17s/it]
******* extracting final mesh *******
(21, 4, 4)
******* mapping process died *******
[W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
[W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
/home/jyzhang/anaconda3/envs/nerfloam/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 3 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
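The two CudaIPCTypes warnings at the end refer to PyTorch's "Sharing CUDA tensors" note: the process that creates a CUDA tensor and hands it to another process must stay alive, and be joined, until every consumer is done with the tensor. Below is a minimal, hypothetical sketch of that pattern with plain torch.multiprocessing; it only illustrates the rule the warning points to and is not NeRF-LOAM's actual mapper/tracker code.

import torch
import torch.multiprocessing as mp

def consumer(queue):
    # receives a handle to the shared CUDA tensor and uses it
    # while the producer process is still alive
    t = queue.get()
    print(t.sum().item())

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    queue = mp.Queue()
    p = mp.Process(target=consumer, args=(queue,))
    p.start()
    shared = torch.zeros(4, device="cuda")  # this process is the producer
    queue.put(shared)
    p.join()  # join all consumers before the producer exits; skipping this
              # is what triggers "Producer process has been terminated ..."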

AttributeError: module 'grid' has no attribute 'svo_intersect'


PatchWorkpp::PatchWorkpp() - INITIALIZATION COMPLETE
Decoder(
  (pe): Same()
  (pts_linears): ModuleList(
    (0): Linear(in_features=16, out_features=256, bias=True)
    (1): Linear(in_features=256, out_features=256, bias=True)
  )
  (sdf_out): Linear(in_features=256, out_features=1, bias=True)
)
******* initializing first_frame: 0
initializing the first frame ...
mapping process started!!!!!!!!!
frame id 1
trans  tensor([0., 0., 0.], device='cuda:0')
insert keyframe
Process Process-2:
Traceback (most recent call last):
  File "/home/spacex/Data/noetic_cudagl_workspace/Study/anaconda3/envs/nerfloam/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/spacex/Data/noetic_cudagl_workspace/Study/anaconda3/envs/nerfloam/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/spacex/Data/noetic_cudagl_workspace/Study/NeRF-LOAM/src/mapping.py", line 110, in spin
    self.do_mapping(share_data, tracked_frame, selection_method='current')
  File "/home/spacex/Data/noetic_cudagl_workspace/Study/NeRF-LOAM/src/mapping.py", line 184, in do_mapping
    bundle_adjust_frames(
  File "/home/spacex/Data/noetic_cudagl_workspace/Study/NeRF-LOAM/src/variations/render_helpers.py", line 394, in bundle_adjust_frames
    final_outputs = render_rays(
  File "/home/spacex/Data/noetic_cudagl_workspace/Study/NeRF-LOAM/src/variations/render_helpers.py", line 211, in render_rays
    intersections, hits = ray_intersect(
  File "/home/spacex/Data/noetic_cudagl_workspace/Study/anaconda3/envs/nerfloam/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/spacex/Data/noetic_cudagl_workspace/Study/NeRF-LOAM/src/variations/voxel_helpers.py", line 534, in ray_intersect
    pts_idx, min_depth, max_depth = svo_ray_intersect(
  File "/home/spacex/Data/noetic_cudagl_workspace/Study/NeRF-LOAM/src/variations/voxel_helpers.py", line 110, in forward
    inds, min_depth, max_depth = _ext.svo_intersect(
AttributeError: module 'grid' has no attribute 'svo_intersect'
^CTraceback (most recent call last):
  File "/home/spacex/Data/noetic_cudagl_workspace/Study/NeRF-LOAM/demo/run.py", line 26, in <module>
    slam.start()
  File "/home/spacex/Data/noetic_cudagl_workspace/Study/NeRF-LOAM/src/nerfloam.py", line 45, in start
    sleep(20)
KeyboardInterrupt

Hi, I ran into this error. Which version of the grid lib are you using?

Training procedure

The idea of a neural SDF for localisation and mapping is interesting. Can you please elaborate on how the training process is performed? Which dataset did you use to train the network? And when you say "without pre-training," do you mean that the network trained on dataset X was used for testing on the other datasets? I couldn't find this part in the paper.

python demo/run.py configs/kitti/kitti_00.yaml

Hello, I got the following error when running run.py. Do you know the specific cause?

(torch) abin@abin:~/NeRF-LOAM$ python demo/run.py configs/kitti/kitti_00.yaml
PatchWorkpp::PatchWorkpp() - INITIALIZATION COMPLETE
Decoder(
  (pe): Same()
  (pts_linears): ModuleList(
    (0): Linear(in_features=16, out_features=256, bias=True)
    (1): Linear(in_features=256, out_features=256, bias=True)
  )
  (sdf_out): Linear(in_features=256, out_features=1, bias=True)
)
******* initializing first_frame: 0
Bus error (core dumped)

Continuity

Does your work make use of the continuous-representation idea of NeRF?

CUDA out of memory when running on sequence 00 of maicity

The OOM error happens when I run the command: python demo/run.py configs/maicity/maicity_00.yaml. Since I ran the code on an RTX 4090 GPU with 24 GB of memory available, OOM shouldn't happen according to your README. What confuses me most is that there is still memory available when the OOM occurs. The output is shown below:

RuntimeError: CUDA out of memory. Tried to allocate 5.54 GiB (GPU 0; 23.65 GiB total capacity; 1.91 GiB already allocated; 5.50 GiB free; 1.92 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

It is also worth noting that the OOM tends to happen around frame 246/699 of maicity sequence 00.
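For what it's worth, below is a minimal sketch of the mitigation the error message itself suggests (setting max_split_size_mb to reduce allocator fragmentation). The value 128 is an arbitrary example, not a recommended setting, and this only helps when memory is free but fragmented, as reported here.

import os

# must be set before the first CUDA allocation, e.g. at the very top of demo/run.py
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch  # imported only after the allocator config is in place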

cuda out of memory

Hello, thank you for your great work. I hit a CUDA OOM (out of memory) error at 20 frames when running the demo on a Titan XP graphics card. How much GPU memory would be needed, at a minimum, to run the demo without OOM?

The Process Freezes When Dealing with a Large Number of Frames

Hello,
I tried running the demo on a full KITTI sequence (more than 1000 frames) and found that the whole process often hangs for no apparent reason after running for a while; this does not happen when only a small number of frames is processed.
The hang mainly occurs during the post-processing steps. There is no error message after it hangs, and checking with ps -ef shows the process itself has not exited. The last few lines of the runtime log are below.

********** current num kfs: 20 **********
frame id 613
trans  tensor([-68.3958,  15.3260,  -1.3550], device='cuda:0', grad_fn=<SubBackward0>)
frame id 614
trans  tensor([-69.7998,  15.3240,  -1.3688], device='cuda:0', grad_fn=<SubBackward0>)
frame id 615
trans  tensor([-71.2139,  15.3187,  -1.3821], device='cuda:0', grad_fn=<SubBackward0>)
frame id 616
trans  tensor([-72.5929,  15.3241,  -1.3938], device='cuda:0', grad_fn=<SubBackward0>)
frame id 617
trans  tensor([-73.9971,  15.3187,  -1.4026], device='cuda:0', grad_fn=<SubBackward0>)
frame id 618
trans  tensor([-75.4045,  15.3136,  -1.4108], device='cuda:0', grad_fn=<SubBackward0>)
insert keyframe
********** current num kfs: 21 **********
frame id 619
trans  tensor([-76.7938,  15.3104,  -1.4320], device='cuda:0', grad_fn=<SubBackward0>)
frame id 620
trans  tensor([-78.1841,  15.2990,  -1.4462], device='cuda:0', grad_fn=<SubBackward0>)
frame id 621
trans  tensor([-79.5537,  15.2961,  -1.4579], device='cuda:0', grad_fn=<SubBackward0>)
********** post-processing steps **********

  0%|          | 0/22 [00:00<?, ?it/s]
 post-processing steps:   0%|          | 0/22 [00:00<?, ?it/s]

The command I ran is below.

python demo/run.py configs/kitti/kitti_06.yaml

For visualization, I made the following change to kitti.yaml.

debug_args:
  mesh_freq: 10

Thanks for your help!

AttributeError: module 'grid' has no attribute 'svo_intersect'

PatchWorkpp::PatchWorkpp() - INITIALIZATION COMPLETE
Decoder(
  (pe): Same()
  (pts_linears): ModuleList(
    (0): Linear(in_features=16, out_features=256, bias=True)
    (1): Linear(in_features=256, out_features=256, bias=True)
  )
  (sdf_out): Linear(in_features=256, out_features=1, bias=True)
)
******* initializing first_frame: 0
initializing the first frame ...
mapping process started!!!!!!!!!
frame id 1
trans tensor([0., 0., 0.], device='cuda:0')
insert keyframe
Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/NeRF-LOAM/src/mapping.py", line 107, in spin
    self.do_mapping(share_data, tracked_frame, selection_method='current')
  File "/home/NeRF-LOAM/src/mapping.py", line 179, in do_mapping
    bundle_adjust_frames(
  File "/home/NeRF-LOAM/src/variations/render_helpers.py", line 394, in bundle_adjust_frames
    final_outputs = render_rays(
  File "/home/NeRF-LOAM/src/variations/render_helpers.py", line 211, in render_rays
    intersections, hits = ray_intersect(
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/NeRF-LOAM/src/variations/voxel_helpers.py", line 534, in ray_intersect
    pts_idx, min_depth, max_depth = svo_ray_intersect(
  File "/home/NeRF-LOAM/src/variations/voxel_helpers.py", line 110, in forward
    inds, min_depth, max_depth = _ext.svo_intersect(
AttributeError: module 'grid' has no attribute 'svo_intersect'
Hi, I encountered this problem when running python demo/run.py configs/kitti/kitti_00.yaml. I have seen the same problem reported on GitHub, but that solution didn't work for me. Is there another possible reason for this?

RuntimeError: CUDA out of memory.

Hello, I am using a 6 GB GPU. I still get the error after reducing chunk_size to anywhere from chunk_size//10 down to chunk_size//10000. Is there any other solution?

$ python demo/run.py configs/maicity/maicity_01.yaml
PatchWorkpp::PatchWorkpp() - INITIALIZATION COMPLETE
Decoder(
  (pe): Same()
  (pts_linears): ModuleList(
    (0): Linear(in_features=16, out_features=256, bias=True)
    (1): Linear(in_features=256, out_features=256, bias=True)
  )
  (sdf_out): Linear(in_features=256, out_features=1, bias=True)
)
******* initializing first_frame: 0
initializing the first frame ...
mapping process started!!!!!!!!!
frame id 1
trans  tensor([0., 0., 0.], device='cuda:0')
insert keyframe
PatchWorkpp::PatchWorkpp() - INITIALIZATION COMPLETE
******* tracking process started! *******
tracking frame:   0%|                                    | 0/99 [00:00<?, ?it/s]Process Process-2:
Traceback (most recent call last):
  File "/home/brosy/anaconda3/envs/NERF_LOAM/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/brosy/anaconda3/envs/NERF_LOAM/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/brosy/projects/NERF_LOAM/NeRF-LOAM/src/mapping.py", line 112, in spin
    self.do_mapping(share_data, tracked_frame, selection_method='current')
  File "/home/brosy/projects/NERF_LOAM/NeRF-LOAM/src/mapping.py", line 184, in do_mapping
    bundle_adjust_frames(
  File "/home/brosy/projects/NERF_LOAM/NeRF-LOAM/src/variations/render_helpers.py", line 395, in bundle_adjust_frames
    final_outputs = render_rays(
  File "/home/brosy/projects/NERF_LOAM/NeRF-LOAM/src/variations/render_helpers.py", line 211, in render_rays
    intersections, hits = ray_intersect(
  File "/home/brosy/anaconda3/envs/NERF_LOAM/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/brosy/projects/NERF_LOAM/NeRF-LOAM/src/variations/voxel_helpers.py", line 534, in ray_intersect
    pts_idx, min_depth, max_depth = svo_ray_intersect(
  File "/home/brosy/projects/NERF_LOAM/NeRF-LOAM/src/variations/voxel_helpers.py", line 108, in forward
    children = children.expand(S * G, *children.size()).contiguous()
RuntimeError: CUDA out of memory. Tried to allocate 516.00 MiB (GPU 0; 5.81 GiB total capacity; 189.92 MiB already allocated; 519.38 MiB free; 204.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
tracking frame:   1%|▎                           | 1/99 [00:07<11:55,  7.30s/it]
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.223.02   Driver Version: 470.223.02   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   35C    P8    10W /  N/A |   4700MiB /  5946MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1457      G   /usr/lib/xorg/Xorg                  4MiB |
|    0   N/A  N/A      2178      G   /usr/lib/xorg/Xorg                  4MiB |
|    0   N/A  N/A     43816      C   python                           1505MiB |
|    0   N/A  N/A     43877      C   ...envs/NERF_LOAM/bin/python     1505MiB |
|    0   N/A  N/A     47081      C   ...envs/NERF_LOAM/bin/python     1677MiB |
+-----------------------------------------------------------------------------+

NaN Result in OptimizablePose Class with Orthogonal Matrix of Determinant -1

Hi,
I encountered an issue with the OptimizablePose class in the se3pose.py file, where it returns a NaN result for the rotation matrix. This occurs even though the upper-left 3x3 block of the input matrix is orthogonal with a determinant of -1, which I expected to be valid input.

Steps to reproduce:
I used the following matrix as input with the code provided in the se3pose.py file.

before = torch.tensor([[-0.6376737, -0.07767385, -0.76638047, 2.1000],
                       [-0.2046487, -0.94206723,  0.26575975, 2.0000],
                       [ 0.74262451, -0.32630677, -0.58483565, 0.8900],
                       [ 0.0000,      0.0000,      0.0000,     1.0000]])

Upon passing this matrix to the OptimizablePose class, the output for the rotation matrix is NaN, which is unexpected given the input's properties.

Expected behavior:
The class should process an orthogonal 3x3 matrix with a determinant of -1 without producing NaN values, considering the mathematical properties of rotations and transformations.
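For reference, a small sanity-check sketch is below. It only re-uses the matrix quoted above; the comment about the NaN mechanism is my reading of standard quaternion extraction, not the repo's documented behaviour. An orthogonal 3x3 block with determinant -1 is a reflection rather than a proper rotation (proper rotations have determinant +1), which is the likely reason the conversion in OptimizablePose yields NaN.

import torch

before = torch.tensor([[-0.6376737, -0.07767385, -0.76638047, 2.1000],
                       [-0.2046487, -0.94206723,  0.26575975, 2.0000],
                       [ 0.74262451, -0.32630677, -0.58483565, 0.8900],
                       [ 0.0000,      0.0000,      0.0000,     1.0000]])

R = before[:3, :3]
print(torch.det(R))   # ~ -1.0: orthogonal but a reflection, not a proper rotation
print(R @ R.T)        # ~ identity: confirms orthogonality
# Standard quaternion extraction uses sqrt(1 + trace(R)); here the trace is
# about -2.16, so the sqrt argument is negative and the result becomes NaN.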

A question about Patchwork++

Hello, thank you very much for your excellent work. I ran into some problems while deploying this project. I have installed Patchwork++ and its demo code produces segmentation results, and I changed patchwork_module_path = "/disk2/wh/3rd/patchwork-plusplus/python_wrapper" in all .py files under src/dataset/ (because I did not find python_wrapper under build; python_wrapper is inside patchwork-plusplus). I am not sure whether Patchwork++ should be installed in the base environment or in the nerf-loam virtual environment. I currently installed it in the base environment, because installing Patchwork++ inside the nerf-loam environment reports that X11 is missing, even though X11 is definitely installed on my system; I am not sure whether that is the cause. I am on Ubuntu 22. I hope you can help me solve this problem.
(nerf_loam) wh@CFN-Titan-Server:~/code/NeRF-LOAM$ python demo/run.py configs/kitti/kitti_00.yaml
Traceback (most recent call last):
  File "demo/run.py", line 25, in <module>
    slam = nerfloam(args)
  File "/disk2/wh/code/NeRF-LOAM/src/nerfloam.py", line 31, in __init__
    self.data_stream = get_dataset(args)
  File "/disk2/wh/code/NeRF-LOAM/src/utils/import_util.py", line 5, in get_dataset
    Dataset = import_module("dataset."+args.dataset)
  File "/disk2/wh/anaconda3/envs/nerf_loam/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/disk2/wh/code/NeRF-LOAM/src/dataset/kitti.py", line 12, in <module>
    import pypatchworkpp
ModuleNotFoundError: No module named 'pypatchworkpp'
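For reference, a minimal, hypothetical sketch of how such a module path is usually made importable is below. It assumes patchwork_module_path is meant to point at the directory that actually contains the compiled pypatchworkpp extension; the path shown is the one quoted in the issue, not a verified location.

import sys

# hypothetical: directory expected to contain pypatchworkpp*.so after the build
patchwork_module_path = "/disk2/wh/3rd/patchwork-plusplus/python_wrapper"
sys.path.insert(0, patchwork_module_path)

import pypatchworkpp  # resolves only if the compiled module really lives there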

What does the parameter "step size ratio" mean?

Hi there, thank you for your impressive work, which I find very interesting.
I am reading your paper and have some confusion about the sampling strategy along the ray.
Specifically, in Section 5.1, Implementation details:
"For sampling, we set the step size ratio to 0.2 for odometry and 0.5 for mapping"
What does the parameter "step size ratio" mean here?
Could you provide more details about the sampling strategy? Thank you in advance :)
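One common reading of "step size ratio" in voxel-based ray sampling is sketched below. This is an assumption for illustration only (the paper does not confirm it, and voxel_size, near and far are made-up values): the marching step is the ratio multiplied by the voxel size, so a ratio of 0.2 places roughly five samples per voxel span.

# hypothetical illustration, not the authors' definition
voxel_size = 0.2            # metres, made-up value
step_size_ratio = 0.2       # value quoted for odometry in the paper
step = step_size_ratio * voxel_size

near, far = 0.5, 10.0       # made-up bounds of the ray segment inside hit voxels
num_samples = int((far - near) / step)
depths = [near + i * step for i in range(num_samples)]  # sample depths along the ray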

AttributeError: 'NoneType' object has no attribute 'cuda'

Hello, could you tell me what causes this problem? My GPU is an RTX 3060 12 GB, with torch 1.10 and CUDA 11.1, and the error is not caused by running out of GPU memory.

PatchWorkpp::PatchWorkpp() - INITIALIZATION COMPLETE
Decoder(
  (pe): Same()
  (pts_linears): ModuleList(
    (0): Linear(in_features=16, out_features=256, bias=True)
    (1): Linear(in_features=256, out_features=256, bias=True)
  )
  (sdf_out): Linear(in_features=256, out_features=1, bias=True)
)
******* initializing first_frame: 0
initializing the first frame ...
PatchWorkpp::PatchWorkpp() - INITIALIZATION COMPLETE
******* tracking process started! *******
tracking frame: 0%| | 0/99 [00:02<?, ?it/s]
Process Process-3:
Traceback (most recent call last):
  File "/home/hlldy/anaconda3/envs/nerf/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/hlldy/anaconda3/envs/nerf/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/hlldy/NeRF-LOAM/src/tracking.py", line 83, in spin
    self.do_tracking(share_data, current_frame, kf_buffer)
  File "/home/hlldy/NeRF-LOAM/src/tracking.py", line 101, in do_tracking
    decoder = share_data.decoder.cuda()
AttributeError: 'NoneType' object has no attribute 'cuda'
^C^CTraceback (most recent call last):
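Purely as an illustration of one possible cause (the tracking process reading share_data.decoder before the mapping process has published it), a hypothetical guard would look like the sketch below. This is not the repo's code; wait_for_decoder, the poll interval, and the assumption that the mapper eventually sets the attribute are all my own.

import time

def wait_for_decoder(share_data, poll_s=0.1):
    # hypothetical guard: block until the mapping process has published a decoder
    while share_data.decoder is None:
        time.sleep(poll_s)
    return share_data.decoder.cuda()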

A question about compiling patchwork-plusplus

Hello, I would like to ask: after compiling patchwork-plusplus, no python_wrapper is generated under the build folder. I changed set(INCLUDE_PYTHON_WRAPPER OFF CACHE BOOL "Build Python wrapper") in the CMakeLists file from OFF to ON, but the build still fails.
How can this be solved? Thank you very much!!

Visualization

Hello, I only ran 20 frames on the KITTI dataset. The screenshot below shows the generated files. How can I visualize them?
[attached screenshot: 1702887440300]
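Not an official answer, but a minimal sketch of how such outputs are commonly inspected, assuming the generated files are meshes in a standard format such as .ply (the file path below is a placeholder, not the repo's actual output name):

import open3d as o3d

# placeholder path: replace with one of the files actually generated by the run
mesh = o3d.io.read_triangle_mesh("outputs/kitti_00/final_mesh.ply")
mesh.compute_vertex_normals()                # needed for shaded rendering
o3d.visualization.draw_geometries([mesh])    # opens an interactive viewer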

Releasing INR dataset

Hi All,
Thank you for your interesting and impressive work.
Do you plan to release the learned INR dataset?

IndexError: select(): index 108605 out of range for tensor of size [69829, 4] at dimension 0

Hello,
Because of #17, I tried running the demo with the subscene branch, but ran into the following error.

/usr/local/lib/python3.8/dist-packages/torch/functional.py:599: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2315.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/data/NeRF-LOAM-subscene/src/mapping.py", line 113, in spin
    self.create_voxels(tracked_frame)
  File "/data/NeRF-LOAM-subscene/src/mapping.py", line 333, in create_voxels
    self.update_grid_features()
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data/NeRF-LOAM-subscene/src/mapping.py", line 363, in update_grid_features
    voxels, children, features = self.svo.get_centres_and_children()
IndexError: select(): index 108605 out of range for tensor of size [69829, 4] at dimension 0

The command I ran is

python demo/run.py configs/kitti/kitti_09.yaml

Thanks for your help!

Errors installing patchwork-plusplus for this project

Thanks for sharing your impressive work! When installing patchwork-plusplus, some errors occur when I simply run pip install . in the patchwork-plusplus directory from the original project. I wonder whether you installed the C++ version of Open3D as a prerequisite for patchwork-plusplus, because I skipped it for simplicity.

svo init

Thanks for your outstanding work. Could you please elaborate on the rationale behind setting the first parameter in svo.init() to 2562564?
