kangpeilun / vastgaussian

This is an unofficial implementation.

License: Apache License 2.0

Python 17.51% Shell 0.10% CMake 11.53% C++ 64.00% Cuda 2.41% C 0.54% GLSL 3.66% Gnuplot 0.13% Batchfile 0.12%

vastgaussian's Introduction




vastgaussian's People

Contributors: kangpeilun, livioni, versewei

vastgaussian's Issues

bad quality in some regions

Hello, thanks for your excellent work!

I am confused about the results on the Mill19/building dataset. I ran render.py and also launched SIBR_gaussianViewer_app. In some regions the quality is really satisfactory, yet in other regions the rendered quality is really bad, as shown in the picture. Could you give me some suggestions about this issue or a way to solve it?
[screenshot 00019]

Are normals necessary?

The original implementation doesn't need normals in the PLY; here they seem mandatory. Can I pass a zero array, or do I need to estimate them?
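If the loader only requires a normals field to exist, a zero placeholder of matching shape is the usual workaround (whether any downstream step actually consumes the values is a separate question). A minimal sketch, with the point array purely illustrative:

```python
import numpy as np

# Hypothetical point cloud: N x 3 positions, e.g. recovered from COLMAP.
points = np.random.rand(100, 3).astype(np.float32)

# Zero normals of the same shape satisfy a loader that merely checks
# for the field's presence; no estimation is performed here.
normals = np.zeros_like(points)

assert normals.shape == points.shape
assert not normals.any()
```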

ValueError: need at least one array to concatenate

Hello, thank you for your excellent work. I encountered the following problem while running the code; could you please give me some advice?

C:\ProgramData\anaconda3\envs\3dgsvast\python.exe D:\ZWD\3DGS\VastGaussian-refactor\train_images.py
Output folder: ./output\exp1_16 [11/08 11:55:22]
Tensorboard not available: not logging progress [11/08 11:55:22]
Reading camera 406/406 [11/08 11:55:23]
Partition 1_1 ori_camera_bbox [-3.5779102, -1.1316292, -5.5089555, -3.4412055] extend_camera_bbox [-4.067166376113891, -0.6423730373382568, -5.922505474090576, -3.027655506134033] [11/08 11:55:32]
Partition 1_2 ori_camera_bbox [-4.037946, -1.1316292, -3.4412055, -0.27672225] extend_camera_bbox [-4.619209623336792, -0.5503658294677735, -4.074102163314819, 0.35617440938949585] [11/08 11:55:32]
Partition 1_3 ori_camera_bbox [-4.242634, -1.1316292, -0.27672225, 3.034645] extend_camera_bbox [-4.86483473777771, -0.509428310394287, -0.9389957070350647, 3.6969185352325438] [11/08 11:55:33]
Partition 2_1 ori_camera_bbox [-1.1316292, 0.82256436, -5.2049623, -2.3322802] extend_camera_bbox [-1.5224679470062257, 1.213403081893921, -5.779498672485351, -1.757743740081787] [11/08 11:55:33]
Partition 2_2 ori_camera_bbox [-1.1316292, 0.82256436, -2.3322802, -0.42812225] extend_camera_bbox [-1.5224679470062257, 1.213403081893921, -2.713111734390259, -0.04729067683219906] [11/08 11:55:33]
Partition 2_3 ori_camera_bbox [-1.1316292, 0.82256436, -0.42812225, 3.082469] extend_camera_bbox [-1.5224679470062257, 1.213403081893921, -1.1302405059337617, 3.7845872402191163] [11/08 11:55:33]
Partition 3_1 ori_camera_bbox [0.82256436, 5.353721, -5.3446984, -3.2705412] extend_camera_bbox [-0.0836669445037842, 6.259952449798584, -5.759529876708984, -2.855709743499756] [11/08 11:55:33]
Partition 3_2 ori_camera_bbox [0.82256436, 5.335647, -3.2705412, -0.08891543] extend_camera_bbox [-0.08005213737487793, 6.2382636070251465, -3.906866359710693, 0.5474097386002541] [11/08 11:55:33]
Partition 3_3 ori_camera_bbox [0.82256436, 4.920017, -0.08891543, 3.3565602] extend_camera_bbox [0.0030739307403564453, 5.73950719833374, -0.7780105456709863, 4.045655345916748] [11/08 11:55:34]
Total ori point number: 364978 [11/08 11:55:34]
Total before extend point number: 296381 [11/08 11:55:34]
Total extend point number: 576671 [11/08 11:55:34]
[11/08 11:55:34]
Now processing partition i:1_1 and j:1_2 [11/08 11:55:37]
[... the same "Now processing partition i:x_y and j:x_y" line is logged for every ordered pair of the 9 partitions ...]
Found 1 CUDA devices [11/08 11:55:53]
train partition 1_1 on gpu 0 [11/08 11:55:53]
Output folder: ./output\exp1_16
Tensorboard not available: not logging progress
Process Partition_1_1:
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\envs\3dgsvast\lib\multiprocessing\process.py", line 314, in _bootstrap
self.run()
File "C:\ProgramData\anaconda3\envs\3dgsvast\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "D:\ZWD\3DGS\VastGaussian-refactor\train_vast.py", line 177, in parallel_local_training
training(lp_args, op_args, pp_args, test_iterations, save_iterations, checkpoint_iterations,start_checkpoint, debug_from, logger=logger)
File "D:\ZWD\3DGS\VastGaussian-refactor\train_vast.py", line 50, in training
scene = PartitionScene(dataset, gaussians)
File "D:\ZWD\3DGS\VastGaussian-refactor\scene\__init__.py", line 119, in __init__
scene_info = sceneLoadTypeCallbacks["ColmapVast"](args.source_path, args.partition_model_path,
File "D:\ZWD\3DGS\VastGaussian-refactor\scene\dataset_readers.py", line 354, in readColmapSceneInfoVast
nerf_normalization = getNerfppNorm(train_cam_infos)  # find the geometric center of the cameras in world coordinates
File "D:\ZWD\3DGS\VastGaussian-refactor\scene\dataset_readers.py", line 63, in getNerfppNorm
center, diagonal = get_center_and_diag(cam_centers)
File "D:\ZWD\3DGS\VastGaussian-refactor\scene\dataset_readers.py", line 49, in get_center_and_diag
cam_centers = np.hstack(cam_centers)
File "<array_function internals>", line 200, in hstack
File "C:\ProgramData\anaconda3\envs\3dgsvast\lib\site-packages\numpy\core\shape_base.py", line 370, in hstack
return _nx.concatenate(arrs, 1, dtype=dtype, casting=casting)
File "<array_function internals>", line 200, in concatenate
ValueError: need at least one array to concatenate

train partition 1_2 on gpu 0 [11/08 11:55:56]
[... the identical traceback, ending in "ValueError: need at least one array to concatenate", repeats for every partition from 1_2 through 3_3 ...]

Training complete. [11/08 11:56:24]
Merging Partitions... [11/08 11:56:24]
All Done! [11/08 11:56:24]

Process finished with exit code 0
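Despite the exit code 0, every partition failed at the same call: np.hstack (and hence np.concatenate) raises "need at least one array to concatenate" when handed an empty list, meaning no cameras were assigned to the partition. A minimal sketch of the failing helper with a defensive guard (simplified and illustrative, not the repository's exact code):

```python
import numpy as np

def get_center_and_diag(cam_centers):
    """Sketch of the helper in scene/dataset_readers.py (simplified).

    cam_centers: list of (3, 1) arrays, one per camera in the partition.
    """
    if not cam_centers:
        # np.hstack([]) raises "need at least one array to concatenate";
        # failing early with a clearer message makes the real cause visible.
        raise ValueError("partition received no cameras - check the "
                         "partition bounding boxes and image path matching")
    stacked = np.hstack(cam_centers)                  # shape (3, N)
    center = stacked.mean(axis=1, keepdims=True)
    diagonal = float(np.linalg.norm(stacked - center, axis=0).max())
    return center.flatten(), diagonal
```

If this point is reached with an empty list, the per-partition camera list came out empty upstream, which points at the partition assignment or the camera/image matching rather than at NumPy itself.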

Floaters after partitioning

Hi,

Thank you for the code.

I simply ran the partitioning code with default settings on the Mill-19 dataset, scene Building. However, it seems that the partitioning code generates many floating artifacts.
[image]
As shown in the image above, the green point cloud is the SfM points before partitioning, and the black one is partition 1_1. You can see many black points in the air.
[image]
It is even more obvious when I zoom out: there are more points below the ground.

I suspect there is a bug in the code.

Visibility-based camera selection

I found two problems with the visibility-based camera selection.

First, the method point_in_image in scene/vastgs/data_partition.py may be incorrect: camera.image_height and camera.image_width on lines 403 & 404 and lines 417 & 419 should be swapped.

Second, on the Mill-19 dataset, almost all points in the whole point cloud end up in a single partition. The initial Mill-19 point cloud downloaded from https://vastgaussian.github.io/ is noisy, with many spurious points under the ground; this makes the bounding box of the points very large and may be the cause of the problem.
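The reported swap can be illustrated with a minimal bounds check (a hypothetical helper, not the repository's point_in_image): the pixel column u must be tested against the image width and the row v against the height, and getting it backwards only goes unnoticed on square images.

```python
def point_in_image(u, v, image_width, image_height):
    # u indexes columns (the x direction), v indexes rows (the y direction).
    return (0 <= u < image_width) and (0 <= v < image_height)

# On a 1920x1080 landscape image the swap changes the answer:
assert point_in_image(1500, 500, 1920, 1080) is True
assert point_in_image(1500, 500, 1080, 1920) is False  # width/height swapped
```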

the same file format but not the same output, so why?

[screenshots of the 100MSDCF and truck folders]
The directory hierarchies under the folder "truck" and the folder "100MSDCF/mvc/" are the same, but "truck" runs successfully, while "100MSDCF/mvc/" reports the error "ValueError: min() arg is an empty sequence". Why?
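For context, Python's min() raises exactly this ValueError when given an empty sequence with no default, so some list built from "100MSDCF/mvc/" (cameras, images, or points) ended up empty even though the directory layout looks the same. A one-line reproduction:

```python
# min() over an empty sequence is the only way to get this exact error,
# so the question becomes: which list from "100MSDCF/mvc/" came out empty?
try:
    min([])
    raise AssertionError("unreachable: min([]) must raise")
except ValueError as err:
    print(type(err).__name__)  # ValueError
```

A common culprit is a filename filter (extension case, naming pattern) that matches the "truck" images but none of the files under "100MSDCF/mvc/".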

ValueError: need at least one array to concatenate

Training complete. [01/07 11:54:20]
Merging Partitions... [01/07 11:54:20]
region: ./output/dream_home/point_cloud/iteration_7000/3_1_point_cloud.ply [01/07 11:54:33]
x_min:1.0858807563781738, x_max:inf, z_min:-inf, z_max:-0.6639560461044312 [01/07 11:54:33]
/home/mohsen/anaconda3/envs/vast-gaussian/lib/python3.8/site-packages/matplotlib/patches.py:739: RuntimeWarning: invalid value encountered in scalar add
y1 = self.convert_yunits(self._y0 + self._height)
/home/mohsen/anaconda3/envs/vast-gaussian/lib/python3.8/site-packages/matplotlib/transforms.py:2050: RuntimeWarning: invalid value encountered in scalar add
self._mtx[1, 2] += ty
point_cloud_path: ./output/scan001/point_cloud/iteration_7000/3_1_point_cloud.ply [01/07 11:54:34]
[01/07 11:54:34]
Traceback (most recent call last):
File "train_vast.py", line 377, in <module>
seamless_merge(lp.model_path, point_cloud_dir)
File "/home/mohsen/code/VastGaussian/seamless_merging.py", line 173, in seamless_merge
points = np.concatenate(xyz_list, axis=0)
File "<array_function internals>", line 200, in concatenate
ValueError: need at least one array to concatenate
/home/mohsen/anaconda3/envs/vast-gaussian/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
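This merge-time failure is the same NumPy behaviour: np.concatenate on an empty list. The "x_max:inf, z_min:-inf" region bounds in the log (and the matplotlib "invalid value" warnings) suggest the region filter discarded every point or every partition PLY failed to load. A hedged sketch of a guard; the helper name mirrors seamless_merging.py but the body is illustrative:

```python
import numpy as np

def merge_partition_points(xyz_list):
    # xyz_list holds one (N_i, 3) array per partition PLY that was loaded.
    # If every per-partition load failed, or inf/-inf region bounds (as in
    # the log above) filtered out all points, the list is empty and
    # np.concatenate raises "need at least one array to concatenate".
    if not xyz_list:
        raise ValueError("no partition point clouds to merge - check that "
                         "the per-partition PLYs exist and region bounds are finite")
    return np.concatenate(xyz_list, axis=0)

merged = merge_partition_points([np.zeros((4, 3)), np.ones((2, 3))])
assert merged.shape == (6, 3)
```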

Multi machines

Hi, thanks for your work,
how can I use multiple machines and gpus for training?
thank u!

Can you provide image sequences?

Hi, thank you for your great work!

I'm trying to reproduce your results, but I have some problems with training.
I downloaded the scene residence and unzipped it.
It has image files like A/DJI_0413.JPG ......
but the dataset reader requires image file names like images/000330.jpg.

Can you provide image sequences for training?

OutOfMemoryError

Hello author, may I ask what I can modify to solve this problem?
[screenshot 2024-08-11 222548]
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 908.00 MiB. GPU 0 has a total capacity of 23.99 GiB of which 0 bytes is free. Of the allocated memory 17.96 GiB is allocated by PyTorch, and 4.18 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
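The error message itself names one mitigation: setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to reduce allocator fragmentation. It must be set before the first CUDA allocation, so either export it in the shell or set it at the very top of the script, before importing torch:

```python
import os

# Must be set before torch initialises its CUDA allocator (ideally before
# "import torch"), otherwise the setting is silently ignored.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"
```

If this fork keeps the original 3DGS options, other levers are training at a lower resolution (-r) and training fewer partitions in parallel on one GPU, since here all nine partitions target gpu 0.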

How to modify the plane formed by the xy-axis?

My data lies in a plane formed by the x and y axes, with the z axis perpendicular to that plane. How should I modify your code for this case? Or do you have a tool to convert my cameras, points3D, and images to a plane spanned by the x and z axes?
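If the code assumes a ground plane spanned by x and z (y up), a z-up dataset can be rotated by -90° about the x-axis, which maps (x, y, z) to (x, z, -y); the repository's --manhattan/--rot option may accept an equivalent rotation. A minimal sketch of the bare coordinate change for points (camera extrinsics need the matching transform, and the details depend on whether a world-to-camera or camera-to-world convention is stored):

```python
import numpy as np

# Rotation turning a z-up convention into y-up: (x, y, z) -> (x, z, -y).
R_ZUP_TO_YUP = np.array([[1.0, 0.0, 0.0],
                         [0.0, 0.0, 1.0],
                         [0.0, -1.0, 0.0]])

def convert_points(points_zup):
    """points_zup: (N, 3) array with z perpendicular to the ground."""
    return points_zup @ R_ZUP_TO_YUP.T

p = np.array([[2.0, 3.0, 5.0]])
assert np.allclose(convert_points(p), [[2.0, 5.0, -3.0]])
```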

loading cameras occupies too much ram?

I got stuck at this step when first loading the Rubble dataset; it took up 100+ GB of RAM. I'm wondering if I did something wrong.

I set resolution = 4.

https://github.com/kangpeilun/VastGaussian/blob/2d03641643fcc23e78557d543b1fa9b09e901e0a/scene/datasets.py#L82C1-L88C81

        for resolution_scale in resolution_scales:
            print("Loading Training Cameras")
            self.train_cameras[resolution_scale] = cameraList_from_camInfos(scene_info.train_cameras, resolution_scale,
                                                                            args)
            print("Loading Test Cameras")
            self.test_cameras[resolution_scale] = cameraList_from_camInfos(scene_info.test_cameras, resolution_scale,
                                                                           args)
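For scale, a back-of-envelope estimate of why pre-decoding every camera's image dominates RAM; the image count and resolution below are assumptions for illustration, not measured from Rubble:

```python
def image_cache_bytes(num_images, width, height, channels=3, bytes_per_value=4):
    """Rough RAM footprint when every image is held as a float32 tensor."""
    return num_images * width * height * channels * bytes_per_value

# Hypothetical Rubble-like numbers: ~1600 photos at 4608x3456. If images
# are decoded at full size before the 1/4 resolution is applied, peak RAM
# can reflect the full-size footprint rather than the downscaled one.
gib = image_cache_bytes(1600, 4608, 3456) / 2**30
print(f"about {gib:.0f} GiB")  # about 285 GiB
```

The usual fix in 3DGS-style pipelines is to store only file paths in the camera list and decode/downscale each image on access instead of caching all of them up front.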

results of your reproduce

Hi, thanks for your work. Could you post your benchmark results compared with the original VastGaussian paper? @kangpeilun
Looking forward to your reply, thanks!

Thank you for the reproduction!

Hi,

I was also interested in VastGaussian but could not get hold of an official implementation. Thank you for reproducing their code!

Besides, could you provide an English version of the README so that everyone can read it?

AssertionError: would build wheel with unsupported tag ('cp37', 'cp38', 'linux_x86_64')

Thank you for your great work. How can I solve the following problem when installing simple_knn?

    File "<string>", line 36, in <module>
    File "<pip-setuptools-caller>", line 34, in <module>
    File "/home/disk2/wyz/VastGaussian/submodules/simple-knn/setup.py", line 33, in <module>
      'build_ext': BuildExtension
    File "/home/wyz/miniconda3/envs/Geo3DGS/lib/python3.7/site-packages/setuptools/__init__.py", line 103, in setup
      return distutils.core.setup(**attrs)
    File "/home/wyz/miniconda3/envs/Geo3DGS/lib/python3.7/site-packages/setuptools/_distutils/core.py", line 185, in setup
      return run_commands(dist)
    File "/home/wyz/miniconda3/envs/Geo3DGS/lib/python3.7/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
      dist.run_commands()
    File "/home/wyz/miniconda3/envs/Geo3DGS/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
      self.run_command(cmd)
    File "/home/wyz/miniconda3/envs/Geo3DGS/lib/python3.7/site-packages/setuptools/dist.py", line 963, in run_command
      super().run_command(command)
    File "/home/wyz/miniconda3/envs/Geo3DGS/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
      cmd_obj.run()
    File "/home/wyz/miniconda3/envs/Geo3DGS/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 405, in run
      impl_tag, abi_tag, plat_tag = self.get_tag()
    File "/home/wyz/miniconda3/envs/Geo3DGS/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 356, in get_tag
      ), f"would build wheel with unsupported tag {tag}"
  AssertionError: would build wheel with unsupported tag ('cp37', 'cp38', 'linux_x86_64')

License

Hi!
I was wondering under which license do you publish this code?

Image file suffix case issue

The function data_partition in utils/partition_utils.py hardcodes the image file suffix as .jpg, which causes a problem when the suffix recorded in COLMAP's images.bin is uppercase .JPG.
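A minimal sketch of a tolerant lookup the partition code could use instead of a hardcoded suffix; `match_image` and the suffix list are assumptions for illustration, not names from the repository:

```python
from pathlib import Path

def match_image(name, image_dir):
    """Return the on-disk filename whose stem matches `name`,
    trying common suffix spellings; None if nothing is found."""
    stem = Path(name).stem
    for suffix in (".jpg", ".JPG", ".jpeg", ".JPEG", ".png", ".PNG"):
        candidate = Path(image_dir) / (stem + suffix)
        if candidate.exists():
            return candidate.name
    return None
```

This keeps `.jpg` entries from images.bin working even when the files on disk are named `.JPG`.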

How to use building scene from Mill-19 ?

Hi, I found that the raw building scene from Mill-19 has the following directory structure, which is different from Tanks and Temples.

building (Mill-19)
├── train
│   ├── metadata
│   └── rgbs
└── val
    ├── metadata
    └── rgbs

Should I convert its format?

ValueError: cannot reshape array of size 1 into shape (3,3)

2024-06-03 02:34:20.940364: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2024-06-03 02:34:21.013904: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
  File "/home/xzj/project/VastGaussian-master/train_vast.py", line 323, in <module>
    train_main()
  File "/home/xzj/project/VastGaussian-master/train_vast.py", line 283, in train_main
    rot = np.array(lp.rot).reshape([3, 3])
ValueError: cannot reshape array of size 1 into shape (3,3)

The dataset used is tandt_db.
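The size-1 array suggests `--rot` received a single value while the CloudCompare ("cc") branch reshapes it into a 3x3 matrix, i.e. it expects 9 numbers. A hedged sketch of an argument check (`parse_rot` is a hypothetical helper, not a function from the repository):

```python
import numpy as np

def parse_rot(rot_str, plantform):
    """Parse the --rot argument: "cc" (CloudCompare) expects a full
    3x3 rotation matrix (9 numbers), "tj" (three.js) expects 3 Euler angles."""
    values = [float(v) for v in rot_str.split()]
    if plantform == "cc":
        if len(values) != 9:
            raise ValueError(f"--rot needs 9 values for cloudcompare, got {len(values)}")
        return np.array(values).reshape(3, 3)
    if plantform == "tj":
        if len(values) != 3:
            raise ValueError(f"--rot needs 3 Euler angles for threejs, got {len(values)}")
        return np.array(values)
    raise ValueError(f"unknown platform {plantform!r}")
```

With such a check the failure above would surface as a clear message about the argument count instead of a reshape error.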

Wrong platform name in train_vast example

Hello! Thanks for the great work. I want to report an error in your code (or README).
In the README you give an example

python train_vast.py -s datasets/xxx --exp_name xxx --manhattan --plantform threejs --pos "xx xx xx" --rot "xx xx xx"

But in the get_man_trans function the platform name is expected to be "tj" (or "cc"), not "threejs". As a result, the function returns None and the point cloud is not transformed correctly.

def get_man_trans(lp):
    lp.pos = [float(pos) for pos in lp.pos.split(" ")]
    lp.rot = [float(rot) for rot in lp.rot.split(" ")]

    man_trans = None
    if lp.manhattan and lp.plantform == "tj":  # threejs
        man_trans = create_man_rans(lp.pos, lp.rot)
        lp.man_trans = man_trans
    elif lp.manhattan and lp.plantform == "cc":  # cloudcompare: here rot is interpreted as a full rotation matrix
        rot = np.array(lp.rot).reshape([3, 3])
        man_trans = np.zeros((4, 4))
        man_trans[:3, :3] = rot
        man_trans[:3, -1] = np.array(lp.pos)
        man_trans[3, 3] = 1

    return man_trans

For example, you could raise an error in this function when the platform name matches neither branch:

...
else:
    raise ValueError(f"Incorrect platform name: {lp.plantform}")
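Along the same lines, a small guard that fails fast could look like this (a sketch; `check_plantform` is a hypothetical helper, and the valid names are taken from the branches of `get_man_trans` shown above):

```python
VALID_PLATFORMS = ("tj", "cc")  # the names get_man_trans actually compares against

def check_plantform(plantform):
    """Raise instead of silently returning None for an unknown platform name."""
    if plantform not in VALID_PLATFORMS:
        raise ValueError(
            f"Incorrect platform name {plantform!r}; expected one of {VALID_PLATFORMS}"
        )
```

Calling this at the top of `get_man_trans` would have turned the README's `--plantform threejs` into an immediate, explicit error.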

Is JAX needed?

Hi, thank you for your implementation! We noticed that you use JAX, but only in image_utils.py. Could it be replaced with PyTorch code? Could it even be replaced entirely by NumPy?

I am not familiar with JAX, so please let me know if I am mistaken.

Thank you!
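If the JAX usage is limited to elementwise array math, it usually ports one-to-one to NumPy, since `jax.numpy` deliberately mirrors the NumPy API. A hedged illustration with a hypothetical PSNR helper (the actual contents of image_utils.py may differ):

```python
import numpy as np

def psnr_np(img1, img2, max_val=1.0):
    """NumPy port of a typical jnp-based PSNR helper: jnp.mean and
    jnp.log10 map directly onto np.mean and np.log10."""
    mse = np.mean((np.asarray(img1) - np.asarray(img2)) ** 2)
    return 20.0 * np.log10(max_val) - 10.0 * np.log10(mse)
```

The translation breaks down only if the original code relies on JAX-specific features such as `jit`, `grad`, or `vmap`; plain metric computations do not.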

Can someone help me solve the problem of error caused by multi-GPU training?

I implemented the multi-GPU version train_multigpu.py under the develop branch, but I ran into a problem when training on multiple GPUs and I am not sure what the cause is. Training on a single GPU, by contrast, works without any problems, which confuses me. The error is shown in the following image:
bbb9321eb546ee3070d982ba15b231c5

python train_vast.py : ValueError: min() arg is an empty sequence

python train_vast.py ....

Traceback (most recent call last):
  File "E:\train_vast.py", line 291, in <module>
    train_main()
  File "E:\train_vast.py", line 283, in train_main
    training(lp, op, pp, args.test_iterations, args.save_iterations,
  File "E:\train_vast.py", line 44, in training
    big_scene = BigScene(dataset)  # this loads the whole dataset, including the Gaussian model parameters
  File "E:\VastGaussian_scene\datasets.py", line 77, in __init__
    DataPartitioning = ProgressiveDataPartitioning(scene_info, self.train_cameras[resolution_scales[0]],
  File "E:\VastGaussian_scene\data_partition.py", line 61, in __init__
    self.run_DataPartition(train_cameras)
  File "E:\VastGaussian_scene\data_partition.py", line 66, in run_DataPartition
    partition_list = self.Position_based_data_selection(partition_dict)
  File "E:\VastGaussian_scene\data_partition.py", line 176, in Position_based_data_selection
    ori_point_bbox=self.get_point_range(points),
  File "E:\VastGaussian_scene\data_partition.py", line 138, in get_point_range
    return [min(x_list), max(x_list),
ValueError: min() arg is an empty sequence
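The failure means the partition's point list was empty when `get_point_range` called `min()`. A hedged sketch of a fail-fast variant (the list-comprehension structure here is an assumption about the original function in data_partition.py):

```python
def get_point_range(points):
    """Bounding box [x_min, x_max, y_min, y_max, z_min, z_max] of a point list;
    raise a descriptive error instead of letting min() fail on an empty partition."""
    if len(points) == 0:
        raise ValueError(
            "no points fell inside this partition; check that --pos/--rot match "
            "the scene and that the image files were actually found"
        )
    x_list = [p[0] for p in points]
    y_list = [p[1] for p in points]
    z_list = [p[2] for p in points]
    return [min(x_list), max(x_list),
            min(y_list), max(y_list),
            min(z_list), max(z_list)]
```

An empty partition typically points to an upstream problem (a Manhattan transform that moved all points out of range, or images that failed to match), so surfacing it with a message is more useful than the bare `min()` error.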

Can you show the program environment

I ran into an environment problem while trying to run your program:

(vastgaussian) F:\VastGaussian>python train_vast.py -s data\castle3
Traceback (most recent call last):
  File "F:\VastGaussian\train_vast.py", line 19, in <module>
    from gaussian_renderer import render, network_gui
  File "F:\VastGaussian\gaussian_renderer\__init__.py", line 14, in <module>
    from diff_gaussian_rasterization import GaussianRasterizationSettings, GaussianRasterizer
  File "C:\Users\Administrator\anaconda3\envs\vastgaussian\lib\site-packages\diff_gaussian_rasterization\__init__.py", line 15, in <module>
    from . import _C
ImportError: DLL load failed while importing _C: The specified procedure could not be found.

Can you show me your running environment?
