thended / torch-ash
[PAMI 2022, CVPR 2023] ASH: Parallel Spatial Hashing for Fast Scene Reconstruction
Home Page: https://dongwei.info/publication/ash-mono/
License: MIT License
Hi! Thanks for your impressive work! I found that in your code, you first fuse all images into the TSDF volume and then train the SDF field.
I would like to know whether online TSDF fusion and SDF field training are supported, i.e., adding images into TSDF fusion one by one, with the training process sampling only from the images fused so far.
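For context, online fusion in the sense asked about above usually means the standard incremental TSDF update, where each new observation is folded into a per-voxel weighted running average. This is a generic sketch of that update in NumPy, independent of torch-ash's actual API (the function name and parameters are illustrative, not from the repo):

```python
import numpy as np

def integrate(tsdf, weight, new_sdf, new_weight=1.0, max_weight=64.0):
    """Fuse one frame's truncated SDF observations into a running
    weighted average (the standard incremental TSDF update)."""
    fused = (tsdf * weight + new_sdf * new_weight) / np.maximum(weight + new_weight, 1e-8)
    # Cap the weight so old geometry can still adapt to new observations.
    return fused, np.minimum(weight + new_weight, max_weight)

# Two observations of the same voxel fuse to their average:
tsdf, w = np.array([0.0]), np.array([0.0])
tsdf, w = integrate(tsdf, w, np.array([1.0]))
tsdf, w = integrate(tsdf, w, np.array([0.0]))
print(tsdf)  # [0.5]
```

Because the update only touches voxels observed in the current frame, it can run frame by frame while training samples are restricted to already-fused images.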
Hi! Thanks for your inspiring work! I plan to try torch-ash on outdoor large-scale/unbounded scenes, but there are no instructions for custom datasets. So I'm going to go out on a limb and ask: when will these instructions be released?
Best wishes
Hi, thank you for sharing the excellent code. The code looks really clean to start with! Regarding the evaluation in the CVPR 23 paper, 4 sequences from ScanNet and another 4 from the 7-Scenes dataset are evaluated for reconstruction quality. For future comparison, I wonder if you could kindly provide the ground truth used for the evaluation, so that we can make sure comparisons are made against the same data. Thanks a lot!
Is the coordinate system from OpenCV or Blender?
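The two conventions in question differ only by the orientation of the camera axes: OpenCV cameras look down +Z with +Y pointing down, while Blender/OpenGL cameras look down -Z with +Y pointing up. A minimal sketch of converting a camera-to-world pose between them (not taken from torch-ash, which does not document its choice here):

```python
import numpy as np

def opencv_to_blender(c2w: np.ndarray) -> np.ndarray:
    """Convert a 4x4 camera-to-world pose from OpenCV to Blender/OpenGL
    convention by flipping the camera's Y and Z axes."""
    out = c2w.copy()
    out[:3, 1:3] *= -1  # negate the Y and Z columns of the rotation
    return out

# The conversion is an involution: applying it twice is the identity.
pose = np.eye(4)
assert np.allclose(opencv_to_blender(opencv_to_blender(pose)), pose)
```

The same function converts in the other direction, since the flip is its own inverse.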
Thanks for your work. I ran into the following error; what is the problem?
python demo/train_scene_recon.py --path datatset/scene0050_00/samples/ --voxel_size 0.015 --depth_type learned
Loading frame-004640.color.npy: 100%|████████████████████████████| 465/465 [00:08<00:00, 53.80it/s]
Generating rays for image 464: 100%|█████████████████████████████| 465/465 [00:13<00:00, 35.17it/s]
Transforming normals for image 464: 100%|███████████████████████| 465/465 [00:04<00:00, 115.21it/s]
voxel_size=0.015
Fuse frame 0: 0%| | 0/465 [00:00<?, ?it/s]
/opt/anaconda3/envs/sdfstudio/lib/python3.8/site-packages/ash/core.py:145: UserWarning: keys are not int32, conversion might reduce precision.
  warnings.warn("keys are not int32, conversion might reduce precision.")
Fuse frame 464: 100%|████████████████████████████████████████████| 465/465 [00:22<00:00, 20.54it/s]
after pruning: 15052
hash map size after pruning: 15052
0%| | 0/20001 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "demo/train_scene_recon.py", line 302, in <module>
    result = model(rays_o, rays_d, ray_norms, near=0.3, far=5.0, jitter=jitter)
  File "/opt/anaconda3/envs/sdfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "demo/train_scene_recon.py", line 111, in forward
    weights = nerfacc.render_weight_from_density(
  File "/opt/anaconda3/envs/sdfstudio/lib/python3.8/site-packages/nerfacc/volrend.py", line 358, in render_weight_from_density
    trans, alphas = render_transmittance_from_density(
  File "/opt/anaconda3/envs/sdfstudio/lib/python3.8/site-packages/nerfacc/volrend.py", line 261, in render_transmittance_from_density
    trans = torch.exp(-exclusive_sum(sigmas_dt, packed_info))
  File "/opt/anaconda3/envs/sdfstudio/lib/python3.8/site-packages/nerfacc/scan.py", line 94, in exclusive_sum
    assert inputs.dim() == 1, "inputs must be flattened."
AssertionError: inputs must be flattened.
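One plausible reading of this assertion, assuming the installed nerfacc is a newer release than the demo targets: newer nerfacc volume-rendering helpers expect 1-D sample tensors with ray membership passed separately, rather than (n_rays, n_samples) batches. A minimal NumPy sketch of the reshaping that such an API expects (the shapes here are made up for illustration; pinning the older nerfacc release the demo was written against is the other way out):

```python
import numpy as np

# Hypothetical batched samples: 3 rays with 4 samples each.
n_rays, n_samples = 3, 4
sigmas = np.arange(n_rays * n_samples, dtype=np.float64).reshape(n_rays, n_samples)

# Flatten the batch and record which ray each sample belongs to.
sigmas_flat = sigmas.reshape(-1)                       # shape (12,), 1-D
ray_indices = np.repeat(np.arange(n_rays), n_samples)  # [0,0,0,0,1,1,1,1,2,2,2,2]

assert sigmas_flat.ndim == 1
assert ray_indices.shape == sigmas_flat.shape
```

The same reshape applies to the per-sample `t_starts`/`t_ends` tensors, so every per-sample array handed to the renderer is flat and aligned with `ray_indices`.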
Hi, thanks for sharing your code. I can compile the ash module, but it fails the tests you provided. The error is always something like: stdgpu::vector::size : Size out of bounds: -14 not in [0, 111]. I guess it may be an environmental issue. My environment is CUDA 11.7 + pytorch 1.12.1 + python 3.8. Hope you can provide some information about your development environment.
I have a question about rendered image quality compared with MonoSDF and other NeRF-based methods. On the Replica room0 dataset, is it possible to achieve PSNR > 30, as MonoSDF does?
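For reference, the PSNR figure quoted in such comparisons is 10 * log10(max_val^2 / MSE) over the rendered and ground-truth images. A minimal sketch showing what the 30 dB threshold corresponds to for images normalized to [0, 1]:

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((pred - gt) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

# PSNR > 30 dB means a mean squared error below 1e-3 for [0, 1] images:
gt = np.zeros((8, 8))
pred = gt + np.sqrt(1e-3)  # constant error giving MSE of exactly 1e-3
print(round(psnr(pred, gt), 1))  # 30.0
```

So the question above amounts to whether the renderer's per-pixel MSE on room0 can be pushed below about 1e-3.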
When I use my own data, the following problem occurs:
CUDA_LAUNCH_BLOCKING=1 CUDA_VISIBLE_DEVICES=3 python demo/train_scene_recon.py --path dataset/l435_scan1/samples --voxel_size 0.015 --depth_type sensor
dataset/l435_scan1/samples image png
dataset/l435_scan1/samples image jpg
dataset/l435_scan1/samples depth png
dataset/l435_scan1/samples omni_normal npy
Loading frame-000238.color.npy: 100%|████████████████████| 239/239 [00:02<00:00, 81.56it/s]
Generating rays for image 238: 100%|████████████████████| 239/239 [00:04<00:00, 55.52it/s]
Transforming normals for image 238: 100%|████████████████████| 239/239 [00:01<00:00, 150.08it/s]
voxel_size=0.015
Fuse frame 0: 0%| | 0/183 [00:00<?, ?it/s]
/opt/anaconda3/envs/sdfstudio/lib/python3.8/site-packages/ash/core.py:145: UserWarning: keys are not int32, conversion might reduce precision.
  warnings.warn("keys are not int32, conversion might reduce precision.")
Fuse frame 182: 100%|████████████████████| 183/183 [00:03<00:00, 46.14it/s]
after pruning: 5667
hash map size after pruning: 5667
loss: 0.2058, rgb: 0.0671, normal_l1: 0.3065, normal_cos: 0.5428, depth: 1470074.7500, eikonal_ray: 0.3921: 2%|▏ | 498/20001 [00:24<14:36, 22.25it/s]
[Open3D WARNING] Write Ply clamped color value to valid range
/opt/anaconda3/envs/sdfstudio/lib/python3.8/site-packages/ash/core.py:179: UserWarning: empty keys
  warnings.warn("empty keys")
loss: 0.2058, rgb: 0.0671, normal_l1: 0.3065, normal_cos: 0.5428, depth: 1470074.7500, eikonal_ray: 0.3921: 2%|▏ | 500/20001 [00:24<15:56, 20.40it/s]
Traceback (most recent call last):
  File "demo/train_scene_recon.py", line 372, in <module>
    result = model(rays_o, rays_d, ray_norms, near=0.0, far=4.0)
  File "/opt/anaconda3/envs/sdfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "demo/train_scene_recon.py", line 113, in forward
    sigmas = self.sdf_to_sigma(sdfs) * masks.float()
  File "/opt/anaconda3/envs/sdfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "demo/train_scene_recon.py", line 31, in forward
    beta = self.min_beta + torch.abs(self.beta)
RuntimeError: CUDA error: invalid configuration argument
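A possible lead for the report above, offered as a guess rather than a diagnosis: "invalid configuration argument" typically means a CUDA kernel was launched with a zero-sized or oversized grid, and the "empty keys" warning just before the crash suggests an empty ray/sample batch reached the model. A tiny sketch of guarding against that case before the forward call (all names here are hypothetical, not from the demo script):

```python
import numpy as np

def safe_forward(model, rays_o):
    """Skip empty batches: a zero-element batch can translate into a
    zero-block CUDA kernel launch, which is an invalid configuration."""
    if len(rays_o) == 0:
        return None
    return model(rays_o)

assert safe_forward(lambda x: x.sum(), np.ones(3)) == 3.0
assert safe_forward(lambda x: x.sum(), np.ones(0)) is None
```

If the guard fires, the real fix is upstream: finding why ray sampling produced no valid samples for that iteration (e.g. rays that miss all allocated voxel blocks).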