vita-group / FSGS

"FSGS: Real-Time Few-Shot View Synthesis using Gaussian Splatting", Zehao Zhu, Zhiwen Fan, Yifan Jiang, Zhangyang Wang

License: Other

Python 67.79% C++ 6.20% Cuda 25.45% C 0.18% CMake 0.39%

FSGS's Introduction

FSGS: Real-Time Few-Shot View Synthesis using Gaussian Splatting

Paper | Project Page | Video


demo

Environmental Setups

We provide an installation method based on the Conda package and environment manager:

conda env create --file environment.yml
conda activate FSGS

CUDA 11.7 is strongly recommended.
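After activating the environment, a quick sanity check (illustrative only, not part of the repository) confirms that the installed PyTorch build matches the recommended CUDA toolkit and can see your GPU:

import torch

# Illustrative environment check; exact versions depend on environment.yml.
print(torch.__version__, torch.version.cuda)   # expect a CUDA 11.x build, ideally 11.7
print(torch.cuda.is_available())               # should print True
print(torch.cuda.get_device_name(0))           # the GPU that training will use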

Data Preparation

In the data preparation step, we reconstruct the sparse-view inputs with SfM, using the camera poses provided by the datasets. We then run dense stereo matching in COLMAP with patch_match_stereo and obtain the fused stereo point cloud from stereo_fusion (a rough sketch of these calls is shown after the commands below).

cd FSGS
mkdir dataset 
cd dataset

# download LLFF dataset
gdown 16VnMcF1KJYxN9QId6TClMsZRahHNMW5g

# run colmap to obtain initial point clouds with limited viewpoints
python tools/colmap_llff.py

# download MipNeRF-360 dataset
wget http://storage.googleapis.com/gresearch/refraw360/360_v2.zip
unzip -d mipnerf360 360_v2.zip

# run colmap on MipNeRF-360 dataset
python tools/colmap_360.py
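For reference, the dense reconstruction described above reduces to a couple of COLMAP calls. The following Python sketch shows roughly what such a preprocessing script might run (an assumption for illustration, not the actual contents of tools/colmap_llff.py or tools/colmap_360.py; it presumes a COLMAP dense workspace, e.g. produced by colmap image_undistorter, already exists):

import subprocess

def colmap_dense_reconstruction(dense_dir, output_ply="fused.ply"):
    # Dense stereo matching over the undistorted images in the workspace.
    subprocess.run(["colmap", "patch_match_stereo",
                    "--workspace_path", dense_dir,
                    "--workspace_format", "COLMAP"], check=True)
    # Fuse the per-view depth maps into a single point cloud.
    subprocess.run(["colmap", "stereo_fusion",
                    "--workspace_path", dense_dir,
                    "--workspace_format", "COLMAP",
                    "--output_path", f"{dense_dir}/{output_ply}"], check=True)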

We use the latest version of COLMAP to preprocess the datasets. If you run into issues installing COLMAP, we provide a Docker option.

# if you can not install colmap, follow this to build a docker environment
docker run --gpus all -it --name fsgs_colmap --shm-size=32g  -v /home:/home colmap/colmap:latest /bin/bash
apt-get update && apt-get install -y python3-pip
pip3 install numpy
python3 tools/colmap_llff.py

We provide both the sparse and dense point clouds after preprocessing them. You may download them through this link. We use the dense point cloud during training, but you can still try the sparse point cloud on your own.

Training

Train FSGS on LLFF dataset with 3 views

python train.py  --source_path dataset/nerf_llff_data/horns --model_path output/horns --eval  --n_views 3 --sample_pseudo_interval 1

Train FSGS on MipNeRF-360 dataset with 24 views

python train.py  --source_path dataset/mipnerf360/garden --model_path output/garden  --eval  --n_views 24 --depth_pseudo_weight 0.03  

Rendering

Run the following script to render the images.

python render.py --source_path dataset/nerf_llff_data/horns/  --model_path  output/horns --iteration 10000

You can customize the rendering path in the same way as NeRF by adding the video argument:

python render.py --source_path dataset/nerf_llff_data/horns/  --model_path  output/horns --iteration 10000  --video  --fps 30

Evaluation

You can just run the following script to evaluate the model.

python metrics.py --source_path dataset/nerf_llff_data/horns/  --model_path  output/horns --iteration 10000

Acknowledgement

Special thanks to the following awesome projects!

Citation

If you find our work useful for your project, please consider citing the following paper.

@misc{zhu2023FSGS, 
title={FSGS: Real-Time Few-Shot View Synthesis using Gaussian Splatting}, 
author={Zehao Zhu and Zhiwen Fan and Yifan Jiang and Zhangyang Wang}, 
year={2023},
eprint={2312.00451},
archivePrefix={arXiv},
primaryClass={cs.CV} 
}

FSGS's People

Contributors

henrypearce4D, zehaozhu, zhiwenfan


FSGS's Issues

Blurry results with mipnerf360 dataset

Thanks for the great work! But after training on the MipNeRF-360 bicycle scene with the command:
python train.py --source_path dataset/mipnerf360/bicycle --model_path output/bicycle2 --eval --n_views 24 --depth_pseudo_weight 0.03
I got this blurry rendered training view:
(attached image: _DSC8814)

Pearson Correlation Loss in paper or code

Hi, I found that the Pearson correlation loss differs between the paper and the code.
In the paper:
(screenshot of the loss equation from the paper)
In the code:

FSGS/train.py, lines 105 to 108 at commit 8c2e181:

depth_loss = min(
    (1 - pearson_corrcoef( - midas_depth, rendered_depth)),
    (1 - pearson_corrcoef(1 / (midas_depth + 200.), rendered_depth))
)
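For reference, a self-contained version of the snippet above (assuming pearson_corrcoef here is torchmetrics' implementation, as its name suggests; the repository may import it differently): the code compares the rendered depth against two remappings of the monocular estimate, its negation and its inverse, and keeps whichever yields the higher correlation.

import torch
from torchmetrics.functional import pearson_corrcoef

def depth_correlation_loss(midas_depth, rendered_depth):
    # Flatten both maps to 1-D so the correlation is computed over all pixels.
    midas_depth = midas_depth.reshape(-1)
    rendered_depth = rendered_depth.reshape(-1)
    return torch.min(
        1 - pearson_corrcoef(-midas_depth, rendered_depth),
        1 - pearson_corrcoef(1 / (midas_depth + 200.0), rendered_depth),
    )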

How to organize the preprocessed data?

Hi, I notice that the subfolder organization of the data provided in the download link is not compatible with the code. How should the downloaded data be organized? Thanks!

environment.yml invalid

When running conda env I get the following error:

failed
Pip subprocess error:
ERROR: Invalid requirement: 'matplotlib=3.5.3' (from line 3 of /FSGS/condaenv.l8ya92xz.requirements.txt)
Hint: = is not a valid operator. Did you mean == ?


CondaEnvException: Pip failed

The solution is in the error description I believe. I will check that it works and push a PR

TimeoutError: [Errno 110] Connection timed out

Thank you for your nice work!
I used CUDA 11.6 with PyTorch 1.12.1, but I got a CUDA error: an illegal memory access was encountered.
So I switched to CUDA 11.7 with PyTorch 1.13.0, torchvision 0.14.0 and torchaudio 0.13.0,
and now I get a TimeoutError. I wonder what's going on?

(FSGS) root@autodl-container-7ac146ab4e-87edd0e7:~/autodl-tmp/FSGS-main# python train.py --source_path bicycle --model_path output/bicycle --eval --n_views 24 --depth_pseudo_weight 0.03
Traceback (most recent call last):
File "/root/miniconda3/envs/FSGS/lib/python3.8/urllib/request.py", line 1354, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/root/miniconda3/envs/FSGS/lib/python3.8/http/client.py", line 1256, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/root/miniconda3/envs/FSGS/lib/python3.8/http/client.py", line 1302, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/root/miniconda3/envs/FSGS/lib/python3.8/http/client.py", line 1251, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/root/miniconda3/envs/FSGS/lib/python3.8/http/client.py", line 1011, in _send_output
self.send(msg)
File "/root/miniconda3/envs/FSGS/lib/python3.8/http/client.py", line 951, in send
self.connect()
File "/root/miniconda3/envs/FSGS/lib/python3.8/http/client.py", line 1418, in connect
super().connect()
File "/root/miniconda3/envs/FSGS/lib/python3.8/http/client.py", line 922, in connect
self.sock = self._create_connection(
File "/root/miniconda3/envs/FSGS/lib/python3.8/socket.py", line 808, in create_connection
raise err
File "/root/miniconda3/envs/FSGS/lib/python3.8/socket.py", line 796, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 25, in
from utils.depth_utils import estimate_depth
File "/root/autodl-tmp/FSGS-main/utils/depth_utils.py", line 3, in
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Hybrid")
File "/root/miniconda3/envs/FSGS/lib/python3.8/site-packages/torch/hub.py", line 539, in load
repo_or_dir = _get_cache_or_reload(repo_or_dir, force_reload, trust_repo, "load",
File "/root/miniconda3/envs/FSGS/lib/python3.8/site-packages/torch/hub.py", line 180, in _get_cache_or_reload
repo_owner, repo_name, ref = _parse_repo_info(github)
File "/root/miniconda3/envs/FSGS/lib/python3.8/site-packages/torch/hub.py", line 134, in _parse_repo_info
with urlopen(f"https://github.com/{repo_owner}/{repo_name}/tree/main/"):
File "/root/miniconda3/envs/FSGS/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/root/miniconda3/envs/FSGS/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/root/miniconda3/envs/FSGS/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/root/miniconda3/envs/FSGS/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/root/miniconda3/envs/FSGS/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/root/miniconda3/envs/FSGS/lib/python3.8/urllib/request.py", line 755, in http_error_302
return self.parent.open(new, timeout=req.timeout)
File "/root/miniconda3/envs/FSGS/lib/python3.8/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/root/miniconda3/envs/FSGS/lib/python3.8/urllib/request.py", line 542, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/root/miniconda3/envs/FSGS/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/root/miniconda3/envs/FSGS/lib/python3.8/urllib/request.py", line 1397, in https_open
return self.do_open(http.client.HTTPSConnection, req,
File "/root/miniconda3/envs/FSGS/lib/python3.8/urllib/request.py", line 1357, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 110] Connection timed out>
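One possible workaround (an assumption on my side, not something the repository documents): the traceback shows that torch.hub.load contacts github.com to resolve intel-isl/MiDaS. If the training machine cannot reach GitHub, you can clone the MiDaS repository to a local folder beforehand and load it without the remote lookup:

import torch

# Hypothetical offline fallback: clone https://github.com/intel-isl/MiDaS to ./MiDaS first.
midas = torch.hub.load("./MiDaS", "DPT_Hybrid", source="local")
# Note: the pretrained weights may still be downloaded on first use; if all network
# access is blocked, place them in ~/.cache/torch/hub/checkpoints/ manually.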

argument "train_bg"

Thank you for this great work! I noticed there's an argument "train_bg" in train.py that wasn't used. Out of curiosity, is there any case where optimizing background would be useful? Thank you!

Quality issues

Hi,
thank you for your code and the paper.
I got it running although it has some bugs where it crashed. I can upstream the fixes later today.
Unfortunately, the quality of the output is very bad. Not sure if there are more substantial bugs in the code. I have been running it following your instructions.

Thank you.
MrNeRF

Screenshot from 2023-12-13 11-20-24
Screenshot from 2023-12-13 11-16-32

Urgent Request for Update on proximity Component in gaussian_model Module

Thank you for your great work!

I am writing to inquire about the upcoming update for the proximity component within the gaussian_model module mentioned in #5 . My team and I are currently conducting research that heavily relies on this particular feature, and we are eager to compare our experimental methods with yours.

Could you please provide an estimated timeline for when this update might be available? Understanding your schedule will greatly assist us in planning our research. If the update is expected to take a considerable amount of time, we might need to proceed with implementing this component ourselves.

Thank you for your attention to this matter.

Question about generating random poses

Hi, thanks for the fancy work.
I have tried running the code and it works well. However, I am confused: it seems that the pseudo poses are generated by averaging all training-view poses and adding noise, but the description on page 4 of the paper is "The synthesized view is sampled from the two closest training views in Euclidean space, calculating the averaged camera orientation and interpolating a virtual one between them." I can't find the code corresponding to the random-pose generation described in the paper.
Please confirm if my understanding is correct, or if there is any misunderstanding of the code.
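For what it's worth, here is a minimal sketch (written against the paper's description quoted above, not taken from the repository's code) of sampling a virtual camera between the two closest training views:

import numpy as np

def interpolate_virtual_pose(centers, rotations, t=0.5):
    # centers: (N, 3) camera centers; rotations: (N, 3, 3) camera-to-world rotations.
    # Pick the two training cameras closest to each other in Euclidean space.
    d = np.linalg.norm(centers[:, None] - centers[None], axis=-1)
    np.fill_diagonal(d, np.inf)
    i, j = np.unravel_index(np.argmin(d), d.shape)

    # Interpolate the position and average the orientation (re-orthonormalised via SVD).
    center = (1 - t) * centers[i] + t * centers[j]
    u, _, vt = np.linalg.svd(rotations[i] + rotations[j])
    rotation = u @ np.diag([1.0, 1.0, np.linalg.det(u @ vt)]) @ vt
    return center, rotation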

point3d.ply

Could you please provide the point cloud file after you processed it? I cannot generate it myself because of an exception when installing COLMAP.

How can I reduce the GPU memory need?

Thank you for your great work!
My GPU is a desktop A5500, so I only have 16 GB of VRAM.
When I run the code, it reports that 35+ GB of VRAM is needed. How can I reduce the memory requirement?

RuntimeError: numel: integer multiplication overflow

Using cache found in /root/.cache/torch/hub/intel-isl_MiDaS_master
/opt/conda/envs/FSGS/lib/python3.8/site-packages/timm/models/_factory.py:117: UserWarning: Mapping deprecated model name vit_base_resnet50_384 to current vit_base_r50_s16_384.orig_in21k_ft_in1k.
model = create_fn(
Using cache found in /root/.cache/torch/hub/intel-isl_MiDaS_master
[1000, 2000, 3000, 5000, 10000]
Optimizing output/garden
Output folder: output/garden [05/03 01:51:52]
Tensorboard not available: not logging progress [05/03 01:51:52]
Reading camera 185/185 [05/03 01:51:54]
4.750064706802369 cameras_extent [05/03 01:51:54]
Loading Training Cameras [05/03 01:51:54]
24it [00:07, 3.22it/s]
Loading Test Cameras [05/03 01:52:02]
24it [00:04, 5.08it/s]
Number of points at initialisation : 3906731 [05/03 01:52:50]
Training progress: 0%| | 0/10000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 279, in
training(lp.extract(args), op.extract(args), pp.extract(args), args)
File "train.py", line 90, in training
render_pkg = render(viewpoint_cam, gaussians, pipe, background)
File "/workspace/FSGS/gaussian_renderer/init.py", line 94, in render
rendered_image, radii, depth, alpha = rasterizer(
File "/opt/conda/envs/FSGS/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/FSGS/lib/python3.8/site-packages/diff_gaussian_rasterization/init.py", line 215, in forward
return rasterize_gaussians(
File "/opt/conda/envs/FSGS/lib/python3.8/site-packages/diff_gaussian_rasterization/init.py", line 32, in rasterize_gaussians
return _RasterizeGaussians.apply(
File "/opt/conda/envs/FSGS/lib/python3.8/site-packages/diff_gaussian_rasterization/init.py", line 92, in forward
num_rendered, color, depth, alpha, radii, geomBuffer, binningBuffer, imgBuffer = _C.rasterize_gaussians(*args)
RuntimeError: numel: integer multiplication overflow
Training progress: 0%| | 0/10000 [00:00<?, ?it/s]

"visibility_filter": radii > 0, RuntimeError: CUDA error: an illegal memory access was encountered

(FSGS) 23ckj@amax:/mnt1/ckj/FSGS/FSGS-main$ CUDA_VISIBLE_DEVICES=7 CUDA_LAUNCH_BLOCKING=1 python train.py --source_path /mnt1/ckj/gaussian-splatting/tandt_db/nerf_llff_data/horns --model_path output/horns --eval --n_views 3 --sample_pseudo_interval 1
/mnt1/ckj/miniconda/envs/FSGS/lib/python3.7/site-packages/timm/models/_factory.py:121: UserWarning: Mapping deprecated model name vit_base_resnet50_384 to current vit_base_r50_s16_384.orig_in21k_ft_in1k.
**kwargs,
[1000, 2000, 3000, 5000, 10000]
Optimizing output/horns
Output folder: output/horns [04/03 21:00:20]
2024-03-04 21:00:20.653428: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-04 21:00:20.856054: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2024-03-04 21:00:21.587124: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /mnt1/ckj/miniconda/envs/FSGS/lib/python3.7/site-packages/cv2/../../lib64:
2024-03-04 21:00:21.587234: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /mnt1/ckj/miniconda/envs/FSGS/lib/python3.7/site-packages/cv2/../../lib64:
2024-03-04 21:00:21.587246: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Reading camera 62/62 [04/03 21:00:22]
6.323975610733033 cameras_extent [04/03 21:00:22]
Loading Training Cameras [04/03 21:00:22]
3it [00:03, 1.28s/it]
Loading Test Cameras [04/03 21:00:26]
8it [00:00, 12.25it/s]
Number of points at initialisation : 19397 [04/03 21:00:34]
Training progress: 1%|█ | 100/10000 [00:02<04:35, 35.97it/s, Loss=0.3308100]Traceback (most recent call last):
File "train.py", line 280, in
training(lp.extract(args), op.extract(args), pp.extract(args), args)
File "train.py", line 90, in training
render_pkg = render(viewpoint_cam, gaussians, pipe, background)
File "/mnt1/ckj/FSGS/FSGS-main/gaussian_renderer/init.py", line 131, in render
"visibility_filter": radii > 0,
RuntimeError: CUDA error: an illegal memory access was encountered
Training progress: 1%|█ | 100/10000 [00:03<05:10, 31.90it/s, Loss=0.3308100]

I have tried several methods to solve it, but unfortunately I failed. The fixes in https://github.com/graphdeco-inria/diff-gaussian-rasterization/pull/10 and https://github.com/graphdeco-inria/gaussian-splatting/issues/41#issuecomment-1784246821 just do not work for me.

Blender dataset processing

Hello, thank you for sharing this excellent work. I wonder if there is any plan to release the code related to Blender dataset processing?

Sparse or Dense Point Clouds as Input

Nice work! But I have a question about this work. I noticed that you used COLMAP for both sparse and dense reconstruction. May I ask whether you use the sparse or the dense point cloud as input for 3D Gaussian Splatting?
Looking forward to your reply!
(attached WeChat screenshot)

CUDA error: an illegal memory access was encountered

Thank you for your nice work!
I was using WSL2 on Windows 11, but I don't think that's the problem...
So I got the following CUDA error; I wonder what's going on?

/home/jx/miniconda3/envs/gs3d/lib/python3.10/site-packages/timm/models/_factory.py:117: UserWarning: Mapping deprecated model name vit_base_resnet50_384 to current vit_base_r50_s16_384.orig_in21k_ft_in1k.
  model = create_fn(
Using cache found in /home/jx/.cache/torch/hub/intel-isl_MiDaS_master
[1000, 2000, 3000, 5000, 10000]
Optimizing output/horns
Output folder: output/horns [08/01 13:59:54]
Reading camera 62/62 [08/01 13:59:54]
0it [00:00, ?it/s]6.323975610733033 cameras_extent [08/01 13:59:54]
Loading Training Cameras [08/01 13:59:54]
3it [00:00,  3.65it/s]
0it [00:00, ?it/s]Loading Test Cameras [08/01 13:59:55]
8it [00:01,  7.02it/s]
Number of points at initialisation :  37399 [08/01 14:00:01]
Training progress:   0%|          | 0/10000 [00:00<?, ?it/s]/home/jx/miniconda3/envs/gs3d/lib/python3.10/site-packages/torchmetrics/utilities/prints.py:43: UserWarning: The variance of predictions or target is close to zero. This can cause instability in Pearson correlation coefficient, leading to wrong results. Consider re-scaling the input if possible or computing using a larger dtype (currently using torch.float32).
  warnings.warn(*args, **kwargs)  # noqa: B028
Traceback (most recent call last):
  File "/mnt/c/Programs/PyCharmWorkplace/FSGS-main/train.py", line 279, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args)
  File "/mnt/c/Programs/PyCharmWorkplace/FSGS-main/train.py", line 97, in training
    loss = ((1.0 - opt.lambda_dssim) * Ll1 + opt.lambda_dssim * (1.0 - ssim(image, gt_image)))
  File "/mnt/c/Programs/PyCharmWorkplace/FSGS-main/utils/loss_utils.py", line 49, in ssim
    window = window.cuda(img1.get_device())
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Training progress:   0%|          | 0/10000 [00:01<?, ?it/s]
ERROR conda.cli.main_run:execute(124): `conda run python train.py --source_path dataset/nerf_llff_data/horns --model_path output/horns --eval --n_views 3 --sample_pseudo_interval 1` failed. (See above for error)

Process finished with exit code 1

Question for Replicating Table 4

Hello, and thank you for your insightful research. I am intrigued by the results in your paper and am attempting to replicate them, particularly for the ablation results corresponding to Table 4.

In my replication, I noticed a discrepancy in the results; the PSNR value I obtained after removing three components is 19.78, which differs from what is reported in Table 4. Could you please clarify if there are other factors, possibly not detailed in the paper, that might have contributed to this performance improvement?

Additionally, I observed in your code the use of concepts like confidence, which is different from the vanilla 3DGS. Are these variations explicitly mentioned in the paper?

                                   Paper    Replication at 10k
w/o unpooling, guidance, pseudo    17.83    19.39
w/o guidance, pseudo               18.64    -
w/o pseudo                         19.93    19.79
FSGS                               20.43    20.52

Question about the function to generate random poses

Hi,

I am interested in your fancy work. I have one question about the function generate_random_poses_llff and generate_random_poses_llff to generate random poses. I am curious to know which algorithm is related to this?

Could you please provide some insights or details about the algorithm used in the generation of random pose? I would greatly appreciate any information you can share.

I am looking forward to hearing back from you. Thank you very much.

Best

Question for Reproduce Experimental Results

Hello, and thank you for your insightful research. I am intrigued by the results in your paper and am attempting to replicate them.
I followed the instructions in your document to run the reproduction experiment on the LLFF dataset and directly used the preprocessed sparse and dense point clouds you provided. The PSNR values (18.38 and 18.31; I ran the replication experiment twice) are significantly lower than what is reported in your paper (20.43). Metrics for all scenes in the two replication experiments are provided in the attachment (reproduce.txt).

May I ask if you know the possible reasons for this? Additionally, according to the default settings in the code you provided, the image resolution is 504 × 378, which differs from the resolution reported in Table 1 (503 × 381). I don’t know if there is some misalignment.

I would be very grateful if you could clarify my confusion. Thank you again for contributing such insightful work to the community.

how to generate pose_bounds.npy

I notice that pose_bounds.npy is used in your codebase, but I didn't see any related code for generating this file from the bin or txt files.
Can you provide this file, please?
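In case it helps: if this file follows the standard LLFF poses_bounds.npy convention (an assumption), it is one row of 17 floats per image, a flattened 3x5 matrix (the 3x4 camera-to-world pose with an extra [height, width, focal] column) followed by the near/far depth bounds. A hypothetical sketch of the layout only (COLMAP poses must first be converted to LLFF's axis convention, which is not shown):

import numpy as np

def save_pose_bounds(c2w, hwf, near, far, out_path="pose_bounds.npy"):
    # c2w: (N, 3, 4) camera-to-world poses (already in LLFF's axis convention).
    # hwf: (N, 3) image height, width and focal length; near/far: (N,) depth bounds.
    n = len(c2w)
    poses = np.concatenate([c2w, hwf.reshape(n, 3, 1)], axis=2)     # (N, 3, 5)
    arr = np.concatenate([poses.reshape(n, -1),
                          np.stack([near, far], axis=1)], axis=1)   # (N, 17)
    np.save(out_path, arr)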

CUDA error when training

Thank you for your nice work!

I got a CUDA error when trying to train this model. It's fine for me to train vanilla Gaussian Splatting and some other Gaussian Splatting models. Just wondering what is going on here:

 python train.py  --source_path dataset/nerf_llff_data/horns --model_path output/horns --eval  --use_color --n_views 3
Using cache found in /home/haitian/.cache/torch/hub/intel-isl_MiDaS_master
/home/haitian/anaconda3/envs/FSGS/lib/python3.8/site-packages/timm/models/_factory.py:117: UserWarning: Mapping deprecated model name vit_base_resnet50_384 to current vit_base_r50_s16_384.orig_in21k_ft_in1k.
  model = create_fn(
Using cache found in /home/haitian/.cache/torch/hub/intel-isl_MiDaS_master
[500, 1000, 2000, 2500, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000]
Optimizing output/horns
Output folder: output/horns [12/12 15:37:55]
Tensorboard not available: not logging progress [12/12 15:37:55]
Reading camera 62/62 [12/12 15:37:56]
6.323975610733033 cameras_extent [12/12 15:37:56]
Loading Training Cameras [12/12 15:37:56]
3it [00:02,  1.16it/s]
Loading Test Cameras [12/12 15:37:58]
8it [00:00,  9.50it/s]
Number of points at initialisation :  0 [12/12 15:38:17]
Training progress:   0%|                                                                                                                                              | 0/10000 [00:00<?, ?it/s]Traceback (most recent call last):
  File "train.py", line 281, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args)
  File "train.py", line 90, in training
    render_pkg = render(viewpoint_cam, gaussians, pipe, background)
  File "/home/haitian/work/NeRF/FSGS/gaussian_renderer/__init__.py", line 94, in render
    rendered_image, radii, depth, alpha = rasterizer(
  File "/home/haitian/anaconda3/envs/FSGS/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/haitian/anaconda3/envs/FSGS/lib/python3.8/site-packages/diff_gaussian_rasterization/__init__.py", line 215, in forward
    return rasterize_gaussians(
  File "/home/haitian/anaconda3/envs/FSGS/lib/python3.8/site-packages/diff_gaussian_rasterization/__init__.py", line 32, in rasterize_gaussians
    return _RasterizeGaussians.apply(
  File "/home/haitian/anaconda3/envs/FSGS/lib/python3.8/site-packages/diff_gaussian_rasterization/__init__.py", line 92, in forward
    num_rendered, color, depth, alpha, radii, geomBuffer, binningBuffer, imgBuffer = _C.rasterize_gaussians(*args)
RuntimeError: CUDA error: invalid configuration argument

Accuracy on Blender Dataset

Nice work! But I hope somebody can assist me with a question I've encountered. I noticed that when using the Blender dataset, I achieve higher 3DGS accuracy than what is documented in your paper. I use 8 views as input and 25 as test views, with 10000 iterations. I'm puzzled by this and would like to ask if there are any possible reasons for this discrepancy.
(attached screenshot)

Depth image in Synthesized Pseudo Views

Thanks for sharing your great work.

I have a question about how to get the estimated depth image of the synthesized pseudo views.

In Section 3.4 (Optimization) of the paper, it says the Pearson correlation loss is applied to both the training views and the synthesized pseudo views. We can get the estimated depth of a training image using DPT, but how do we get the estimated depth for synthesized views?

Thanks.
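For context, a minimal sketch of running the MiDaS DPT model on an arbitrary RGB image, whether a captured training view or a rendered pseudo view, following the standard torch.hub usage (the transform name and interpolation step come from the MiDaS hub documentation, not from this repository):

import torch

midas = torch.hub.load("intel-isl/MiDaS", "DPT_Hybrid").eval().cuda()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

def estimate_relative_depth(image_np):
    # image_np: HxWx3 uint8 RGB array; returns an HxW relative (inverse-depth-like) map.
    batch = midas_transforms.dpt_transform(image_np).cuda()
    with torch.no_grad():
        pred = midas(batch)
        pred = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=image_np.shape[:2],
            mode="bicubic", align_corners=False).squeeze()
    return pred.cpu().numpy()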

Install simple-knn errors

Hi, thanks for your great work and code! I ran into a problem when using conda env create --file environment.yml to create the FSGS environment.

Processing g:\fsgs-main\submodules\simple-knn
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'done'
Building wheels for collected packages: simple-knn
  Building wheel for simple-knn (setup.py): started
  Building wheel for simple-knn (setup.py): finished with status 'error'
  error: subprocess-exited-with-error
  
  python setup.py bdist_wheel did not run successfully.
  exit code: 1
  
  [27 lines of output]
  running bdist_wheel
  C:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\utils\cpp_extension.py:411: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
    warnings.warn(msg.format('we could not find ninja.'))
  running build
  running build_ext
  C:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\utils\cpp_extension.py:813: UserWarning: The detected CUDA version (11.3) has a minor version mismatch with the version that was used to compile PyTorch (11.6). Most likely this shouldn't be a problem.
    warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
  building 'simple_knn._C' extension
  "C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\TH -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\Users\dell\anaconda3\envs\FSGS\include -IC:\Users\dell\anaconda3\envs\FSGS\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.28.29910\include" "-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\shared" /EHsc /Tpext.cpp /Fobuild\temp.win-amd64-cpython-38\Release\ext.obj /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc /wd4624 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
  ext.cpp
  C:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\c10/macros/Macros.h(143): warning C4067: unexpected tokens following preprocessor directive - expected a newline
  C:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\c10/core/TensorImpl.h(2214): warning C4805: '|': unsafe mix of type 'uintptr_t' and type 'bool' in operation
  C:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
  C:\Users\dell\anaconda3\envs\FSGS\include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
  "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\bin\nvcc" -c simple_knn.cu -o build\temp.win-amd64-cpython-38\Release\simple_knn.obj -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\TH -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\Users\dell\anaconda3\envs\FSGS\include -IC:\Users\dell\anaconda3\envs\FSGS\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.28.29910\include" "-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\shared" -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --use-local-env
  simple_knn.cu(23): warning C4005: '__CUDACC__': macro redefinition
  simple_knn.cu: note: see previous definition of '__CUDACC__'
  simple_knn.cu(23): warning C4005: '__CUDACC__': macro redefinition
  simple_knn.cu: note: see previous definition of '__CUDACC__'
  G:\FSGS-main\submodules\simple-knn\simple_knn.h(18): error: identifier "int32_t" is undefined
  
  simple_knn.cu(192): error: declaration is incompatible with "void SimpleKNN::knn(int, float3 *, float *, <error-type> *)"
  G:\FSGS-main\submodules\simple-knn\simple_knn.h(18): here
  
  2 errors detected in the compilation of "simple_knn.cu".
  simple_knn.cu
  error: command 'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.3\\bin\\nvcc.exe' failed with exit code 2
  [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for simple-knn
  Running setup.py clean for simple-knn
Failed to build simple-knn
Installing collected packages: simple-knn
  Running setup.py install for simple-knn: started
  Running setup.py install for simple-knn: finished with status 'error'
  error: subprocess-exited-with-error
  
  Running setup.py install for simple-knn did not run successfully.
  exit code: 1
  
  [43 lines of output]
  running install
  C:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\setuptools\_distutils\cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
  !!
  
          ********************************************************************************
          Please avoid running ``setup.py`` directly.
          Instead, use pypa/build, pypa/installer or other
          standards-based tools.
  
          See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
          ********************************************************************************
  
  !!
    self.initialize_options()
  running build
  running build_ext
  C:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\utils\cpp_extension.py:411: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
    warnings.warn(msg.format('we could not find ninja.'))
  C:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\utils\cpp_extension.py:813: UserWarning: The detected CUDA version (11.3) has a minor version mismatch with the version that was used to compile PyTorch (11.6). Most likely this shouldn't be a problem.
    warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
  building 'simple_knn._C' extension
  creating build
  creating build\temp.win-amd64-cpython-38
  creating build\temp.win-amd64-cpython-38\Release
  "C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\TH -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\Users\dell\anaconda3\envs\FSGS\include -IC:\Users\dell\anaconda3\envs\FSGS\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.28.29910\include" "-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\shared" /EHsc /Tpext.cpp /Fobuild\temp.win-amd64-cpython-38\Release\ext.obj /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc /wd4624 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
  ext.cpp
  C:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\c10/macros/Macros.h(143): warning C4067: unexpected tokens following preprocessor directive - expected a newline
  C:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\c10/core/TensorImpl.h(2214): warning C4805: '|': unsafe mix of type 'uintptr_t' and type 'bool' in operation
  C:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
  C:\Users\dell\anaconda3\envs\FSGS\include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
  "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\bin\nvcc" -c simple_knn.cu -o build\temp.win-amd64-cpython-38\Release\simple_knn.obj -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\TH -IC:\Users\dell\anaconda3\envs\FSGS\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\Users\dell\anaconda3\envs\FSGS\include -IC:\Users\dell\anaconda3\envs\FSGS\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.28.29910\include" "-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\shared" -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --use-local-env
  simple_knn.cu(23): warning C4005: '__CUDACC__': macro redefinition
  simple_knn.cu: note: see previous definition of '__CUDACC__'
  simple_knn.cu(23): warning C4005: '__CUDACC__': macro redefinition
  simple_knn.cu: note: see previous definition of '__CUDACC__'
  G:\FSGS-main\submodules\simple-knn\simple_knn.h(18): error: identifier "int32_t" is undefined
  
  simple_knn.cu(192): error: declaration is incompatible with "void SimpleKNN::knn(int, float3 *, float *, <error-type> *)"
  G:\FSGS-main\submodules\simple-knn\simple_knn.h(18): here
  
  2 errors detected in the compilation of "simple_knn.cu".
  simple_knn.cu
  error: command 'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.3\\bin\\nvcc.exe' failed with exit code 2
  [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

Encountered error while trying to install package.

simple-knn

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
WARNING: There was an error checking the latest version of pip.

The above is the error output. Intuitively, the compiler doesn't recognise 'int32_t', which wasn't an issue in the original version of simple-knn.
I tried varying the CUDA version to fix it, including CUDA 11.3, 11.6 and 11.8, but none of them work.
Any suggestions, please?

Question about the rasterization

Thanks for your great work!
I noticed that you compute the gradient of the weight, i.e. the value of T*alpha, when running backward::renderCUDA (around line 567 in backward.cu).
I just want to know why this gradient is added, because it is not present in the original gaussian-splatting code.

Errors when installing submodules/diff-gaussian-rasterization-confidence and submodules/simple-knn

Hello, why do I get errors when installing
submodules/diff-gaussian-rasterization-confidence
and submodules/simple-knn?
I'd like to know where the problem is.
Processing e:\fsgs\submodules\simple-knn
Preparing metadata (setup.py) ... done
Building wheels for collected packages: simple-knn
Building wheel for simple-knn (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [63 lines of output]
running bdist_wheel
E:\envs\3dgs\lib\site-packages\torch\utils\cpp_extension.py:476: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
running build
running build_ext
E:\envs\3dgs\lib\site-packages\torch\utils\cpp_extension.py:359: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified.
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
building 'simple_knn._C' extension
creating build
creating build\temp.win-amd64-cpython-39
creating build\temp.win-amd64-cpython-39\Release
"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.38.33130\bin\HostX86\x64\cl.exe" /
c /nologo /O2 /W3 /GL /DNDEBUG /MD -IE:\envs\3dgs\lib\site-packages\torch\include -IE:\envs\3dgs\lib\site-packages
\torch\include\torch\csrc\api\include -IE:\envs\3dgs\lib\site-packages\torch\include\TH -IE:\envs\3dgs\lib\site-pa
ckages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IE:\envs\3dgs\inclu
de -IE:\envs\3dgs\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.38.33130\inc
lude" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.38.33130\ATLMFC\include" "-IC:\P
rogram Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kit
s\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\um" "-IC:\Program
Files (x86)\Windows Kits\10\include\10.0.22621.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.
0.22621.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\cppwinrt" "-IC:\Program Files (
x86)\Windows Kits\NETFXSDK\4.8\include\um" /EHsc /Tpext.cpp /Fobuild\temp.win-amd64-cpython-39\Release\ext.obj /MD
/wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc /wd4624 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=C -D_GLIBCXX_USE_CXX11_ABI=0
ext.cpp
E:\envs\3dgs\lib\site-packages\torch\include\c10/macros/Macros.h(138): warning C4067: unexpected tokens following preprocessor directive - expected a newline
"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc" -c simple_knn.cu -o build\temp.win-amd64
-cpython-39\Release\simple_knn.obj -IE:\envs\3dgs\lib\site-packages\torch\include -IE:\envs\3dgs\lib\site-packages
\torch\include\torch\csrc\api\include -IE:\envs\3dgs\lib\site-packages\torch\include\TH -IE:\envs\3dgs\lib\site-pa
ckages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IE:\envs\3dgs\inclu
de -IE:\envs\3dgs\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.38.33130\inc
lude" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.38.33130\ATLMFC\include" "-IC:\P
rogram Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kit
s\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\um" "-IC:\Program
Files (x86)\Windows Kits\10\include\10.0.22621.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.
0.22621.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\cppwinrt" "-IC:\Program Files (
x86)\Windows Kits\NETFXSDK\4.8\include\um" -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcud
afe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcud
afe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018
-Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -D

CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERA
TORS__ --expt-relaxed-constexpr -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 --use-local-env
simple_knn.cu(23): warning C4005: '__CUDACC__': macro redefinition
simple_knn.cu(23): note: see previous declaration of '__CUDACC__' on the command line
simple_knn.cu(23): warning C4005: '__CUDACC__': macro redefinition
simple_knn.cu(23): note: see previous declaration of '__CUDACC__' on the command line
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include\cuda\std\detail\libcxx\include\support\atomic\atomic_msvc.h(15): warning C4005: '_Compiler_barrier': macro redefinition
C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.38.33130/include\xatomic.h(55): note: see previous definition of '_Compiler_barrier'
E:\FSGS\submodules\simple-knn\simple_knn.h(18): error: identifier "int32_t" is undefined

  C:/Program Files (x86)/Windows Kits/10//include/10.0.22621.0//um\winnt.h(24437): warning #174-D: expression has no effect

  C:/Program Files (x86)/Windows Kits/10//include/10.0.22621.0//um\winuser.h(14668): warning #108-D: signed bit field of length 1

  C:/Program Files (x86)/Windows Kits/10//include/10.0.22621.0//um\winuser.h(14669): warning #108-D: signed bit field of length 1

  C:/Program Files (x86)/Windows Kits/10//include/10.0.22621.0//um\wincrypt.h(21836): warning #1835-D: attribute "dllimport" does not apply here

  C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared\rpcndr.h(730): warning #108-D: signed bit field of length 1

  C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared\rpcndr.h(731): warning #108-D: signed bit field of length 1

  C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared\rpcndr.h(732): warning #108-D: signed bit field of length 1

  C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared\rpcndr.h(733): warning #108-D: signed bit field of length 1

  C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared\rpcndr.h(734): warning #108-D: signed bit field of length 1

  C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared\rpcndr.h(735): warning #108-D: signed bit field of length 1

  C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared\rpcndr.h(736): warning #108-D: signed bit field of length 1

  C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared\rpcndr.h(737): warning #108-D: signed bit field of length 1

  C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared\rpcndr.h(738): warning #108-D: signed bit field of length 1

  C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared\rpcndr.h(739): warning #108-D: signed bit field of length 1

  C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared\rpcndr.h(740): warning #108-D: signed bit field of length 1

  C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared\rpcndr.h(741): warning #108-D: signed bit field of length 1

  C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared\rpcndr.h(742): warning #108-D: signed bit field of length 1

  simple_knn.cu(192): error: declaration is incompatible with "void SimpleKNN::knn(int, float3 *, float *, <error-type> *)"

ERROR: Failed building wheel for simple-knn
Running setup.py clean for simple-knn
Failed to build simple-knn
ERROR: Could not build wheels for simple-knn, which is required to install pyproject.toml-based projects


Questions of sparse view camera poses

Hello, I really appreciate your work. I don't know much about work in the field of sparse-view 3D reconstruction. After reading your code and comparing it with the code of other related works, I found that the camera poses of the training and test views for sparse-view 3D reconstruction all come from the original dataset, i.e. the camera poses calibrated using all views. Is my understanding correct? Thank you very much!

white background for blender dataset

Thank you for your excellent work!
When dealing with the Blender dataset, I cannot get a white background even if I set white_background to True. It seems that the program cannot distinguish between the background and the central object. Looking forward to your reply, thank you!!
(attached image: mate)

Question about depth calculation at different stages

Thank you for your amazing work! Your idea about sparse-view input and depth guidance has been very inspiring to us. However, we have some questions regarding depth estimation:

(1) After 2000 iterations, is the depth of the synthesized pseudo-views estimated using the MiDaS model's estimate_depth function?
(2) Before 2000 iterations, how was the depth of the given sparse views obtained? Was it also estimated using the MiDaS model? However, referring to the figure below, I did not find the depth-calculation step. Could you please tell me how the depth of the given sparse views was obtained before the pseudo-views are synthesized?

(attached figure)

(3) Throughout the entire 10000 iterations, was all render_depth computed by the relevant code in diff_gaussian_rasterization? Does this mean that you added a depth attribute to diff_gaussian_rasterization on top of the original 3DGS, along with the corresponding calculation?

Invalid dependencies in environment.yml

In commit faa0bc5, @henrypearce4D added dependencies which are not valid and do not appear in the conda channels listed in environment.yml.

As a result, it is not possible to run the code. The culprit packages are:

  • imageio=2.31.2 (imageio=2.31.4 exists, though)
  • open3d=0.17.0 (in the channel open3d-admin, which is not listed in the yml, there is an open3d=0.15.1)
  • opencv_python=4.8.1.78 (no trace of opencv_python anywhere; there is, for example, opencv=4.6.0 in places, but installation fails with it)

It seems @henrypearce4D has access to some private channels unlisted here or something?

Question for the code of the DPT

Hello:
Great paper. I saw that you utilize the pre-trained Dense Prediction Transformer (DPT) model for zero-shot monocular depth estimation, but I cannot find it in the code. Could you point out where it is?
Thanks
