
Single Image to 3D using Cross-Domain Diffusion for 3D Generation

Home Page: https://www.xxlong.site/Wonder3D/

License: GNU Affero General Public License v3.0

Python 99.40% Shell 0.30% Dockerfile 0.30%
3d-generation 3d-aigc single-image-to-3d 3dgeneration

wonder3d's Introduction

中文版本 (Chinese version available)

Wonder3D

Single Image to 3D using Cross-Domain Diffusion (CVPR 2024 Highlight)

Wonder3D reconstructs highly detailed textured meshes from a single-view image in only 2 to 3 minutes. It first generates consistent multi-view normal maps with corresponding color images via a cross-domain diffusion model, and then leverages a novel normal fusion method to achieve fast and high-quality reconstruction.

News

  • Fixed a severe training bug: zero_init_camera_projection in configs/train/stage1-mix-6views-lvis.yaml should be False. Otherwise, the domain control and pose control will be invalid during training.
  • 2024.03.19 Check out our new model GeoWizard, which jointly produces high-fidelity depth and normal maps from single images.
  • 2024.05.24 We release CraftsMan3D, a large native 3D diffusion model that is trained directly on 3D representations and is therefore capable of producing complex structures.
  • 2024.05.29 We release Era3D, a more powerful multi-view cross-domain diffusion model that jointly produces 512x512 color images and normal maps. More importantly, Era3D automatically estimates the focal length and elevation of the input image, which avoids geometry distortions.

Usage

# First clone the repo, and run these commands from the repo root.

import torch
import requests
import numpy as np
from PIL import Image
from torchvision.utils import make_grid, save_image
from diffusers import DiffusionPipeline  # only tested on diffusers[torch]==0.19.3; newer versions may conflict

def load_wonder3d_pipeline():
    pipeline = DiffusionPipeline.from_pretrained(
        'flamehaze1115/wonder3d-v1.0',  # or use the local checkpoint './ckpts'
        custom_pipeline='flamehaze1115/wonder3d-pipeline',
        torch_dtype=torch.float16,
    )

    # enable xformers memory-efficient attention
    pipeline.unet.enable_xformers_memory_efficient_attention()

    if torch.cuda.is_available():
        pipeline.to('cuda:0')
    return pipeline

pipeline = load_wonder3d_pipeline()

# Download an example image.
cond = Image.open(requests.get("https://d.skis.ltd/nrp/sample-data/lysol.png", stream=True).raw)

# The object should be centered and occupy about 80% of the image height;
# here we simply drop the alpha channel and keep the RGB channels.
cond = Image.fromarray(np.array(cond)[:, :, :3])

# Run the pipeline! The result holds 6 normal maps and 6 color images.
images = pipeline(cond, num_inference_steps=20, output_type='pt', guidance_scale=1.0).images

# Arrange the 12 views into a 2x6 grid. Note that make_grid takes nrow
# (images per row) only; it has no ncol argument, and the 'pt' output
# tensors are already in [0, 1].
result = make_grid(images, nrow=6, padding=0)

save_image(result, 'result.png')
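
The grid above packs all twelve views into one image. If you prefer per-view files, here is a minimal sketch (not part of the repo); it assumes the pipeline returns the six normal maps first and the six color images second, in the azimuth order 0, 45, 90, 180, -90, -45 degrees described in the FAQ below.

# Hedged sketch: save each generated view separately.
# Assumption: images[:6] are normal maps and images[6:] are color images,
# ordered front, front_right, right, back, left, front_left.
view_names = ['front', 'front_right', 'right', 'back', 'left', 'front_left']
normals, colors = images[:6], images[6:]
for name, normal, color in zip(view_names, normals, colors):
    save_image(normal, f'normal_{name}.png')
    save_image(color, f'color_{name}.png')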

Collaborations

Our overarching mission is to enhance the speed, affordability, and quality of 3D AIGC, making the creation of 3D content accessible to all. While significant progress has been achieved in recent years, we acknowledge there is still a substantial journey ahead. We enthusiastically invite you to engage in discussions and explore potential collaborations in any capacity. If you're interested in connecting or partnering with us, please don't hesitate to reach out via email ([email protected]).

News

  • 2024.02 We release the training code. You are welcome to train Wonder3D on your own data.
  • 2023.10 We release the inference model and code.

Preparation for inference

Linux System Setup.

conda create -n wonder3d
conda activate wonder3d
pip install -r requirements.txt
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
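
After installation, a quick sanity check (our suggestion, not a repo script) verifies that PyTorch sees the GPU and that the tiny-cuda-nn bindings built correctly:

import torch
import tinycudann  # raises ImportError if the tiny-cuda-nn bindings failed to build

print(torch.__version__, 'CUDA available:', torch.cuda.is_available())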

Windows System Setup.

Please switch to branch main-windows to see details of windows setup.

Docker Setup

see docker/README.MD

Training

Here we provide two training scripts: train_mvdiffusion_image.py and train_mvdiffusion_joint.py.

The training has two stages: 1) first train the multi-view attention modules, randomly taking a normal or color flag; 2) add the cross-domain attention modules into the SD model and optimize only the newly added parameters.

You need to modify root_dir, which points to your training data, in the config files configs/train/stage1-mix-6views-lvis.yaml and configs/train/stage2-joint-6views-lvis.yaml accordingly.

# stage 1:
accelerate launch --config_file 8gpu.yaml train_mvdiffusion_image.py --config configs/train/stage1-mix-6views-lvis.yaml

# stage 2
accelerate launch --config_file 8gpu.yaml train_mvdiffusion_joint.py --config configs/train/stage2-joint-6views-lvis.yaml
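
If you prefer to script the root_dir edit mentioned above instead of editing the YAML by hand, a minimal sketch using OmegaConf (which the repo already uses for its configs) could look like this; the data path is a placeholder, and it assumes root_dir sits at the top level of the config:

from omegaconf import OmegaConf

cfg = OmegaConf.load('configs/train/stage1-mix-6views-lvis.yaml')
cfg.root_dir = '/path/to/your/training/data'  # placeholder: point this at your rendered data
OmegaConf.save(cfg, 'configs/train/stage1-mix-6views-lvis.yaml')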

Prepare the training data

see render_codes/README.md.

Inference

  1. Optional. If you have trouble connecting to Hugging Face, make sure you have downloaded the following models manually. Download the checkpoints and put them into the root folder.

If you are in mainland China, you may download via aliyun.

Wonder3D
|-- ckpts
    |-- unet
    |-- scheduler
    |-- vae
    ...

Then modify the file ./configs/mvdiffusion-joint-ortho-6views.yaml, setting pretrained_model_name_or_path="./ckpts".
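
To script the checkpoint download, one option (a sketch on our part, assuming huggingface_hub is installed) is:

from huggingface_hub import snapshot_download

# Download the Wonder3D checkpoints into ./ckpts so the config edit above finds them.
snapshot_download('flamehaze1115/wonder3d-v1.0', local_dir='./ckpts')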

  2. Download the SAM model. Put it into the sam_pt folder.
Wonder3D
|-- sam_pt
    |-- sam_vit_h_4b8939.pth
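
You can also fetch the SAM checkpoint programmatically; a minimal sketch using the official download URL (the same one given in the Windows guide further down this page):

import os
import urllib.request

os.makedirs('sam_pt', exist_ok=True)
urllib.request.urlretrieve(
    'https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth',
    'sam_pt/sam_vit_h_4b8939.pth',
)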
  3. Predict the foreground mask as the alpha channel. We use Clipdrop to segment the foreground object interactively. You may also use rembg to remove the backgrounds:
# !pip install rembg
import rembg
from PIL import Image

input_image = Image.open('your_img_file.png')  # replace with your image path
output_image = rembg.remove(input_image)       # RGBA result with the background removed
output_image.show()
  4. Run Wonder3D to produce multi-view-consistent normal maps and color images. You can then check the results in the folder ./outputs. (We use rembg to remove the backgrounds of the results, but the segmentations are not always perfect. Consider using Clipdrop to get masks for the generated normal maps and color images, since the quality of the masks significantly influences the quality of the reconstructed mesh.)
accelerate launch --config_file 1gpu.yaml test_mvdiffusion_seq.py \
            --config configs/mvdiffusion-joint-ortho-6views.yaml validation_dataset.root_dir={your_data_path} \
            validation_dataset.filepaths=['your_img_file'] save_dir={your_save_path}

See an example:

accelerate launch --config_file 1gpu.yaml test_mvdiffusion_seq.py \
            --config configs/mvdiffusion-joint-ortho-6views.yaml validation_dataset.root_dir=./example_images \
            validation_dataset.filepaths=['owl.png'] save_dir=./outputs

Interactive inference: run your local gradio demo. (This only generates normals and colors, without reconstruction.)

python gradio_app_mv.py   # generate multi-view normals and colors
  5. Mesh Extraction

Instant-NSR Mesh Extraction

cd ./instant-nsr-pl
python launch.py --config configs/neuralangelo-ortho-wmask.yaml --gpu 0 --train dataset.root_dir=../{your_save_path}/cropsize-{crop_size}-cfg{guidance_scale:.1f}/ dataset.scene={scene}

See an example:

cd ./instant-nsr-pl
python launch.py --config configs/neuralangelo-ortho-wmask.yaml --gpu 0 --train dataset.root_dir=../outputs/cropsize-192-cfg1.0/ dataset.scene=owl

Our generated normals and color images are defined in orthographic views, so the reconstructed mesh is also in orthographic camera space. If you use MeshLab to view the meshes, click Toggle Orthographic Camera in the View tab.

Interactive inference: run your local gradio demo. (This first generates normals and colors, and then does the reconstruction; there is no need to run gradio_app_mv.py first.)

python gradio_app_recon.py   

NeuS-based Mesh Extraction

Since there are many complaints about the Windows setup of instant-nsr-pl, we also provide a NeuS-based reconstruction, which may avoid those requirement problems.

NeuS consumes less GPU memory and favors smooth surfaces without parameter tuning. However, NeuS takes more time and its texture may be less sharp. If you are not sensitive to running time, we recommend NeuS for optimization due to its robustness.

cd ./NeuS
bash run.sh output_folder_path scene_name 

Common questions

Q: Tips to get better results?

  1. Wonder3D is sensitive to the facing direction of the input image. In our experiments, front-facing images always lead to good reconstructions.
  2. Limited by resources, the current implementation only supports a limited number of views (6) and a low resolution (256x256). Any input image is first resized to 256x256 for generation, so images that keep clear and sharp features after such downsampling lead to good results.
  3. Images with occlusions will cause worse reconstructions, since 6 views cannot cover the complete object. Images with fewer occlusions lead to better results.
  4. Increase the optimization steps in instant-nsr-pl: change trainer.max_steps: 3000 in instant-nsr-pl/configs/neuralangelo-ortho-wmask.yaml to more steps, such as trainer.max_steps: 10000. Longer optimization leads to better texture; see the sketch after this list.
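
For tip 4, a tiny sketch of the YAML edit via OmegaConf (editing the file in a text editor works just as well; this assumes the config loads cleanly with OmegaConf):

from omegaconf import OmegaConf

cfg = OmegaConf.load('instant-nsr-pl/configs/neuralangelo-ortho-wmask.yaml')
cfg.trainer.max_steps = 10000  # longer optimization leads to better texture
OmegaConf.save(cfg, 'instant-nsr-pl/configs/neuralangelo-ortho-wmask.yaml')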

Q: What are the elevation and azimuth angles of the generated views?

A: Unlike prior works such as Zero123, SyncDreamer and One2345, which adopt an object/world coordinate system, our views are defined in the camera system of the input image. The six views lie in the plane with 0 elevation in the camera system of the input image, so we do not need to estimate an elevation angle for the input image. The azimuth angles of the six views are 0, 45, 90, 180, -90 and -45 degrees, respectively.

Q: What is the focal length of the generated views?

A: We assume the input images are captured by an orthographic camera, so the generated views are also in orthographic space. This design helps our model generalize strongly to unreal images, but it may sometimes suffer from focal-length distortions on real captured images.

Details about the camera system and camera poses

In practice, the target object is assumed to be placed along the gravity direction.

  1. Canonical coordinate system. Some prior works (e.g. MVDream and SyncDreamer) adopt a shared canonical system for all objects, whose axis $Z_c$ shares the same direction as gravity (a).
  2. Input-view-related system. Wonder3D adopts an independent coordinate system for each object that is related to the input view. Its $Z_v$ and $X_v$ axes are aligned with the V and U dimensions of the 2D input image space, and its $Y_v$ axis is perpendicular to the 2D image plane and passes through the center of the ROI (region of interest) (b).
  3. Camera poses. Wonder3D outputs 6 views $\{v_i\}, i=0,\dots,5$, that are sampled on the $X_vOY_v$ plane of the input-view-related system with a fixed radius, where the front view $v_0$ is initialized as the input view and the other views are sampled at pre-defined azimuth angles (see (b); a concrete sketch follows below).
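
As a concrete illustration (our sketch, not code from the repo), the six camera positions can be written down directly in the input-view-related system. The axis convention below is an assumption that follows the description above: azimuth is measured in the $X_vOY_v$ plane, with the front view on the $Y_v$ axis.

import numpy as np

# Azimuth angles of the six views (degrees), elevation fixed at 0 (see the FAQ above).
azimuths_deg = [0, 45, 90, 180, -90, -45]
radius = 1.0  # fixed radius; the actual value is a free scale under an orthographic camera

for a in azimuths_deg:
    t = np.deg2rad(a)
    # Camera position on the X_v-O-Y_v plane; azimuth 0 gives the front view v_0.
    position = radius * np.array([np.sin(t), np.cos(t), 0.0])
    print(f'azimuth {a:>4} deg: position {np.round(position, 3)}')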

Acknowledgement

We have borrowed code extensively from the following repositories. Many thanks to the authors for sharing their code.

License

Wonder3D is under AGPL-3.0, so any downstream solution or product (including cloud services) that includes Wonder3D code or a trained model (whether pretrained or custom trained) must be open-sourced to comply with the AGPL conditions. If you have any questions about the usage of Wonder3D, please contact us first.

Citation

If you find this repository useful in your project, please cite the following work. :)

@article{long2023wonder3d,
  title={Wonder3D: Single Image to 3D using Cross-Domain Diffusion},
  author={Long, Xiaoxiao and Guo, Yuan-Chen and Lin, Cheng and Liu, Yuan and Dou, Zhiyang and Liu, Lingjie and Ma, Yuexin and Zhang, Song-Hai and Habermann, Marc and Theobalt, Christian and others},
  journal={arXiv preprint arXiv:2310.15008},
  year={2023}
}

wonder3d's People

Contributors

athinkingneal, flamehaze1115, lqz2, xxlong0

wonder3d's Issues

I updated Python and PyTorch, but it still appears that I have an old version, and there are more errors

C:\Users\USUARIO\Desktop\wonder3d>accelerate launch --config_file 1gpu.yaml test_mvdiffusion_seq.py
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.1.0+cu118)
Python 3.11.6 (you have 3.11.5)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Traceback (most recent call last):
File "C:\Users\USUARIO\Desktop\wonder3d\test_mvdiffusion_seq.py", line 40, in <module>
from mvdiffusion.models.unet_mv2d_condition import UNetMV2DConditionModel
File "C:\Users\USUARIO\Desktop\wonder3d\mvdiffusion\models\unet_mv2d_condition.py", line 47, in <module>
from diffusers.utils import (
ImportError: cannot import name 'is_safetensors_available' from 'diffusers.utils' (C:\Users\USUARIO\anaconda3\Lib\site-packages\diffusers\utils\__init__.py)
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\USUARIO\anaconda3\Scripts\accelerate.exe\__main__.py", line 7, in <module>
File "C:\Users\USUARIO\anaconda3\Lib\site-packages\accelerate\commands\accelerate_cli.py", line 47, in main
args.func(args)
File "C:\Users\USUARIO\anaconda3\Lib\site-packages\accelerate\commands\launch.py", line 994, in launch_command
simple_launcher(args)
File "C:\Users\USUARIO\anaconda3\Lib\site-packages\accelerate\commands\launch.py", line 636, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\Users\USUARIO\anaconda3\python.exe', 'test_mvdiffusion_seq.py', '\']' returned non-zero exit status 1.

pred_type problem

Thank you for your great work! I'm trying to replace the pred_type "joint" with "normal" in your provided yaml file, but it didn't work. I attach the error below:
(error screenshot attached)

ERROR: the following arguments are required: --config

I get this when I try to run it. I hope you can help me; I have solved many things with GPT, but I can't solve this one.

Microsoft Windows [Version 10.0.19044.1706]
(c) Microsoft Corporation. All rights reserved.

C:\Users\USUARIO\Desktop\wonder3d>@echo off
set PYTHON_PATH=C:\Users\USUARIO\AppData\Local\Programs\Python\Python311\python.exe
set SCRIPT_PATH=C:\Users\USUARIO\Desktop\wonder3d\test_mvdiffusion_seq.py

%PYTHON_PATH% %SCRIPT_PATH%
usage: test_mvdiffusion_seq.py [-h] --config CONFIG
test_mvdiffusion_seq.py: error: the following arguments are required: --config

NeuS Results/Settings

I realized that the NeuS in this repo is mostly, if not entirely, CPU based, which is perfect. I'm running it at a batch size of 512 right now and will write up a little more here. I changed the title to better reflect something that I think would be helpful info/insight for folks, and will edit this once training is done on my subject.

Cannot run with a local picture T.T

When I replace an uploaded image with a new one, a new bug appears. I have checked that the image is in RGB format.

Traceback (most recent call last):
File "/home/gpu/data/Wonder3D/test_mvdiffusion_seq.py", line 333, in <module>
main(cfg)
File "/home/gpu/data/Wonder3D/test_mvdiffusion_seq.py", line 269, in main
validation_dataset = MVDiffusionDataset(
File "/home/gpu/data/Wonder3D/mvdiffusion/data/single_image_dataset.py", line 138, in __init__
image, alpha = self.load_image(os.path.join(self.root_dir, file), bg_color, return_type='pt')
File "/home/gpu/data/Wonder3D/mvdiffusion/data/single_image_dataset.py", line 205, in load_image
alpha_np = np.asarray(image_input)[:, :, 3]
IndexError: index 3 is out of bounds for axis 2 with size 3
Traceback (most recent call last):
File "/home/gpu/data/Wonder3D/wonder3d/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/gpu/data/Wonder3D/wonder3d/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/home/gpu/data/Wonder3D/wonder3d/lib/python3.10/site-packages/accelerate/commands/launch.py", line 994, in launch_command
simple_launcher(args)
File "/home/gpu/data/Wonder3D/wonder3d/lib/python3.10/site-packages/accelerate/commands/launch.py", line 636, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/gpu/data/Wonder3D/wonder3d/bin/python3', 'test_mvdiffusion_seq.py', '--config', './configs/mvdiffusion-joint-ortho-6views.yaml']' returned non-zero exit status 1.

Depth Map experimentation

Great work!

I'm trying to experiment with depth maps. Usually for 3D reconstruction it's great to have a normals + depth pair (like in the humanNORM paper), but adding a 3rd domain with rendered depthmaps in Wonder3D might be too intensive.

In your opinion, if you try only to create the best geometry possible and ignore textures, is RGB+Normals better than RGB+Depth? Or is Stable Diffusion very RGB-dependent?

FileNotFoundError: [Errno 2] No such file or directory: 'sam_pt\\sam_vit_h_4b8939.pth'

Why am I getting this error? Does anybody know the cause? Full error:

" File "gradio_app.py", line 351, in
fire.Fire(run_demo)
File "C:\Users\elio\anaconda3\envs\venv_wonder3d\lib\site-packages\fire\core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "C:\Users\elio\anaconda3\envs\venv_wonder3d\lib\site-packages\fire\core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "C:\Users\elio\anaconda3\envs\venv_wonder3d\lib\site-packages\fire\core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "gradio_app.py", line 268, in run_demo
predictor = sam_init()
File "gradio_app.py", line 67, in sam_init
sam = sam_model_registrymodel_type.to(device=f"cuda:{_GPU_ID}")
File "C:\Users\elio\anaconda3\envs\venv_wonder3d\lib\site-packages\segment_anything\build_sam.py", line 15, in build_sam_vit_h
return _build_sam(
File "C:\Users\elio\anaconda3\envs\venv_wonder3d\lib\site-packages\segment_anything\build_sam.py", line 104, in _build_sam
with open(checkpoint, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'sam_pt\sam_vit_h_4b8939.pth'"

About max_steps

Hey, I've generated the mesh! Thanks.

I want to ask: is 3000 max_steps enough for most cases, or is more better? I want to get the best quality.

Error running gradio_app.py

Traceback (most recent call last):
File "/mnt/d/AIGC/Wonder3D/gradio_app.py", line 348, in <module>
fire.Fire(run_demo)
File "/mnt/d/AIGC/Wonder3D/venv/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/mnt/d/AIGC/Wonder3D/venv/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/mnt/d/AIGC/Wonder3D/venv/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/mnt/d/AIGC/Wonder3D/gradio_app.py", line 281, in run_demo
input_image = gr.Image(type='pil', image_mode='RGBA', height=320, label='Input image', tool=None)
File "/mnt/d/AIGC/Wonder3D/venv/lib/python3.10/site-packages/gradio/component_meta.py", line 145, in wrapper
return fn(self, **kwargs)
TypeError: Image.__init__() got an unexpected keyword argument 'tool'

Gradio demo locally

The Gradio demo looks slick on Hugging Face Spaces! It would be amazing to have an option to run this demo locally, with instructions on how to run it in the project readme.

For example, for Zero123plus, the demo file and instructions in the readme can be found here.

I have this error: ImportError: cannot import name 'is_safetensors_available' from 'diffusers.utils'

Traceback (most recent call last):
File "C:\Users\USUARIO\Desktop\wonder3d\test_mvdiffusion_seq.py", line 40, in <module>
from mvdiffusion.models.unet_mv2d_condition import UNetMV2DConditionModel
File "C:\Users\USUARIO\Desktop\wonder3d\mvdiffusion\models\unet_mv2d_condition.py", line 47, in <module>
from diffusers.utils import (
ImportError: cannot import name 'is_safetensors_available' from 'diffusers.utils' (C:\Users\USUARIO\AppData\Local\Programs\Python\Python311\Lib\site-packages\diffusers\utils\__init__.py)

Windows installation and tiny-cuda-nn guide that worked for me!

conda create -n wonder3d python=3.8.10
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
pip install fire diffusers==0.19.3 transformers bitsandbytes accelerate gradio rembg segment_anything
pip install einops omegaconf pytorch-lightning==1.9.5 torch_efficient_distloss nerfacc==0.3.3 PyMCubes trimesh
pip install https://download.pytorch.org/whl/cu118/xformers-0.0.22.post4%2Bcu118-cp38-cp38-win_amd64.whl
pip install -U tensorboardX
pip install -U tensorboard

here is tiny cudann installation for windows

Activate the conda environment; for us it will be:
conda activate wonder3d
Then run the following commands. Make sure you change the Microsoft Visual Studio version to the one you have: you can search for vcvars32.bat, change the directory I provided to the matching one, and add x64 at the end, like this:

"\Program Files\Microsoft Visual Studio\2022\community\vc\Auxiliary\Build\vcvars32.bat" x64
After that, run this command:
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

Now everything should work correctly.

To use the Gradio app you will face an issue: sam_vit_h_4b8939.pth is missing. That's because the code can't create a folder and download the model into it. To fix it, follow these steps:

Open the Wonder3D folder that you cloned and create a folder named
sam_pt
then download this model and put it there:
https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth

You will get the link for running locally. You will also get another error from Gradio, but it's related to the repo and the dev is working on it.

I get a series of errors after using the latest code

Traceback (most recent call last):
File "/mnt/d/AIGC/Wonder3D/test_mvdiffusion_seq.py", line 333, in <module>
main(cfg)
File "/mnt/d/AIGC/Wonder3D/test_mvdiffusion_seq.py", line 260, in main
logger.warn(
NameError: name 'logger' is not defined
Traceback (most recent call last):
File "/mnt/d/AIGC/Wonder3D/venv/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/mnt/d/AIGC/Wonder3D/venv/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/mnt/d/AIGC/Wonder3D/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 994, in launch_command
simple_launcher(args)
File "/mnt/d/AIGC/Wonder3D/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 636, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/mnt/d/AIGC/Wonder3D/venv/bin/python', 'test_mvdiffusion_seq.py', '--config', 'configs/mvdiffusion-joint-ortho-6views.yaml']' returned non-zero exit status 1.

Traceback (most recent call last):
File "/mnt/d/AIGC/Wonder3D/test_mvdiffusion_seq.py", line 335, in <module>
main(cfg)
File "/mnt/d/AIGC/Wonder3D/test_mvdiffusion_seq.py", line 271, in main
validation_dataset = MVDiffusionDataset(
File "/mnt/d/AIGC/Wonder3D/mvdiffusion/data/single_image_dataset.py", line 146, in __init__
image, alpha = self.load_image(os.path.join(self.root_dir, file), bg_color, return_type='pt')
AttributeError: 'SingleImageDataset' object has no attribute 'root_dir'
Traceback (most recent call last):
File "/mnt/d/AIGC/Wonder3D/venv/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/mnt/d/AIGC/Wonder3D/venv/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/mnt/d/AIGC/Wonder3D/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 994, in launch_command
simple_launcher(args)
File "/mnt/d/AIGC/Wonder3D/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 636, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/mnt/d/AIGC/Wonder3D/venv/bin/python', 'test_mvdiffusion_seq.py', '--config', 'configs/mvdiffusion-joint-ortho-6views.yaml']' returned non-zero exit status 1.

Traceback (most recent call last):
File "/mnt/d/AIGC/Wonder3D/test_mvdiffusion_seq.py", line 335, in <module>
main(cfg)
File "/mnt/d/AIGC/Wonder3D/test_mvdiffusion_seq.py", line 271, in main
validation_dataset = MVDiffusionDataset(
File "/mnt/d/AIGC/Wonder3D/mvdiffusion/data/single_image_dataset.py", line 148, in __init__
image, alpha = self.load_image(os.path.join(self.root_dir, file), bg_color, return_type='pt')
File "/mnt/d/AIGC/Wonder3D/mvdiffusion/data/single_image_dataset.py", line 212, in load_image
image_input = Image.open(img_path)
AttributeError: 'NoneType' object has no attribute 'open'
Traceback (most recent call last):
File "/mnt/d/AIGC/Wonder3D/venv/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/mnt/d/AIGC/Wonder3D/venv/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/mnt/d/AIGC/Wonder3D/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 994, in launch_command
simple_launcher(args)
File "/mnt/d/AIGC/Wonder3D/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 636, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/mnt/d/AIGC/Wonder3D/venv/bin/python', 'test_mvdiffusion_seq.py', '--config', 'configs/mvdiffusion-joint-ortho-6views.yaml']' returned non-zero exit status 1.

ModuleNotFoundError: No module named 'utils.misc'

I set up the environment and requirements plus tiny-cuda-nn per the instructions, but I'm getting this error:

(wonder3d) C:\Users\User\Documents\Image_Scripts\Wonder3D>bash run_test.sh
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\Users\User\Documents\Image_Scripts\Wonder3D\test_mvdiffusion_seq.py:321 in <module> │
│ │
│ 318 │ parser.add_argument('--config', type=str, required=True) │
│ 319 │ args, extras = parser.parse_known_args() │
│ 320 │ │
│ ❱ 321 │ from utils.misc import load_config │
│ 322 │ │
│ 323 │ # parse YAML config to OmegaConf │
│ 324 │ cfg = load_config(args.config, cli_args=extras) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ModuleNotFoundError: No module named 'utils.misc'
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\Users\User\AppData\Local\Programs\Python\Python310\lib\runpy.py:196 in │
│ _run_module_as_main │
│ │
│ 193 │ main_globals = sys.modules["__main__"].__dict__ │
│ 194 │ if alter_argv: │
│ 195 │ │ sys.argv[0] = mod_spec.origin │
│ ❱ 196 │ return _run_code(code, main_globals, None, │
│ 197 │ │ │ │ │ "__main__", mod_spec) │
│ 198 │
│ 199 def run_module(mod_name, init_globals=None, │
│ │
│ C:\Users\User\AppData\Local\Programs\Python\Python310\lib\runpy.py:86 in _run_code │
│ │
│ 83 │ │ │ │ │ loader = loader, │
│ 84 │ │ │ │ │ package = pkg_name, │
│ 85 │ │ │ │ │ spec = mod_spec) │
│ ❱ 86 │ exec(code, run_globals) │
│ 87 │ return run_globals │
│ 88 │
│ 89 def _run_module_code(code, init_globals=None, │
│ │
│ in <module>:7 │
│ │
│ 4 from accelerate.commands.accelerate_cli import main │
│ 5 if __name__ == '__main__': │
│ 6 │ sys.argv[0] = re.sub(r'(-script.pyw|.exe)?$', '', sys.argv[0]) │
│ ❱ 7 │ sys.exit(main()) │
│ 8 │
│ │
│ C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\ │
│ accelerate_cli.py:45 in main │
│ │
│ 42 │ │ exit(1) │
│ 43 │ │
│ 44 │ # Run │
│ ❱ 45 │ args.func(args) │
│ 46 │
│ 47 │
│ 48 if __name__ == "__main__": │
│ │
│ C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\ │
│ launch.py:918 in launch_command │
│ │
│ 915 │ elif defaults is not None and defaults.compute_environment == ComputeEnvironment.AMA │
│ 916 │ │ sagemaker_launcher(defaults, args) │
│ 917 │ else: │
│ ❱ 918 │ │ simple_launcher(args) │
│ 919 │
│ 920 │
│ 921 def main(): │
│ │
│ C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\ │
│ launch.py:580 in simple_launcher │
│ │
│ 577 │ process.wait() │
│ 578 │ if process.returncode != 0: │
│ 579 │ │ if not args.quiet: │
│ ❱ 580 │ │ │ raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd) │
│ 581 │ │ else: │
│ 582 │ │ │ sys.exit(1) │
│ 583 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
CalledProcessError: Command '['C:\Users\User\AppData\Local\Programs\Python\Python310\python.exe',
'test_mvdiffusion_seq.py', '--config', 'configs/mvdiffusion-joint-ortho-6views.yaml']' returned non-zero exit status 1.

Same error if I run: accelerate launch --config_file 1gpu.yaml test_mvdiffusion_seq.py --config mvdiffusion-joint-ortho-6views.yaml

instant-nsr-pl error!

I optimized the yaml file and still get this error. I have an RTX 3060 Ti with 8 GB VRAM.

(wonder) C:\Users\Genesis\github\Wonder3D\instant-nsr-pl>python launch.py --config configs/neuralangelo-ortho-wmask_oom.yaml --gpu 0 --train dataset.root_dir=C:\Users\Genesis\github\Wonder3D\instant-nsr-pl\owl dataset.scene=owl
Global seed set to 42
C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\nn\utils\weight_norm.py:30: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
Using finite difference to compute gradients with eps=progressive
Using 16bit None Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Trainer(limit_train_batches=1.0) was configured so 100% of the batches per epoch will be used..
You are using a CUDA device ('NVIDIA GeForce RTX 3060 Ti') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
C:\Users\Genesis\github\Wonder3D\instant-nsr-pl\owl\owl
(1024, 1024, 3)
the loaded normals are defined in the system of front view
[the three lines above repeat 12 times in the log, once per loaded view]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name  | Type             | Params
--------------------------------------------
0 | cos   | CosineSimilarity | 0
1 | model | NeuSModel        | 7.7 M
--------------------------------------------
7.7 M     Trainable params
0         Non-trainable params
7.7 M     Total params
15.371    Total estimated model params size (MB)
Epoch 0: : 0it [00:00, ?it/s]Update finite_difference_eps to 0.027204705103003882
Traceback (most recent call last):
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\utils\cpp_extension.py", line 2100, in _run_ninja_build
subprocess.run(
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "launch.py", line 125, in <module>
main()
File "launch.py", line 114, in main
trainer.fit(system, datamodule=dm)
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 608, in fit
call._call_and_handle_interrupt(
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\pytorch_lightning\trainer\call.py", line 38, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 650, in _fit_impl
self._run(model, ckpt_path=self.ckpt_path)
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1112, in _run
results = self._run_stage()
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1191, in _run_stage
self._run_train()
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1214, in _run_train
self.fit_loop.run()
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\pytorch_lightning\loops\loop.py", line 199, in run
self.advance(*args, **kwargs)
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 267, in advance
self.outputs = self.epoch_loop.run(self.data_fetcher)
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\pytorch_lightning\loops\loop.py", line 199, in run
self.advance(*args, **kwargs)
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\pytorch_lightning\loops\epoch\training_epoch_loop.py", line 204, in advance
response = self.trainer.call_lightning_module_hook("on_train_batch_start", batch, batch_idx)
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1356, in call_lightning_module_hook
output = fn(*args, **kwargs)
File "C:\Users\Genesis\github\Wonder3D\instant-nsr-pl\systems\base.py", line 57, in on_train_batch_start
update_module_step(self.model, self.current_epoch, self.global_step)
File "C:\Users\Genesis\github\Wonder3D\instant-nsr-pl\systems\utils.py", line 351, in update_module_step
m.update_step(epoch, global_step)
File "C:\Users\Genesis\github\Wonder3D\instant-nsr-pl\models\neus.py", line 111, in update_step
self.occupancy_grid.every_n_step(step=global_step, occ_eval_fn=occ_eval_fn, occ_thre=self.config.get('grid_prune_occ_thre', 0.01))
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\grid.py", line 271, in every_n_step
self._update(
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\grid.py", line 224, in _update
x = contract_inv(
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\contraction.py", line 101, in contract_inv
ctype = type.to_cpp_version()
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\contraction.py", line 62, in to_cpp_version
return _C.ContractionTypeGetter(self.value)
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\cuda\__init__.py", line 11, in call_cuda
from ._backend import _C
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\cuda\_backend.py", line 85, in <module>
_C = load(
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\utils\cpp_extension.py", line 1308, in load
return _jit_compile(
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\utils\cpp_extension.py", line 1710, in _jit_compile
_write_ninja_file_and_build_library(
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\utils\cpp_extension.py", line 1823, in _write_ninja_file_and_build_library
_run_ninja_build(
File "C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\utils\cpp_extension.py", line 2116, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error building extension 'nerfacc_cuda': [1/10] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output pack.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4068 -Xcompiler /wd4067 -Xcompiler /wd4624 -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=nerfacc_cuda -DTORCH_API_INCLUDE_EXTENSION_H -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\TH -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IC:\Users\Genesis\anaconda3\envs\wonder\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++17 -O3 -c C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\cuda\csrc\pack.cu -o pack.cuda.o
FAILED: pack.cuda.o
C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\pybind11\detail/common.h(210): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\Users\Genesis\anaconda3\envs\wonder\Include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\include\xmemory(732): catastrophic error: out of memory
detected during:
instantiation of class "std::_Default_allocator_traits<_Alloc> [with _Alloc=std::allocator<torch::jit::Method>]"
(742): here
instantiation of class "std::allocator_traits<_Alloc> [with _Alloc=std::allocator<torch::jit::Method>]"
(766): here
instantiation of type "std::_Rebind_alloc_t<std::allocator<torch::jit::Method>, torch::jit::Method>"
C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\include\vector(444): here
instantiation of class "std::vector<_Ty, _Alloc> [with _Ty=torch::jit::Method, _Alloc=std::allocator<torch::jit::Method>]"
C:/Users/Genesis/anaconda3/envs/wonder/lib/site-packages/torch/include\torch/csrc/jit/api/object.h(113): here

1 catastrophic error detected in the compilation of "C:/Users/Genesis/anaconda3/envs/wonder/lib/site-packages/nerfacc/cuda/csrc/pack.cu".
Compilation terminated.
pack.cu
[The same nvcc invocation fails with "catastrophic error: out of memory" for ray_marching.cu, contraction.cu, render_transmittance.cu and pybind.cu as well; the log is truncated at render_weight.cu.]
FAILED: pybind.cuda.o
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output pybind.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4068 -Xcompiler /wd4067 -Xcompiler /wd4624 -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=nerfacc_cuda -DTORCH_API_INCLUDE_EXTENSION_H -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\TH -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IC:\Users\Genesis\anaconda3\envs\wonder\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++17 -O3 -c C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\cuda\csrc\pybind.cu -o pybind.cuda.o
C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\pybind11\detail/common.h(210): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\Users\Genesis\anaconda3\envs\wonder\Include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\pybind11\detail/common.h(210): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\Users\Genesis\anaconda3\envs\wonder\Include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\include\xmemory(1518): catastrophic error: out of memory
detected during:
instantiation of "std::_Compressed_pair<_Ty1, _Ty2, >::_Compressed_pair(std::_One_then_variadic_args_t, _Other1 &&, _Other2 &&...) [with _Ty1=std::allocator<std::_List_node<torch::autograd::Node *, void *>>, _Ty2=std::_List_val<std::_List_simple_types<torch::autograd::Node *>>, =true, _Other1=const std::allocator<torch::autograd::Node *> &, _Other2=<>]"
C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\include\list(803): here
instantiation of "std::list<_Ty, _Alloc>::list(const _Alloc &) [with _Ty=torch::autograd::Node *, _Alloc=std::allocator<torch::autograd::Node *>]"
C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\include\xhash(376): here
instantiation of "std::_Hash<_Traits>::_Hash(const std::_Hash<_Traits>::_Key_compare &, const std::_Hash<_Traits>::allocator_type &) [with _Traits=std::_Uset_traits<torch::autograd::Node *, std::_Uhash_compare<torch::autograd::Node *, std::hash<torch::autograd::Node *>, std::equal_to<torch::autograd::Node *>>, std::allocator<torch::autograd::Node *>, false>]"
C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\include\unordered_set(102): here
instantiation of "std::unordered_set<_Kty, _Hasher, _Keyeq, _Alloc>::unordered_set() [with _Kty=torch::autograd::Node *, _Hasher=std::hash<torch::autograd::Node *>, _Keyeq=std::equal_to<torch::autograd::Node *>, _Alloc=std::allocator<torch::autograd::Node *>]"
C:/Users/Genesis/anaconda3/envs/wonder/lib/site-packages/torch/include\torch/csrc/autograd/graph_task.h(211): here

1 catastrophic error detected in the compilation of "C:/Users/Genesis/anaconda3/envs/wonder/lib/site-packages/nerfacc/cuda/csrc/pybind.cu".
Compilation terminated.
pybind.cu
[6/10] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output render_weight.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4068 -Xcompiler /wd4067 -Xcompiler /wd4624 -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=nerfacc_cuda -DTORCH_API_INCLUDE_EXTENSION_H -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\TH -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IC:\Users\Genesis\anaconda3\envs\wonder\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++17 -O3 -c C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\cuda\csrc\render_weight.cu -o render_weight.cuda.o
C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\pybind11\detail/common.h(210): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\Users\Genesis\anaconda3\envs\wonder\Include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\pybind11\detail/common.h(210): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\Users\Genesis\anaconda3\envs\wonder\Include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
render_weight.cu
[7/10] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output cdf.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4068 -Xcompiler /wd4067 -Xcompiler /wd4624 -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=nerfacc_cuda -DTORCH_API_INCLUDE_EXTENSION_H -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\TH -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IC:\Users\Genesis\anaconda3\envs\wonder\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++17 -O3 -c C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\cuda\csrc\cdf.cu -o cdf.cuda.o
C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\pybind11\detail/common.h(210): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\Users\Genesis\anaconda3\envs\wonder\Include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\pybind11\detail/common.h(210): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\Users\Genesis\anaconda3\envs\wonder\Include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\cuda\csrc\cdf.cu(74): warning #1290-D: a class type that is not trivially copyable passed through ellipsis
detected during instantiation of "void cdf_resampling_kernel(uint32_t, const int *, const scalar_t *, const scalar_t *, const scalar_t *, const int *, scalar_t *, scalar_t *) [with scalar_t=c10::Half]"
(186): here

C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\cuda\csrc\cdf.cu(74): warning #181-D: argument is incompatible with corresponding format string conversion
detected during instantiation of "void cdf_resampling_kernel(uint32_t, const int *, const scalar_t *, const scalar_t *, const scalar_t *, const int *, scalar_t *, scalar_t *) [with scalar_t=c10::Half]"
(186): here

cdf.cu
[8/10] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output intersection.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4068 -Xcompiler /wd4067 -Xcompiler /wd4624 -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=nerfacc_cuda -DTORCH_API_INCLUDE_EXTENSION_H -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\TH -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IC:\Users\Genesis\anaconda3\envs\wonder\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++17 -O3 -c C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\cuda\csrc\intersection.cu -o intersection.cuda.o
C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\pybind11\detail/common.h(210): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\Users\Genesis\anaconda3\envs\wonder\Include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\pybind11\detail/common.h(210): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\Users\Genesis\anaconda3\envs\wonder\Include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
intersection.cu
[9/10] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output render_transmittance_cub.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4068 -Xcompiler /wd4067 -Xcompiler /wd4624 -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=nerfacc_cuda -DTORCH_API_INCLUDE_EXTENSION_H -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\TH -IC:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IC:\Users\Genesis\anaconda3\envs\wonder\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++17 -O3 -c C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\nerfacc\cuda\csrc\render_transmittance_cub.cu -o render_transmittance_cub.cuda.o
C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\pybind11\detail/common.h(210): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\Users\Genesis\anaconda3\envs\wonder\Include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:\Users\Genesis\anaconda3\envs\wonder\lib\site-packages\torch\include\pybind11\detail/common.h(210): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\Users\Genesis\anaconda3\envs\wonder\Include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
render_transmittance_cub.cu
ninja: build stopped: subcommand failed.

[W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
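
One workaround worth noting for this kind of nvcc host-memory failure: PyTorch's extension builder honors the MAX_JOBS environment variable, so capping ninja's parallelism keeps fewer nvcc instances alive at once and lowers peak host memory during nerfacc's JIT build. A minimal sketch (the variable must be set before the extension compiles); alternatively, MAX_JOBS can be set in the shell before launching the script:

    import os

    # torch.utils.cpp_extension reads MAX_JOBS to cap the number of parallel
    # ninja jobs; fewer concurrent nvcc processes -> lower peak host memory.
    os.environ["MAX_JOBS"] = "2"

    import nerfacc  # the CUDA extension is JIT-compiled on import/first use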

About the jointattn layer

Hi Xiaoxiao,

thanks for your great work!

I have a question about the jointattn layer for normal and rgb images.
I found that you didn't use any jointattn layer in the released model, which differs from the paper. I'm just curious why this still works without the jointattn layer, and does the performance become better without it?

Must I run this repo with xformers?

My CUDA version is 11.3, while xformers==0.0.16 needs a higher CUDA version, so I tried to run the code with 'enable_xformers_memory_efficient_attention: false'. But it throws the error below:
Traceback (most recent call last):
File "test_mvdiffusion_seq.py", line 335, in
main(cfg)
File "test_mvdiffusion_seq.py", line 291, in main
log_validation_joint(
File "test_mvdiffusion_seq.py", line 205, in log_validation_joint
out = pipeline(
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/data/dengyao/projects/Wonder3D/mvdiffusion/pipelines/pipeline_mvdiffusion_image.py", line 448, in call
noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=image_embeddings, class_labels=camera_embeddings).sample
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/unet_mv2d_condition.py", line 966, in forward
sample, res_samples = downsample_block(
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/unet_mv2d_blocks.py", line 858, in forward
hidden_states = attn(
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/transformer_mv2d.py", line 314, in forward
hidden_states = block(
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/data/dengyao/projects/Wonder3D/mvdiffusion/models/transformer_mv2d.py", line 544, in forward
attn_output = self.attn1(
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/data/dengyao/miniconda3/envs/wonder3d/lib/python3.8/site-packages/diffusers/models/attention_processor.py", line 322, in forward
return self.processor(
TypeError: __call__() got an unexpected keyword argument 'cross_domain_attention'
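
The traceback shows the default (non-xformers) attention processor rejecting the cross_domain_attention keyword that the MV transformer blocks pass, so the plain diffusers processor simply doesn't know about that argument. Purely as a sketch of where the mismatch sits (not a real fix, since the vanilla processor has no cross-domain code path), a shim that tolerates the extra kwarg would look roughly like this:

    from diffusers.models.attention_processor import AttnProcessor

    class TolerantAttnProcessor(AttnProcessor):
        """Hypothetical shim: accept and drop the extra kwarg so inference runs
        without xformers. This silently skips cross-domain attention, so joint
        normal/rgb outputs would no longer match the released model; any other
        custom keywords the MV blocks pass would need the same treatment."""
        def __call__(self, attn, hidden_states, *args, cross_domain_attention=None, **kwargs):
            return super().__call__(attn, hidden_states, *args, **kwargs)

It would be installed with pipeline.unet.set_attn_processor(TolerantAttnProcessor()); a proper solution needs a non-xformers implementation of the cross-domain attention itself.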

Texture quality and meaningful parameters

First of all, amazing work on this project!

Would it be possible to increase texture quality? I've tried increasing the img_wh under validation_dataset in the config to [512, 512] but I get:
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 32 but got size 64 for tensor number 1 in the list.
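
For context on the numbers in that error (my reading, not a confirmed fix): Stable-Diffusion-style VAEs downsample images by a factor of 8, so a UNet operating on 32x32 latents expects 256x256 inputs, while 512x512 inputs produce 64x64 latents — exactly the 32-vs-64 clash above. A quick sanity check:

    # Rough arithmetic behind the error, assuming the standard SD VAE (stride 8).
    VAE_DOWNSAMPLE = 8
    for img_wh in (256, 512):
        print(img_wh, "->", img_wh // VAE_DOWNSAMPLE)  # 256 -> 32, 512 -> 64

So higher-resolution textures would need a model trained (or fine-tuned) at the larger latent size, not just a config change.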

As for the second phase, which parameters in the 'neuralangelo-ortho-wmask.yaml' config would allow modifying mesh quality?

And finally, less related to the above: I've tried running on multiple GPUs. The second phase (instant-nsr-pl) works fine when reducing the iterations, but when I try to run the diffusion phase with a config similar to 8gpu.yaml but with 4 GPUs instead (running on 4x NVIDIA L4), I get OOM. Any tips you can share on this?

Thanks a lot!

A question about the test dataset

Is the test set mentioned in your paper the specific 30 objects from the Google Scanned Objects dataset? I would really appreciate an answer to my question.

Code documentation :)

Feature request
Hey! Hoping this finds you well.
We are building a product to automate the generation of docs. We want to contribute to the community; this is the first iteration of the product, and we would love feedback on what could be enhanced to make the docs valuable for the community.

I've generated these docs for the repo.

They are generated with AI, so they will always stay up to date, and a ChangeLog is fed with the changes.

Motivation
contribute to the community!

We have a LinkedIn page here :)
We generated a LinkedIn post about it here

Install env failed

Following your tutorial, I met this error:
(wonder3d) D:\Git\Wonder3D>pip install -r requirements.txt
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu117
ERROR: Could not find a version that satisfies the requirement torch==1.13.1 (from versions: 2.0.0, 2.0.0+cu117, 2.0.1, 2.0.1+cu117, 2.1.0)
ERROR: No matching distribution found for torch==1.13.1
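
Not part of the original report, but a likely cause worth checking: on Windows, torch==1.13.1 does not ship wheels for Python 3.11, so pip in a 3.11 environment only sees the 2.x releases listed above. A minimal sketch of the check, assuming that is indeed the situation:

    import sys

    # Assumed diagnosis: torch 1.13.1 has no Windows wheels for Python >= 3.11,
    # which would explain pip offering only 2.0.0/2.0.1/2.1.0 here.
    if sys.version_info >= (3, 11):
        print("Recreate the env with python<=3.10, e.g. "
              "`conda create -n wonder3d python=3.10`, then rerun pip install.")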

Stop at the first step

[Bug]: train_num_rays ZeroDivisionError: division by zero

Hello, I could be wrong about the config, but the last command (4. Mesh Extraction) gives me this error:

...

..Wonder3D\instant-nsr-pl\systems\neus_ortho.py", line 139, in training_step
    train_num_rays = int(self.train_num_rays * (self.train_num_samples / out['num_samples_full'].sum().item()))
ZeroDivisionError: division by zero
[W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
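
Not a confirmed fix, but a minimal guard around the quoted line would at least sidestep the crash while the real cause is investigated; num_samples_full summing to 0 usually means no rays sampled the scene at all, which points at a data or pose problem upstream (see the later issue "Mesh Extraction: out['num_samples_full'] is 0"):

    # Sketch of a guard for instant-nsr-pl/systems/neus_ortho.py, training_step.
    # It only avoids the ZeroDivisionError; a zero sample count still needs
    # to be diagnosed upstream.
    num_full = out['num_samples_full'].sum().item()
    if num_full > 0:
        train_num_rays = int(self.train_num_rays * (self.train_num_samples / num_full))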

TypeError: SingleImageDataset.__init__() got an unexpected keyword argument 'single_image'

run python gradio_app.py

You have disabled the safety checker for <class 'mvdiffusion.pipelines.pipeline_mvdiffusion_image.MVDiffusionImagePipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at huggingface/diffusers#254 .
Running on local URL: http://0.0.0.0:7861

To create a public link, set share=True in launch().
SAM Time: 0.926s
Traceback (most recent call last):
File "/data/Wonder3D/venv/lib/python3.10/site-packages/gradio/queueing.py", line 407, in call_prediction
output = await route_utils.call_process_api(
File "/data/Wonder3D/venv/lib/python3.10/site-packages/gradio/route_utils.py", line 226, in call_process_api
output = await app.get_blocks().process_api(
File "/data/Wonder3D/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1550, in process_api
result = await self.call_function(
File "/data/Wonder3D/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1185, in call_function
prediction = await anyio.to_thread.run_sync(
File "/data/Wonder3D/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/data/Wonder3D/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/data/Wonder3D/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/data/Wonder3D/venv/lib/python3.10/site-packages/gradio/utils.py", line 661, in wrapper
response = f(*args, **kwargs)
File "/data/Wonder3D/gradio_app.py", line 183, in run_pipeline
batch = prepare_data(single_image, crop_size)
File "/data/Wonder3D/gradio_app.py", line 168, in prepare_data
dataset = SingleImageDataset(
TypeError: SingleImageDataset.__init__() got an unexpected keyword argument 'single_image'
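This usually indicates that gradio_app.py and the dataset code come from mismatched versions of the repo. A quick diagnostic (my sketch, run from the repo root) is to print which keyword arguments the local SingleImageDataset constructor actually accepts:

    import inspect
    from mvdiffusion.data.single_image_dataset import SingleImageDataset

    # If 'single_image' is absent from this signature, gradio_app.py is ahead
    # of (or behind) the checked-out dataset code; updating the checkout
    # usually brings the two back in sync.
    print(inspect.signature(SingleImageDataset.__init__))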

RuntimeError: Input type (float) and bias type (c10::Half) should be the same

python gradio_app.py
You have disabled the safety checker for <class 'mvdiffusion.pipelines.pipeline_mvdiffusion_image.MVDiffusionImagePipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at huggingface/diffusers#254 .
Running on local URL: http://0.0.0.0:7861

To create a public link, set share=True in launch().
SAM Time: 0.946s
/data/Wonder3D/mvdiffusion/data/single_image_dataset.py:301: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:230.)
elevations = torch.as_tensor(elevations).float().squeeze(1)
Traceback (most recent call last):
File "/data/Wonder3D/venv/lib/python3.10/site-packages/gradio/queueing.py", line 407, in call_prediction
output = await route_utils.call_process_api(
File "/data/Wonder3D/venv/lib/python3.10/site-packages/gradio/route_utils.py", line 226, in call_process_api
output = await app.get_blocks().process_api(
File "/data/Wonder3D/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1550, in process_api
result = await self.call_function(
File "/data/Wonder3D/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1185, in call_function
prediction = await anyio.to_thread.run_sync(
File "/data/Wonder3D/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/data/Wonder3D/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/data/Wonder3D/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/data/Wonder3D/venv/lib/python3.10/site-packages/gradio/utils.py", line 661, in wrapper
response = f(*args, **kwargs)
File "/data/Wonder3D/gradio_app.py", line 204, in run_pipeline
out = pipeline(
File "/data/Wonder3D/venv/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/data/Wonder3D/mvdiffusion/pipelines/pipeline_mvdiffusion_image.py", line 401, in call
image_embeddings, image_latents = self._encode_image(image_pil, device, num_images_per_prompt, do_classifier_free_guidance)
File "/data/Wonder3D/mvdiffusion/pipelines/pipeline_mvdiffusion_image.py", line 160, in _encode_image
image_latents = self.vae.encode(image_pt).latent_dist.mode() * self.vae.config.scaling_factor
File "/data/Wonder3D/venv/lib/python3.10/site-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
File "/data/Wonder3D/venv/lib/python3.10/site-packages/diffusers/models/autoencoder_kl.py", line 242, in encode
h = self.encoder(x)
File "/data/Wonder3D/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/data/Wonder3D/venv/lib/python3.10/site-packages/diffusers/models/vae.py", line 110, in forward
sample = self.conv_in(sample)
File "/data/Wonder3D/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/data/Wonder3D/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/data/Wonder3D/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (float) and bias type (c10::Half) should be the same
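
The pipeline weights are loaded in fp16 while the input image tensor stays fp32. A common minimal fix (my sketch, not the repo's official patch) is to cast the tensor to the VAE's dtype at the _encode_image call site shown in the traceback:

    # Hypothetical one-line fix inside _encode_image (pipeline_mvdiffusion_image.py):
    # match the input tensor to the fp16 weights before running the VAE encoder.
    image_pt = image_pt.to(device=self.vae.device, dtype=self.vae.dtype)
    image_latents = self.vae.encode(image_pt).latent_dist.mode() * self.vae.config.scaling_factor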

Mesh Extraction: out['num_samples_full'] is 0

Thank you so much for this great work and for putting the code up!

When running the mesh extraction (under instant-nsr-pl), out['num_samples_full'] on this line appears to be 0 (commenting it out also results in NaN losses).

I'm testing the default owl example, and the results from Wonder3D work just fine (and are placed in the right folder etc.), so I was wondering if you have any insights on why this would be the case?

Thanks again!

FileNotFoundError: The passed configuration file `1gpu.yaml` does not exist. Please pass an existing file to `accelerate launch`, or use the the default one created through `accelerate config` and run `accelerate launch` without the `--config_file` argument.

When I execute step 3, I get this:

C:\Users\USER\Desktop\wonder3d>accelerate launch --config_file 1gpu.yaml test_mvdiffusion_seq.py
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in run_code
File "C:\Users\USER\anaconda3\Scripts\accelerate.exe_main
.py", line 7, in
File "C:\Users\USER\anaconda3\Lib\site-packages\accelerate\commands\accelerate_cli.py", line 47, in main
args.func(args)
File "C:\Users\USER\anaconda3\Lib\site-packages\accelerate\commands\launch.py", line 972, in launch_command
args, defaults, mp_from_config_flag = _validate_launch_command(args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\Lib\site-packages\accelerate\commands\launch.py", line 829, in _validate_launch_command
defaults = load_config_from_file(args.config_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\Lib\site-packages\accelerate\commands\config\config_args.py", line 46, in load_config_from_file
raise FileNotFoundError(
FileNotFoundError: The passed configuration file 1gpu.yaml does not exist. Please pass an existing file to accelerate launch, or use the the default one created through accelerate config and run accelerate launch without the --config_file argument.

Also, up until that step I only have my ckpt folder in wonder3d; I imagine that everything I installed before went into the system environment.

I have a question about pose

I have a question about how to get new camera poses during training, and how to generate more frames when rendering a test video; right now there are only 6 frames. If you can answer me, I will be very grateful.

Xformers 0.0.22 link does not fit Windows

Can you please provide a Windows-compatible version of xformers for the installation command mentioned in the README.md file on GitHub? The current version specified (xformers-0.0.22) seems to be suitable only for Linux, resulting in an error when trying to install it on Windows.

I have a question about more poses during NERF training,

because I saw you using fixed poses during NeRF training. I used a randomly initialized pose to train the NeRF and found that training failed outright (white renders). How did you get the c2w poses for NeRF training? I want to test more view images. It would be greatly appreciated if you could answer my question.
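
Neither pose question is answered in this thread, but for anyone experimenting: extra camera-to-world poses for test-time rendering are typically generated by orbiting a look-at camera around the object. A minimal numpy sketch under an assumed OpenGL-style convention (Wonder3D's actual convention, radius, and up axis may differ, so the poses would need adapting):

    import numpy as np

    def orbit_c2w(elev_deg, azim_deg, radius=2.0):
        """Hypothetical helper: c2w for a camera on a sphere looking at the origin."""
        elev, azim = np.deg2rad(elev_deg), np.deg2rad(azim_deg)
        cam_pos = radius * np.array([np.cos(elev) * np.sin(azim),
                                     np.sin(elev),
                                     np.cos(elev) * np.cos(azim)])
        forward = -cam_pos / np.linalg.norm(cam_pos)  # view direction, toward origin
        right = np.cross(forward, np.array([0.0, 1.0, 0.0]))
        right /= np.linalg.norm(right)
        up = np.cross(right, forward)
        c2w = np.eye(4)
        c2w[:3, 0], c2w[:3, 1], c2w[:3, 2], c2w[:3, 3] = right, up, -forward, cam_pos
        return c2w

    # 60 poses on a horizontal orbit, instead of the 6 fixed training views.
    poses = [orbit_c2w(0.0, a) for a in np.linspace(0.0, 360.0, 60, endpoint=False)]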

A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton' usage: test_mvdiffusion_seq.py [-h] --config CONFIG test_mvdiffusion_seq.py: error: the following arguments are required: --config

Now I get this error. I read elsewhere that it only runs on Linux and that to run it on Windows you need to install WSL 2: XPixelGroup/DiffBIR#31 (comment)
https://github.com/bycloud-AI/DiffBIR-Windows/

but I see that you have to have 8 GB of VRAM; mine is a GTX 1660 with 6 GB, which means there is no way to run it, I think. Is there another solution?

C:\Users\USUARIO\Desktop\wonder3d>@echo off
set PYTHON_PATH=C:\Users\USUARIO\AppData\Local\Programs\Python\Python311\python.exe
set SCRIPT_PATH=C:\Users\USUARIO\Desktop\wonder3d\test_mvdiffusion_seq.py

%PYTHON_PATH% %SCRIPT_PATH%
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
usage: test_mvdiffusion_seq.py [-h] --config CONFIG
test_mvdiffusion_seq.py: error: the following arguments are required: --config

python test_mvdiffusion_seq.py --config C:\Users\USUARIO\Desktop\wonder3d\configs\mvdiffusion-joint-ortho-6views.yaml
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
{'pretrained_model_name_or_path': 'lambdalabs/sd-image-variations-diffusers', 'pretrained_unet_path': './ckpts/', 'revision': None, 'validation_dataset': {'root_dir': './example_images', 'num_views': 6, 'bg_color': 'white', 'img_wh': [256, 256], 'num_validation_samples': 1000, 'crop_size': 192, 'filepaths': ['owl.png']}, 'save_dir': 'outputs/', 'pred_type': 'joint', 'seed': 42, 'validation_batch_size': 1, 'dataloader_num_workers': 64, 'local_rank': -1, 'pipe_kwargs': {'camera_embedding_type': 'e_de_da_sincos', 'num_views': 6}, 'validation_guidance_scales': [3.0], 'pipe_validation_kwargs': {'eta': 1.0}, 'validation_grid_nrow': 6, 'unet_from_pretrained_kwargs': {'camera_embedding_type': 'e_de_da_sincos', 'projection_class_embeddings_input_dim': 10, 'num_views': 6, 'sample_size': 32, 'zero_init_conv_in': False, 'zero_init_camera_projection': False}, 'num_views': 6, 'camera_embedding_type': 'e_de_da_sincos', 'enable_xformers_memory_efficient_attention': True}

HuggingFace Demo Halted/Froze Bug

I got the warning message below, together with the described behavior, while testing on a private duplicated space.
I should note that it has something to do with the SAM model preprocess step.

Every now and then, with the preprocess checkbox checked, I would randomly get the following warning after clicking Generate, followed by the program/GUI completely halting any further progress, even if I pressed Generate again on any image, with or without the preprocess checkbox checked.

PERFORMANCE WARNING:
Thresholded incomplete Cholesky decomposition failed due to insufficient positive-definiteness of matrix A with parameters:
    discard_threshold = 1.000000e-04
    shift = 0.000000e+00
Try decreasing discard_threshold or start with a larger shift

PERFORMANCE WARNING:
Thresholded incomplete Cholesky decomposition failed due to insufficient positive-definiteness of matrix A with parameters:
    discard_threshold = 1.000000e-04
    shift = 1.000000e-04
Try decreasing discard_threshold or start with a larger shift

PERFORMANCE WARNING:
Thresholded incomplete Cholesky decomposition failed due to insufficient positive-definiteness of matrix A with parameters:
    discard_threshold = 1.000000e-04
    shift = 1.000000e-03
Try decreasing discard_threshold or start with a larger shift

PERFORMANCE WARNING:
Thresholded incomplete Cholesky decomposition failed due to insufficient positive-definiteness of matrix A with parameters:
    discard_threshold = 1.000000e-04
    shift = 1.000000e-02
Try decreasing discard_threshold or start with a larger shift

Is this an implementation bug?

I noticed that in pipeline_mvdiffusion_image.py, when doing inference with 'do_classifier_free_guidance', the camera_embedding and img_embedding are repeated along the batch dimension, which is reasonable. But the joint attention then joins the first half and the second half of the batch, which is probably wrong? The chunked key_0 and key_1 that get concatenated into one tensor are in fact using the same embedding (i.e., normal-normal joint and rgb-rgb joint), while they should be using the normal and rgb embeddings respectively. I'm not sure whether I'm wrong; what do you think? (See the toy sketch after the code below.)

repeat embedding:
camera_embedding = torch.cat([ camera_embedding, camera_embedding ], dim=0)
joint attention:

    key_0, key_1 = torch.chunk(key, dim=0, chunks=2)  # keys shape (b t) d c
    value_0, value_1 = torch.chunk(value, dim=0, chunks=2)
    key = torch.cat([key_0, key_1], dim=1)  # (b t) 2d c
    value = torch.cat([value_0, value_1], dim=1)  # (b t) 2d c
    key = torch.cat([key]*2, dim=0)   # ( 2 b t) 2d c
    value = torch.cat([value]*2, dim=0)  # (2 b t) 2d c
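
A toy version of that bookkeeping (my sketch, with an assumed batch layout rather than the repo's verified one): if classifier-free guidance duplicates a [normal, rgb] batch into [normal, rgb, normal, rgb], then chunking into two halves pairs normal with normal and rgb with rgb, exactly as the question suspects; the cross-domain pairing only holds if all normal views precede all rgb views in the doubled batch.

    import torch

    NORMAL, RGB = 0, 1

    # Assumed layout after the CFG repeat above: the whole [N, R] batch is
    # duplicated, giving slot order [N, R, N, R].
    key = torch.tensor([NORMAL, RGB, NORMAL, RGB])
    k0, k1 = torch.chunk(key, chunks=2, dim=0)
    print(list(zip(k0.tolist(), k1.tolist())))  # [(0, 0), (1, 1)] -> same-domain pairs

    # The intended cross-domain pairing needs [N, N, R, R] instead:
    key = torch.tensor([NORMAL, NORMAL, RGB, RGB])
    k0, k1 = torch.chunk(key, chunks=2, dim=0)
    print(list(zip(k0.tolist(), k1.tolist())))  # [(0, 1), (0, 1)] -> normal/rgb pairs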

test_mvdiffusion_seq.py needs a logger initialized

accelerate launch --config_file 1gpu.yaml test_mvdiffusion_seq.py --config ./configs/mvdiffusion-joint-ortho-6views.yaml
/home/gpu/data/Wonder3D/wonder3d/lib/python3.10/site-packages/numba/np/ufunc/parallel.py:371: NumbaWarning: The TBB threading layer requires TBB version 2021 update 6 or later i.e., TBB_INTERFACE_VERSION >= 12060. Found TBB_INTERFACE_VERSION = 12050. The TBB threading layer is disabled.
warnings.warn(problem)
{'pretrained_model_name_or_path': 'lambdalabs/sd-image-variations-diffusers', 'pretrained_unet_path': './ckpts/', 'revision': None, 'validation_dataset': {'root_dir': './example_images', 'num_views': 6, 'bg_color': 'white', 'img_wh': [256, 256], 'num_validation_samples': 1000, 'crop_size': 192, 'filepaths': ['owl.png']}, 'save_dir': 'outputs/', 'pred_type': 'joint', 'seed': 42, 'validation_batch_size': 1, 'dataloader_num_workers': 64, 'local_rank': -1, 'pipe_kwargs': {'camera_embedding_type': 'e_de_da_sincos', 'num_views': 6}, 'validation_guidance_scales': [3.0], 'pipe_validation_kwargs': {'eta': 1.0}, 'validation_grid_nrow': 6, 'unet_from_pretrained_kwargs': {'camera_embedding_type': 'e_de_da_sincos', 'projection_class_embeddings_input_dim': 10, 'num_views': 6, 'sample_size': 32, 'zero_init_conv_in': False, 'zero_init_camera_projection': False}, 'num_views': 6, 'camera_embedding_type': 'e_de_da_sincos', 'enable_xformers_memory_efficient_attention': True}
Traceback (most recent call last):
File "/home/gpu/data/Wonder3D/test_mvdiffusion_seq.py", line 333, in
main(cfg)
File "/home/gpu/data/Wonder3D/test_mvdiffusion_seq.py", line 260, in main
logger.warn(
NameError: name 'logger' is not defined
Traceback (most recent call last):
File "/home/gpu/data/Wonder3D/wonder3d/bin/accelerate", line 8, in
sys.exit(main())
File "/home/gpu/data/Wonder3D/wonder3d/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/home/gpu/data/Wonder3D/wonder3d/lib/python3.10/site-packages/accelerate/commands/launch.py", line 994, in launch_command
simple_launcher(args)
File "/home/gpu/data/Wonder3D/wonder3d/lib/python3.10/site-packages/accelerate/commands/launch.py", line 636, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/gpu/data/Wonder3D/wonder3d/bin/python3', 'test_mvdiffusion_seq.py', '--config', './configs/mvdiffusion-joint-ortho-6views.yaml']' returned non-zero exit status 1.
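
The NameError means the script calls logger.warn(...) without ever defining logger. A minimal sketch of the usual accelerate-style fix near the top of test_mvdiffusion_seq.py (assuming nothing else in the script shadows the name):

    from accelerate.logging import get_logger

    # Module-level logger, as accelerate's own example scripts define it; this
    # makes the logger.warn(...) call at line 260 resolve.
    logger = get_logger(__name__)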

Windows support !

Hello there, thanks a lot for this amazing work. I have been trying to use it on a Windows system, but it does not seem to work; it only works on Linux! Also, it would be nice if it got a Gradio interface combining the image-to-views step and the mesh extraction all in one.

Any Plans for a release of Fine-Tuning Code

After testing, I found some edge cases that I think extend beyond the knowledge domain of the model. I figure they could be resolved by fine-tuning on a domain-specific dataset, so I am very curious whether any fine-tuning code that could do this might be released in the future.
