
songrise / avatarcraft


[ICCV23] AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control

License: Other

Python 81.54% C++ 0.22% Cuda 16.19% C 2.04%
Topics: diffusion, nerf, text-to-3d

avatarcraft's Issues

Question about A-pose rendering

Hey, I'm really happy to see that AvatarCraft has finally been released.

I noticed that most of the demos are shown in the T-pose, while Fig. 6 of the paper shows some A-pose examples. How can I render an A-pose image of a given model? Do I need to prepare a corresponding motion .pkl, or would supplying suitable SMPL parameters just work?

Looking forward to your reply :-)
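
For reference, one way to obtain an A-pose without a full motion sequence is to start from the T-pose and rotate only the shoulder joints. Below is a minimal sketch assuming the standard SMPL joint ordering and a 72-dimensional axis-angle pose vector; the joint indices, the 45-degree angle, and the output filename are assumptions for illustration, not something taken from the repo, and whether the repo's warp/render scripts accept such a file directly would need confirmation.

import numpy as np

# Approximate A-pose as a 72-dim SMPL pose vector (24 joints x 3 axis-angle
# values, global orientation included). Joint indices assume the standard
# SMPL ordering; the angle is a guess and may need tuning.
pose = np.zeros(72, dtype=np.float32)

L_SHOULDER, R_SHOULDER = 16, 17        # standard SMPL shoulder joint indices
angle = np.deg2rad(45.0)               # lower the arms ~45 degrees from T-pose

pose[3 * L_SHOULDER + 2] = -angle      # left shoulder: rotate about the z-axis
pose[3 * R_SHOULDER + 2] = +angle      # right shoulder: opposite direction

np.save("a_pose.npy", pose)            # hypothetical output file for the render script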

einsum() operands do not broadcast with remapped shapes in render_warp.py

Thanks for the great work!
To render an articulated avatar, I tried a yoga pose from the AMASS SFU dataset and ran into the following error:

Traceback (most recent call last):
  File "render_warp.py", line 272, in <module>
    main(opt)
  File "render_warp.py", line 50, in main
    max_frames=opt.max_frames
  File "render_warp.py", line 176, in calc_local_trans
    concat_joints=True
  File "/workspace/jaeseok/avatarcraft/models/smpl.py", line 152, in verts_transformations
    return_T=True, concat_joints=concat_joints)
  File "/workspace/jaeseok/avatarcraft/models/smpl.py", line 396, in lbs
    v_delta = blend_shapes(betas, shapedirs)
  File "/workspace/jaeseok/avatarcraft/models/smpl.py", line 546, in blend_shapes
    blend_shape = torch.einsum('bl,mkl->bmk', [betas, shape_disps])
  File "/opt/conda/envs/craft/lib/python3.7/site-packages/torch/functional.py", line 406, in einsum
    return einsum(equation, *_operands)
  File "/opt/conda/envs/craft/lib/python3.7/site-packages/torch/functional.py", line 408, in einsum
    return _VF.einsum(equation, operands)  # type: ignore
einsum() operands do not broadcast with remapped shapes [original->remapped]: [1, 10]->[1, 1, 1, 10] [6890, 3, 300]->[1, 6890, 3, 300]
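
For what it's worth, the shapes in the message ([1, 10] betas vs. [6890, 3, 300] shape_disps) suggest the loaded SMPL model file carries 300 shape components while only 10 betas are supplied. Below is a hedged workaround sketch that zero-pads the betas so only the first 10 shape directions stay active; the helper name and where to call it are assumptions, not the repo's actual fix.

import torch

def match_shape_dims(betas: torch.Tensor, shape_disps: torch.Tensor) -> torch.Tensor:
    # Pad betas with zeros (or truncate) so their last dimension matches
    # the number of shape components in shape_disps.
    #   betas:       (B, num_betas), e.g. (1, 10)
    #   shape_disps: (V, 3, num_components), e.g. (6890, 3, 300)
    num_components = shape_disps.shape[-1]
    num_betas = betas.shape[-1]
    if num_betas < num_components:
        pad = torch.zeros(betas.shape[0], num_components - num_betas,
                          dtype=betas.dtype, device=betas.device)
        betas = torch.cat([betas, pad], dim=-1)
    elif num_betas > num_components:
        betas = betas[:, :num_components]
    return betas

# Placed just before the failing einsum in blend_shapes:
# betas = match_shape_dims(betas, shape_disps)
# blend_shape = torch.einsum('bl,mkl->bmk', [betas, shape_disps])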

How to reconstruct the bare SMPL avatar?

First of all, thank you for the excellent work!
I tried to reconstruct the bare_smpl model using reconstruct.py and the da_smpl_512 dataset from your repo, but when I run the code naively on that dataset the reconstruction quality is really bad.
[screenshot of the low-quality reconstruction]
Could you let me know how to train it properly?

Confusion about the Avatar Creation Results

Your work on AvatarCraft impressed and inspired me a lot.

However, when I use the following command to create an avatar from a text prompt, the results are far from what I expected.
python stylize.py --weights_path "ckpts/bare_smpl.pth.tar" --tgt_text "Hulk, photorealistic style" --exp_name "hulk" --batch_size 4096
[screenshot of the stylized result]
Here is the canonical render result:
[canonical render: exp_body_can]

Are these results normal?
We also noticed what looks like mode collapse after about 4800 iterations. Can you help with this problem?

P.S. We tested the code on Windows 10 with CUDA 11.1 and an RTX 3090 (24 GB), and we downloaded Stable Diffusion from Hugging Face instead of loading it online.
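
If the guidance model is loaded through the diffusers library (an assumption; I have not checked models/diffusion.py), one way to rule out a broken or mismatched local download is to load the snapshot explicitly and fail loudly if files are missing. A minimal sketch; the directory path is a placeholder for wherever the weights were downloaded:

import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion from a local snapshot instead of the Hub.
local_dir = "./stable-diffusion-v1-5"   # hypothetical local snapshot path

pipe = StableDiffusionPipeline.from_pretrained(
    local_dir,
    torch_dtype=torch.float16,
    local_files_only=True,              # error out if any file is missing
).to("cuda")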

Error building extension '_hash_encoder'

First of all, thank you for the excellent work!
However, I run into the following problem when I run python render_canonical.py to render the canonical avatar. Is there any solution?

Traceback (most recent call last):
  File "/miniconda3/envs/avatar/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1673, in _run_ninja_build
    env=env)
  File "/miniconda3/envs/avatar/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "render_canonical.py", line 30, in <module>
    import models.instant_nsr as instant_nsr
  File "/AvatarCraft-main/models/instant_nsr.py", line 19, in <module>
    from encoder import get_encoder
  File "/AvatarCraft-main/encoder/__init__.py", line 2, in <module>
    from encoder.hashencoder import HashEncoder
  File "/AvatarCraft-main/encoder/hashencoder/__init__.py", line 1, in <module>
    from .hashgrid import HashEncoder
  File "/AvatarCraft-main/encoder/hashencoder/hashgrid.py", line 9, in <module>
    from .backend import _backend
  File "/AvatarCraft-main/encoder/hashencoder/backend.py", line 12, in <module>
    sources=[os.path.join(_src_path, 'src', f) for f in [
  File "/miniconda3/envs/avatar/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1091, in load
    keep_intermediates=keep_intermediates)
  File "/miniconda3/envs/avatar/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1302, in _jit_compile
    is_standalone=is_standalone)
  File "/miniconda3/envs/avatar/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1407, in _write_ninja_file_and_build_library
    error_prefix=f"Error building extension '{name}'")
  File "/miniconda3/envs/avatar/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1683, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension '_hash_encoder': [1/2] /usr/bin/nvcc --generate-dependencies-with-compile --dependency-output hashencoder.cuda.o.d -DTORCH_EXTENSION_NAME=_hash_encoder -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /miniconda3/envs/avatar/lib/python3.7/site-packages/torch/include -isystem /miniconda3/envs/avatar/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /miniconda3/envs/avatar/lib/python3.7/site-packages/torch/include/TH -isystem /miniconda3/envs/avatar/lib/python3.7/site-packages/torch/include/THC -isystem /miniconda3/envs/avatar/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -O3 -std=c++14 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -c /AvatarCraft-main/encoder/hashencoder/src/hashencoder.cu -o hashencoder.cuda.o
FAILED: hashencoder.cuda.o
/usr/bin/nvcc --generate-dependencies-with-compile --dependency-output hashencoder.cuda.o.d -DTORCH_EXTENSION_NAME=_hash_encoder -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /miniconda3/envs/avatar/lib/python3.7/site-packages/torch/include -isystem /miniconda3/envs/avatar/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /miniconda3/envs/avatar/lib/python3.7/site-packages/torch/include/TH -isystem /miniconda3/envs/avatar/lib/python3.7/site-packages/torch/include/THC -isystem /miniconda3/envs/avatar/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -O3 -std=c++14 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -c /AvatarCraft-main/encoder/hashencoder/src/hashencoder.cu -o hashencoder.cuda.o
nvcc fatal   : Unknown option '-generate-dependencies-with-compile'
ninja: build stopped: subcommand failed.
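
For reference, the "nvcc fatal : Unknown option" line usually points at a CUDA toolchain mismatch rather than the repo itself: /usr/bin/nvcc is likely an older system compiler that does not understand the dependency flags newer PyTorch builds pass to it. A small diagnostic sketch; the export at the bottom is an assumption about the cause, not a confirmed fix:

import subprocess
import torch
from torch.utils.cpp_extension import CUDA_HOME

# Compare the CUDA version PyTorch was built against with the nvcc that the
# JIT extension build will actually invoke.
print("PyTorch built with CUDA:", torch.version.cuda)
print("CUDA_HOME seen by PyTorch:", CUDA_HOME)
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)

# If the versions disagree, pointing CUDA_HOME at a matching toolkit before
# launching usually unblocks the build, e.g.:
#   export CUDA_HOME=/usr/local/cuda-11.1
#   python render_canonical.py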

The paper link on the project page redirects to Nerfie

Both the paper button and the arXiv button on the project page here link to Nerfie. I wonder whether this is a mistake or the paper is just not available for reading yet. If it is the latter, when will the paper be released? Hope for your answer :)

BTW, it is a REALLY great work and demo. Your team's work always surprises me.

Hardcoded inputs in render_warp.py

Hi, thanks for the great work.

However, I cannot get the articulation code to work, as there seem to be some missing files, including "data/stand_pose.npy", "data/smpl_da_512" and 'data/smplx/smpl_uv.obj'.

Are there any more updates planned?
Thanks.

No CUDA runtime is found

python stylize.py --weights_path "ckpts/bare_smpl.pth.tar" --tgt_text "Hulk, photorealistic style" --exp_name "hulk" --batch_size 4096
No CUDA runtime is found, using CUDA_HOME=':/usr/local/cuda-11.3'
Traceback (most recent call last):
  File "stylize.py", line 17, in <module>
    from models import instant_nsr, diffusion
  File "/data2/gaoyufei_m24/AvatarCraft/models/instant_nsr.py", line 19, in <module>
    from encoder import get_encoder
  File "/data2/gaoyufei_m24/AvatarCraft/encoder/__init__.py", line 2, in <module>
    from encoder.hashencoder import HashEncoder
  File "/data2/gaoyufei_m24/AvatarCraft/encoder/hashencoder/__init__.py", line 1, in <module>
    from .hashgrid import HashEncoder
  File "/data2/gaoyufei_m24/AvatarCraft/encoder/hashencoder/hashgrid.py", line 9, in <module>
    from .backend import _backend
  File "/data2/gaoyufei_m24/AvatarCraft/encoder/hashencoder/backend.py", line 12, in <module>
    sources=[os.path.join(_src_path, 'src', f) for f in [
  File "/data2/gaoyufei_m24/mambaforge/envs/avatar/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1091, in load
    keep_intermediates=keep_intermediates)
  File "/data2/gaoyufei_m24/mambaforge/envs/avatar/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1302, in _jit_compile
    is_standalone=is_standalone)
  File "/data2/gaoyufei_m24/mambaforge/envs/avatar/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1400, in _write_ninja_file_and_build_library
    is_standalone=is_standalone)
  File "/data2/gaoyufei_m24/mambaforge/envs/avatar/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1782, in _write_ninja_file_to_build_library
    cuda_flags = common_cflags + COMMON_NVCC_FLAGS + _get_cuda_arch_flags()
  File "/data2/gaoyufei_m24/mambaforge/envs/avatar/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1561, in _get_cuda_arch_flags
    arch_list[-1] += '+PTX'
IndexError: list index out of range

Environment

NVIDIA 3090 24G

NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_May__3_19:15:13_PDT_2021
Cuda compilation tools, release 11.3, V11.3.109
Build cuda_11.3.r11.3/compiler.29920130_0

Steps to reproduce

1. Clone the repo
2. Set up the environment with CUDA version 11.x
3. Set up the data, skipping the optional parts
4. Run python stylize.py --weights_path "ckpts/bare_smpl.pth.tar" --tgt_text "Hulk, photorealistic style" --exp_name "hulk" --batch_size 4096
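
For reference, the traceback dies inside _get_cuda_arch_flags because PyTorch sees neither a visible GPU nor a TORCH_CUDA_ARCH_LIST to fall back on, and the reported CUDA_HOME=':/usr/local/cuda-11.3' starts with a stray colon (as if it was appended to an empty variable). A small diagnostic sketch; the exports at the bottom are assumptions about the cause, not a confirmed fix:

import os
import torch

# Check whether PyTorch can see the GPU and what the build-related
# environment variables actually contain.
print("torch.cuda.is_available():", torch.cuda.is_available())
print("CUDA_HOME:", os.environ.get("CUDA_HOME"))
print("TORCH_CUDA_ARCH_LIST:", os.environ.get("TORCH_CUDA_ARCH_LIST"))

# If nvidia-smi sees the 3090 but torch does not, exporting a clean CUDA_HOME
# and an explicit arch list before running may unblock the JIT build, e.g.:
#   export CUDA_HOME=/usr/local/cuda-11.3
#   export TORCH_CUDA_ARCH_LIST="8.6"
#   python stylize.py --weights_path "ckpts/bare_smpl.pth.tar" ...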
