iglict / StylizedNeRF
[CVPR 2022] Code for StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning
License: MIT License
How to get the warping function W and the warping mask M?
Thanks for sharing your code. I tried to run it but hit the following issue:
Camera Pose: (20, 4, 4)
Global Step: 0 Origin Step: 200
Origin Train
Traceback (most recent call last):
File "run_stylenerf.py", line 610, in
train(args=args)
File "run_stylenerf.py", line 587, in train
Origin_train(global_step)
File "run_stylenerf.py", line 309, in Origin_train
pts_fine, ts_fine = samp_func_fine(rays_o, rays_d, ts, weights, args.N_samples_fine)
File "/home/ma-user/work/wangcan/StylizedNeRF/utils.py", line 578, in sampling_pts_fine_jt
t_samples = sample_pdf(ts_mid, weights[..., 1:-1], N_samples_fine, det=True)
File "/home/ma-user/work/wangcan/StylizedNeRF/utils.py", line 615, in sample_pdf
a = denom.where(denom < 1e-5)
RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.where)).
Types of your inputs are:
self = Var,
args = (Var, ),
Could you help me with this?
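For context, the reference NeRF implementation of `sample_pdf` guards against near-zero denominators by replacing them with 1 before dividing, using a three-argument elementwise select. The crashing line passes only a condition to `Var.where`, which in Jittor expects different inputs (the three-argument select is, as far as I know, `jt.ternary`). The intended semantics can be sketched in NumPy:

```python
import numpy as np

def safe_denom(denom, eps=1e-5):
    # Replace near-zero denominators with 1.0 so the subsequent
    # division in sample_pdf (cdf_g[..., 1] - cdf_g[..., 0]) / denom
    # stays finite; elementwise select, like torch.where(cond, a, b).
    return np.where(denom < eps, np.ones_like(denom), denom)
```

In Jittor the equivalent fix would presumably be something like `denom = jt.ternary(denom < 1e-5, jt.ones_like(denom), denom)` rather than the one-argument `denom.where(...)` call, but that is an assumption about the intended behavior, not the authors' confirmed fix.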
Traceback (most recent call last):
File "run_stylenerf.py", line 600, in
train(args=args)
File "run_stylenerf.py", line 142, in train
Prepare_Style_data(nerf_gen_data_path=nerf_gen_data_path)
File "run_stylenerf.py", line 124, in Prepare_Style_data
tmp_dataset = StyleRaySampler(data_path=args.datadir, style_path=args.styledir, factor=args.factor,
File "/root/dataset.py", line 459, in init
style_names, style_paths, style_images, style_features = style_data_prepare(style_path, images, size=512, chunk=8, sv_path=data_path + '/stylized_' + str(factor) + '/', decode_path='./pretrained/decoder.pth')
File "/root/dataset.py", line 152, in style_data_prepare
tmp_stylized_imgs, tmp_style_features = style_transfer(vgg=vgg, decoder=decoder, content=tmp_imgs, style=style[:tmp_imgs.shape[0]], alpha=1., return_feature=True)
File "/root/dataset.py", line 65, in style_transfer
content_f = vgg(content)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/container.py", line 119, in forward
input = module(input)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 399, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 395, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA out of memory. Tried to allocate 23.26 GiB (GPU 0; 23.70 GiB total capacity; 2.21 GiB already allocated; 19.27 GiB free; 2.22 GiB reserved in total by PyTorch)
It still reports this error even on a GPU with more memory (80 GB); GPU memory keeps filling up. How can I solve it?
If you could provide some suggestions, I would greatly appreciate it!
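For context, the 23.26 GiB allocation happens because all content images are pushed through the VGG encoder in one batch inside `style_data_prepare`. Peak memory can be bounded by processing the batch in smaller slices; the general pattern is sketched below (the `fn` argument stands in for the VGG forward pass and is a placeholder, not the repository's API):

```python
import numpy as np

def run_in_chunks(fn, batch, chunk=4):
    # Apply fn to `chunk`-sized slices and concatenate the results,
    # so peak memory scales with the chunk size, not the full batch.
    outs = [fn(batch[i:i + chunk]) for i in range(0, len(batch), chunk)]
    return np.concatenate(outs, axis=0)
```

With PyTorch, the slice-wise calls should additionally run under `torch.no_grad()` so activations are not retained for backprop. Note that `style_data_prepare` already exposes a `chunk=8` parameter in the traceback above; lowering it may be the simplest first thing to try.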
Hi, thanks for sharing the codes from your research.
In the pre-trained model preparation, do you know where I can find the files (vgg_normalised.pth, decoder.pth, vae.pth, etc)? Thanks!
Hey! How do you run the code on data generated with COLMAP that is not in LLFF format? The paper shows results on the Truck dataset, for instance, but those scenes can't be run because the "poses_bounds.npy" file is not present.
Thanks for your help!!
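For context, the LLFF loader reads `poses_bounds.npy` as an (N, 17) array: per image, a flattened 3x5 matrix (a 3x4 camera-to-world pose with an extra [height, width, focal] column) followed by near/far depth bounds. To the best of my knowledge, the layout can be sketched as follows, assuming poses already converted to the LLFF axis convention:

```python
import numpy as np

def make_poses_bounds(c2w_list, hwf, bounds):
    # c2w_list: list of 3x4 camera-to-world matrices (LLFF convention)
    # hwf: (height, width, focal) of the images
    # bounds: list of (near, far) scene depth bounds per view
    rows = []
    for c2w, (near, far) in zip(c2w_list, bounds):
        m = np.concatenate([c2w, np.array(hwf).reshape(3, 1)], axis=1)  # 3x5
        rows.append(np.concatenate([m.reshape(-1), [near, far]]))       # 17 floats
    return np.stack(rows)  # shape (N, 17)
```

In practice the safer route is the official LLFF `imgs2poses.py` script, which produces this file directly from a COLMAP reconstruction, including the axis permutation COLMAP poses need; the sketch above only illustrates the expected file layout.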
Thank you for your excellent work.
I would like to cite your paper for comparison. To ensure the comparison is accurate, could I obtain a pretrained StylizedNeRF model?
Hi.
Thank you for sharing your nice work!
I have some questions about the dataset for stylization
Which dataset is used for stylization training?
Did you use Wikiart dataset for the style dataset?
How many style images did you use for the training?
I'm looking forward to hearing your reply!
Thank you
Where can I get the pre-trained models decoder.pth and vgg_normalised.pth?
Thanks for the work you share!
I want to know how to generate the content images, i.e., the images inside the all_contents folder that finetune_decoder needs.
Is it using the original NeRF to directly generate multi-view images?
Are there any additional formatting requirements?
Can you provide a download of this data?
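For context, if the content images are indeed multi-view renders from the pretrained photorealistic NeRF, generating them would amount to rendering every training camera pose and saving the frames into the folder. A minimal sketch, where `render_view` is a hypothetical stand-in for the trained NeRF's renderer and the zero-padded naming is an assumption, not a confirmed requirement:

```python
import numpy as np

def render_view(pose, hw=(64, 64)):
    # Hypothetical stand-in for a trained NeRF's renderer;
    # returns an HxWx3 float image in [0, 1] for the given pose.
    h, w = hw
    return np.full((h, w, 3), 0.5, dtype=np.float32)

def generate_contents(poses, out_dir="all_contents"):
    # Render each training camera pose and pair the frame with the
    # path it would be saved to, e.g. all_contents/000.png.
    frames = []
    for i, pose in enumerate(poses):
        img = render_view(pose)
        frames.append((f"{out_dir}/{i:03d}.png", img))
    return frames
```

Whether the repository expects a particular resolution, naming scheme, or file format for these images is exactly the open question here; the sketch only illustrates the rendering loop.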