shirleymaxx / virtualmarker
[CVPR 2023] Official PyTorch implementation of "3D Human Mesh Estimation from Virtual Markers"
License: Apache License 2.0
Thanks for your great work!
I ran the demo and got the result video with the mesh, so I have one question: can I get the SMPL joint parameters from the result?
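In case it helps while waiting for a reply: SMPL-family models expose a linear joint regressor, so if the demo saves the predicted mesh vertices, joint positions can be recovered as a matrix product over them. A minimal sketch with toy data, just to show the shapes involved (the function and variable names are placeholders, not the repo's actual API):

```python
import numpy as np

def regress_joints(vertices, j_regressor):
    """Map SMPL mesh vertices (V, 3) to joint positions (J, 3)
    via the model's linear joint regressor (J, V)."""
    return j_regressor @ vertices

# Toy example with random data; in practice the regressor comes
# from the SMPL model file and the vertices from the network output.
vertices = np.random.rand(6890, 3)      # SMPL has 6890 vertices
j_regressor = np.random.rand(24, 6890)  # SMPL has 24 body joints
joints = regress_joints(vertices, j_regressor)
print(joints.shape)  # (24, 3)
```

Note this gives joint positions, not the SMPL pose (rotation) parameters; recovering those would need an extra fitting step.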
Hi! Great work with the project. I'm having some issues with the output when running the project.
I installed the environment as specified but the output video is exactly the same as the input. I have no warnings or errors about the model or missing files.
Thanks for the help!
Output video:
Output of the console when running the project:
sh command/simple3dmesh_infer/baseline.sh
args: Namespace(batch_size=32, cfg='./configs/simple3dmesh_infer/baseline.yml', cur_path='.', data_path='.', device=[0], experiment_name='simple3dmesh_infer/', gpus=1, input_path='inputs/input.mp4', input_type='video', is_distributed=False, seed=123)
Experiment Data on ./experiment/simple3dmesh_infer/exp_11-24_07_25
Input path: inputs/input.mp4
Input type: video
Running "ffmpeg -i inputs/input.mp4 -r 30 -f image2 -v error /home/seba/Documents/VirtualMarker/inputs/input/%06d.png"
Images saved to "/home/seba/Documents/VirtualMarker/inputs/input"
=> load backbone statedict from ./VirtualPose/output/mix_coco_muco/multi_person_posenet_152/coco_backbone_res152_coco_muco/backbone_res152_mix_muco/model_last.pth.tar
0%| | 0/32 [00:00<?, ?it/s]/home/seba/anaconda3/envs/virtualmarkers/lib/python3.8/site-packages/torch/nn/_reduction.py:42: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
/home/seba/anaconda3/envs/virtualmarkers/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1695392020195/work/aten/src/ATen/native/TensorShape.cpp:3526.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
100%|███████████████████████████████████████████| 32/32 [00:27<00:00, 1.18it/s]
=> init HRNet weights from normal distribution
=> loading pretrained HRNet model models/pytorch/pose_coco/pose_hrnet_w48_384x288.pth
=> reinit final layer...
=> successfully loaded
==> Loading checkpoint
Fetch model weight from experiment/final.pth.tar
Successfully load checkpoint from experiment/final.pth.tar.
===> Start inferencing...
100%|█████████████████████████████████████████████| 8/8 [00:03<00:00, 2.64it/s]
100%|█████████████████████████████████████████| 253/253 [00:04<00:00, 51.32it/s]
Results saved to ./experiment/simple3dmesh_infer/exp_11-24_07_25/vis.
===> Done.
Thanks for sharing such wonderful work!
But I have a problem: the model weights link can't be opened; the status is 404.
Hello, thank you for sharing your impressive work. But I ran into a problem where all my inference images were deleted by the inference code, due to the following line:
VirtualMarker/main/inference.py
Line 293 in 60dbbe4
The code deletes the parent path of `input_path` if the last folder name of my `input_path` has fewer than four words, and this caused me a lot of trouble. It would be better if the author could consider changing this line of code.
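A safer pattern than deriving a deletable folder from the user's `input_path` is to extract frames into a directory the script itself creates, and to refuse to delete anything else. A minimal sketch of that idea (the ffmpeg invocation mirrors the log above; everything else is an assumption, not the repo's actual code):

```python
import os
import shutil
import subprocess
import tempfile

def extract_frames_safely(input_path):
    """Extract frames into a dedicated temp directory instead of a
    folder derived from input_path, so cleanup can never touch
    user data."""
    frame_dir = tempfile.mkdtemp(prefix="virtualmarker_frames_")
    subprocess.run(
        ["ffmpeg", "-i", input_path, "-r", "30", "-f", "image2",
         "-v", "error", os.path.join(frame_dir, "%06d.png")],
        check=True,
    )
    return frame_dir

def cleanup(frame_dir):
    """Only remove directories this script created itself."""
    if os.path.basename(frame_dir).startswith("virtualmarker_frames_"):
        shutil.rmtree(frame_dir, ignore_errors=True)
```

With this guard, a directory named after the user's own data is never eligible for deletion.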
Thanks for your great work!
I have two questions:
How do you get the camera intrinsics for in-the-wild datasets (e.g. COCO)?
How do you get the SMPL parameters for in-the-wild datasets? It seems that you only use H36M to supervise the virtual markers.
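For context on the first question: when true intrinsics are unknown, mesh-recovery work commonly assumes a fixed focal length and a principal point at the image centre (SPIN and HMR do this, for example). A sketch of that convention; the focal value is an assumption, not necessarily what VirtualMarker uses:

```python
import numpy as np

def default_intrinsics(img_w, img_h, focal=5000.0):
    """Build a 3x3 camera matrix under the common weak assumption of
    a fixed focal length and a centred principal point."""
    return np.array([
        [focal, 0.0,   img_w / 2.0],
        [0.0,   focal, img_h / 2.0],
        [0.0,   0.0,   1.0],
    ])

K = default_intrinsics(1920, 1080)
print(K[0, 2], K[1, 2])  # 960.0 540.0
```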
Hello! @ShirleyMaxx
VirtualMarker is nice work. I have a question about the experiment setting for 3DPW in Table 2. Most prior works train on mixed datasets and then test on the 3DPW test set, while VirtualMarker is first trained on mixed datasets and then fine-tuned on the 3DPW train set. Do you have results for VirtualMarker trained on mixed datasets that include the 3DPW train set, instead of fine-tuning in a second stage?
Hello! @ShirleyMaxx
Will you consider offering the scripts to generate the virtual markers for the H36M dataset in the future?
Thanks
Hi, thanks for your great work. The data annotation link and some other links seem invalid and I cannot open them. @ShirleyMaxx
I want to know: can the reconstructed model be exported? I mean, suppose I strike a pose; can it then output a 3D model file that I can use in a 3D engine?
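While waiting for an answer: if the inference code exposes the predicted vertices and the SMPL face indices, the mesh can be written to a Wavefront OBJ file, which most 3D engines import directly. A minimal sketch (the writer is generic; getting the vertex and face arrays out of this repo's pipeline is left as an assumption):

```python
def save_obj(path, vertices, faces):
    """Write a triangle mesh to Wavefront OBJ.
    `vertices` is (V, 3); `faces` is (F, 3) with 0-based indices
    (OBJ itself is 1-based, hence the +1 below)."""
    with open(path, "w") as f:
        for v in vertices:
            f.write(f"v {v[0]} {v[1]} {v[2]}\n")
        for tri in faces:
            f.write(f"f {tri[0] + 1} {tri[1] + 1} {tri[2] + 1}\n")

# Example: write a single triangle.
save_obj("triangle.obj", [[0, 0, 0], [1, 0, 0], [0, 1, 0]], [[0, 1, 2]])
```

A static OBJ captures one pose only; animating it in an engine would additionally need the SMPL skeleton and skinning weights.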
Why is there a kunkun video on the project homepage! Susan!!!
I noticed this unfinished task in your README TODO list. Could you please complete it?
Hi,
After running baseline.sh, the experiment folder was created with the resulting video. This video is exactly the same as the input video, with no keypoints or mesh overlaid on the person doing the side monster walk action.
Could you help me out?
Thank you!
Amazing paper!
The proposed method is very interesting for the human 3D mesh reconstruction field (especially the shape regression).
Could you provide the inference code so we can see how it performs on in-the-wild images?
Hello, thanks for posting this work!
I was wondering if you could share the script you used to process the SURREAL data from the original .mat files into the .pkl annotation file included in the project.
Much appreciated
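Until the authors share their script, a conversion along these lines may be a starting point. The key names below are guesses based on the public SURREAL release (`joints2D`, `joints3D`, `pose`, `shape`); the repo's actual annotation schema may differ:

```python
import pickle
import numpy as np
from scipy.io import loadmat

def mat_to_pkl(mat_path, pkl_path,
               keys=("joints2D", "joints3D", "pose", "shape")):
    """Dump selected arrays from a SURREAL-style info .mat file into
    a pickle. Keys absent from the .mat are silently skipped."""
    data = loadmat(mat_path)
    ann = {k: np.asarray(data[k]) for k in keys if k in data}
    with open(pkl_path, "wb") as f:
        pickle.dump(ann, f)
    return ann
```

Matching the repo's exact .pkl layout would still require inspecting the annotation file it ships.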
Hi there, well done with this excellent work.
I was trying to run the demo of the model and I encountered this problem:
Successfully load checkpoint from experiment/simple3dmesh_infer/baseline_mix/final.pth.tar.
===> Start inferencing...
0%| | 0/8 [00:03<?, ?it/s]
Traceback (most recent call last):
File "main/inference.py", line 355, in
main(args)
File "main/inference.py", line 348, in main
inferencer.infer(epoch=0)
File "main/inference.py", line 124, in infer
_, _, _, _, pred_mesh, _, pred_root_xy_img = self.model(imgs, inv_trans, intrinsic_param, pose_root, depth_factor, flip_item=None, flip_mask=None)
File "/home/innova/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/innova/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 159, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/innova/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/innova/environment/VirtualMarker/virtualmarker/models/simple3dmesh.py", line 62, in forward
pred_xyz_jts, confidence, pred_uvd_jts_flat, pred_root_xy_img = self.simple3dpose(x, trans_inv, intrinsic_param, joint_root, depth_factor, flip_item, flip_output, flip_mask) # (B, J+K, 3), (B, J+K)
File "/home/innova/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/innova/environment/VirtualMarker/virtualmarker/models/simple3dpose.py", line 202, in forward
out = norm_heatmap(self.norm_type, out)
File "/home/innova/environment/VirtualMarker/virtualmarker/models/simple3dpose.py", line 20, in norm_heatmap
heatmap = F.softmax(heatmap, 2)
File "/home/innova/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/functional.py", line 1512, in softmax
ret = input.softmax(dim)
RuntimeError: CUDA out of memory. Tried to allocate 2.53 GiB (GPU 0; 7.92 GiB total capacity; 5.50 GiB already allocated; 1.40 GiB free; 5.65 GiB reserved in total by PyTorch)
Would it be possible for you to assist me with this issue, or give any advice or recommendations?
My GPU is an NVIDIA Quadro P4000 (GP104GL) with 8 GB of memory.
Many thanks
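Two generic mitigations for out-of-memory errors during inference, independent of this repo: run the forward pass under `torch.no_grad()` (the traceback shows no such guard is in effect where the softmax allocates), and split the batch into smaller chunks. A sketch under those assumptions; the model call and tensor shapes are placeholders for the repo's actual inference loop (lowering `batch_size` in the config is the simpler first step):

```python
import torch

@torch.no_grad()  # inference needs no gradients; this alone can save a lot of memory
def infer_in_chunks(model, imgs, chunk=4):
    """Run `model` over `imgs` in small chunks so peak GPU memory
    stays within an 8 GB budget, then concatenate the outputs."""
    outs = []
    for i in range(0, imgs.shape[0], chunk):
        outs.append(model(imgs[i:i + chunk]))
    return torch.cat(outs, dim=0)
```

Chunking only helps if intermediate activations, not the model weights, dominate memory use, which the 2.53 GiB softmax allocation in the traceback suggests is the case here.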
Hi, thanks for the great work. I found that the files in the assets folder only provide 64 markers, omitting the markers on the head. It would be appreciated if you could provide the 96 markers mentioned in the paper.
Thanks very much.