
daniil-osokin / lightweight-human-pose-estimation-3d-demo.pytorch

652 stars · 18 watchers · 137 forks · 82 KB

Real-time 3D multi-person pose estimation demo in PyTorch. OpenVINO backend can be used for fast inference on CPU.

License: Apache License 2.0

Python 69.31% CMake 1.57% C++ 29.12%
human-pose-estimation real-time 3d-human-pose multi-person-pose-estimation lightweight pytorch openvino deep-learning keypoint-estimation cmu-panoptic computer-vision

lightweight-human-pose-estimation-3d-demo.pytorch's Introduction

Real-time 3D Multi-person Pose Estimation Demo

This repository contains a 3D multi-person pose estimation demo in PyTorch. The Intel OpenVINO™ backend can be used for fast inference on CPU. The demo is based on the Lightweight OpenPose and Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB papers. It detects the 2D coordinates of up to 18 keypoint types (ears, eyes, nose, neck, shoulders, elbows, wrists, hips, knees, and ankles) as well as their 3D coordinates. The model was trained on the MS COCO and CMU Panoptic datasets and achieves 100 mm MPJPE (mean per joint position error) on a CMU Panoptic subset. This repository significantly overlaps with https://github.com/opencv/open_model_zoo/, but contains only the code necessary for the 3D human pose estimation demo.

The major part of this work was done by Mariia Ageeva, when she was the 🔝🚀🔥 intern at Intel.

Table of Contents

  • Requirements
  • Prerequisites
  • Pre-trained model
  • Running
  • Inference with OpenVINO
  • Inference with TensorRT

Requirements

  • Python 3.5 (or above)
  • CMake 3.10 (or above)
  • C++ Compiler (g++ or MSVC)
  • OpenCV 4.0 (or above)

  • [Optional] Intel OpenVINO for fast inference on CPU
  • [Optional] NVIDIA TensorRT for fast inference on Jetson

Prerequisites

  1. Install requirements:
pip install -r requirements.txt
  2. Build the pose_extractor module:
python setup.py build_ext
  3. Add the build folder to PYTHONPATH:
export PYTHONPATH=pose_extractor/build/:$PYTHONPATH
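
To verify the build, a quick sanity check is to import the module directly; this should print nothing on success (several import-related issues further down this page start exactly here):

python -c "from pose_extractor import extract_poses"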

Pre-trained model

The pre-trained model is available on Google Drive.

Running

To run the demo, pass the path to the pre-trained checkpoint and a camera ID (or a path to a video file):

python demo.py --model human-pose-estimation-3d.pth --video 0

The camera can capture the scene from different viewpoints, so for correct scene visualization, please pass the camera extrinsics and focal length with the --extrinsics and --fx options, respectively (a sample extrinsics format can be found in the data folder). If no camera parameters are provided, the demo will use default ones.
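
For reference, the extrinsics file stores the camera rotation matrix R and translation vector t (the issues below refer to these fields by those names). A minimal sketch of such a file; the layout is an assumption and the values are placeholders (identity rotation, zero translation), so check the actual sample in the data folder:

{
  "R": [[1, 0, 0],
        [0, 1, 0],
        [0, 0, 1]],
  "t": [[0], [0], [0]]
}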

Inference with OpenVINO

To run with OpenVINO, it is necessary to convert the checkpoint to the OpenVINO format:

  1. Set OpenVINO environment variables:
    source <OpenVINO_INSTALL_DIR>/bin/setupvars.sh
    
  2. Convert checkpoint to ONNX:
    python scripts/convert_to_onnx.py --checkpoint-path human-pose-estimation-3d.pth
    
  3. Convert to OpenVINO format:
    python <OpenVINO_INSTALL_DIR>/deployment_tools/model_optimizer/mo.py --input_model human-pose-estimation-3d.onnx --input=data --mean_values=data[128.0,128.0,128.0] --scale_values=data[255.0,255.0,255.0] --output=features,heatmaps,pafs
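
Note that the --mean_values and --scale_values arguments bake the demo's input preprocessing into the converted model: Model Optimizer normalizes the input as (input - mean) / scale. In Python terms this is equivalent to the following sketch (the image here is a placeholder for a raw uint8 network input):

import numpy as np

image = np.zeros((256, 448, 3), dtype=np.uint8)  # placeholder for a raw input frame
# Equivalent of the --mean_values/--scale_values arguments above:
# subtract the mean, then divide by the scale.
normalized = (image.astype(np.float32) - 128.0) / 255.0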
    

To run the demo with OpenVINO inference, pass the --use-openvino option and specify the device to infer on:

python demo.py --model human-pose-estimation-3d.xml --device CPU --use-openvino --video 0

Inference with TensorRT

To run with TensorRT, it is necessary to install it properly. Please follow the official guide; the following steps worked for me:

  1. Install CUDA 11.1.
  2. Install cuDNN 8 (runtime library, then developer).
  3. Install nvidia-tensorrt:
    python -m pip install nvidia-pyindex
    pip install nvidia-tensorrt==7.2.1.6
    
  4. Install torch2trt.

Convert checkpoint to TensorRT format:

python scripts/convert_to_trt.py --checkpoint-path human-pose-estimation-3d.pth

TensorRT does not support dynamic reshaping of the network input size. Make sure you set the proper network input height and width with the --height and --width options during conversion (otherwise there will be no detections). The default values work for a typical video with a 16:9 aspect ratio (1280x720, 1920x1080). You can check the network input size with print(scaled_img.shape) in demo.py.
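
For example, the ONNX graph dump later on this page shows the default network input as Float(1, 3, 256, 448), i.e. height 256 and width 448 for a 16:9 source, so an explicit conversion call would look like this sketch (adjust the values for your own input resolution):

python scripts/convert_to_trt.py --checkpoint-path human-pose-estimation-3d.pth --height 256 --width 448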

To run the demo with TensorRT inference, pass --use-tensorrt option:

python demo.py --model human-pose-estimation-3d-trt.pth --use-tensorrt --video 0

I have observed a ~10x network inference speedup on an RTX 2060 (compared with the default PyTorch 1.6.0+cu101 inference).

lightweight-human-pose-estimation-3d-demo.pytorch's People

Contributors

daniil-osokin



lightweight-human-pose-estimation-3d-demo.pytorch's Issues

Key point ordering

Is there any particular reason why you didn't follow the COCO format for keypoint ordering?

I have also learned the following (please correct me if I'm wrong):

  1. During training you use the COCO format.
  2. Only after extracting the keypoints do you change their order.

pose_extractor.py

Hi! Why not upload pose_extractor.py directly? I'm really having trouble compiling the CMake project inside setup.py.

Ubuntu 16.04 installation fails

Thanks for great work!

Trying to reproduce the results at the step python setup.py build_ext, I ran into the following problem:

running build_ext
-- Configuring done
-- Generating done
-- Build files have been written to: /home/ysikachov/PycharmProjects/3dpose/lightweight-human-pose-estimation-3d-demo.pytorch/pose_extractor/build/tmp
[ 20%] Linking CXX shared library ../pose_extractor.so
/usr/bin/ld: /usr/local/lib/libpython3.7m.a(exceptions.o): relocation R_X86_64_32S against `_Py_NoneStruct' can not be used when making a shared object; recompile with -fPIC

Could you help please with this?

3D Coordination

In my application, the human stands in a fixed position and a camera rotates around them. When I run the algorithm, there are no translational movements in the 3D visualization; it shows only a rotating skeleton. In other words, the algorithm believes the human rotates in place. How can I fix this issue?

Converting Camera Space co-ordinates to pixels format

Hi @Daniil-Osokin ,

Thanks for replying on the thread here: #28 (comment)

By using the parsing function:

poses_3d, poses_2d = parse_poses(inference_result, input_scale, stride, fx, is_video)

I am able to get the 3D poses in camera space as follows:

neck = [-190.62363 -143.5822 589.5959 0.7935022]
nose = [-195.41003 -150.29758 570.4913 0.6902704]
body_center = [-187.9092 -105.43911 617.3712 -1. ]
l_shoulder = [-177.8263 -141.78792 584.78796 0.8494377]
l_elbow = [-169.89859 -121.25131 595.30255 0.8332423]
l_wrist = [-173.83104 -100.761116 599.0281 0.80825585]
l_hip = [-178.46548 -105.983055 618.76904 0.80648357]
l_knee = [-174.8669 -79.69078 633.88434 0.72061825]
l_ankle = [-169.39319 -56.625427 660.8477 -1. ]
r_shoulder = [-201.56178 -144.06097 592.5884 0.83254266]
r_elbow = [-201.56342 -125.039116 607.90967 0.77501464]
r_wrist = [-199.14009 -110.08909 602.84406 0.72150564]
r_hip = [-188.06558 -107.70375 621.0893 0.7721691]
r_knee = [-185.46873 -84.546844 643.06537 0.78837216]
r_ankle = [-1.7721944e+02 -5.9826057e+01 6.6426001e+02 6.0836148e-01]
r_eye = [-192.99725 -151.44644 569.71234 -1. ]
l_eye = [-186.4578 -152.53731 573.75745 -1. ]
r_ear = [-1.9430800e+02 -1.4779059e+02 5.7083228e+02 3.1776953e-01]
l_ear = [-1.9454225e+02 -1.4477330e+02 5.7405463e+02 5.1655698e-01]

Camera space represents coordinates in X, Y, Z format.
Looking at the results above, the X and Y coordinate values are always negative for the keypoints, and sometimes the Z value is negative as well.

Is there a way or function to convert these camera-space coordinates to pixel coordinates?

Note: since the 2D coordinates obtained from the parsing function are not always accurate, I would like to derive them from the 3D ones.

Thanks in advance !!
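
For reference, projecting camera-space coordinates back to pixels follows the standard pinhole model. Below is a minimal numpy sketch; it assumes fx is the focal length passed to parse_poses, square pixels (fy = fx), and the principal point at the image center, all of which should be verified against the demo's actual intrinsics handling:

import numpy as np

def camera_to_pixel(point_3d, fx, cx, cy):
    # Pinhole projection: u = fx * X / Z + cx, v = fx * Y / Z + cy.
    x, y, z = point_3d[:3]
    return fx * x / z + cx, fx * y / z + cy

# E.g. the neck keypoint quoted above, projected into a 1280x720 frame
# (fx=975 is a hypothetical focal length).
u, v = camera_to_pixel([-190.62363, -143.5822, 589.5959], fx=975.0, cx=640.0, cy=360.0)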

Converting to OpenVINO format

Hi, thank you for the elegant OpenVINO implementation!
I encountered a conversion issue at step 3 of the OpenVINO inference instructions:
[ ERROR ] Cannot pre-process ONNX graph after reading from model file "ProjectPath\human-pose-estimation-3d.onnx". File is corrupt or has unsupported format. Details: 'Graph' object has no attribute 'node'
In step 2, the ONNX conversion script printed a long log with module layer shapes, plus some weird messages like:
%401 : Float(1, 32, 128, 224) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[2, 2]](%data, %model.0.0.weight) # C:\Users\envyi\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\conv.py:342:0

My OS is Windows 10, and the demo runs normally.
Looking forward to your help.

Retrain model?

Hey, sorry if this is a silly question: is there a script for training a model in this repo?

debugging 3D output

Hi @Daniil-Osokin,

The parsing function returns something like:

[ 40.361755 -217.88615 124.155304]
[ 47.1129 -230.51137 136.9353 ]
[ 34.38633 -221.30576 75.94971]
[ 51.526398 -209.39973 121.390335]
[ 52.525818 -207.54752 98.20388 ]
[ 50.982742 -218.03622 92.99812 ]
[ 41.953068 -217.2575 73.95691 ]
[ 40.428207 -218.8851 41.116997]
[ 34.975197 -216.91167 6.3523984]

Can you help me debug this? In what order is this format: x, y, z?

Also, how do I get the 2D values from these 3D points?

Linux installation Issue

Hi,
When I run python setup.py build_ext, I get this error:
-- Configuring incomplete, errors occurred!
See also "/local-scratch/sepid/lightweight-human-pose-estimation-3d-demo.pytorch/pose_extractor/build/tmp/CMakeFiles/CMakeOutput.log".
Traceback (most recent call last):
File "setup.py", line 72, in
cmdclass={'build_ext': CMakeBuild})
File "/local-scratch/anaconda/lib/python3.7/site-packages/setuptools/init.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/local-scratch/anaconda/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/local-scratch/anaconda/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/local-scratch/anaconda/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/local-scratch/anaconda/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 84, in run
_build_ext.run(self)
File "/local-scratch/anaconda/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/local-scratch/anaconda/lib/python3.7/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "setup.py", line 63, in build_extensions
subprocess.check_call(['cmake', ext.cmake_lists_dir] + cmake_args, cwd=tmp_dir)
File "/local-scratch/anaconda/lib/python3.7/subprocess.py", line 347, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/local-scratch/sepid/lightweight-human-pose-estimation-3d-demo.pytorch/pose_extractor', '-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=/local-scratch/sepid/lightweight-human-pose-estimation-3d-demo.pytorch/pose_extractor/build', '-DCMAKE_ARCHIVE_OUTPUT_DIRECTORY_RELEASE=/local-scratch/sepid/lightweight-human-pose-estimation-3d-demo.pytorch/pose_extractor/build/tmp', '-DPYTHON_EXECUTABLE=/local-scratch/anaconda/bin/python']' returned non-zero exit status 1.

Do you have any idea what the problem is?

UBUNTU INSTALLATION FAILED

While running python3.5 setup.py build_ext, the following error occurred.
Python version: 3.5, 3.7 (tested with both)
CMake: 3.16.2
OpenCV: 4.4.0 (installed with pip)

running build_ext
CMake Error at CMakeLists.txt:16 (find_package):
  Could not find a configuration file for package "OpenCV" that is compatible
  with requested version "4.2.0.32".

  The following configuration files were considered but not accepted:

    /usr/share/OpenCV/OpenCVConfig.cmake, version: 3.2.0



-- Configuring incomplete, errors occurred!
See also "/home/ujjawal/my_work/motion_tracking3D/lightweight-human-pose-estimation-3d/pose_extractor/build/tmp/CMakeFiles/CMakeOutput.log".
Traceback (most recent call last):
  File "setup.py", line 72, in <module>
    cmdclass={'build_ext': CMakeBuild})
  File "/home/ujjawal/.local/lib/python3.5/site-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/home/ujjawal/.local/lib/python3.5/site-packages/setuptools/_distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/home/ujjawal/.local/lib/python3.5/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
    self.run_command(cmd)
  File "/home/ujjawal/.local/lib/python3.5/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
    cmd_obj.run()
  File "/home/ujjawal/.local/lib/python3.5/site-packages/setuptools/command/build_ext.py", line 79, in run
    _build_ext.run(self)
  File "/usr/lib/python3/dist-packages/Cython/Distutils/old_build_ext.py", line 185, in run
    _build_ext.build_ext.run(self)
  File "/home/ujjawal/.local/lib/python3.5/site-packages/setuptools/_distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "setup.py", line 63, in build_extensions
    subprocess.check_call(['cmake', ext.cmake_lists_dir] + cmake_args, cwd=tmp_dir)
  File "/usr/lib/python3.5/subprocess.py", line 271, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/home/ujjawal/my_work/motion_tracking3D/lightweight-human-pose-estimation-3d/pose_extractor', '-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=/home/ujjawal/my_work/motion_tracking3D/lightweight-human-pose-estimation-3d/pose_extractor/build', '-DCMAKE_ARCHIVE_OUTPUT_DIRECTORY_RELEASE=/home/ujjawal/my_work/motion_tracking3D/lightweight-human-pose-estimation-3d/pose_extractor/build/tmp', '-DPYTHON_EXECUTABLE=/usr/bin/python3.5']' returned non-zero exit status 1

Re-training/Fine tuning

How can I retrain the model on a custom image dataset, or take the existing weights and fine-tune them?

Axes in 3D Coordinate System

Thanks for this awesome code and demo!

Although the axes were pointed out in #27 (comment), in this specific demo I got confused about which one is the z-axis, because the axis that is perpendicular to the plane does not look like it points at the camera. What does y-down mean? It would be great if you could clarify them in this picture.

[Screenshots from 2020-10-28 illustrating the axes in question]

Picture credit: https://arxiv.org/pdf/1711.05941.pdf

Thanks!


Cannot import extract_poses from pose_extractor.pyd

Hi there!
I am using Windows 10 on an i5 CPU.
Python version: 3.6

I successfully compiled my .pyd with python setup.py build_ext. I added a path to my environment variables pointing to a folder containing only pose_extractor.pyd.

But when I run the camera demo, I get an error:

from pose_extractor import extract_poses
ImportError: cannot import name 'extract_poses'

Any idea why it isn't able to import the name?

demo issue

root@node6:~/data/wang_hao/lightweight-human-pose-estimation-3d-demo.pytorch# python demo.py --model human-pose-estimation-3d.pth --video dance.mov

Cannot load fast pose extraction, switched to legacy slow implementation.

: cannot connect to X server

I have not run steps 2 and 3 because they failed, and following your advice in another issue I only ran the demo, but I got this error. How can I solve it?

Image sequence and location of data output

Hi there

Thanks for setting up this repo, the instructions are clear to setup.

I have a couple of questions. I would like to run this model on a sequence of my own images; however, a window pops up with only one image.

The code I run:
python demo.py --model human-pose-estimation-3d.pth --image ./00_03/*.jpg

My list of images:

00_03_00000001.jpg
00_03_00000002.jpg
00_03_00000003.jpg
...

The other question is: where is the output of the 3D keypoint coordinates written?

Thanks

json file

Why are there R and t here?
If there is only one camera, shouldn't it only consider the camera's intrinsic parameters?
What is the role of R and t?

How to Build module in MacOS

Great work on 3D pose estimation!
I ran into difficulties when building the pose_extractor module on macOS:

$ python setup.py build_ext
running build_ext
-- Configuring done
-- Generating done
-- Build files have been written to: /xxxxx/lightweight-human-pose-estimation-3d-demo.pytorch/pose_extractor/build/tmp
[100%] Built target pose_extractor
$ export PYTHONPATH=/xxxxx/lightweight-human-pose-estimation-3d-demo.pytorch/pose_extractor/build:$PYTHONPATH
$ python demo.py --model human-pose-estimation-3d.pth --video 0
Traceback (most recent call last):
  File "demo.py", line 10, in <module>
    from modules.parse_poses import parse_poses
  File "/xxxxx/lightweight-human-pose-estimation-3d-demo.pytorch/modules/parse_poses.py", line 4, in <module>
    from pose_extractor import extract_poses
ImportError: cannot import name 'extract_poses' from 'pose_extractor' (unknown location)

I also tried to import in python:

$ python
Python 3.7.3 (default, Mar 27 2019, 16:54:48)
[Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pose_extractor
>>> from pose_extractor import parse_poses
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name 'parse_poses' from 'pose_extractor' (unknown location)

Parsing poses takes much more time than model inference

My device: NVIDIA Jetson TX2.
On my device, the parse_poses function takes about 130 ms while network inference takes about 90 ms. Maybe this code was written for powerful x86 CPUs rather than ARM? The parse_poses part seems to be a bunch of matrix computation (maybe?) using numpy.
Any ideas on how to improve it?

How to set PYTHONPATH in a conda environment?

Add build folder to PYTHONPATH:

export PYTHONPATH=pose_extractor/build/:$PYTHONPATH

But my environment is conda, so I get:
from pose_extractor import extract_poses
ImportError: cannot import name 'extract_poses'

error when running demos

When I run the Python demo using the command python demo.py --model human-pose-estimation-3d.pth --video 0, I get the following error. I am running it inside Anaconda on macOS 10.13; running with the OpenVINO option gave me a similar error.

I was able to access my camera with a little test script, so there should be no problem there. Any idea how to solve it? Thanks!

2020-04-21 16:21:48.662 python[693:7647] +[AVCaptureDevice authorizationStatusForMediaType:]: unrecognized selector sent to class 0x7fffb06446a0
[ERROR:0] global /localdisk/jenkins/workspace/OpenCV/OpenVINO/2020.2/build/osx/opencv/modules/videoio/src/cap.cpp (265) open VIDEOIO(AVFOUNDATION): raised unknown C++ exception!

Traceback (most recent call last):
File "demo.py", line 87, in
for frame in frame_provider:
File "/Users/yongtang/Documents/DL/CV/pose/Pose3D/modules/input_reader.py", line 36, in iter
raise IOError('Video {} cannot be opened'.format(self.file_name))
OSError: Video 0 cannot be opened

how to run it on GPU?

Hello, I've run into some problems. When I run it, it shows:

No CUDA device found, inferring on CPU
Qt: Cannot set locale modifiers:

It is very slow, only 0.7 fps. However, nvcc -V works on the command line and some CUDA samples run fine, which shows CUDA and cuDNN are installed successfully. Do you know how to run it on GPU? Thank you very much; I look forward to your reply!
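
A quick, generic way to check whether PyTorch itself can see the GPU (not specific to this repo) is:

python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"

If this prints False, the installed torch build does not match the system CUDA, and inference falls back to the CPU as in the message above.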

Run it on Windows

Not a bug.
pip install -r requirements.txt errors on Torch. Is there a workaround for that?
I am using Windows 10 and Python 3.6.
Thanks

3D coordinate system

Hello. Thanks for making your code available. I was looking through the drawing code, and it appears you are using OpenCV's coordinate system. Could you please confirm that my assessment is correct? Thanks.

Saving the output coordinates

Hello. Thank you for the work. I was wondering if there is an implementation to save the 3D coordinates of a video in .fbx format.

Thank You

Import error for pose_extractor

Hello Daniil,
After successfully running python setup.py build_ext and adding the result to the PYTHONPATH, I get a Segmentation fault: 11 when trying to run demo.py.
I checked the path using python -c "import sys; print(sys.path)" and the appropriate path ('/xxx/lightweight-human-pose-estimation-3d-demo.pytorch/pose_extractor/build') is part of the list.
If I run python -c "from pose_extractor import extract_poses" I get the same segmentation fault 11.

The full error message for the first part is:
objc[4980]: Class CaptureDelegate is implemented in both /xxx/python3.7/site-packages/cv2/cv2.cpython-37m-darwin.so (0x11214f7d8) and /xxxi/lib/libopencv_videoio.3.4.2.dylib (0x123d6c8e8). One of the two will be used. Which one is undefined.
Segmentation fault: 11

I am currently using an Anaconda environment. My operating system is MacOs Catalina.
Thank you

Question About 3D Model

Hi,

First of all, thanks for creating this github repo.

I understand that you aren't the author of the papers, but you seem really knowledgeable about the subject, so I wanted to ask a few questions regarding 3D pose estimation and this repo:

  1. What is the format of the final 3D output of the model (the dimensionality of the output tensor, what it represents, etc.)?

  2. How is the ground-truth data for this model annotated? I've heard some methods generate a 3-dimensional heatmap (x, y, z) and take an L2 loss, while other methods break the problem down into a set of 2D maps.

  3. Many papers describe the z position as "root relative". I don't quite understand how they measure distance along the z-axis. Is 0 the z position of the pelvis, so that a positive ground-truth z is in front of it and -10 is behind it? Isn't it better to apply a Gaussian kernel? What if the pelvis is out of frame? What do we do then?

  4. Research papers measure distance along the z-axis in mm, but along the x and y axes they use pixels. How does that work?

Again, thank you for making this repo available and open.

Thanks

Will it run on mobile?

Hi, I am trying to develop for Unity on Android and have tested a lot of body pose models (single person is enough) using OpenCV for Unity. The goal is to attach a 3D humanoid avatar to the joints; however, most models only provide the 2D points, so this project could be the solution.

I am trying to deploy the model and convert it to an ONNX file so I can read it from Unity, but I am having trouble running convert_to_onnx.py on Windows 10. I tried the Windows Python build and could not install the packages from requirements.txt, so I tried a new environment using Anaconda. Finally, I ended up installing all packages with conda install, and when executing python setup.py build_ext, I get the following error:

running build_ext
-- Building for: NMake Makefiles
CMake Error at CMakeLists.txt:2 (project):
  Generator
    NMake Makefiles
  does not support platform specification, but platform
    x64
  was specified.

CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage

I have Android Studio and Visual Studio installed as well, which contain different versions of CMake, but I don't really know how I should configure that. So instead: is the ONNX model available somewhere else? If it is, is it possible to get the input and output names so I can read the joint information?

Those were like 3 questions in one, but if you want, I can open separate issues to discuss:

1. Will it run on a device? During compilation a "cuda required" message appeared, so maybe not, unless OpenVINO has some magical way to make it run on CPU only.

2. Issues during installation and setup, such as requirements.txt not working as expected.

3. The ONNX model's relevant output names for getting the 3D points.

If any point needs further explanation, please let me know.

EDIT: I managed to run it on a different PC, but now when I execute the line
python scripts/convert_to_onnx.py --checkpoint-path human-pose-estimation-3d.pth

I get this error:

Traceback (most recent call last):
  File "scripts/convert_to_onnx.py", line 5, in <module>
    from models.with_mobilenet import PoseEstimationWithoutMobileNet
ModuleNotFoundError: No module named 'models'

I added the same PYTHONPATH as well

Thank you

Executing on windows with TensorRT

Hi,

Are the TensorRT steps verified on a Windows PC as well?
I could not follow the steps on Windows.
Unfortunately, the Python bindings for TensorRT are not available on Windows either.

Bests,
Sid

an error when running an optimizer

I got an error when running the optimizer as shown below. Any idea why this happens?

python /opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/mo.py --input_model human-pose-estimation-3d.onnx --input=data --mean_values=data[128.0,128.0,128.0] --scale_values=data[255.0,255.0,255.0] --output=features,heatmaps,pafs

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/user/2TB/openvino/lightweight-human-pose-estimation-3d-demo.pytorch/human-pose-estimation-3d.onnx
- Path for generated IR: /home/user/2TB/openvino/lightweight-human-pose-estimation-3d-demo.pytorch/.
- IR output name: human-pose-estimation-3d
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: data
- Output layers: features,heatmaps,pafs
- Input shapes: Not specified, inherited from the model
- Mean values: data[128.0,128.0,128.0]
- Scale values: data[255.0,255.0,255.0]
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
ONNX specific parameters:
Model Optimizer version: 2020.2.0-60-g0bc66e26ff
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.user_data_repack.UserDataRepack'>): No node with name features.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #51.

How are the values in extrinsics.json formulated?

Hi @Daniil-Osokin ,

There is a JSON file named "extrinsics.json" which contains the R & t matrices and is used when converting 3D pose coordinates from camera space to world space here.

Here are my questions:

  1. How are these R & t values derived?
  2. Are these values going to be the same for all cameras/angles? If not, how do I recompute them?
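
As a general note, an extrinsics pair (R, t) is typically applied as a rigid transform between camera and world space. A minimal numpy sketch, assuming the convention X_world = R * X_cam + t (the repo's exact convention should be checked in the code linked above; R and t here are placeholders):

import numpy as np

R = np.eye(3)         # placeholder rotation matrix (from extrinsics.json)
t = np.zeros((3, 1))  # placeholder translation vector (from extrinsics.json)

def camera_to_world(point_cam):
    # Rigid transform: rotate the camera-space point, then translate.
    return (R @ np.asarray(point_cam, dtype=float).reshape(3, 1) + t).ravel()

print(camera_to_world([-190.6, -143.6, 589.6]))  # keypoint values as in the issues above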

Interpreting raw model output

Hey,
I've just started working with ML. I'm using ML.NET and, using a container, converted the model from .pth to .onnx.
So far, so good. I've been able to run the model in C#, but I don't know how to convert the results into strings that represent the various poses found.
I suspect I should use parse_poses, but I'm new to Python, so it's a bit hard for me to read. What is it doing? Taking the features output and deriving the relevant strings?
TIA.
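
For reference, the demo wires the parser up roughly as follows. The import path and call signature are quoted verbatim elsewhere on this page; everything else in this sketch (the stride value, the fx choice, and bundling the three network outputs named features, heatmaps, and pafs) is an assumption to check against demo.py:

from modules.parse_poses import parse_poses  # import path as quoted in the macOS issue above

# inference_result bundles the three network outputs named during conversion:
# features, heatmaps, pafs (e.g. fetched from an ONNX runtime session).
inference_result = (features, heatmaps, pafs)
stride = 8           # assumed network output stride
poses_3d, poses_2d = parse_poses(inference_result, input_scale, stride, fx, is_video)
# poses_2d holds per-person image-plane keypoints; poses_3d holds per-person
# camera-space keypoints in the (x, y, z, confidence) layout shown above.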

Inference on Edge TPU

Hi,
thanks for the project.

I am still new to AI and would like to run the project on an RPi 4 in combination with a Coral USB accelerator (it works fine with the NCS2).

For this I converted the model. The input shape was changed to 1x256x448x3, which corresponds to the original format of scaled_img. In addition, I adapted the InferenceEngine accordingly.

However, I get different results after executing the code, which later causes a memory access error.

I hope you have an idea where my error is and how I can fix it.

Part of the results looks like this (maybe it is helpful):

features values from tflite:
[[[[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 3.4559049e-02
-6.9118097e-02 -5.5294476e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 4.8382670e-02
-5.5294476e-02 -4.8382666e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 3.4559049e-02
-3.4559049e-02 -6.2206283e-02]
...
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -1.3823618e-02
1.9353066e-01 -4.1470855e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -1.1058895e-01
8.2941718e-02 1.3823620e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -2.7647238e-02
-6.9118097e-02 2.0735430e-02]]

[[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 2.7647238e-02
-8.9853525e-02 -4.8382666e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 2.0735430e-02
2.0735430e-02 -4.1470855e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -3.4559049e-02
3.4559049e-02 -4.8382666e-02]
...
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -4.8382666e-02
2.7647239e-01 -6.9118086e-03]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -1.0367714e-01
1.1058895e-01 4.1470859e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -4.1470855e-02
-6.9118097e-02 2.0735430e-02]]

[[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 2.7647238e-02
-8.2941711e-02 -4.8382666e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -2.7647238e-02
3.4559049e-02 -2.0735428e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -6.9118097e-02
9.6765332e-02 -3.4559049e-02]
...
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 6.9118105e-03
3.1103143e-01 2.0735430e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -5.5294476e-02
1.3823619e-01 5.5294476e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -4.1470855e-02
-5.5294476e-02 1.3823620e-02]]

...

[[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 4.8382670e-02
-4.8382666e-02 2.0735430e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -1.3823618e-02
4.1470859e-02 1.7279524e-01]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -2.7647238e-02
6.9118097e-02 2.2117791e-01]
...
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 9.3132257e-10
-6.2206283e-02 -1.3823618e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 1.3823620e-02
-6.2206283e-02 -2.0735428e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -6.9118086e-03
-6.9118097e-02 -6.9118086e-03]]

[[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 4.8382670e-02
-4.8382666e-02 1.3823620e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 6.9118105e-03
4.8382670e-02 1.5205981e-01]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 6.9118105e-03
4.8382670e-02 2.1426609e-01]
...
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 6.9118105e-03
-6.9118097e-02 -1.3823618e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 6.9118105e-03
-6.2206283e-02 -2.0735428e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 6.9118105e-03
-7.6029904e-02 -6.9118086e-03]]

[[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 3.4559049e-02
-4.1470855e-02 4.8382670e-02]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... -1.3823618e-02
6.2206287e-02 1.7279524e-01]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 9.3132257e-10
4.1470859e-02 2.1426609e-01]
...
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 3.4559049e-02
-7.6029904e-02 -6.9118086e-03]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 2.7647238e-02
-7.6029904e-02 -6.9118086e-03]
[ 9.3132257e-10 9.3132257e-10 9.3132257e-10 ... 6.9118105e-03
-8.2941711e-02 6.9118105e-03]]]]

features values from openvino_model:
[[[-2.44140625e-04 -2.13623047e-04 -2.05993652e-04 ... -3.28063965e-04
-2.97546387e-04 -2.28881836e-04]
[-2.21252441e-04 -2.05993652e-04 -1.75476074e-04 ... -2.59399414e-04
-2.44140625e-04 -2.59399414e-04]
[-2.21252441e-04 -1.44958496e-04 -1.14440918e-04 ... -2.59399414e-04
-2.36511230e-04 -2.36511230e-04]
...
[-1.98364258e-04 -2.13623047e-04 -1.75476074e-04 ... -1.83105469e-04
-1.83105469e-04 -1.75476074e-04]
[-2.05993652e-04 -2.44140625e-04 -1.98364258e-04 ... -1.83105469e-04
-1.90734863e-04 -2.21252441e-04]
[-2.44140625e-04 -1.98364258e-04 -2.82287598e-04 ... -1.52587891e-04
-1.90734863e-04 -1.52587891e-04]]

[[-3.28063965e-04 -9.91821289e-05 -1.60217285e-04 ... -1.44958496e-04
-2.51770020e-04 -2.21252441e-04]
[-1.83105469e-04 0.00000000e+00 -1.37329102e-04 ... -1.52587891e-04
-2.44140625e-04 -2.51770020e-04]
[-2.05993652e-04 0.00000000e+00 0.00000000e+00 ... -3.66210938e-04
-2.97546387e-04 -1.60217285e-04]
...
[-1.90734863e-04 -6.10351562e-05 1.83105469e-04 ... 6.86645508e-05
0.00000000e+00 -1.06811523e-04]
[-1.83105469e-04 0.00000000e+00 1.14440918e-04 ... 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[-2.82287598e-04 0.00000000e+00 -1.29699707e-04 ... -1.14440918e-04
0.00000000e+00 -9.91821289e-05]]

[[-4.65393066e-04 -4.11987305e-04 -3.73840332e-04 ... -3.73840332e-04
-4.19616699e-04 -4.65393066e-04]
[-4.50134277e-04 -4.80651855e-04 -3.96728516e-04 ... -4.80651855e-04
-4.95910645e-04 -4.50134277e-04]
[-4.57763672e-04 -4.73022461e-04 -4.73022461e-04 ... -5.11169434e-04
-4.11987305e-04 -4.42504883e-04]
...
[-4.50134277e-04 -4.73022461e-04 -4.27246094e-04 ... -3.50952148e-04
-3.58581543e-04 -3.89099121e-04]
[-4.42504883e-04 -4.34875488e-04 -4.27246094e-04 ... -3.89099121e-04
-3.66210938e-04 -3.58581543e-04]
[-4.57763672e-04 -4.65393066e-04 -5.03540039e-04 ... -3.96728516e-04
-3.66210938e-04 -4.04357910e-04]]

...

[[ 3.30810547e-02 3.55529785e-02 2.82897949e-02 ... -5.01403809e-02
-1.05651855e-01 -4.50134277e-02]
[ 2.61383057e-02 2.57720947e-02 2.22473145e-02 ... -1.08093262e-01
-1.07727051e-01 -7.55004883e-02]
[ 3.06091309e-02 4.54101562e-02 3.17382812e-02 ... -1.33422852e-01
-1.08215332e-01 -6.64672852e-02]
...
[ 2.83813477e-02 1.57470703e-02 2.27050781e-02 ... -7.24487305e-02
-3.86657715e-02 -5.70373535e-02]
[ 3.02276611e-02 3.09448242e-02 4.38232422e-02 ... -3.70788574e-02
-1.86462402e-02 -3.84826660e-02]
[ 3.55224609e-02 -1.41296387e-02 2.09655762e-02 ... -2.11486816e-02
-2.07214355e-02 -4.09240723e-02]]

[[-7.28149414e-02 -8.08715820e-02 -9.50317383e-02 ... 9.97314453e-02
4.61120605e-02 -4.08630371e-02]
[-8.20312500e-02 -7.01293945e-02 -8.61206055e-02 ... 1.18774414e-01
6.27441406e-02 -2.68096924e-02]
[-7.11059570e-02 -7.59887695e-02 -8.07495117e-02 ... 1.32324219e-01
3.90319824e-02 -2.95562744e-02]
...
[-6.37817383e-02 -3.49426270e-02 2.21252441e-03 ... -4.04663086e-02
-5.35278320e-02 -5.59997559e-02]
[-6.97021484e-02 -4.21752930e-02 -1.54113770e-03 ... -5.98449707e-02
-7.10449219e-02 -7.20214844e-02]
[-5.45654297e-02 3.93371582e-02 7.75756836e-02 ... -6.37817383e-02
-6.73828125e-02 -7.04345703e-02]]

[[-3.30810547e-02 -1.82495117e-02 -1.69372559e-02 ... -1.94244385e-02
-6.71386719e-03 2.88238525e-02]
[-2.57568359e-02 -1.54495239e-02 -2.28881836e-04 ... -4.48608398e-03
5.11169434e-03 2.99224854e-02]
[-2.84271240e-02 -9.94873047e-03 -1.35192871e-02 ... 1.59149170e-02
2.22015381e-02 1.96380615e-02]
...
[-2.91137695e-02 1.09405518e-02 3.97338867e-02 ... 6.98242188e-02
2.69927979e-02 2.35137939e-02]
[-3.27758789e-02 1.03759766e-03 3.13110352e-02 ... 3.80859375e-02
4.89807129e-03 1.16729736e-02]
[ 4.80651855e-03 9.50927734e-02 1.45996094e-01 ... 1.78070068e-02
2.21405029e-02 1.81427002e-02]]]

Implement on raspberry pi camera

Hello,

First of all, thank you for sharing the code. I would like to run this on a Jetson Nano with a Raspberry Pi camera. However, I got the following errors:

$ python demo.py --model human-pose-estimation-3d.pth --video 0
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

I suppose the "video" is for webcam, do you have any idea how I could implement on raspberry pi camera?
Thank you in advance,
Fan

some problem about the depth value

Hi, I used the model to estimate poses from 1920x1080 pictures, and the depth value of a joint is 230 in camera coordinates. Does 230 mean 230 mm in the camera coordinate system? The X and Y values are 15 and -15.

What do the X, Y, and depth values mean?
Thanks.

Error converting checkpoint to OpenVINO format

(cv) user@Descartes:~/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch$ python scripts/convert_to_onnx.py --checkpoint-path human-pose-estimation-3d.pth
[WARNING] Not found pre-trained parameters for fake_conv_heatmaps.weight
[WARNING] Not found pre-trained parameters for fake_conv_pafs.weight
graph(%data : Float(1, 3, 256, 448),
      %model.0.0.weight : Float(32, 3, 3, 3),
      %model.0.1.weight : Float(32),
      %model.0.1.bias : Float(32),
      %model.0.1.running_mean : Float(32),
      %model.0.1.running_var : Float(32),
      %model.1.0.weight : Float(32, 1, 3, 3),
      %model.1.1.weight : Float(32),
      %model.1.1.bias : Float(32),
      %model.1.1.running_mean : Float(32),
      %model.1.1.running_var : Float(32),
      %model.1.3.weight : Float(64, 32, 1, 1),
      %model.1.4.weight : Float(64),
      %model.1.4.bias : Float(64),
      %model.1.4.running_mean : Float(64),
      %model.1.4.running_var : Float(64),
      %model.2.0.weight : Float(64, 1, 3, 3),
      %model.2.1.weight : Float(64),
      %model.2.1.bias : Float(64),
      %model.2.1.running_mean : Float(64),
      %model.2.1.running_var : Float(64),
      %model.2.3.weight : Float(128, 64, 1, 1),
      %model.2.4.weight : Float(128),
      %model.2.4.bias : Float(128),
      %model.2.4.running_mean : Float(128),
      %model.2.4.running_var : Float(128),
      %model.3.0.weight : Float(128, 1, 3, 3),
      %model.3.1.weight : Float(128),
      %model.3.1.bias : Float(128),
      %model.3.1.running_mean : Float(128),
      %model.3.1.running_var : Float(128),
      %model.3.3.weight : Float(128, 128, 1, 1),
      %model.3.4.weight : Float(128),
      %model.3.4.bias : Float(128),
      %model.3.4.running_mean : Float(128),
      %model.3.4.running_var : Float(128),
      %model.4.0.weight : Float(128, 1, 3, 3),
      %model.4.1.weight : Float(128),
      %model.4.1.bias : Float(128),
      %model.4.1.running_mean : Float(128),
      %model.4.1.running_var : Float(128),
      %model.4.3.weight : Float(256, 128, 1, 1),
      %model.4.4.weight : Float(256),
      %model.4.4.bias : Float(256),
      %model.4.4.running_mean : Float(256),
      %model.4.4.running_var : Float(256),
      %model.5.0.weight : Float(256, 1, 3, 3),
      %model.5.1.weight : Float(256),
      %model.5.1.bias : Float(256),
      %model.5.1.running_mean : Float(256),
      %model.5.1.running_var : Float(256),
      %model.5.3.weight : Float(256, 256, 1, 1),
      %model.5.4.weight : Float(256),
      %model.5.4.bias : Float(256),
      %model.5.4.running_mean : Float(256),
      %model.5.4.running_var : Float(256),
      %model.6.0.weight : Float(256, 1, 3, 3),
      %model.6.1.weight : Float(256),
      %model.6.1.bias : Float(256),
      %model.6.1.running_mean : Float(256),
      %model.6.1.running_var : Float(256),
      %model.6.3.weight : Float(512, 256, 1, 1),
      %model.6.4.weight : Float(512),
      %model.6.4.bias : Float(512),
      %model.6.4.running_mean : Float(512),
      %model.6.4.running_var : Float(512),
      %model.7.0.weight : Float(512, 1, 3, 3),
      %model.7.1.weight : Float(512),
      %model.7.1.bias : Float(512),
      %model.7.1.running_mean : Float(512),
      %model.7.1.running_var : Float(512),
      %model.7.3.weight : Float(512, 512, 1, 1),
      %model.7.4.weight : Float(512),
      %model.7.4.bias : Float(512),
      %model.7.4.running_mean : Float(512),
      %model.7.4.running_var : Float(512),
      %model.8.0.weight : Float(512, 1, 3, 3),
      %model.8.1.weight : Float(512),
      %model.8.1.bias : Float(512),
      %model.8.1.running_mean : Float(512),
      %model.8.1.running_var : Float(512),
      %model.8.3.weight : Float(512, 512, 1, 1),
      %model.8.4.weight : Float(512),
      %model.8.4.bias : Float(512),
      %model.8.4.running_mean : Float(512),
      %model.8.4.running_var : Float(512),
      %model.9.0.weight : Float(512, 1, 3, 3),
      %model.9.1.weight : Float(512),
      %model.9.1.bias : Float(512),
      %model.9.1.running_mean : Float(512),
      %model.9.1.running_var : Float(512),
      %model.9.3.weight : Float(512, 512, 1, 1),
      %model.9.4.weight : Float(512),
      %model.9.4.bias : Float(512),
      %model.9.4.running_mean : Float(512),
      %model.9.4.running_var : Float(512),
      %model.10.0.weight : Float(512, 1, 3, 3),
      %model.10.1.weight : Float(512),
      %model.10.1.bias : Float(512),
      %model.10.1.running_mean : Float(512),
      %model.10.1.running_var : Float(512),
      %model.10.3.weight : Float(512, 512, 1, 1),
      %model.10.4.weight : Float(512),
      %model.10.4.bias : Float(512),
      %model.10.4.running_mean : Float(512),
      %model.10.4.running_var : Float(512),
      %model.11.0.weight : Float(512, 1, 3, 3),
      %model.11.1.weight : Float(512),
      %model.11.1.bias : Float(512),
      %model.11.1.running_mean : Float(512),
      %model.11.1.running_var : Float(512),
      %model.11.3.weight : Float(512, 512, 1, 1),
      %model.11.4.weight : Float(512),
      %model.11.4.bias : Float(512),
      %model.11.4.running_mean : Float(512),
      %model.11.4.running_var : Float(512),
      %cpm.align.0.weight : Float(128, 512, 1, 1),
      %cpm.align.0.bias : Float(128),
      %cpm.trunk.0.0.weight : Float(128, 1, 3, 3),
      %cpm.trunk.0.2.weight : Float(128, 128, 1, 1),
      %cpm.trunk.1.0.weight : Float(128, 1, 3, 3),
      %cpm.trunk.1.2.weight : Float(128, 128, 1, 1),
      %cpm.trunk.2.0.weight : Float(128, 1, 3, 3),
      %cpm.trunk.2.2.weight : Float(128, 128, 1, 1),
      %cpm.conv.0.weight : Float(128, 128, 3, 3),
      %cpm.conv.0.bias : Float(128),
      %initial_stage.trunk.0.0.weight : Float(128, 128, 3, 3),
      %initial_stage.trunk.0.0.bias : Float(128),
      %initial_stage.trunk.1.0.weight : Float(128, 128, 3, 3),
      %initial_stage.trunk.1.0.bias : Float(128),
      %initial_stage.trunk.2.0.weight : Float(128, 128, 3, 3),
      %initial_stage.trunk.2.0.bias : Float(128),
      %initial_stage.heatmaps.0.0.weight : Float(512, 128, 1, 1),
      %initial_stage.heatmaps.0.0.bias : Float(512),
      %initial_stage.heatmaps.1.0.weight : Float(19, 512, 1, 1),
      %initial_stage.heatmaps.1.0.bias : Float(19),
      %initial_stage.pafs.0.0.weight : Float(512, 128, 1, 1),
      %initial_stage.pafs.0.0.bias : Float(512),
      %initial_stage.pafs.1.0.weight : Float(38, 512, 1, 1),
      %initial_stage.pafs.1.0.bias : Float(38),
      %refinement_stages.0.trunk.0.initial.0.weight : Float(128, 185, 1, 1),
      %refinement_stages.0.trunk.0.initial.0.bias : Float(128),
      %refinement_stages.0.trunk.0.trunk.0.0.weight : Float(128, 128, 3, 3),
      %refinement_stages.0.trunk.0.trunk.0.0.bias : Float(128),
      %refinement_stages.0.trunk.0.trunk.0.1.weight : Float(128),
      %refinement_stages.0.trunk.0.trunk.0.1.bias : Float(128),
      %refinement_stages.0.trunk.0.trunk.0.1.running_mean : Float(128),
      %refinement_stages.0.trunk.0.trunk.0.1.running_var : Float(128),
      %refinement_stages.0.trunk.0.trunk.1.0.weight : Float(128, 128, 3, 3),
      %refinement_stages.0.trunk.0.trunk.1.0.bias : Float(128),
      %refinement_stages.0.trunk.0.trunk.1.1.weight : Float(128),
      %refinement_stages.0.trunk.0.trunk.1.1.bias : Float(128),
      %refinement_stages.0.trunk.0.trunk.1.1.running_mean : Float(128),
      %refinement_stages.0.trunk.0.trunk.1.1.running_var : Float(128),
      %refinement_stages.0.trunk.1.initial.0.weight : Float(128, 128, 1, 1),
      %refinement_stages.0.trunk.1.initial.0.bias : Float(128),
      %refinement_stages.0.trunk.1.trunk.0.0.weight : Float(128, 128, 3, 3),
      %refinement_stages.0.trunk.1.trunk.0.0.bias : Float(128),
      %refinement_stages.0.trunk.1.trunk.0.1.weight : Float(128),
      %refinement_stages.0.trunk.1.trunk.0.1.bias : Float(128),
      %refinement_stages.0.trunk.1.trunk.0.1.running_mean : Float(128),
      %refinement_stages.0.trunk.1.trunk.0.1.running_var : Float(128),
      %refinement_stages.0.trunk.1.trunk.1.0.weight : Float(128, 128, 3, 3),
      %refinement_stages.0.trunk.1.trunk.1.0.bias : Float(128),
      %refinement_stages.0.trunk.1.trunk.1.1.weight : Float(128),
      %refinement_stages.0.trunk.1.trunk.1.1.bias : Float(128),
      %refinement_stages.0.trunk.1.trunk.1.1.running_mean : Float(128),
      %refinement_stages.0.trunk.1.trunk.1.1.running_var : Float(128),
      %refinement_stages.0.trunk.2.initial.0.weight : Float(128, 128, 1, 1),
      %refinement_stages.0.trunk.2.initial.0.bias : Float(128),
      %refinement_stages.0.trunk.2.trunk.0.0.weight : Float(128, 128, 3, 3),
      %refinement_stages.0.trunk.2.trunk.0.0.bias : Float(128),
      %refinement_stages.0.trunk.2.trunk.0.1.weight : Float(128),
      %refinement_stages.0.trunk.2.trunk.0.1.bias : Float(128),
      %refinement_stages.0.trunk.2.trunk.0.1.running_mean : Float(128),
      %refinement_stages.0.trunk.2.trunk.0.1.running_var : Float(128),
      %refinement_stages.0.trunk.2.trunk.1.0.weight : Float(128, 128, 3, 3),
      %refinement_stages.0.trunk.2.trunk.1.0.bias : Float(128),
      %refinement_stages.0.trunk.2.trunk.1.1.weight : Float(128),
      %refinement_stages.0.trunk.2.trunk.1.1.bias : Float(128),
      %refinement_stages.0.trunk.2.trunk.1.1.running_mean : Float(128),
      %refinement_stages.0.trunk.2.trunk.1.1.running_var : Float(128),
      %refinement_stages.0.trunk.3.initial.0.weight : Float(128, 128, 1, 1),
      %refinement_stages.0.trunk.3.initial.0.bias : Float(128),
      %refinement_stages.0.trunk.3.trunk.0.0.weight : Float(128, 128, 3, 3),
      %refinement_stages.0.trunk.3.trunk.0.0.bias : Float(128),
      %refinement_stages.0.trunk.3.trunk.0.1.weight : Float(128),
      %refinement_stages.0.trunk.3.trunk.0.1.bias : Float(128),
      %refinement_stages.0.trunk.3.trunk.0.1.running_mean : Float(128),
      %refinement_stages.0.trunk.3.trunk.0.1.running_var : Float(128),
      %refinement_stages.0.trunk.3.trunk.1.0.weight : Float(128, 128, 3, 3),
      %refinement_stages.0.trunk.3.trunk.1.0.bias : Float(128),
      %refinement_stages.0.trunk.3.trunk.1.1.weight : Float(128),
      %refinement_stages.0.trunk.3.trunk.1.1.bias : Float(128),
      %refinement_stages.0.trunk.3.trunk.1.1.running_mean : Float(128),
      %refinement_stages.0.trunk.3.trunk.1.1.running_var : Float(128),
      %refinement_stages.0.trunk.4.initial.0.weight : Float(128, 128, 1, 1),
      %refinement_stages.0.trunk.4.initial.0.bias : Float(128),
      %refinement_stages.0.trunk.4.trunk.0.0.weight : Float(128, 128, 3, 3),
      %refinement_stages.0.trunk.4.trunk.0.0.bias : Float(128),
      %refinement_stages.0.trunk.4.trunk.0.1.weight : Float(128),
      %refinement_stages.0.trunk.4.trunk.0.1.bias : Float(128),
      %refinement_stages.0.trunk.4.trunk.0.1.running_mean : Float(128),
      %refinement_stages.0.trunk.4.trunk.0.1.running_var : Float(128),
      %refinement_stages.0.trunk.4.trunk.1.0.weight : Float(128, 128, 3, 3),
      %refinement_stages.0.trunk.4.trunk.1.0.bias : Float(128),
      %refinement_stages.0.trunk.4.trunk.1.1.weight : Float(128),
      %refinement_stages.0.trunk.4.trunk.1.1.bias : Float(128),
      %refinement_stages.0.trunk.4.trunk.1.1.running_mean : Float(128),
      %refinement_stages.0.trunk.4.trunk.1.1.running_var : Float(128),
      %refinement_stages.0.heatmaps.0.0.weight : Float(128, 128, 1, 1),
      %refinement_stages.0.heatmaps.0.0.bias : Float(128),
      %refinement_stages.0.heatmaps.1.0.weight : Float(19, 128, 1, 1),
      %refinement_stages.0.heatmaps.1.0.bias : Float(19),
      %refinement_stages.0.pafs.0.0.weight : Float(128, 128, 1, 1),
      %refinement_stages.0.pafs.0.0.bias : Float(128),
      %refinement_stages.0.pafs.1.0.weight : Float(38, 128, 1, 1),
      %refinement_stages.0.pafs.1.0.bias : Float(38),
      %Pose3D.stem.0.bottleneck.0.0.weight : Float(92, 185, 1, 1),
      %Pose3D.stem.0.bottleneck.0.0.bias : Float(92),
      %Pose3D.stem.0.bottleneck.0.1.weight : Float(92),
      %Pose3D.stem.0.bottleneck.0.1.bias : Float(92),
      %Pose3D.stem.0.bottleneck.0.1.running_mean : Float(92),
      %Pose3D.stem.0.bottleneck.0.1.running_var : Float(92),
      %Pose3D.stem.0.bottleneck.1.0.weight : Float(92, 92, 3, 3),
      %Pose3D.stem.0.bottleneck.1.0.bias : Float(92),
      %Pose3D.stem.0.bottleneck.1.1.weight : Float(92),
      %Pose3D.stem.0.bottleneck.1.1.bias : Float(92),
      %Pose3D.stem.0.bottleneck.1.1.running_mean : Float(92),
      %Pose3D.stem.0.bottleneck.1.1.running_var : Float(92),
      %Pose3D.stem.0.bottleneck.2.0.weight : Float(128, 92, 1, 1),
      %Pose3D.stem.0.bottleneck.2.0.bias : Float(128),
      %Pose3D.stem.0.bottleneck.2.1.weight : Float(128),
      %Pose3D.stem.0.bottleneck.2.1.bias : Float(128),
      %Pose3D.stem.0.bottleneck.2.1.running_mean : Float(128),
      %Pose3D.stem.0.bottleneck.2.1.running_var : Float(128),
      %Pose3D.stem.0.align.0.weight : Float(128, 185, 1, 1),
      %Pose3D.stem.0.align.0.bias : Float(128),
      %Pose3D.stem.0.align.1.weight : Float(128),
      %Pose3D.stem.0.align.1.bias : Float(128),
      %Pose3D.stem.0.align.1.running_mean : Float(128),
      %Pose3D.stem.0.align.1.running_var : Float(128),
      %Pose3D.stem.1.bottleneck.0.0.weight : Float(64, 128, 1, 1),
      %Pose3D.stem.1.bottleneck.0.0.bias : Float(64),
      %Pose3D.stem.1.bottleneck.0.1.weight : Float(64),
      %Pose3D.stem.1.bottleneck.0.1.bias : Float(64),
      %Pose3D.stem.1.bottleneck.0.1.running_mean : Float(64),
      %Pose3D.stem.1.bottleneck.0.1.running_var : Float(64),
      %Pose3D.stem.1.bottleneck.1.0.weight : Float(64, 64, 3, 3),
      %Pose3D.stem.1.bottleneck.1.0.bias : Float(64),
      %Pose3D.stem.1.bottleneck.1.1.weight : Float(64),
      %Pose3D.stem.1.bottleneck.1.1.bias : Float(64),
      %Pose3D.stem.1.bottleneck.1.1.running_mean : Float(64),
      %Pose3D.stem.1.bottleneck.1.1.running_var : Float(64),
      %Pose3D.stem.1.bottleneck.2.0.weight : Float(128, 64, 1, 1),
      %Pose3D.stem.1.bottleneck.2.0.bias : Float(128),
      %Pose3D.stem.1.bottleneck.2.1.weight : Float(128),
      %Pose3D.stem.1.bottleneck.2.1.bias : Float(128),
      %Pose3D.stem.1.bottleneck.2.1.running_mean : Float(128),
      %Pose3D.stem.1.bottleneck.2.1.running_var : Float(128),
      %Pose3D.stem.2.bottleneck.0.0.weight : Float(64, 128, 1, 1),
      %Pose3D.stem.2.bottleneck.0.0.bias : Float(64),
      %Pose3D.stem.2.bottleneck.0.1.weight : Float(64),
      %Pose3D.stem.2.bottleneck.0.1.bias : Float(64),
      %Pose3D.stem.2.bottleneck.0.1.running_mean : Float(64),
      %Pose3D.stem.2.bottleneck.0.1.running_var : Float(64),
      %Pose3D.stem.2.bottleneck.1.0.weight : Float(64, 64, 3, 3),
      %Pose3D.stem.2.bottleneck.1.0.bias : Float(64),
      %Pose3D.stem.2.bottleneck.1.1.weight : Float(64),
      %Pose3D.stem.2.bottleneck.1.1.bias : Float(64),
      %Pose3D.stem.2.bottleneck.1.1.running_mean : Float(64),
      %Pose3D.stem.2.bottleneck.1.1.running_var : Float(64),
      %Pose3D.stem.2.bottleneck.2.0.weight : Float(128, 64, 1, 1),
      %Pose3D.stem.2.bottleneck.2.0.bias : Float(128),
      %Pose3D.stem.2.bottleneck.2.1.weight : Float(128),
      %Pose3D.stem.2.bottleneck.2.1.bias : Float(128),
      %Pose3D.stem.2.bottleneck.2.1.running_mean : Float(128),
      %Pose3D.stem.2.bottleneck.2.1.running_var : Float(128),
      %Pose3D.stem.3.bottleneck.0.0.weight : Float(64, 128, 1, 1),
      %Pose3D.stem.3.bottleneck.0.0.bias : Float(64),
      %Pose3D.stem.3.bottleneck.0.1.weight : Float(64),
      %Pose3D.stem.3.bottleneck.0.1.bias : Float(64),
      %Pose3D.stem.3.bottleneck.0.1.running_mean : Float(64),
      %Pose3D.stem.3.bottleneck.0.1.running_var : Float(64),
      %Pose3D.stem.3.bottleneck.1.0.weight : Float(64, 64, 3, 3),
      %Pose3D.stem.3.bottleneck.1.0.bias : Float(64),
      %Pose3D.stem.3.bottleneck.1.1.weight : Float(64),
      %Pose3D.stem.3.bottleneck.1.1.bias : Float(64),
      %Pose3D.stem.3.bottleneck.1.1.running_mean : Float(64),
      %Pose3D.stem.3.bottleneck.1.1.running_var : Float(64),
      %Pose3D.stem.3.bottleneck.2.0.weight : Float(128, 64, 1, 1),
      %Pose3D.stem.3.bottleneck.2.0.bias : Float(128),
      %Pose3D.stem.3.bottleneck.2.1.weight : Float(128),
      %Pose3D.stem.3.bottleneck.2.1.bias : Float(128),
      %Pose3D.stem.3.bottleneck.2.1.running_mean : Float(128),
      %Pose3D.stem.3.bottleneck.2.1.running_var : Float(128),
      %Pose3D.stem.4.bottleneck.0.0.weight : Float(64, 128, 1, 1),
      %Pose3D.stem.4.bottleneck.0.0.bias : Float(64),
      %Pose3D.stem.4.bottleneck.0.1.weight : Float(64),
      %Pose3D.stem.4.bottleneck.0.1.bias : Float(64),
      %Pose3D.stem.4.bottleneck.0.1.running_mean : Float(64),
      %Pose3D.stem.4.bottleneck.0.1.running_var : Float(64),
      %Pose3D.stem.4.bottleneck.1.0.weight : Float(64, 64, 3, 3),
      %Pose3D.stem.4.bottleneck.1.0.bias : Float(64),
      %Pose3D.stem.4.bottleneck.1.1.weight : Float(64),
      %Pose3D.stem.4.bottleneck.1.1.bias : Float(64),
      %Pose3D.stem.4.bottleneck.1.1.running_mean : Float(64),
      %Pose3D.stem.4.bottleneck.1.1.running_var : Float(64),
      %Pose3D.stem.4.bottleneck.2.0.weight : Float(128, 64, 1, 1),
      %Pose3D.stem.4.bottleneck.2.0.bias : Float(128),
      %Pose3D.stem.4.bottleneck.2.1.weight : Float(128),
      %Pose3D.stem.4.bottleneck.2.1.bias : Float(128),
      %Pose3D.stem.4.bottleneck.2.1.running_mean : Float(128),
      %Pose3D.stem.4.bottleneck.2.1.running_var : Float(128),
      %Pose3D.prediction.trunk.0.initial.0.weight : Float(128, 128, 1, 1),
      %Pose3D.prediction.trunk.0.initial.0.bias : Float(128),
      %Pose3D.prediction.trunk.0.trunk.0.0.weight : Float(128, 128, 3, 3),
      %Pose3D.prediction.trunk.0.trunk.0.0.bias : Float(128),
      %Pose3D.prediction.trunk.0.trunk.0.1.weight : Float(128),
      %Pose3D.prediction.trunk.0.trunk.0.1.bias : Float(128),
      %Pose3D.prediction.trunk.0.trunk.0.1.running_mean : Float(128),
      %Pose3D.prediction.trunk.0.trunk.0.1.running_var : Float(128),
      %Pose3D.prediction.trunk.0.trunk.1.0.weight : Float(128, 128, 3, 3),
      %Pose3D.prediction.trunk.0.trunk.1.0.bias : Float(128),
      %Pose3D.prediction.trunk.0.trunk.1.1.weight : Float(128),
      %Pose3D.prediction.trunk.0.trunk.1.1.bias : Float(128),
      %Pose3D.prediction.trunk.0.trunk.1.1.running_mean : Float(128),
      %Pose3D.prediction.trunk.0.trunk.1.1.running_var : Float(128),
      %Pose3D.prediction.trunk.1.initial.0.weight : Float(128, 128, 1, 1),
      %Pose3D.prediction.trunk.1.initial.0.bias : Float(128),
      %Pose3D.prediction.trunk.1.trunk.0.0.weight : Float(128, 128, 3, 3),
      %Pose3D.prediction.trunk.1.trunk.0.0.bias : Float(128),
      %Pose3D.prediction.trunk.1.trunk.0.1.weight : Float(128),
      %Pose3D.prediction.trunk.1.trunk.0.1.bias : Float(128),
      %Pose3D.prediction.trunk.1.trunk.0.1.running_mean : Float(128),
      %Pose3D.prediction.trunk.1.trunk.0.1.running_var : Float(128),
      %Pose3D.prediction.trunk.1.trunk.1.0.weight : Float(128, 128, 3, 3),
      %Pose3D.prediction.trunk.1.trunk.1.0.bias : Float(128),
      %Pose3D.prediction.trunk.1.trunk.1.1.weight : Float(128),
      %Pose3D.prediction.trunk.1.trunk.1.1.bias : Float(128),
      %Pose3D.prediction.trunk.1.trunk.1.1.running_mean : Float(128),
      %Pose3D.prediction.trunk.1.trunk.1.1.running_var : Float(128),
      %Pose3D.prediction.feature_maps.0.0.weight : Float(128, 128, 1, 1),
      %Pose3D.prediction.feature_maps.0.0.bias : Float(128),
      %Pose3D.prediction.feature_maps.1.0.weight : Float(57, 128, 1, 1),
      %Pose3D.prediction.feature_maps.1.0.bias : Float(57),
      %fake_conv_heatmaps.weight : Float(19, 19, 1, 1),
      %fake_conv_pafs.weight : Float(38, 38, 1, 1)):
  %401 : Float(1, 32, 128, 224) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[2, 2]](%data, %model.0.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %402 : Float(1, 32, 128, 224) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%401, %model.0.1.weight, %model.0.1.bias, %model.0.1.running_mean, %model.0.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %403 : Float(1, 32, 128, 224) = onnx::Relu(%402) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %404 : Float(1, 32, 128, 224) = onnx::Conv[dilations=[1, 1], group=32, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%403, %model.1.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %405 : Float(1, 32, 128, 224) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%404, %model.1.1.weight, %model.1.1.bias, %model.1.1.running_mean, %model.1.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %406 : Float(1, 32, 128, 224) = onnx::Relu(%405) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %407 : Float(1, 64, 128, 224) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%406, %model.1.3.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %408 : Float(1, 64, 128, 224) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%407, %model.1.4.weight, %model.1.4.bias, %model.1.4.running_mean, %model.1.4.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %409 : Float(1, 64, 128, 224) = onnx::Relu(%408) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %410 : Float(1, 64, 64, 112) = onnx::Conv[dilations=[1, 1], group=64, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[2, 2]](%409, %model.2.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %411 : Float(1, 64, 64, 112) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%410, %model.2.1.weight, %model.2.1.bias, %model.2.1.running_mean, %model.2.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %412 : Float(1, 64, 64, 112) = onnx::Relu(%411) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %413 : Float(1, 128, 64, 112) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%412, %model.2.3.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %414 : Float(1, 128, 64, 112) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%413, %model.2.4.weight, %model.2.4.bias, %model.2.4.running_mean, %model.2.4.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %415 : Float(1, 128, 64, 112) = onnx::Relu(%414) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %416 : Float(1, 128, 64, 112) = onnx::Conv[dilations=[1, 1], group=128, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%415, %model.3.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %417 : Float(1, 128, 64, 112) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%416, %model.3.1.weight, %model.3.1.bias, %model.3.1.running_mean, %model.3.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %418 : Float(1, 128, 64, 112) = onnx::Relu(%417) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %419 : Float(1, 128, 64, 112) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%418, %model.3.3.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %420 : Float(1, 128, 64, 112) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%419, %model.3.4.weight, %model.3.4.bias, %model.3.4.running_mean, %model.3.4.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %421 : Float(1, 128, 64, 112) = onnx::Relu(%420) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %422 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=128, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[2, 2]](%421, %model.4.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %423 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%422, %model.4.1.weight, %model.4.1.bias, %model.4.1.running_mean, %model.4.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %424 : Float(1, 128, 32, 56) = onnx::Relu(%423) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %425 : Float(1, 256, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%424, %model.4.3.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %426 : Float(1, 256, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%425, %model.4.4.weight, %model.4.4.bias, %model.4.4.running_mean, %model.4.4.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %427 : Float(1, 256, 32, 56) = onnx::Relu(%426) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %428 : Float(1, 256, 32, 56) = onnx::Conv[dilations=[1, 1], group=256, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%427, %model.5.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %429 : Float(1, 256, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%428, %model.5.1.weight, %model.5.1.bias, %model.5.1.running_mean, %model.5.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %430 : Float(1, 256, 32, 56) = onnx::Relu(%429) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %431 : Float(1, 256, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%430, %model.5.3.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %432 : Float(1, 256, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%431, %model.5.4.weight, %model.5.4.bias, %model.5.4.running_mean, %model.5.4.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %433 : Float(1, 256, 32, 56) = onnx::Relu(%432) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %434 : Float(1, 256, 32, 56) = onnx::Conv[dilations=[1, 1], group=256, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%433, %model.6.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %435 : Float(1, 256, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%434, %model.6.1.weight, %model.6.1.bias, %model.6.1.running_mean, %model.6.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %436 : Float(1, 256, 32, 56) = onnx::Relu(%435) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %437 : Float(1, 512, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%436, %model.6.3.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %438 : Float(1, 512, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%437, %model.6.4.weight, %model.6.4.bias, %model.6.4.running_mean, %model.6.4.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %439 : Float(1, 512, 32, 56) = onnx::Relu(%438) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %440 : Float(1, 512, 32, 56) = onnx::Conv[dilations=[2, 2], group=512, kernel_shape=[3, 3], pads=[2, 2, 2, 2], strides=[1, 1]](%439, %model.7.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %441 : Float(1, 512, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%440, %model.7.1.weight, %model.7.1.bias, %model.7.1.running_mean, %model.7.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %442 : Float(1, 512, 32, 56) = onnx::Relu(%441) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %443 : Float(1, 512, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%442, %model.7.3.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %444 : Float(1, 512, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%443, %model.7.4.weight, %model.7.4.bias, %model.7.4.running_mean, %model.7.4.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %445 : Float(1, 512, 32, 56) = onnx::Relu(%444) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %446 : Float(1, 512, 32, 56) = onnx::Conv[dilations=[1, 1], group=512, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%445, %model.8.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %447 : Float(1, 512, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%446, %model.8.1.weight, %model.8.1.bias, %model.8.1.running_mean, %model.8.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %448 : Float(1, 512, 32, 56) = onnx::Relu(%447) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %449 : Float(1, 512, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%448, %model.8.3.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %450 : Float(1, 512, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%449, %model.8.4.weight, %model.8.4.bias, %model.8.4.running_mean, %model.8.4.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %451 : Float(1, 512, 32, 56) = onnx::Relu(%450) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %452 : Float(1, 512, 32, 56) = onnx::Conv[dilations=[1, 1], group=512, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%451, %model.9.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %453 : Float(1, 512, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%452, %model.9.1.weight, %model.9.1.bias, %model.9.1.running_mean, %model.9.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %454 : Float(1, 512, 32, 56) = onnx::Relu(%453) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %455 : Float(1, 512, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%454, %model.9.3.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %456 : Float(1, 512, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%455, %model.9.4.weight, %model.9.4.bias, %model.9.4.running_mean, %model.9.4.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %457 : Float(1, 512, 32, 56) = onnx::Relu(%456) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %458 : Float(1, 512, 32, 56) = onnx::Conv[dilations=[1, 1], group=512, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%457, %model.10.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %459 : Float(1, 512, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%458, %model.10.1.weight, %model.10.1.bias, %model.10.1.running_mean, %model.10.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %460 : Float(1, 512, 32, 56) = onnx::Relu(%459) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %461 : Float(1, 512, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%460, %model.10.3.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %462 : Float(1, 512, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%461, %model.10.4.weight, %model.10.4.bias, %model.10.4.running_mean, %model.10.4.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %463 : Float(1, 512, 32, 56) = onnx::Relu(%462) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %464 : Float(1, 512, 32, 56) = onnx::Conv[dilations=[1, 1], group=512, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%463, %model.11.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %465 : Float(1, 512, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%464, %model.11.1.weight, %model.11.1.bias, %model.11.1.running_mean, %model.11.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %466 : Float(1, 512, 32, 56) = onnx::Relu(%465) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %467 : Float(1, 512, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%466, %model.11.3.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %468 : Float(1, 512, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%467, %model.11.4.weight, %model.11.4.bias, %model.11.4.running_mean, %model.11.4.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %469 : Float(1, 512, 32, 56) = onnx::Relu(%468) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %470 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%469, %cpm.align.0.weight, %cpm.align.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %471 : Float(1, 128, 32, 56) = onnx::Relu(%470) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %472 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=128, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%471, %cpm.trunk.0.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %473 : Float(1, 128, 32, 56) = onnx::Elu[alpha=1.](%472) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1154:0
  %474 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%473, %cpm.trunk.0.2.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %475 : Float(1, 128, 32, 56) = onnx::Elu[alpha=1.](%474) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1154:0
  %476 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=128, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%475, %cpm.trunk.1.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %477 : Float(1, 128, 32, 56) = onnx::Elu[alpha=1.](%476) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1154:0
  %478 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%477, %cpm.trunk.1.2.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %479 : Float(1, 128, 32, 56) = onnx::Elu[alpha=1.](%478) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1154:0
  %480 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=128, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%479, %cpm.trunk.2.0.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %481 : Float(1, 128, 32, 56) = onnx::Elu[alpha=1.](%480) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1154:0
  %482 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%481, %cpm.trunk.2.2.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %483 : Float(1, 128, 32, 56) = onnx::Elu[alpha=1.](%482) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1154:0
  %484 : Float(1, 128, 32, 56) = onnx::Add(%471, %483) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:20:0
  %485 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%484, %cpm.conv.0.weight, %cpm.conv.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %486 : Float(1, 128, 32, 56) = onnx::Relu(%485) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %487 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%486, %initial_stage.trunk.0.0.weight, %initial_stage.trunk.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %488 : Float(1, 128, 32, 56) = onnx::Relu(%487) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %489 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%488, %initial_stage.trunk.1.0.weight, %initial_stage.trunk.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %490 : Float(1, 128, 32, 56) = onnx::Relu(%489) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %491 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%490, %initial_stage.trunk.2.0.weight, %initial_stage.trunk.2.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %492 : Float(1, 128, 32, 56) = onnx::Relu(%491) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %493 : Float(1, 512, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%492, %initial_stage.heatmaps.0.0.weight, %initial_stage.heatmaps.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %494 : Float(1, 512, 32, 56) = onnx::Relu(%493) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %495 : Float(1, 19, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%494, %initial_stage.heatmaps.1.0.weight, %initial_stage.heatmaps.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %496 : Float(1, 512, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%492, %initial_stage.pafs.0.0.weight, %initial_stage.pafs.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %497 : Float(1, 512, 32, 56) = onnx::Relu(%496) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %498 : Float(1, 38, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%497, %initial_stage.pafs.1.0.weight, %initial_stage.pafs.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %499 : Float(1, 185, 32, 56) = onnx::Concat[axis=1](%486, %495, %498) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:186:0
  %500 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%499, %refinement_stages.0.trunk.0.initial.0.weight, %refinement_stages.0.trunk.0.initial.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %501 : Float(1, 128, 32, 56) = onnx::Relu(%500) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %502 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%501, %refinement_stages.0.trunk.0.trunk.0.0.weight, %refinement_stages.0.trunk.0.trunk.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %503 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%502, %refinement_stages.0.trunk.0.trunk.0.1.weight, %refinement_stages.0.trunk.0.trunk.0.1.bias, %refinement_stages.0.trunk.0.trunk.0.1.running_mean, %refinement_stages.0.trunk.0.trunk.0.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %504 : Float(1, 128, 32, 56) = onnx::Relu(%503) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %505 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[2, 2], group=1, kernel_shape=[3, 3], pads=[2, 2, 2, 2], strides=[1, 1]](%504, %refinement_stages.0.trunk.0.trunk.1.0.weight, %refinement_stages.0.trunk.0.trunk.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %506 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%505, %refinement_stages.0.trunk.0.trunk.1.1.weight, %refinement_stages.0.trunk.0.trunk.1.1.bias, %refinement_stages.0.trunk.0.trunk.1.1.running_mean, %refinement_stages.0.trunk.0.trunk.1.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %507 : Float(1, 128, 32, 56) = onnx::Relu(%506) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %508 : Float(1, 128, 32, 56) = onnx::Add(%501, %507) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:60:0
  %509 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%508, %refinement_stages.0.trunk.1.initial.0.weight, %refinement_stages.0.trunk.1.initial.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %510 : Float(1, 128, 32, 56) = onnx::Relu(%509) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %511 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%510, %refinement_stages.0.trunk.1.trunk.0.0.weight, %refinement_stages.0.trunk.1.trunk.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %512 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%511, %refinement_stages.0.trunk.1.trunk.0.1.weight, %refinement_stages.0.trunk.1.trunk.0.1.bias, %refinement_stages.0.trunk.1.trunk.0.1.running_mean, %refinement_stages.0.trunk.1.trunk.0.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %513 : Float(1, 128, 32, 56) = onnx::Relu(%512) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %514 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[2, 2], group=1, kernel_shape=[3, 3], pads=[2, 2, 2, 2], strides=[1, 1]](%513, %refinement_stages.0.trunk.1.trunk.1.0.weight, %refinement_stages.0.trunk.1.trunk.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %515 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%514, %refinement_stages.0.trunk.1.trunk.1.1.weight, %refinement_stages.0.trunk.1.trunk.1.1.bias, %refinement_stages.0.trunk.1.trunk.1.1.running_mean, %refinement_stages.0.trunk.1.trunk.1.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %516 : Float(1, 128, 32, 56) = onnx::Relu(%515) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %517 : Float(1, 128, 32, 56) = onnx::Add(%510, %516) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:60:0
  %518 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%517, %refinement_stages.0.trunk.2.initial.0.weight, %refinement_stages.0.trunk.2.initial.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %519 : Float(1, 128, 32, 56) = onnx::Relu(%518) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %520 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%519, %refinement_stages.0.trunk.2.trunk.0.0.weight, %refinement_stages.0.trunk.2.trunk.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %521 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%520, %refinement_stages.0.trunk.2.trunk.0.1.weight, %refinement_stages.0.trunk.2.trunk.0.1.bias, %refinement_stages.0.trunk.2.trunk.0.1.running_mean, %refinement_stages.0.trunk.2.trunk.0.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %522 : Float(1, 128, 32, 56) = onnx::Relu(%521) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %523 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[2, 2], group=1, kernel_shape=[3, 3], pads=[2, 2, 2, 2], strides=[1, 1]](%522, %refinement_stages.0.trunk.2.trunk.1.0.weight, %refinement_stages.0.trunk.2.trunk.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %524 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%523, %refinement_stages.0.trunk.2.trunk.1.1.weight, %refinement_stages.0.trunk.2.trunk.1.1.bias, %refinement_stages.0.trunk.2.trunk.1.1.running_mean, %refinement_stages.0.trunk.2.trunk.1.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %525 : Float(1, 128, 32, 56) = onnx::Relu(%524) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %526 : Float(1, 128, 32, 56) = onnx::Add(%519, %525) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:60:0
  %527 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%526, %refinement_stages.0.trunk.3.initial.0.weight, %refinement_stages.0.trunk.3.initial.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %528 : Float(1, 128, 32, 56) = onnx::Relu(%527) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %529 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%528, %refinement_stages.0.trunk.3.trunk.0.0.weight, %refinement_stages.0.trunk.3.trunk.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %530 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%529, %refinement_stages.0.trunk.3.trunk.0.1.weight, %refinement_stages.0.trunk.3.trunk.0.1.bias, %refinement_stages.0.trunk.3.trunk.0.1.running_mean, %refinement_stages.0.trunk.3.trunk.0.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %531 : Float(1, 128, 32, 56) = onnx::Relu(%530) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %532 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[2, 2], group=1, kernel_shape=[3, 3], pads=[2, 2, 2, 2], strides=[1, 1]](%531, %refinement_stages.0.trunk.3.trunk.1.0.weight, %refinement_stages.0.trunk.3.trunk.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %533 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%532, %refinement_stages.0.trunk.3.trunk.1.1.weight, %refinement_stages.0.trunk.3.trunk.1.1.bias, %refinement_stages.0.trunk.3.trunk.1.1.running_mean, %refinement_stages.0.trunk.3.trunk.1.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %534 : Float(1, 128, 32, 56) = onnx::Relu(%533) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %535 : Float(1, 128, 32, 56) = onnx::Add(%528, %534) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:60:0
  %536 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%535, %refinement_stages.0.trunk.4.initial.0.weight, %refinement_stages.0.trunk.4.initial.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %537 : Float(1, 128, 32, 56) = onnx::Relu(%536) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %538 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%537, %refinement_stages.0.trunk.4.trunk.0.0.weight, %refinement_stages.0.trunk.4.trunk.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %539 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%538, %refinement_stages.0.trunk.4.trunk.0.1.weight, %refinement_stages.0.trunk.4.trunk.0.1.bias, %refinement_stages.0.trunk.4.trunk.0.1.running_mean, %refinement_stages.0.trunk.4.trunk.0.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %540 : Float(1, 128, 32, 56) = onnx::Relu(%539) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %541 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[2, 2], group=1, kernel_shape=[3, 3], pads=[2, 2, 2, 2], strides=[1, 1]](%540, %refinement_stages.0.trunk.4.trunk.1.0.weight, %refinement_stages.0.trunk.4.trunk.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %542 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%541, %refinement_stages.0.trunk.4.trunk.1.1.weight, %refinement_stages.0.trunk.4.trunk.1.1.bias, %refinement_stages.0.trunk.4.trunk.1.1.running_mean, %refinement_stages.0.trunk.4.trunk.1.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %543 : Float(1, 128, 32, 56) = onnx::Relu(%542) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %544 : Float(1, 128, 32, 56) = onnx::Add(%537, %543) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:60:0
  %545 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%544, %refinement_stages.0.heatmaps.0.0.weight, %refinement_stages.0.heatmaps.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %546 : Float(1, 128, 32, 56) = onnx::Relu(%545) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %547 : Float(1, 19, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%546, %refinement_stages.0.heatmaps.1.0.weight, %refinement_stages.0.heatmaps.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %548 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%544, %refinement_stages.0.pafs.0.0.weight, %refinement_stages.0.pafs.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %549 : Float(1, 128, 32, 56) = onnx::Relu(%548) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %550 : Float(1, 38, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%549, %refinement_stages.0.pafs.1.0.weight, %refinement_stages.0.pafs.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %551 : Float(1, 19, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%547, %fake_conv_heatmaps.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %heatmaps : Float(1, 19, 32, 56) = onnx::Add(%547, %551) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:190:0
  %553 : Float(1, 38, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%550, %fake_conv_pafs.weight) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %pafs : Float(1, 38, 32, 56) = onnx::Add(%550, %553) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:191:0
  %555 : Float(1, 57, 32, 56) = onnx::Concat[axis=1](%547, %550) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:192:0
  %556 : Float(1, 185, 32, 56) = onnx::Concat[axis=1](%486, %555) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:140:0
  %557 : Float(1, 92, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%556, %Pose3D.stem.0.bottleneck.0.0.weight, %Pose3D.stem.0.bottleneck.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %558 : Float(1, 92, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%557, %Pose3D.stem.0.bottleneck.0.1.weight, %Pose3D.stem.0.bottleneck.0.1.bias, %Pose3D.stem.0.bottleneck.0.1.running_mean, %Pose3D.stem.0.bottleneck.0.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %559 : Float(1, 92, 32, 56) = onnx::Relu(%558) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %560 : Float(1, 92, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%559, %Pose3D.stem.0.bottleneck.1.0.weight, %Pose3D.stem.0.bottleneck.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %561 : Float(1, 92, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%560, %Pose3D.stem.0.bottleneck.1.1.weight, %Pose3D.stem.0.bottleneck.1.1.bias, %Pose3D.stem.0.bottleneck.1.1.running_mean, %Pose3D.stem.0.bottleneck.1.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %562 : Float(1, 92, 32, 56) = onnx::Relu(%561) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %563 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%562, %Pose3D.stem.0.bottleneck.2.0.weight, %Pose3D.stem.0.bottleneck.2.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %564 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%563, %Pose3D.stem.0.bottleneck.2.1.weight, %Pose3D.stem.0.bottleneck.2.1.bias, %Pose3D.stem.0.bottleneck.2.1.running_mean, %Pose3D.stem.0.bottleneck.2.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %565 : Float(1, 128, 32, 56) = onnx::Relu(%564) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %566 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%556, %Pose3D.stem.0.align.0.weight, %Pose3D.stem.0.align.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %567 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%566, %Pose3D.stem.0.align.1.weight, %Pose3D.stem.0.align.1.bias, %Pose3D.stem.0.align.1.running_mean, %Pose3D.stem.0.align.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %568 : Float(1, 128, 32, 56) = onnx::Relu(%567) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %569 : Float(1, 128, 32, 56) = onnx::Add(%568, %565) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:123:0
  %570 : Float(1, 64, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%569, %Pose3D.stem.1.bottleneck.0.0.weight, %Pose3D.stem.1.bottleneck.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %571 : Float(1, 64, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%570, %Pose3D.stem.1.bottleneck.0.1.weight, %Pose3D.stem.1.bottleneck.0.1.bias, %Pose3D.stem.1.bottleneck.0.1.running_mean, %Pose3D.stem.1.bottleneck.0.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %572 : Float(1, 64, 32, 56) = onnx::Relu(%571) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %573 : Float(1, 64, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%572, %Pose3D.stem.1.bottleneck.1.0.weight, %Pose3D.stem.1.bottleneck.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %574 : Float(1, 64, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%573, %Pose3D.stem.1.bottleneck.1.1.weight, %Pose3D.stem.1.bottleneck.1.1.bias, %Pose3D.stem.1.bottleneck.1.1.running_mean, %Pose3D.stem.1.bottleneck.1.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %575 : Float(1, 64, 32, 56) = onnx::Relu(%574) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %576 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%575, %Pose3D.stem.1.bottleneck.2.0.weight, %Pose3D.stem.1.bottleneck.2.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %577 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%576, %Pose3D.stem.1.bottleneck.2.1.weight, %Pose3D.stem.1.bottleneck.2.1.bias, %Pose3D.stem.1.bottleneck.2.1.running_mean, %Pose3D.stem.1.bottleneck.2.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %578 : Float(1, 128, 32, 56) = onnx::Relu(%577) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %579 : Float(1, 128, 32, 56) = onnx::Add(%569, %578) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:123:0
  %580 : Float(1, 64, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%579, %Pose3D.stem.2.bottleneck.0.0.weight, %Pose3D.stem.2.bottleneck.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %581 : Float(1, 64, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%580, %Pose3D.stem.2.bottleneck.0.1.weight, %Pose3D.stem.2.bottleneck.0.1.bias, %Pose3D.stem.2.bottleneck.0.1.running_mean, %Pose3D.stem.2.bottleneck.0.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %582 : Float(1, 64, 32, 56) = onnx::Relu(%581) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %583 : Float(1, 64, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%582, %Pose3D.stem.2.bottleneck.1.0.weight, %Pose3D.stem.2.bottleneck.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %584 : Float(1, 64, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%583, %Pose3D.stem.2.bottleneck.1.1.weight, %Pose3D.stem.2.bottleneck.1.1.bias, %Pose3D.stem.2.bottleneck.1.1.running_mean, %Pose3D.stem.2.bottleneck.1.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %585 : Float(1, 64, 32, 56) = onnx::Relu(%584) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %586 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%585, %Pose3D.stem.2.bottleneck.2.0.weight, %Pose3D.stem.2.bottleneck.2.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %587 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%586, %Pose3D.stem.2.bottleneck.2.1.weight, %Pose3D.stem.2.bottleneck.2.1.bias, %Pose3D.stem.2.bottleneck.2.1.running_mean, %Pose3D.stem.2.bottleneck.2.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %588 : Float(1, 128, 32, 56) = onnx::Relu(%587) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %589 : Float(1, 128, 32, 56) = onnx::Add(%579, %588) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:123:0
  %590 : Float(1, 64, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%589, %Pose3D.stem.3.bottleneck.0.0.weight, %Pose3D.stem.3.bottleneck.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %591 : Float(1, 64, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%590, %Pose3D.stem.3.bottleneck.0.1.weight, %Pose3D.stem.3.bottleneck.0.1.bias, %Pose3D.stem.3.bottleneck.0.1.running_mean, %Pose3D.stem.3.bottleneck.0.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %592 : Float(1, 64, 32, 56) = onnx::Relu(%591) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %593 : Float(1, 64, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%592, %Pose3D.stem.3.bottleneck.1.0.weight, %Pose3D.stem.3.bottleneck.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %594 : Float(1, 64, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%593, %Pose3D.stem.3.bottleneck.1.1.weight, %Pose3D.stem.3.bottleneck.1.1.bias, %Pose3D.stem.3.bottleneck.1.1.running_mean, %Pose3D.stem.3.bottleneck.1.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %595 : Float(1, 64, 32, 56) = onnx::Relu(%594) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %596 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%595, %Pose3D.stem.3.bottleneck.2.0.weight, %Pose3D.stem.3.bottleneck.2.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %597 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%596, %Pose3D.stem.3.bottleneck.2.1.weight, %Pose3D.stem.3.bottleneck.2.1.bias, %Pose3D.stem.3.bottleneck.2.1.running_mean, %Pose3D.stem.3.bottleneck.2.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %598 : Float(1, 128, 32, 56) = onnx::Relu(%597) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %599 : Float(1, 128, 32, 56) = onnx::Add(%589, %598) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:123:0
  %600 : Float(1, 64, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%599, %Pose3D.stem.4.bottleneck.0.0.weight, %Pose3D.stem.4.bottleneck.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %601 : Float(1, 64, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%600, %Pose3D.stem.4.bottleneck.0.1.weight, %Pose3D.stem.4.bottleneck.0.1.bias, %Pose3D.stem.4.bottleneck.0.1.running_mean, %Pose3D.stem.4.bottleneck.0.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %602 : Float(1, 64, 32, 56) = onnx::Relu(%601) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %603 : Float(1, 64, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%602, %Pose3D.stem.4.bottleneck.1.0.weight, %Pose3D.stem.4.bottleneck.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %604 : Float(1, 64, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%603, %Pose3D.stem.4.bottleneck.1.1.weight, %Pose3D.stem.4.bottleneck.1.1.bias, %Pose3D.stem.4.bottleneck.1.1.running_mean, %Pose3D.stem.4.bottleneck.1.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %605 : Float(1, 64, 32, 56) = onnx::Relu(%604) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %606 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%605, %Pose3D.stem.4.bottleneck.2.0.weight, %Pose3D.stem.4.bottleneck.2.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %607 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%606, %Pose3D.stem.4.bottleneck.2.1.weight, %Pose3D.stem.4.bottleneck.2.1.bias, %Pose3D.stem.4.bottleneck.2.1.running_mean, %Pose3D.stem.4.bottleneck.2.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %608 : Float(1, 128, 32, 56) = onnx::Relu(%607) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %609 : Float(1, 128, 32, 56) = onnx::Add(%599, %608) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:123:0
  %610 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%609, %Pose3D.prediction.trunk.0.initial.0.weight, %Pose3D.prediction.trunk.0.initial.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %611 : Float(1, 128, 32, 56) = onnx::Relu(%610) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %612 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%611, %Pose3D.prediction.trunk.0.trunk.0.0.weight, %Pose3D.prediction.trunk.0.trunk.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %613 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%612, %Pose3D.prediction.trunk.0.trunk.0.1.weight, %Pose3D.prediction.trunk.0.trunk.0.1.bias, %Pose3D.prediction.trunk.0.trunk.0.1.running_mean, %Pose3D.prediction.trunk.0.trunk.0.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %614 : Float(1, 128, 32, 56) = onnx::Relu(%613) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %615 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[2, 2], group=1, kernel_shape=[3, 3], pads=[2, 2, 2, 2], strides=[1, 1]](%614, %Pose3D.prediction.trunk.0.trunk.1.0.weight, %Pose3D.prediction.trunk.0.trunk.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %616 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%615, %Pose3D.prediction.trunk.0.trunk.1.1.weight, %Pose3D.prediction.trunk.0.trunk.1.1.bias, %Pose3D.prediction.trunk.0.trunk.1.1.running_mean, %Pose3D.prediction.trunk.0.trunk.1.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %617 : Float(1, 128, 32, 56) = onnx::Relu(%616) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %618 : Float(1, 128, 32, 56) = onnx::Add(%611, %617) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:60:0
  %619 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%618, %Pose3D.prediction.trunk.1.initial.0.weight, %Pose3D.prediction.trunk.1.initial.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %620 : Float(1, 128, 32, 56) = onnx::Relu(%619) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %621 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%620, %Pose3D.prediction.trunk.1.trunk.0.0.weight, %Pose3D.prediction.trunk.1.trunk.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %622 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%621, %Pose3D.prediction.trunk.1.trunk.0.1.weight, %Pose3D.prediction.trunk.1.trunk.0.1.bias, %Pose3D.prediction.trunk.1.trunk.0.1.running_mean, %Pose3D.prediction.trunk.1.trunk.0.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %623 : Float(1, 128, 32, 56) = onnx::Relu(%622) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %624 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[2, 2], group=1, kernel_shape=[3, 3], pads=[2, 2, 2, 2], strides=[1, 1]](%623, %Pose3D.prediction.trunk.1.trunk.1.0.weight, %Pose3D.prediction.trunk.1.trunk.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %625 : Float(1, 128, 32, 56) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%624, %Pose3D.prediction.trunk.1.trunk.1.1.weight, %Pose3D.prediction.trunk.1.trunk.1.1.bias, %Pose3D.prediction.trunk.1.trunk.1.1.running_mean, %Pose3D.prediction.trunk.1.trunk.1.1.running_var) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1923:0
  %626 : Float(1, 128, 32, 56) = onnx::Relu(%625) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %627 : Float(1, 128, 32, 56) = onnx::Add(%620, %626) # /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/models/with_mobilenet.py:60:0
  %628 : Float(1, 128, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%627, %Pose3D.prediction.feature_maps.0.0.weight, %Pose3D.prediction.feature_maps.0.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  %629 : Float(1, 128, 32, 56) = onnx::Relu(%628) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/functional.py:1061:0
  %features : Float(1, 57, 32, 56) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%629, %Pose3D.prediction.feature_maps.1.0.weight, %Pose3D.prediction.feature_maps.1.0.bias) # /home/user/.virtualenvs/cv/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
  return (%features, %heatmaps, %pafs)

The resulting ONNX file seems normal to me. Then, according to your manual:

(cv) user@Descartes:~/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch$ python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model human-pose-estimation-3d.onnx --input=data --mean_values=data[128.0,128.0,128.0] --scale_values=data[255.0,255.0,255.0] --output=features,heatmaps,pafs

Returns an error:

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/human-pose-estimation-3d.onnx
        - Path for generated IR:        /home/user/human-pose-estimation/lightweight-human-pose-estimation-3d-demo.pytorch/.
        - IR output name:       human-pose-estimation-3d
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         data
        - Output layers:        features,heatmaps,pafs
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  data[128.0,128.0,128.0]
        - Scale values:         data[255.0,255.0,255.0]
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
ONNX specific parameters:
Model Optimizer version:        2020.2.0-60-g0bc66e26ff
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.user_data_repack.UserDataRepack'>): No node with name features.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #51.

What could it be? Following the link provided, FAQ question #51 doesn't clarify the situation. Thank you in advance!
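A quick way to check is to inspect the exported graph's output names directly, since Model Optimizer's --output option matches nodes by name; a minimal sketch with the onnx package (assuming it is installed):

    import onnx

    model = onnx.load('human-pose-estimation-3d.onnx')
    # Model Optimizer's --input/--output options match these names exactly.
    print('inputs: ', [i.name for i in model.graph.input])
    print('outputs:', [o.name for o in model.graph.output])

If the outputs appear under autogenerated numeric names instead of features, heatmaps, and pafs, re-exporting with explicit output_names in torch.onnx.export (or omitting the --output option) should help.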

Should I retrain the model because of camera extrinsic parameters?

Hi,
I want to use my own camera to infer the 3D positions of the body joints.
Do I need to retrain the model to get correct 3D joint positions, or do I just need to modify extrinsics.json and run demo.py?
I have replaced extrinsics.json with my own values, but I only get reasonable points in the 2D image, not in 3D coordinates.
By the way, I obtained the intrinsic and extrinsic parameters with OpenCV:
here

If something is still not clear, please let me know.
Thank you!
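For reference, a minimal sketch of loading and applying extrinsics, assuming the file stores a 3×3 rotation "R" and a 3×1 translation "t" as in the sample in the data folder, with the usual pinhole convention X_cam = R·X_world + t:

    import json
    import numpy as np

    with open('data/extrinsics.json') as f:
        extrinsics = json.load(f)
    R = np.array(extrinsics['R'], dtype=np.float32)                # world -> camera rotation
    t = np.array(extrinsics['t'], dtype=np.float32).reshape(3, 1)  # translation

    point_world = np.array([[0.0], [1.0], [2.0]], dtype=np.float32)
    point_camera = R @ point_world + t
    print(point_camera.ravel())

If the 2D projections look right but the 3D poses do not, the usual suspects are a transposed R, a t in different units than expected, or an R/t pair that describes the inverse (camera-to-world) transform.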

Win 10 installation problem

Hi, author. Thanks for providing such wonderful code. I want to use it on my Windows 10 system. During installation I hit an error, like this:

-- Configuring incomplete, errors occurred!
See also "D:/paper/human-pose-estimation/pose_extractor/build/tmp/CMakeFiles/CMakeOutput.log".
Traceback (most recent call last):
File "setup.py", line 72, in
cmdclass={'build_ext': CMakeBuild})
File "D:\Anaconda\lib\site-packages\setuptools_init_.py", line 145, in setup
return distutils.core.setup(**attrs)
File "D:\Anaconda\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "D:\Anaconda\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "D:\Anaconda\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "D:\Anaconda\lib\site-packages\setuptools\command\build_ext.py", line 84, in run
_build_ext.run(self)
File "D:\Anaconda\lib\site-packages\Cython\Distutils\old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "D:\Anaconda\lib\distutils\command\build_ext.py", line 340, in run
self.build_extensions()
File "setup.py", line 63, in build_extensions
subprocess.check_call(['cmake', ext.cmake_lists_dir] + cmake_args, cwd=tmp_dir)
File "D:\Anaconda\lib\subprocess.py", line 347, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', 'D:\paper\human-pose-estimation\pose_extractor', '-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=D:\paper\human-pose-estimation\pose_extractor\build', '-DCMAKE_ARCHIVE_OUTPUT_DIRECTORY_RELEASE=D:\paper\human-pose-estimation\pose_extractor\build\tmp', '-DPYTHON_EXECUTABLE=D:\Anaconda\python.exe', '-DCMAKE_WINDOWS_EXPORT_ALL_SYMBOLS=TRUE', '-DCMAKE_RUNTIME_OUTPUT_DIRECTORY_RELEASE=D:\paper\human-pose-estimation\pose_extractor\build', '-DCMAKE_GENERATOR_PLATFORM=x64']' returned non-zero exit status 1.

Can you help me to solve it? Thanks!
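The Python traceback only reports that the CMake configure step failed; the actual cause is in CMake's own output. A minimal diagnostic sketch (paths are placeholders for your checkout) that re-runs the same configure call setup.py wraps, so the full error message is printed:

    import os
    import subprocess

    # Re-running the configure step by hand prints CMake's diagnostics
    # (e.g. a missing compiler or OpenCV), which setup.py hides behind
    # CalledProcessError. Run this from the repository root.
    build_dir = os.path.join('pose_extractor', 'build', 'tmp')
    os.makedirs(build_dir, exist_ok=True)
    subprocess.check_call(['cmake', os.path.abspath('pose_extractor')], cwd=build_dir)

The CMakeOutput.log mentioned in the error usually names the missing dependency as well.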

Problems with CMake

Hi, I got an error when I ran the command below. I have installed CMake with pip, but it still doesn't help.

(venv) E:\PythonProjects\lightweight-human-pose-estimation>python setup.py build_ext
running build_ext
CMake Error at CMakeLists.txt:2 (project):
Generator

NMake Makefiles

does not support platform specification, but platform

x64

was specified.

CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
-- Configuring incomplete, errors occurred!
See also "E:/PythonProjects/lightweight-human-pose-estimation/pose_extractor/build/tmp/CMakeFiles/CMakeOutput.log".
Traceback (most recent call last):
File "setup.py", line 72, in <module>
cmdclass={'build_ext': CMakeBuild})
File "E:\PythonProjects\lightweight-human-pose-estimation\venv\lib\site-packages\setuptools-39.1.0-py3.6.egg\setuptools\__init__.py", line 129, in setup
File "E:\Anaconda3\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "E:\Anaconda3\lib\distutils\dist.py", line 955, in run_commands
self.run_command(cmd)
File "E:\Anaconda3\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "E:\PythonProjects\lightweight-human-pose-estimation\venv\lib\site-packages\setuptools-39.1.0-py3.6.egg\setuptools\command\build_ext.py", line 78, in run
File "E:\Anaconda3\lib\distutils\command\build_ext.py", line 339, in run
self.build_extensions()
File "setup.py", line 63, in build_extensions
subprocess.check_call(['cmake', ext.cmake_lists_dir] + cmake_args, cwd=tmp_dir)
File "E:\Anaconda3\lib\subprocess.py", line 291, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', 'E:\PythonProjects\lightweight-human-pose-estimation\pose_extractor', '-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=E:\PythonProjects\lightweight-human-pose-estimation\pose_extractor\build', '-DCMAKE_ARCHIVE_OUTPUT_DIRECTORY_RELEASE=E:\PythonProjects\lightweight-human-pose-estimation\pose_extractor\build\tmp', '-DPYTHON_EXECUTABLE=E:\PythonProjects\lightweight-human-pose-estimation\venv\Scripts\python.exe', '-DCMAKE_WINDOWS_EXPORT_ALL_SYMBOLS=TRUE', '-DCMAKE_RUNTIME_OUTPUT_DIRECTORY_RELEASE=E:\PythonProjects\lightweight-human-pose-estimation\pose_extractor\build', '-DCMAKE_GENERATOR_PLATFORM=x64']' returned non-zero exit status 1.
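The first CMake error is the telling one: the NMake Makefiles generator does not accept a platform specification, and no MSVC compiler was found in the environment. A workaround sketch, assuming Visual Studio is installed (the generator name 'Visual Studio 16 2019' is an example and must match your VS version):

    import os
    import subprocess

    # Configure with a Visual Studio generator, which accepts the x64
    # platform that setup.py passes; the NMake generator does not.
    build_dir = os.path.join('pose_extractor', 'build', 'tmp')
    os.makedirs(build_dir, exist_ok=True)
    subprocess.check_call(
        ['cmake', os.path.abspath('pose_extractor'),
         '-G', 'Visual Studio 16 2019', '-A', 'x64'],
        cwd=build_dir)

Alternatively, running the build from an "x64 Native Tools Command Prompt for VS" makes the MSVC compiler visible to CMake, which also resolves the CMAKE_C_COMPILER/CMAKE_CXX_COMPILER errors.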

The best configuration for a real-time application

As far as I understand, we can consider one of these configurations:

  1. OpenVINO, CPU. With this configuration I get 3-6 FPS at the default input height (256) and about 40 FPS with an input height of 128
  2. OpenVINO, GPU
  3. PyTorch, CPU
  4. PyTorch, GPU

Could you advise which configuration is the fastest for your algorithm?
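For a fair comparison, it helps to time only the forward pass over many identical frames, after a warm-up. A minimal back-end-agnostic timing sketch (net_infer is a placeholder for whichever back-end's inference call you benchmark):

    import time
    import numpy as np

    def measure_fps(infer, frame, warmup=10, iters=100):
        # Average FPS of `infer` over `iters` calls, after `warmup` calls
        # to exclude one-time initialization from the measurement.
        for _ in range(warmup):
            infer(frame)
        start = time.perf_counter()
        for _ in range(iters):
            infer(frame)
        return iters / (time.perf_counter() - start)

    # Dummy frame at the default network input height.
    frame = np.zeros((256, 448, 3), dtype=np.uint8)
    # print(measure_fps(net_infer, frame))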

Interpretation of the output

Hi,

First of all, great work! I've managed to run the demo code with OpenVINO, and it runs at ~20 FPS on a single CPU, which is fantastic!
I have a question about the output format. For example, one 2D pose estimate is a 1×58 vector; how should I interpret it? I think the COCO format has 18 joints, so why isn't the 2D pose estimate 1×54?
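A plausible reading, consistent with the 57-channel features output in the ONNX dump above (19 × 3): the model uses a 19-keypoint skeleton (the CMU Panoptic layout adds one joint to COCO's 18), so each 2D pose is 19 × (x, y, keypoint confidence) plus one overall pose score, i.e. 58 values. A minimal unpacking sketch under that assumption:

    import numpy as np

    pose = np.zeros(58, dtype=np.float32)  # placeholder for one row of the 2D poses

    pose_score = pose[-1]                  # overall confidence of this pose
    keypoints = pose[:-1].reshape(-1, 3)   # (x, y, keypoint confidence) rows
    assert keypoints.shape == (19, 3)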

3D coordinate calibration

Hi Daniil,

me again ;) I encountered some problems with 3D calibration. I used the ROS camera calibration node based on http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration, then imported the R and t matrices into the --extrinsics parameters.

I obtained the 3D poses of the left and right shoulders at keypoints 3 and 9:
left shoulder: [-72.70494 27.558527 -7.819101]
right shoulder: [-56.104603 18.041153 -10.176827]
Since the line between the two shoulders is parallel to one axis, the values along one coordinate should be very similar; however, the values differ considerably along all three axes. Do you have any idea what the reason might be?

Regarding calibration: is there also a place in the code for the distortion parameters?

One more question is about the axis directions. At line 107 of demo.py, why do we need to change the direction and order of the axes? What do the axes look like after this transformation?

Thanks a lot for your help!
Best, Fan
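On the shoulder question: a quick sanity check is to undo the extrinsics outside the demo. Assuming the usual convention X_cam = R·X_world + t, the inverse is X_world = Rᵀ·(X_cam − t); if the shoulder line really is axis-aligned in your world frame, the two points should agree along one coordinate after this transform. A minimal sketch with the reported values (R and t here are placeholders for the matrices from your calibration):

    import numpy as np

    R = np.eye(3, dtype=np.float32)         # placeholder: your calibrated rotation
    t = np.zeros((3, 1), dtype=np.float32)  # placeholder: your calibrated translation

    left_shoulder = np.array([[-72.70494], [27.558527], [-7.819101]], dtype=np.float32)
    right_shoulder = np.array([[-56.104603], [18.041153], [-10.176827]], dtype=np.float32)

    def to_world(p):
        return R.T @ (p - t)  # invert the camera extrinsics

    print(to_world(left_shoulder).ravel())
    print(to_world(right_shoulder).ravel())

On the axis question: the remap in demo.py presumably reorders and flips the camera axes into the plotting convention of the 3D visualizer, so that the world's "up" appears as "up" on screen; the skeleton itself is unchanged.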

Any plan for adding person tracking function?

Hi @Daniil-Osokin ,

Thanks again for sharing this great work!
I'm wondering whether you have any plans to add person tracking functionality to your codebase. The original OpenPose seems to provide a single-person tracking feature; my need, though, would be multi-person tracking based on the pose estimation predictions.

Thanks,
Melo
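Not an official answer, but a lightweight approach that pairs well with this demo is greedy frame-to-frame matching on keypoint distance, similar in spirit to the pose tracking in the 2D lightweight-openpose repository. A minimal sketch, assuming each pose is an array of (x, y, confidence) keypoints:

    import numpy as np

    def track_ids(prev_poses, prev_ids, cur_poses, threshold=50.0):
        # Greedily propagate ids from the previous frame by mean keypoint
        # distance; poses with no match below `threshold` get fresh ids.
        next_id = max(prev_ids, default=-1) + 1
        ids, used = [], set()
        for cur in cur_poses:
            best, best_dist = None, threshold
            for i, prev in enumerate(prev_poses):
                if i in used:
                    continue
                valid = (cur[:, 2] > 0) & (prev[:, 2] > 0)  # seen in both frames
                if not valid.any():
                    continue
                dist = np.linalg.norm(cur[valid, :2] - prev[valid, :2], axis=1).mean()
                if dist < best_dist:
                    best, best_dist = i, dist
            if best is None:
                ids.append(next_id)
                next_id += 1
            else:
                used.add(best)
                ids.append(prev_ids[best])
        return ids

For robust multi-person tracking through occlusions, a dedicated tracker (e.g. a Kalman filter with Hungarian assignment) would be the next step up.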

python setup.py build_ext failed

python setup.py build_ext failed with:
error: [Errno 2] No such file or directory: '/content/pose_extractor/build'
I am using Colab.
I would be grateful for any advice.
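The error suggests the expected build directory does not exist yet. A minimal workaround sketch, using the path from the error message, is to create it before building and to run setup.py from the repository root so relative paths resolve:

    import os

    # The build step expects this directory to exist; create it first.
    os.makedirs('/content/pose_extractor/build', exist_ok=True)

then re-run python setup.py build_ext.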

Demo issue

Hi, I'm trying to run demo.py with the default model, but after running python setup.py build_ext, an error occurred. It seems there is a conflict between opencv and opencv-python:

objc[25079]: Class CaptureDelegate is implemented in both /Users/sxt/environment/python_env/anaconda/test_demo/lib/python3.7/site-packages/cv2/cv2.cpython-37m-darwin.so (0x1124bd048) and /usr/local/opt/opencv/lib/libopencv_videoio.4.5.dylib (0x125aa70f0). One of the two will be used. Which one is undefined.
objc[25079]: Class CVWindow is implemented in both /Users/sxt/environment/python_env/anaconda/test_demo/lib/python3.7/site-packages/cv2/cv2.cpython-37m-darwin.so (0x1124bd098) and /usr/local/opt/opencv/lib/libopencv_highgui.4.5.dylib (0x12527b0b0). One of the two will be used. Which one is undefined.
objc[25079]: Class CVView is implemented in both /Users/sxt/environment/python_env/anaconda/test_demo/lib/python3.7/site-packages/cv2/cv2.cpython-37m-darwin.so (0x1124bd0c0) and /usr/local/opt/opencv/lib/libopencv_highgui.4.5.dylib (0x12527b0d8). One of the two will be used. Which one is undefined.
objc[25079]: Class CVSlider is implemented in both /Users/sxt/environment/python_env/anaconda/test_demo/lib/python3.7/site-packages/cv2/cv2.cpython-37m-darwin.so (0x1124bd0e8) and /usr/local/opt/opencv/lib/libopencv_highgui.4.5.dylib (0x12527b100). One of the two will be used. Which one is undefined.
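Those objc warnings mean two different OpenCV builds were loaded into the same process: the pip opencv-python wheel and the Homebrew OpenCV that pose_extractor was linked against. A minimal sketch to check which copy Python itself is importing:

    import cv2

    print(cv2.__file__)     # which cv2 module is loaded
    print(cv2.__version__)  # its version

Removing one of the two copies (e.g. pip uninstall opencv-python, then rebuilding pose_extractor against the remaining OpenCV) usually resolves the ambiguity.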
