last-one / pytorch_realtime_multi-person_pose_estimation
PyTorch version of the Realtime Multi-Person Pose Estimation project
License: MIT License
Hi,
I trained my model for 50,000 iterations.
What should I do to resume training from that point ("from the middle") instead of starting over?
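A minimal sketch of resuming from a saved iteration, assuming checkpoints are written with torch.save; the file name and the 'iter'/'state_dict'/'optimizer' keys are assumptions, not necessarily what train_pose.py actually saves:
import torch
import torch.nn as nn
model = nn.Linear(10, 2)                                   # toy stand-in for the pose model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
# During training, save the weights plus the optimizer state and the iteration count.
torch.save({'iter': 50000,
            'state_dict': model.state_dict(),
            'optimizer': optimizer.state_dict()}, 'checkpoint_50000.pth.tar')
# To continue "from the middle", restore everything and start the loop at that iteration.
ckpt = torch.load('checkpoint_50000.pth.tar')
model.load_state_dict(ckpt['state_dict'])
optimizer.load_state_dict(ckpt['optimizer'])
start_iter = ckpt['iter']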
Hi,
would you mind adding a license to the repo, e.g. the MIT license? We are interested in using part of the code in our open-source project.
Thanks!
Marc
I got a low mAP (~30%) on 1160 images of COCO val2014 (the subset selected by OpenPose). I trained the model myself starting from the pretrained vgg19_10 weights.
I read the preprocessing code and have a question.
The RandomScale transform only uses the scale of person 0 in the annotation to scale the image, but in the official code every annotated person takes a turn as the center person when scaling. This way, the range of scales seen during training is narrower than with the original code.
Could it be the reason why I got low mAP?
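For illustration only, a rough sketch of scaling relative to a randomly chosen annotated person instead of always person 0 (the 'scale' field and the function signature are assumptions, not the repo's Mytransforms code):
import random
import cv2
def random_scale(img, persons, scale_min=0.5, scale_max=1.1):
    # Pick a random annotated person as the reference "center person",
    # so every person's scale is used over the course of training.
    ref = random.choice(persons)                        # persons: list of dicts with a 'scale' key
    factor = random.uniform(scale_min, scale_max) / ref['scale']
    img = cv2.resize(img, (0, 0), fx=factor, fy=factor)
    return img, factor                                  # keypoints, centers and mask must be scaled by the same factor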
Your README states:
However, the included MIT license states:
So I wonder whether this code can be used for commercial purposes or not?
After hitting a GPU memory error, I have to restart the training.
In the loading part, a new fc layer is added. What is it for?
# state_dict = torch.load(args.pretrained)['state_dict']
# from collections import OrderedDict
# new_state_dict = OrderedDict()
# for k, v in state_dict.items():
#     name = k[7:]
#     new_state_dict[name] = v
# model.load_state_dict(new_state_dict)
# model.fc = nn.Linear(2048, 80)
I downloaded the pytorch model as stated in caffe2pytorch/README.md,
but when I load the model with
model.load_state_dict(state_dict)
an error occurs:
KeyError: 'unexpected key "0.weight" in state_dict'
Hope to get your response, thanks~
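A small sketch of how this kind of key mismatch is usually diagnosed; the file name and the DataParallel-prefix guess are assumptions, not a confirmed cause:
import torch
from collections import OrderedDict
state_dict = torch.load('coco_pose_iter_440000.pth.tar', map_location='cpu')
if 'state_dict' in state_dict:                 # some checkpoints wrap the weights in a dict
    state_dict = state_dict['state_dict']
print(list(state_dict.keys())[:5])             # compare against model.state_dict().keys()
# If the mismatch is only a 'module.' prefix left over from DataParallel, strip it:
clean = OrderedDict((k.replace('module.', '', 1), v) for k, v in state_dict.items())
# model.load_state_dict(clean)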
Dear project creators
I followed your instructions and downloaded from the GoogleDrive link the pretrained torch model "coco_pose_iter_440000.pth.tar".
However, I get an error as copied below; some keys are incompatible.
Could you please help me solve that issue? I can think of three possible reasons for the problem:
the model file in the Google Drive is obsolete
I have installed a different version of PyTorch than required
some issue in your code
Any help is appreciated
thanks
Nikolay
python test_pose.py --image ski.jpg
Traceback (most recent call last):
File "test_pose.py", line 314, in
model = construct_model(args)
File "test_pose.py", line 42, in construct_model
model.load_state_dict(state_dict)
File "/home/njetchev/anaconda2/envs/p27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 331, in load_state_dict
.format(name))
KeyError: 'unexpected key "0.weight" in state_dict'
Hello,
I ran 'test_pose.py' on 'ski.png' with the PyTorch model downloaded from Google Drive. However, there are hundreds of 'noses' detected on the image, while the other body parts look fine. Both Python 2.7 and Python 3.6 end up with this incorrect result. Could anyone give me any suggestions about this?
Thanks
Is it a problem that the skipped persons are not masked out?
Hi,
I use my own dataset, which has 21 keypoints (including background) and 19 vectors.
Does the length of mapIdx equal the number of keypoints?
If I use a dataset with a different number of keypoints, where should I change the code in test_pose.py or train_pose.py?
Is there any method to order the people in the result?
That way, each person in the result could be compared to the corresponding ground-truth person.
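Not part of this repo, but one possible way to match detections to ground truth is an assignment on mean keypoint distance, e.g. (assuming both are (num_people, num_keypoints, 2) arrays):
import numpy as np
from scipy.optimize import linear_sum_assignment
def match_people(pred, gt):
    # Cost = mean Euclidean distance between corresponding keypoints.
    cost = np.zeros((len(pred), len(gt)))
    for i, p in enumerate(pred):
        for j, g in enumerate(gt):
            cost[i, j] = np.linalg.norm(p - g, axis=1).mean()
    rows, cols = linear_sum_assignment(cost)   # Hungarian matching
    return list(zip(rows, cols))               # pairs of (prediction index, ground-truth index)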
Hi.
I've made very fast augmentation algorithms for the Keras version of the project:
https://github.com/anatolix/keras_Realtime_Multi-Person_Pose_Estimation
It augments 140 images per second in Python (the C++ augmentation does 30).
The trick is to do all the augmentation (crop/scale/resize/shift) with a single warpAffine, plus vectorized generation of the heatmaps and PAFs. Please take a look; if you like it I could adapt it and make a pull request.
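For readers who want the idea without opening the other repo, a rough sketch of the single-warpAffine trick (not the exact code from that project):
import cv2
import numpy as np
def augment(img, center, scale, angle_deg, out_size=368):
    # Compose scale + rotation around the person's center, then translate that
    # center to the middle of the output crop, all in one 2x3 affine matrix.
    M = cv2.getRotationMatrix2D(tuple(center), angle_deg, scale)
    M[0, 2] += out_size / 2.0 - center[0]
    M[1, 2] += out_size / 2.0 - center[1]
    warped = cv2.warpAffine(img, M, (out_size, out_size), flags=cv2.INTER_CUBIC)
    return warped, M
# The same matrix transforms the keypoints, so everything stays consistent:
# kpt_h = np.hstack([kpt, np.ones((len(kpt), 1))])   # (N, 3) homogeneous coordinates
# new_kpt = kpt_h @ M.T                              # (N, 2)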
Hi! Thank you for your awesome simplified work!
Could you please explain the purpose of limbSeq and mapIdx and how they relate to each other?
Looking forward to your reply!
I'm really confused about how to get the pose coordinates and compute AP/mAP and AR from my trained model. I only know that the outputs of the two branches are heatmaps, and in the deploy prototxt the outputs are also 46x46 heatmaps. Is there any special toolbox or code for the metrics? On my server I don't have the Caffe-MATLAB interface, so the author's method doesn't suit my situation.
Thanks @last-one
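For the metrics, a minimal sketch using pycocotools instead of the Caffe/MATLAB tooling; 'results.json' is a hypothetical file holding detections in the standard COCO keypoint result format:
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
coco_gt = COCO('annotations/person_keypoints_val2014.json')
coco_dt = coco_gt.loadRes('results.json')             # [{"image_id", "category_id", "keypoints", "score"}, ...]
coco_eval = COCOeval(coco_gt, coco_dt, iouType='keypoints')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()                                 # prints AP, AP@0.5, AP@0.75, AR, ...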
Hi,
I want to run the training code with my own dataset, but I don't have mask data (I only have the input images, keypoints, and center positions).
Is the mask important?
Thank you for your reply.
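For what it's worth, in this kind of pipeline the mask only zeroes the loss over regions with unlabeled people; if every person in your data is annotated, an all-ones mask is a plausible stand-in (a guess, not an official recommendation):
import numpy as np
h, w = 368, 368                           # network input size used by this project
mask = np.ones((h, w), dtype=np.float32)  # all ones = no pixels are excluded from the loss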
Hi,
If I only have one GPU, how can I run it?
thank you for your answer
Sorry
I ran preproccesing/generate_json_mask.py, but masklist/train2017.txt was not generated.
What is that file?
Also, I have only one GPU, not multiple. Should I remove line 49, "model = torch.nn.DataParallel(model, device_ids=args.gpu).cuda()", in train_pose.py?
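A small sketch of the two obvious single-GPU options (the Conv2d layer is just a stand-in for the pose model, not the repo's class):
import torch
import torch.nn as nn
model = nn.Conv2d(3, 19, 3, padding=1)                 # stand-in for pose_estimation.PoseModel
# Option 1: keep DataParallel but pass a single device id.
model = torch.nn.DataParallel(model, device_ids=[0]).cuda()
# Option 2: drop the wrapper and move the model to the GPU directly.
# model = model.cuda()
# (then model.module.named_parameters() becomes model.named_parameters())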
Hi, I'm a Mac user and I have no GPU. Is it possible for me to train the model without a GPU? If yes, how?
Thanks
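For reference, a device-agnostic sketch; inference on CPU works (slowly), but full training on CPU would be impractically slow:
import torch
import torch.nn as nn
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Conv2d(3, 19, 3, padding=1).to(device)      # stand-in for the pose model
x = torch.randn(1, 3, 368, 368, device=device)
y = model(x)                                           # runs on CPU when no GPU is present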
Above is the result I got after running test_pose.py with the converted PyTorch model.
There are many blue points, and I'm wondering whether the algorithm is working correctly. What do these blue dots mean here?
My python version is 3.5 and I've adapted the original code according to another issue "Sorry, please Python3 version".
I'm running on CPU and it takes about 40s to finish the processing.
Do you know how long it will take when using CUDA?
Thank you in advance.
Hi,
What are the inference time (FPS) and the backward-pass time?
BTW, I get this error when training for some steps.
Traceback (most recent call last):
  File "train_pose.py", line 258, in <module>
    train_val(model, args)
  File "train_pose.py", line 131, in train_val
    for i, (input, heatmap, vecmap, mask) in enumerate(train_loader):
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 187, in __next__
    return self._process_next_batch(batch)
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 221, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
ValueError: Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 40, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "../CocoFolder.py", line 147, in __getitem__
    img, mask, kpt, center = self.transformer(img, mask, kpt, center, scale)
  File "../Mytransforms.py", line 437, in __call__
    img, mask, kpt, center = t(img, mask, kpt, center)
  File "../Mytransforms.py", line 362, in __call__
    return crop(img, mask, kpt, center, offset_left, offset_up, self.size[0], self.size[1])
  File "../Mytransforms.py", line 306, in crop
    new_img[st_y: ed_y, st_x: ed_x, :] = img[or_st_y: or_ed_y, or_st_x: or_ed_x, :].copy()
ValueError: could not broadcast input array from shape (368,0,3) into shape (368,218,3)
Thanks!
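The shape (368, 0, 3) means the source slice of the crop ended up with zero width, i.e. the crop window fell completely outside the image. A guard like the following sketch (not the repo's actual Mytransforms.crop) illustrates the idea:
import numpy as np
def safe_crop(img, offset_left, offset_up, crop_w, crop_h):
    h, w = img.shape[:2]
    out = np.zeros((crop_h, crop_w, 3), dtype=img.dtype)
    # Clamp the source window to the image bounds so it can never be empty.
    src_x0, src_y0 = max(offset_left, 0), max(offset_up, 0)
    src_x1, src_y1 = min(offset_left + crop_w, w), min(offset_up + crop_h, h)
    if src_x1 > src_x0 and src_y1 > src_y0:
        dst_x0, dst_y0 = src_x0 - offset_left, src_y0 - offset_up
        out[dst_y0:dst_y0 + (src_y1 - src_y0),
            dst_x0:dst_x0 + (src_x1 - src_x0)] = img[src_y0:src_y1, src_x0:src_x1]
    return out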
Hi, how to calculate the above value?
Hi,
I ran train.sh, but I had some errors.
I think there are bugs.
Should I add "input = input.cuda()" around line 138 of train_pose.py, and change line 63, "params_dict = dict(model.module.named_parameters())", to "params_dict = dict(model.named_parameters())"?
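Roughly, the proposed change would look like this (line numbers approximate, for illustration only):
# around line 138 of train_pose.py: move the batch to the GPU before the forward pass
input = input.cuda()
# (the heatmap, vecmap and mask tensors would likely need the same treatment)
# line 63: when the model is not wrapped in DataParallel there is no .module attribute
params_dict = dict(model.named_parameters())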
Hi~!
I hope to use your nice program,
but I don't know how to use it.
I downloaded the COCO dataset,
but generate_json_mask.py does not work.
Could you give me a more specific example?
I used this command:
python3 generate_json_mask.py
--ann_path /home/ksg/Downloads/coco/annotations/instances_train2017.json
--json_path /home/ksg/Openpose_file
--mask_dir /home/ksg/Openpose_file
--filelist_path /home/ksg/Openpose_file/filelist
--masklist_path /home/ksg/Openpose_file/masklist
and I saw this error message:
loading annotations into memory...
Done (t=11.69s)
creating index...
index created!
Traceback (most recent call last):
File "generate_json_mask.py", line 172, in
processing(args)
File "generate_json_mask.py", line 68, in processing
if img_anns[p]['num_keypoints'] < 5 or img_anns[p]['area'] < 32 * 32:
KeyError: 'num_keypoints'
Actually, I don't understand how to use it.
Could you write a simple guide for me?
Thank you so much
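For context: instances_train2017.json holds the object-detection annotations, which have no 'num_keypoints' field; that field only exists in the keypoint annotation files (person_keypoints_*.json). Rerunning with the keypoints file, e.g.
python3 generate_json_mask.py --ann_path /home/ksg/Downloads/coco/annotations/person_keypoints_train2017.json ... (other arguments unchanged)
should get past this particular KeyError.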
After executing the following script:
python generate_json_mask.py \
--ann_path ~/datasets/COCO2017/annotations/person_keypoints_val2017.json \
--json_path ~/datasets/COCO2017/results/json/val2017.json \
--masklist_path ~/datasets/COCO2017/results/masklist/val2017.txt \
--filelist_path ~/datasets/COCO2017/results/filelist/val2017.txt \
--mask_dir ~/datasets/COCO2017/results/masks/mask_val
The generated filelist and the filenames in the json file should be absolute paths, but I got these:
In filelist.txt
000000472209.jpg
000000019642.jpg
000000268909.jpg
000000547777.jpg
000000049429.jpg
000000375321.jpg
000000250249.jpg
000000545549.jpg
000000126073.jpg
000000374391.jpg
In xxx.json:
[{"filename": "000000391895.jpg", "info": [{"pos": [416.82, 172.525], "keypoints": [[
368, 61, 0], [401.5, 82.5, 1], [435, 81, 1], [446, 125, 1], [0, 0, 2], [368, 84, 1],
[362, 125, 1], [360, 153, 1], [439, 166, 0], [461, 234, 1], [474, 287, 1], [397, 167,
0], [369, 193, 1], [361, 246, 1], [0, 0, 2], [369, 52, 1], [0, 0, 2], [382, 48, 1]],
"scale": 0.8172010869565218}]},
...
I failed to run train_pose.py, maybe because of this problem.
I prepared the annotations for the training set as well, using a similar script.
Did I run the script in the wrong directory?
Please tell me how to fix it, other than rewriting the Python script.
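As a workaround that leaves generate_json_mask.py untouched, a small post-processing sketch that prepends the image directory to the generated filelist (the image_dir path below is hypothetical):
import os
image_dir = os.path.expanduser('~/datasets/COCO2017/val2017')                      # wherever the val images live
filelist = os.path.expanduser('~/datasets/COCO2017/results/filelist/val2017.txt')
with open(filelist) as f:
    names = [line.strip() for line in f if line.strip()]
with open(filelist, 'w') as f:
    for name in names:
        f.write(os.path.join(image_dir, name) + '\n')   # rewrite each entry as an absolute path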
Line 51: flielist_fp = open(args.filelist_path, 'w')
flielist_fp should be filelist_fp
Also refer to the visualization here:
https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/output.md#pose-output-format-coco
As in the title: what is the mAP benchmarked on the datasets?
hello @last-one ,
please I need some help
I have trained OpenPose using this code, but when I tried to test the trained model I got the keypoints of the human body but not the limb connections.
1- I used this because of some loading errors:
model = pose_estimation.PoseModel(num_point=19, num_vector=19)
state_dict = torch.load(args.model)['state_dict']
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in state_dict.items():
name = k[7:]
new_state_dict[name] = v
state_dict = model.state_dict()
state_dict.update(new_state_dict)
model.load_state_dict(state_dict)
model = model.cuda()
model.eval()
2- For the limb sequence I used this code:
limbSeq = [[3,4], [4,5], [6,7], [7,8], [9,10], [10,11], [12,13], [13,14], [1,2], [2,9], [2,12], [2,3], [2,6],
[3,17],[6,18],[1,16],[1,15],[16,18],[15,17]]
mapIdx = [[19,20],[21,22],[23,24],[25,26],[27,28],[29,30],[31,32],[33,34],[35,36],[37,38],[39,40],
[41,42],[43,44],[45,46],[47,48],[49,50],[51,52],[53,54],[55,56]]
3- In the for loop over the parts:
for part in range(1,19):
Please, if anyone can help.
Hi,
When I run test_pose.py, I get this error:
TypeError: 'dict_keys' object does not support indexing
So, I changed "one_layer.keys()[0]" to "list(one_layer.keys())[0]".
But, I get new error:
KeyError: 'unexpected key "0.weight" in state_dict'
My Python version is 3 and my Anaconda version is 3.
So please provide a Python 3 version of the code, or a solution.
Thank you
I'm confused about which dataset and which annotations to download. Could you help me?
Hi, last-one,
I have tried to run your code. I found that after removing the multi-lr setting the loss converges, but with the multi-lr setting it doesn't converge.
I also set the bias weight decay to 0 in the multi-lr part, the same as OpenPose, but I don't think that causes the bug.
Have you faced this problem?
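For reference, the "multi-lr setting" refers to per-parameter-group options in the optimizer; a minimal sketch of what that looks like (the layer and the numbers are placeholders, not the repo's exact values):
import torch
import torch.nn as nn
model = nn.Conv2d(3, 19, 3, padding=1)      # stand-in for the pose model
weights = [p for n, p in model.named_parameters() if not n.endswith('bias')]
biases = [p for n, p in model.named_parameters() if n.endswith('bias')]
optimizer = torch.optim.SGD(
    [{'params': weights, 'lr': 1e-4, 'weight_decay': 5e-4},
     {'params': biases, 'lr': 2e-4, 'weight_decay': 0.0}],   # e.g. 2x lr and no decay for biases
    momentum=0.9)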
Hi,
I have a dataset that only contains hands, and my goal is to detect all the hand keypoints in each image. In this case, do I need to change a lot, e.g. in the training code, to make this work? Thanks!
Hi,
When I run test_pose.py, I get this error:
No such file or directory: 'openpose_coco_best.pth.tar'
Where do I locate (or how do I generate) this file?
Thanks!
Hi,
I got the following error when downloading the Caffe model:
wget: unable to resolve host address "posefs1.perception.cs.cmn.edu.users"
How can I solve this error, or where else can I get this model?
thanks in advance.