tensorboy / centerpose
Push the Extreme of the pose estimation
License: MIT License
I used COCO 2017 and your pretrained model dla34.pth to continue training, but the loss keeps growing from 3.1 to around 17. Why doesn't the loss stay flat or decrease?
The code seems to support only single-image testing; how can I run batch testing?
Hi, I'm still a little confused about the function gaussian_radius, which is defined like this:
import numpy as np

def gaussian_radius(det_size, min_overlap=0.7):
    height, width = det_size

    a1 = 1
    b1 = (height + width)
    c1 = width * height * (1 - min_overlap) / (1 + min_overlap)
    sq1 = np.sqrt(b1 ** 2 - 4 * a1 * c1)
    r1 = (b1 + sq1) / 2

    a2 = 4
    b2 = 2 * (height + width)
    c2 = (1 - min_overlap) * width * height
    sq2 = np.sqrt(b2 ** 2 - 4 * a2 * c2)
    r2 = (b2 + sq2) / 2

    a3 = 4 * min_overlap
    b3 = -2 * min_overlap * (height + width)
    c3 = (min_overlap - 1) * width * height
    sq3 = np.sqrt(b3 ** 2 - 4 * a3 * c3)
    r3 = (b3 + sq3) / 2

    return min(r1, r2, r3)
I have tested some values like this:
Doesn't this come out a little too big for drawing the Gaussian distribution? Because in keypoint detection, sigma is usually 2.5 or 3.0. What do you think?
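For concreteness, here is the same function run on a few hypothetical box sizes, together with the sigma that CenterNet-style heatmap drawing would use (diameter = 2*r + 1, sigma = diameter / 6) — a sketch for comparing against the 2.5-3.0 sigma mentioned above:

```python
import numpy as np

def gaussian_radius(det_size, min_overlap=0.7):
    # Same function as quoted above.
    height, width = det_size
    a1 = 1
    b1 = (height + width)
    c1 = width * height * (1 - min_overlap) / (1 + min_overlap)
    sq1 = np.sqrt(b1 ** 2 - 4 * a1 * c1)
    r1 = (b1 + sq1) / 2
    a2 = 4
    b2 = 2 * (height + width)
    c2 = (1 - min_overlap) * width * height
    sq2 = np.sqrt(b2 ** 2 - 4 * a2 * c2)
    r2 = (b2 + sq2) / 2
    a3 = 4 * min_overlap
    b3 = -2 * min_overlap * (height + width)
    c3 = (min_overlap - 1) * width * height
    sq3 = np.sqrt(b3 ** 2 - 4 * a3 * c3)
    r3 = (b3 + sq3) / 2
    return min(r1, r2, r3)

# Hypothetical box sizes (height, width) on the output feature map.
for size in [(8, 8), (16, 16), (32, 32)]:
    r = gaussian_radius(size)
    sigma = (2 * int(r) + 1) / 6  # CenterNet-style: sigma = diameter / 6
    print(size, round(r, 2), round(sigma, 2))
```

With min_overlap=0.7 the smallest root (r3) dominates, so the effective sigma for small boxes is actually smaller than the fixed 2.5-3.0 used in heatmap-regression pose estimators; the radius only grows large for large boxes.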
Hi, I am confused about this. The output feature of HRNet-W32 has 32 channels, but the target maps have more than 51 channels (17 keypoint heatmaps + 17*2 offset x, y maps).
So here the target feature maps have noticeably more channels than the output feature maps of the backbone network.
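For reference, the standard CenterNet multi_pose head layout for COCO's 17 keypoints adds up as follows (a sketch of the usual convention; this repo's exact head names and set may differ). Each head is its own small conv branch on top of the backbone feature, so the backbone's 32 channels never need to match the total target channel count:

```python
# Standard CenterNet multi_pose head channels for 17 COCO keypoints
# (hypothetical layout for illustration, not necessarily this repo's).
num_joints = 17
heads = {
    'hm': 1,                # person-center heatmap
    'wh': 2,                # box width / height
    'hps': 2 * num_joints,  # center-to-joint (x, y) offsets
    'hm_hp': num_joints,    # per-joint heatmaps
    'hp_offset': 2,         # sub-pixel joint offset
    'reg': 2,               # sub-pixel center offset
}
total = sum(heads.values())
print(total)  # total target channels across all heads
```

Since every head is produced by a separate conv layer (typically 32 -> 256 -> head_channels), the 32-channel backbone output is only an intermediate representation, not a per-channel match for the targets.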
@tensorboy Hi, your table reports 62.7% for DLA-34 on COCO val, but I only get 59.2% when I evaluate the official model. Did you apply any additional optimizations?
How do I train the model on a custom dataset?
At 'results, meta = results[0]', I got this error.
Hi, the cloud-drive link for the model has expired. Could you re-upload it? Many thanks!
@tensorboy Hi, I finished training with your code, but when I test the model I get "RuntimeError: storage has wrong size: expected 0 got 589824". Do you know the reason?
Could you upload the model to Baidu Pan, please?
Thank you, good people.
Thanks to the author! A new version of CenterNet has been released (CenterNet2). Do you have any plans for a V2-based pose estimation version? If not, could you suggest some directions for improvement?
Drop parameter dla_up.ida_2.up_1.weight.
Drop parameter dla_up.ida_2.node_1.actf.0.weight.
Drop parameter dla_up.ida_2.node_1.actf.0.bias.
Drop parameter dla_up.ida_2.node_1.actf.0.running_mean.
Drop parameter dla_up.ida_2.node_1.actf.0.running_var.
Drop parameter dla_up.ida_2.node_1.actf.0.num_batches_tracked.
Drop parameter dla_up.ida_2.node_1.conv.weight.
Drop parameter dla_up.ida_2.node_1.conv.bias.
Drop parameter dla_up.ida_2.node_1.conv.conv_offset_mask.weight.
Drop parameter dla_up.ida_2.node_1.conv.conv_offset_mask.bias.
Drop parameter dla_up.ida_2.proj_2.actf.0.weight.
Drop parameter dla_up.ida_2.proj_2.actf.0.bias.
Drop parameter dla_up.ida_2.proj_2.actf.0.running_mean.
Drop parameter dla_up.ida_2.proj_2.actf.0.running_var.
Drop parameter dla_up.ida_2.proj_2.actf.0.num_batches_tracked.
Drop parameter dla_up.ida_2.proj_2.conv.weight.
Drop parameter dla_up.ida_2.proj_2.conv.bias.
Drop parameter dla_up.ida_2.proj_2.conv.conv_offset_mask.weight.
Drop parameter dla_up.ida_2.proj_2.conv.conv_offset_mask.bias.
(Each drop is followed by the same warning: "If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.")
Traceback (most recent call last):
  File "/home/centerpose/tools/train.py", line 151, in <module>
    main(cfg, local_rank)
  File "/home/centerpose/tools/train.py", line 120, in main
    mAP = val_dataset.run_eval(preds, cfg.OUTPUT_DIR)
  File "/home/centerpose/tools/../lib/datasets/coco_hp.py", line 100, in run_eval
    coco_dets = self.coco.loadRes(self.convert_eval_format(results))
  File "/home/centerpose/tools/../lib/datasets/coco_hp.py", line 64, in convert_eval_format
    bbox = dets[:4]
IndexError: invalid index to scalar variable.
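A minimal reproduction of that error, assuming `dets` ended up as a bare numpy scalar rather than a detection row (for example, because the results dict carries one extra level of nesting, so iteration yields scalars instead of rows):

```python
import numpy as np

# A proper detection row slices fine.
row = np.array([10.0, 20.0, 50.0, 80.0, 0.9])  # x1, y1, x2, y2, score
print(row[:4])

# A numpy scalar (ndim == 0) cannot be sliced, reproducing the
# `bbox = dets[:4]` failure in convert_eval_format.
scalar = row[0]
try:
    scalar[:4]
except (IndexError, TypeError) as err:
    print(type(err).__name__, err)
```

So the fix is to make sure each entry stored in `results` is a 2-D array of detection rows, not an unwrapped scalar or an over-nested list.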
The model zoo links are broken now; can you fix them?
Hi, what are the settings corresponding to the results in the table in the README?
Thank you in advance for your help.
Hi, thanks for your work. Is there an error in the README figure showing mAP vs. run time? Based on your table, it should be mAP vs. FPS, not mAP vs. run time.
A question: how can I better set the parameters (or the strategy) for MobileNetV2? Looking at the parameters in your YAML file, many of them seem unreasonable to me.
When I load the EfficientDet model, it loads a DarkNet model instead.
I tested the training process using the first 100 samples of MS COCO. The training command is
python3 train.py --cfg ../experiments/mobilenetv3_512x512.yaml
and when the mAP calculation runs at line 121 of train.py, I get an error like this:
but when I run
python3 evaluate.py --cfg ../experiments/mobilenetv3_512x512.yaml --NMS false --TESTMODEL ../models/model_zoo/mobilenetV3_1x.pth
it works and outputs results.
Update:
Maybe changing the code like this will make it work here:

    dets_out = np.concatenate(
        [detection[1] for detection in dets_out], axis=0).astype(np.float32)
    if self.cfg.TEST.NMS or len(self.cfg.TEST.TEST_SCALES) > 1:
        soft_nms_39(dets_out, Nt=0.5, method=2)
    dets_out = dets_out.tolist()
    # results[batch['meta']['img_id'].cpu().numpy()[0]] = dets_out[0]
    results[batch['meta']['img_id'].cpu().numpy()[0]] = dets_out
I used the pretrained High Resolution (HRNet) model and got the following AP values:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.401
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.617
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.429
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.481
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.386
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.632
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.876
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.680
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.561
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.733
But the README reports an AP of 0.495, far higher than my test result. Could the wrong pretrained model have been uploaded?
Hello, when I was testing your models, I found that "TEST_SCALES" is set to [1, 2] in "hrnet_*.yaml" but [1] in the other YAML files, which has a large impact on the speed of the HRNet model. Is there a special reason you set it like this?
I tried to export the model to ONNX, but the main issue is that the DCN layers are not supported in ONNX, as far as I know.