
centerpose's People

Contributors

tensorboy


centerpose's Issues

Re-train

I used coco2017 and your pretrained model dla34.pth to train again, but the loss keeps growing from 3.1 to around 17. Why doesn't the loss stay stable or decrease?

Confused about func gaussian_radius

Hi, I'm still a little confused about the function gaussian_radius, which is defined like this:

import numpy as np

def gaussian_radius(det_size, min_overlap=0.7):
  # For each of three box-overlap cases, solve a quadratic a*r^2 + b*r + c = 0
  # for the largest radius r that still guarantees IoU >= min_overlap.
  # (Note: the roots below are computed as (b + sqrt(b^2 - 4ac)) / 2 rather
  # than the textbook (-b + sqrt(b^2 - 4ac)) / (2a), a much-discussed quirk
  # inherited from CornerNet.)
  height, width = det_size

  a1  = 1
  b1  = (height + width)
  c1  = width * height * (1 - min_overlap) / (1 + min_overlap)
  sq1 = np.sqrt(b1 ** 2 - 4 * a1 * c1)
  r1  = (b1 + sq1) / 2

  a2  = 4
  b2  = 2 * (height + width)
  c2  = (1 - min_overlap) * width * height
  sq2 = np.sqrt(b2 ** 2 - 4 * a2 * c2)
  r2  = (b2 + sq2) / 2

  a3  = 4 * min_overlap
  b3  = -2 * min_overlap * (height + width)
  c3  = (min_overlap - 1) * width * height
  sq3 = np.sqrt(b3 ** 2 - 4 * a3 * c3)
  r3  = (b3 + sq3) / 2
  return min(r1, r2, r3)

I have tested some values like this:
(screenshot of sample radius values)

Doesn't this seem a little large for drawing a Gaussian distribution? In keypoint detection, sigma is usually 2.5 or 3.0. What do you think?
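For what it's worth, in CenterNet-style implementations the returned radius is usually converted to a heatmap sigma via sigma = diameter / 6 with diameter = 2 * radius + 1 (an assumption from common implementations, not necessarily this repo's exact code). A quick check of the implied sigmas, which do get much larger than 2.5-3.0 for big boxes:

```python
import numpy as np

def gaussian_radius(det_size, min_overlap=0.7):
    # Same function as quoted above.
    height, width = det_size
    b1 = height + width
    c1 = width * height * (1 - min_overlap) / (1 + min_overlap)
    r1 = (b1 + np.sqrt(b1 ** 2 - 4 * c1)) / 2
    b2 = 2 * (height + width)
    c2 = (1 - min_overlap) * width * height
    r2 = (b2 + np.sqrt(b2 ** 2 - 16 * c2)) / 2
    a3 = 4 * min_overlap
    b3 = -2 * min_overlap * (height + width)
    c3 = (min_overlap - 1) * width * height
    r3 = (b3 + np.sqrt(b3 ** 2 - 4 * a3 * c3)) / 2
    return min(r1, r2, r3)

for size in [(10, 10), (50, 50), (200, 100)]:
    radius = max(0, int(gaussian_radius(size)))
    # Assumed CenterNet-style radius -> sigma rule; grows with box size.
    sigma = (2 * radius + 1) / 6
    print(size, radius, round(sigma, 2))
```

For a 200x100 box this gives a radius in the mid-30s and a sigma above 10, which is consistent with the concern that the values look large compared with fixed-sigma keypoint pipelines.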

PE accuracy

@tensorboy Hi, your table reports 62.7% accuracy for DLA-34 on COCO val, but I measured 59.2% with the official model. Did you apply any additional optimizations?

Train

How do I train the model on a custom dataset?


baidupan

Could you upload the model to Baidu Pan, please?
Thank you, kind person.

centernet2 pose

Thanks to the author! The new version of CenterNet has been released (CenterNet2). Do you have any plans for a V2-based pose estimation version? If not, could you suggest some ideas for improvement?

Error when running the example in the README. Do the provided model and parameters mismatch?

Drop parameter dla_up.ida_2.up_1.weight. If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.

The same "Drop parameter ... If you see this, your model does not fully load the pre-trained weight ..." warning is printed for each of the following parameters:

dla_up.ida_2.node_1.actf.0.weight
dla_up.ida_2.node_1.actf.0.bias
dla_up.ida_2.node_1.actf.0.running_mean
dla_up.ida_2.node_1.actf.0.running_var
dla_up.ida_2.node_1.actf.0.num_batches_tracked
dla_up.ida_2.node_1.conv.weight
dla_up.ida_2.node_1.conv.bias
dla_up.ida_2.node_1.conv.conv_offset_mask.weight
dla_up.ida_2.node_1.conv.conv_offset_mask.bias
dla_up.ida_2.proj_2.actf.0.weight
dla_up.ida_2.proj_2.actf.0.bias
dla_up.ida_2.proj_2.actf.0.running_mean
dla_up.ida_2.proj_2.actf.0.running_var
dla_up.ida_2.proj_2.actf.0.num_batches_tracked
dla_up.ida_2.proj_2.conv.weight
dla_up.ida_2.proj_2.conv.bias
dla_up.ida_2.proj_2.conv.conv_offset_mask.weight
dla_up.ida_2.proj_2.conv.conv_offset_mask.bias

Error alert when running "val_dataset.run_eval()"

Traceback (most recent call last):
  File "/home/centerpose/tools/train.py", line 151, in <module>
    main(cfg, local_rank)
  File "/home/centerpose/tools/train.py", line 120, in main
    mAP = val_dataset.run_eval(preds, cfg.OUTPUT_DIR)
  File "/home/centerpose/tools/../lib/datasets/coco_hp.py", line 100, in run_eval
    coco_dets = self.coco.loadRes(self.convert_eval_format(results))
  File "/home/centerpose/tools/../lib/datasets/coco_hp.py", line 64, in convert_eval_format
    bbox = dets[:4]
IndexError: invalid index to scalar variable.
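For context, NumPy raises exactly this IndexError when a 0-d scalar is indexed, so `dets` at coco_hp.py line 64 is ending up as a single number instead of an (N, ...) detection array, i.e. the results dict was probably filled with the wrong nesting. A minimal reproduction (the array values are illustrative):

```python
import numpy as np

# A 0-d NumPy scalar cannot be sliced; this reproduces the traceback above.
det_scalar = np.float32(0.87)
try:
    bbox = det_scalar[:4]
except IndexError as e:
    print("slicing a scalar:", e)

# A 1-D detection row slices fine, which is what convert_eval_format expects.
det_row = np.array([10.0, 20.0, 50.0, 80.0, 0.87], dtype=np.float32)
print("bbox:", det_row[:4])
```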

Configuration during evaluation

Hi, what settings correspond to the results in the table in the README:

  1. batch testing, or single inputs one by one?
  2. is flip augmentation used when you report the FPS?
  3. what kind of GPU is used?

Thank you for your help in advance.

Figure error in ReadMe

Hi, thanks for your work. Is there an error in the README figure showing mAP vs. run time? Based on your table, it should be mAP vs. FPS, not mAP vs. run time.

mobilenetv2 parameter settings

A question: how can the mobilenetv2 parameters (or training strategy) be set better? Looking at the parameters in your yml, many of them seem unreasonable.

error when run val_dataset.run_eval(preds, cfg.OUTPUT_DIR)

I have tested the training process using the first 100 samples of MSCOCO. The training command is

 python3 train.py --cfg ../experiments/mobilenetv3_512x512.yaml

and when the mAP calculation runs at line 121 of train.py, I get an error like this:
(screenshot of the error)
but when I run

python3 evaluate.py --cfg ../experiments/mobilenetv3_512x512.yaml --NMS false --TESTMODEL ../models/model_zoo/mobilenetV3_1x.pth

it works and outputs results.
(screenshot of the output)

Update:
Changing the code like this may make it work here:

dets_out = np.concatenate(
    [detection[1] for detection in dets_out], axis=0).astype(np.float32)           
if self.cfg.TEST.NMS or len(self.cfg.TEST.TEST_SCALES) > 1:
    soft_nms_39(dets_out, Nt=0.5, method=2)
dets_out = dets_out.tolist()
# results[batch['meta']['img_id'].cpu().numpy()[0]] = dets_out[0]
results[batch['meta']['img_id'].cpu().numpy()[0]] = dets_out
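If I read the fix right, the point is that results[img_id] must hold one flat per-image detection array rather than a list nested per scale. A toy illustration with hypothetical shapes (39 values per detection: 4 bbox + 1 score + 17 keypoints x 2, as in CenterNet-style multi-pose heads):

```python
import numpy as np

# Hypothetical per-image output: one {class_id: (n_i, 39)} dict per test scale.
dets_out = [{1: np.random.rand(3, 39).astype(np.float32)},
            {1: np.random.rand(2, 39).astype(np.float32)}]

# Flatten across scales into a single (N, 39) array before (soft-)NMS,
# mirroring the np.concatenate call in the snippet above.
flat = np.concatenate([d[1] for d in dets_out], axis=0).astype(np.float32)
print(flat.shape)
```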

AP values differ greatly

I used the pretrained High Resolution model and got the following AP values:

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.401
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.617
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.429
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.481
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.386
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.632
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.876
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.680
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.561
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.733

But the README reports an AP of 0.495, far higher than my test result. Was the wrong pretrained model uploaded?

The speed for HRNet

Hello, when I was testing your model, I found that "TEST_SCALES" in "hrnet_*.yaml" is set to [1, 2] but to [1] in the other YAML files, which has a great impact on the speed of the HRNet model. Is there a special reason for setting it like this?
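For rough intuition (assuming inference cost scales with input pixel count, which is only an approximation for real networks), each extra test scale adds a full forward pass at that resolution, so TEST_SCALES: [1, 2] costs roughly five times a single-scale run:

```python
# Relative cost of multi-scale testing under a cost ~ pixels assumption:
# a scale-s pass processes s**2 times the pixels of the base resolution.
def relative_cost(scales):
    return sum(s ** 2 for s in scales)

print(relative_cost([1]))     # single-scale baseline
print(relative_cost([1, 2]))  # base pass plus a 2x pass (4x the pixels)
```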
