keras-centernet's People

Contributors

see--

keras-centernet's Issues

Using a different backbone network

First of all great stuff! I ran your code and it works perfectly for my images.

I'm fairly new to building NNs but I'm thinking of replacing the hourglass network with the new EfficientNet for feature extraction. How should I do that?

Pointing me in a general direction will be extremely useful. Thank you!

EfficientNet paper: https://arxiv.org/abs/1905.11946
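
A rough sketch of the general direction (hypothetical code, not part of this repository): take any Keras backbone that exposes a convolutional feature map, upsample it back to the stride-4 resolution the CenterNet decoder expects, and attach the three output heads. EfficientNetB0 is assumed to be available in tf.keras.applications, which is only the case in newer TensorFlow releases, and the resulting model would of course still need to be trained with the CenterNet losses.

import tensorflow as tf
from tensorflow.keras import layers, Model

def efficientnet_centernet(input_shape=(512, 512, 3), num_classes=80):
  # Backbone: stride-32 feature map, e.g. (16, 16, 1280) for a 512x512 input.
  backbone = tf.keras.applications.EfficientNetB0(
      include_top=False, weights='imagenet', input_shape=input_shape)
  x = backbone.output

  # Upsample 32 -> 16 -> 8 -> 4 so the decoder sees a 128x128 output map.
  for filters in (256, 128, 64):
    x = layers.Conv2DTranspose(filters, 4, strides=2, padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

  # The usual CenterNet heads: center heatmap, box size, center offset.
  hm = layers.Conv2D(num_classes, 1, activation='sigmoid', name='hm')(x)
  wh = layers.Conv2D(2, 1, name='wh')(x)
  reg = layers.Conv2D(2, 1, name='reg')(x)
  return Model(backbone.input, [hm, wh, reg])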

AttributeError: module 'tensorflow.math' has no attribute 'top_k' when running the demo code

~/learnAI/keras-centernet$ PYTHONPATH=. python keras_centernet/bin/ctdet_image.py --fn assets/demo2.jpg --inres 512,512
Using TensorFlow backend.
Traceback (most recent call last):
File "keras_centernet/bin/ctdet_image.py", line 59, in
main()
File "keras_centernet/bin/ctdet_image.py", line 35, in main
model = CtDetDecode(model)
File "/home/ubuntu/learnAI/keras-centernet/keras_centernet/models/decode.py", line 66, in CtDetDecode
output = Lambda(_decode)([model.outputs[i] for i in [hm_index, reg_index, wh_index]])
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/keras/engine/base_layer.py", line 457, in call
output = self.call(inputs, **kwargs)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/keras/layers/core.py", line 687, in call
return self.function(inputs, **arguments)
File "/home/ubuntu/learnAI/keras-centernet/keras_centernet/models/decode.py", line 65, in _decode
return _ctdet_decode(hm, reg, wh, k=k, output_stride=output_stride)
File "/home/ubuntu/learnAI/keras-centernet/keras_centernet/models/decode.py", line 57, in _ctdet_decode
detections = K.map_fn(_process_sample, [hm_flat, reg_flat, wh_flat], dtype=K.floatx())
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 4321, in map_fn
return tf.map_fn(fn, elems, name=name, dtype=dtype)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/functional_ops.py", line 459, in map_fn
maximum_iterations=n)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 3209, in while_loop
result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2941, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2878, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 3179, in
body = lambda i, lv: (i + 1, orig_body(*lv))
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/functional_ops.py", line 448, in compute
packed_fn_values = fn(packed_values)
File "/home/ubuntu/learnAI/keras-centernet/keras_centernet/models/decode.py", line 32, in _process_sample
_scores, _inds = tf.math.top_k(_hm, k=k, sorted=True)
AttributeError: module 'tensorflow.math' has no attribute 'top_k'

Does anyone else meet the same issue?
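
A likely cause (assuming an older TensorFlow install): tf.math.top_k was only added as an alias in later TF releases, while tf.nn.top_k has been available much longer and is the same op. Besides upgrading TensorFlow, one possible workaround (untested here) is to change line 32 of keras_centernet/models/decode.py along these lines:

  # keras_centernet/models/decode.py, line 32: same op under its older alias
  _scores, _inds = tf.nn.top_k(_hm, k=k, sorted=True)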

Training available?

I have only looked at the code briefly. Is it currently possible to train the network with it on one's own (new) objects, i.e. adapt it to them?

COCO evaluation: mAP quite low if random 1000 images are picked

Referring to keras_centernet/bin/ctdet_coco.py (line 58), I have made the following change to randomly pick only 1000 samples:

import random
  ...
fns = sorted(glob(os.path.join(args.data, '*.jpg')))
random.shuffle(fns)
fns = fns[:1000]
...

bash command:

python keras_centernet/bin/ctdet_coco.py --data val2017 --annotations annotations --inres 384,384 --no-full-resolution

After evaluation,

  Average Precision (AP) @[ IoU=0.50:0.95 | area=  all | maxDets=100 ] = 0.075

which is quite low compared to the all-image evaluation, AP = 0.364.

When I did the same thing with the Keras Mask R-CNN, the maximum accuracy drop was around 2%. Here, the result drops far too much!
Can someone explain what is wrong with the change?
BTW: I have tried more samples, and it seems the more images, the higher the AP.
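
One plausible explanation (an assumption, not verified against the script): if only 1000 images are run through the detector but COCOeval still averages over all ~5000 val2017 image IDs, the unevaluated images count as zero recall, which scales AP down by roughly the sampling ratio; 0.364 x 1000/5000 ≈ 0.073, close to the observed 0.075. In that case the evaluation would need to be restricted to the sampled images, for example (a hedged sketch reusing the fns list from the snippet above; paths are placeholders):

import os
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/instances_val2017.json')
coco_dt = coco_gt.loadRes('results.json')   # detections for the 1000 sampled images (hypothetical file)
# val2017 filenames are the zero-padded image ids, e.g. 000000397133.jpg
eval_img_ids = [int(os.path.basename(fn)[:-4]) for fn in fns]

coco_eval = COCOeval(coco_gt, coco_dt, 'bbox')
coco_eval.params.imgIds = eval_img_ids      # evaluate only on the sampled subset
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()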

purpose of letterbox_transformer and possible bug

What is the purpose of letterbox_transformer used in the code?

In your test code, you process the bounding boxes in the following manner:

for d in detections:
    x1, y1, x2, y2, score, cl = d
    if score < 0.3:
        break

Should the break statement be there, or should it be a continue statement instead?
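
For context, letterboxing generally means resizing the image so it fits the network input while keeping the aspect ratio, then padding the borders; the stored scale and offsets are later used to map the predicted boxes back to the original image. A generic illustration (not the repository's LetterboxTransformer API):

import cv2
import numpy as np

def letterbox(img, dst_h, dst_w):
  h, w = img.shape[:2]
  scale = min(dst_h / h, dst_w / w)
  nh, nw = int(round(h * scale)), int(round(w * scale))
  resized = cv2.resize(img, (nw, nh))
  out = np.zeros((dst_h, dst_w, 3), dtype=img.dtype)
  dy, dx = (dst_h - nh) // 2, (dst_w - nw) // 2
  out[dy:dy + nh, dx:dx + nw] = resized
  return out, scale, dx, dy   # needed to map boxes back to the original image

As for break vs. continue: the decoder's top_k call uses sorted=True, so the detections come out ordered by descending score. Once one score falls below the threshold, all remaining ones do too, so break gives the same result and is faster; if the detections were not sorted, continue would be the safe choice.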

0it [00:00, ?it/s] no output

There is no output, and it just displays "0it [00:00, ?it/s]" when I run the file "ctdet_image.py" with PyCharm. How can I figure this out? Could you please give me some help? Thank you very much.

draw_box() is too slow

To draw the detected object labels, the image is converted to a Pillow image object, but this conversion is too costly.
I changed draw_box() in utils.py as below:

  def draw_box(self, img, x1, y1, x2, y2, cl):
    cl = int(cl)
    x1, y1, x2, y2 = int(round(float(x1))), int(round(float(y1))), int(round(float(x2))), int(round(float(y2)))
    h = img.shape[0]
    width = max(1, int(h * 0.006))
    name = self.coco_names[cl].split()[-1]
    bgr_color = get_rgb_color(cl, len(self.coco_names))[::-1]
    # bounding box
    #cv2.rectangle(img, (x1, y1), (x2, y2), bgr_color, width)
    cv2.rectangle(img, (x1, y1), (x2, y2), bgr_color, 2)
    # font background
    #font_width = len(name) * self.char_width
    #cv2.rectangle(img, (x1 - math.ceil(width / 2), y1 - self.font_size), (x1 + font_width, y1), bgr_color, -1)
    # text
    #pil_img = Image.fromarray(img[..., ::-1])
    #draw = ImageDraw.Draw(pil_img)
    #draw.text((x1 + width, y1 - self.font_size), name, font=self.font, fill=(0, 0, 0, 255))
    #img = np.array(pil_img)[..., ::-1].copy()
    # draw text shadow
    cv2.putText(img, name, (x1+width+1, y1-6+1), cv2.FONT_HERSHEY_PLAIN, 1.2, (0,0,0), 2, cv2.LINE_AA)
    # draw text
    cv2.putText(img, name, (x1+width, y1-6), cv2.FONT_HERSHEY_PLAIN, 1.2, bgr_color, 2, cv2.LINE_AA)

    return img

Centernet bbox height width label

Hello author,

I am generating a custom dataset for the CenterNet model. Can you please help me understand how to create the height/width labels for multiple classes? I know how to create the Gaussian heatmap: for example, if I have 20 classes and the output size is 128x128, I create an empty NumPy array of size 128x128x20 and put each class's heatmap into its channel along the third axis.

But for height/width, according to your paper it is 128x128x2; I understand the 2 channels are for height and width. Does that mean the maximum object limit is 128x128 = 16384? Can you please explain how to write the data and along which axis? I am confused about how this can work for multiple classes.

Thank you
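
A minimal sketch of how such targets are typically built for CenterNet-style training (an illustration under stated assumptions, not code from this repository): the heatmap is per-class, but the width/height map is class-agnostic, so its shape stays 128x128x2 no matter how many classes there are. The (w, h) of each object is written only at its center cell, and a mask marks where the size loss applies, so the limit is one object per output cell rather than a list of objects.

import numpy as np

out_h, out_w, num_classes = 128, 128, 20
hm = np.zeros((out_h, out_w, num_classes), dtype=np.float32)   # per-class heatmaps
wh = np.zeros((out_h, out_w, 2), dtype=np.float32)             # class-agnostic sizes
mask = np.zeros((out_h, out_w), dtype=np.float32)              # where the wh loss counts

# `objects` and `draw_gaussian` are hypothetical placeholders; boxes are assumed
# to be already scaled to output (128x128) coordinates.
for (x1, y1, x2, y2, cls) in objects:
  cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
  # draw_gaussian(hm[..., cls], (cx, cy), radius)   # heatmap, as you already do
  wh[cy, cx, 0] = x2 - x1    # width stored at the center cell
  wh[cy, cx, 1] = y2 - y1    # height stored at the center cell
  mask[cy, cx] = 1.0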

Pytorch to keras weight mapping

Hi, you have done a great job. Could you please share the scripts for mapping PyTorch weights to the Keras framework? pytorch2keras is not working properly for this architecture.

Thanks

how about retraining?

Nice work! How about retraining? Will it support retraining on a new dataset? Thanks.

The predicted score is 0.0

When I run keras_centernet/bin/ctdet_image.py with demo.jpg (or demo2.jpg) as input, the predicted score is always 0.0. How can I solve this problem? Thanks very much!

Data Format Convention used by the HourglassNetwork

There is a path mentioned in hourglass.py (under def HourglassNetwork): "Note that the data format convention used by the model is the one specified in your Keras config at ~/.keras/keras.json." I'm unable to access this as a link. How do I find the data format used by the HourglassNetwork architecture?
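
~/.keras/keras.json is a local configuration file in your home directory, not a web link. Assuming the standard Keras API, a quick way to see which convention your install is using:

from keras import backend as K
print(K.image_data_format())   # 'channels_last' (the TensorFlow default) or 'channels_first'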

about weight transplantation

When I try to convert my model weights from PyTorch to Keras, I find the layer names are different. How do I match these weights? Could you please give me some hints?
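
A rough, hypothetical approach (not the author's actual conversion script): when layer names differ but both models define the same layers in the same order, the weights can often be copied positionally, transposing the kernels into Keras layout and skipping PyTorch's batch-norm bookkeeping tensors.

import numpy as np

# pytorch_model / keras_model are placeholders for the two loaded models.
pt_tensors = [v.detach().cpu().numpy() for k, v in pytorch_model.state_dict().items()
              if 'num_batches_tracked' not in k]

idx = 0
for layer in keras_model.layers:
  current = layer.get_weights()
  if not current:
    continue
  new_weights = []
  for w in current:
    t = pt_tensors[idx]; idx += 1
    if t.ndim == 4:    # conv kernel: PyTorch (out, in, kh, kw) -> Keras (kh, kw, in, out)
      t = np.transpose(t, (2, 3, 1, 0))
    elif t.ndim == 2:  # dense kernel: PyTorch (out, in) -> Keras (in, out)
      t = t.T
    assert t.shape == w.shape, (layer.name, t.shape, w.shape)  # layer orders must line up
    new_weights.append(t)
  layer.set_weights(new_weights)

This only works when the two architectures are built in exactly the same order; otherwise the shape assertion will point at the first mismatch.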

replace MaxPool with stride=2

I noticed that you replace max pooling with a residual module of stride 2. Is this safe compared with the original network?
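
For illustration only (a generic sketch, not the repository's exact block): a stride-2 residual block halves the spatial resolution just like max pooling, but the downsampling is learned. The CornerNet/CenterNet hourglass implementations make the same replacement, so it is not expected to hurt accuracy, at the cost of a little extra compute.

from keras import layers

def residual_downsample(x, filters):
  # Learned downsampling on both the main path and the shortcut.
  shortcut = layers.Conv2D(filters, 1, strides=2, padding='same')(x)
  shortcut = layers.BatchNormalization()(shortcut)
  y = layers.Conv2D(filters, 3, strides=2, padding='same')(x)
  y = layers.BatchNormalization()(y)
  y = layers.ReLU()(y)
  y = layers.Conv2D(filters, 3, padding='same')(y)
  y = layers.BatchNormalization()(y)
  return layers.ReLU()(layers.Add()([y, shortcut]))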

fails to run in Docker on CPU

Hello, I tried to run keras-centernet in Docker on CPU. I built the Docker image from the ROS Melodic Docker image, but it gives me the following error:

Segmentation fault (core dumped)

Full log

Using TensorFlow backend.
2019-12-06 07:22:42.321627: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-06 07:22:42.370633: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3398300000 Hz
2019-12-06 07:22:42.376540: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4622120 executing computations on platform Host. Devices:
2019-12-06 07:22:42.376592: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2019-12-06 07:22:42.399679: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
Segmentation fault (core dumped)

DLA-34 model

Hi, do you have any plan to implement the DLA-34 (deep layer aggregation) network described in the paper? I think it provides a better speed/accuracy trade-off than the hourglass model (about 3 times faster for 3 to 4 points less AP on the COCO dataset).

About convert pytorch model to keras or tensorflow

Thanks for your work. I am trying to train an hourglass architecture with the official PyTorch implementation on different datasets. I plan to convert my PyTorch model to Keras and run it using your code. How did you convert the official hourglass model to the .hdf5 files in your repository, and could you share this code? Thanks! @see--
