liuwei16 / CSP
High-level Semantic Feature Detection: A New Perspective for Pedestrian Detection, CVPR, 2019
Thanks for your work, but the data augmentation code, specifically the crop/pad step, can be improved: it only compares img.shape[0] with c.size_train[0] and never compares img.shape[1] with c.size_train[1].
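To illustrate the point, here is a minimal sketch (hypothetical helper, not the repository's actual code) that applies the same crop-or-pad comparison to both dimensions:

```python
import numpy as np

def crop_or_pad(img, target_h, target_w):
    """Crop or zero-pad `img` so BOTH dimensions match the target,
    instead of only comparing img.shape[0] with the target height."""
    # Height: crop if too tall, pad at the bottom if too short
    if img.shape[0] > target_h:
        img = img[:target_h, :]
    elif img.shape[0] < target_h:
        pad = np.zeros((target_h - img.shape[0], img.shape[1], img.shape[2]),
                       dtype=img.dtype)
        img = np.concatenate([img, pad], axis=0)
    # Width: the symmetric comparison the issue says is missing
    if img.shape[1] > target_w:
        img = img[:, :target_w]
    elif img.shape[1] < target_w:
        pad = np.zeros((img.shape[0], target_w - img.shape[1], img.shape[2]),
                       dtype=img.dtype)
        img = np.concatenate([img, pad], axis=1)
    return img
```

(Any ground-truth boxes would of course need the same crop/shift applied.)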
In nms_wrapper.py there is a call to cpu_soft_nms(), but I can't find where it is defined.
Epoch 1/150
get_batch_gt: 'NoneType' object has no attribute 'shape'
get_batch_gt: 'NoneType' object has no attribute 'shape'
get_batch_gt: 'NoneType' object has no attribute 'shape'
get_batch_gt: 'NoneType' object has no attribute 'shape'
get_batch_gt: 'NoneType' object has no attribute 'shape'
get_batch_gt: 'NoneType' object has no attribute 'shape'
get_batch_gt: 'NoneType' object has no attribute 'shape'
get_batch_gt: 'NoneType' object has no attribute 'shape'
What am I doing wrong?
@liuwei16 Hi, thanks for your research.
In generate_cache_city.py, you cache boxes with label=1 and height >= 50 but do not take the occlusion level into account, which means you are not using the "reasonable" training subset. Is that right? If so, what do you think of this setting?
Hi, Liu Wei,
Sorry to bother you. I hit a problem when testing the h+w model (model_CSP/caltech/fromimgnet/h+w/) with test_caltech.py: loading the hdf5 file with model.load_weights(weight1, by_name=True) fails. Do you know the reason?
Who can tell me how to download CityPersons?
From "https://www.cityscapes-dataset.com/downloads/" I downloaded "gtBbox_cityPersons_trainval.zip (2.2MB)", but it contains json, not mat?
I built the NMS utilities from faster-rcnn and copied them into CSP, but running test.py gives the following errors.
My gcc version: 4.8
(py27) ➜ CSP python test_city.py
Using TensorFlow backend.
/home/mingzhi/anaconda3/envs/py27/lib/python2.7/site-packages/Cython/Compiler/Main.py:367: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /home/mingzhi/Downloads/CSP/keras_csp/nms/gpu_nms.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
In file included from /usr/include/numpy/ndarraytypes.h:1809:0,
from /usr/include/numpy/ndarrayobject.h:18,
from /usr/include/numpy/arrayobject.h:4,
from /home/mingzhi/.pyxbld/temp.linux-x86_64-2.7/pyrex/keras_csp/nms/gpu_nms.c:593:
/usr/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
#warning "Using deprecated NumPy API, disable it by "
^
/home/mingzhi/.pyxbld/temp.linux-x86_64-2.7/pyrex/keras_csp/nms/gpu_nms.c:595:23: fatal error: gpu_nms.hpp: No such file or directory
#include "gpu_nms.hpp"
^
compilation terminated.
Traceback (most recent call last):
File "test_city.py", line 7, in <module>
from keras_csp import config, bbox_process
File "/home/mingzhi/Downloads/CSP/keras_csp/bbox_process.py", line 3, in <module>
from nms_wrapper import nms
File "/home/mingzhi/Downloads/CSP/keras_csp/nms_wrapper.py", line 11, in <module>
from nms.gpu_nms import gpu_nms
File "/home/mingzhi/anaconda3/envs/py27/lib/python2.7/site-packages/pyximport/pyximport.py", line 462, in load_module
language_level=self.language_level)
File "/home/mingzhi/anaconda3/envs/py27/lib/python2.7/site-packages/pyximport/pyximport.py", line 233, in load_module
exec("raise exc, None, tb", {'exc': exc, 'tb': tb})
File "/home/mingzhi/anaconda3/envs/py27/lib/python2.7/site-packages/pyximport/pyximport.py", line 215, in load_module
inplace=build_inplace, language_level=language_level)
File "/home/mingzhi/anaconda3/envs/py27/lib/python2.7/site-packages/pyximport/pyximport.py", line 191, in build_module
reload_support=pyxargs.reload_support)
File "/home/mingzhi/anaconda3/envs/py27/lib/python2.7/site-packages/pyximport/pyxbuild.py", line 102, in pyx_to_dll
dist.run_commands()
File "/home/mingzhi/anaconda3/envs/py27/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/home/mingzhi/anaconda3/envs/py27/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/home/mingzhi/anaconda3/envs/py27/lib/python2.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/home/mingzhi/anaconda3/envs/py27/lib/python2.7/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/home/mingzhi/anaconda3/envs/py27/lib/python2.7/site-packages/Cython/Distutils/old_build_ext.py", line 194, in build_extensions
self.build_extension(ext)
File "/home/mingzhi/anaconda3/envs/py27/lib/python2.7/distutils/command/build_ext.py", line 499, in build_extension
depends=ext.depends)
File "/home/mingzhi/anaconda3/envs/py27/lib/python2.7/distutils/ccompiler.py", line 574, in compile
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/home/mingzhi/anaconda3/envs/py27/lib/python2.7/distutils/unixccompiler.py", line 124, in _compile
raise CompileError, msg
ImportError: Building module keras_csp.nms.gpu_nms failed: ["CompileError: command 'gcc' failed with exit status 1\n"]
from nms.cpu_nms import cpu_nms also fails to import.
So how should the NMS code be compiled? Could you share the makefile?
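For what it's worth, pyximport compiles each .pyx on the fly and fails here because gpu_nms.hpp is not on the include path. A common workaround is to precompile the modules ahead of time. This is only a sketch: it assumes a py-faster-rcnn-style setup.py sits next to the .pyx files (or is copied from that repository); the paths are guesses.

```shell
# Sketch, not the repository's documented build: precompile the
# Cython NMS extensions in place so pyximport is not needed at
# import time. Assumes a py-faster-rcnn style setup.py is present.
cd keras_csp/nms
python setup.py build_ext --inplace
```

After a successful in-place build, `from nms.gpu_nms import gpu_nms` imports the prebuilt .so directly.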
@liuwei16
Is the result in the paper evaluated on the test set?
Have you tried, or do you plan to try, your method on other detection datasets, such as MS-COCO?
I first downloaded from https://www.cityscapes-dataset.com/file-handling/?packageID=28,
but the files are json, not png?
Can anyone give me a link?
Thank you!
qq:756506746
Why is an error reported when running ./eval_caltech/dbEval.m?
Loading detections: ./ResultsEval/dt-.
Reference to non-existent field 'type'.
Error in dbEval>loadDt (line 304):
Alltype=unique({algs(:).type});
This is awesome work; I am very surprised by its high performance on face detection. Would you please release that code? Thank you.
I used test_wider_ms.py with your model, but my WiderFace results are easy 81.2%, medium 82.2%, hard 75.3%.
I directly used the weights you provided, trained on CityPersons. However, when I test the detector on my own pictures, its performance is even worse than my coarsely trained FPN; many overlapped cases are not handled properly. Where could the problem come from? I strictly followed the pre-/post-processing of the input and output. Any idea what might differ at deployment/inference time? Thanks!
The detector also shares a drawback with other single-stage detectors: the bounding boxes do not fit the targets well. The CSP results actually look as if they were generated by YOLO or SSD.
By the way, I think the idea of height regression is very original.
Thanks for you great work.
Could you explain the basic idea behind estimating the variance of the Gaussian mask?
sigma = ((kernel-1) * 0.5 - 1) * 0.3 + 0.8
Where do the constants 0.3 and 0.8 in this sigma estimate come from?
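Not the author, but the constants appear to match OpenCV's rule of thumb: cv2.getGaussianKernel() uses exactly sigma = 0.3*((ksize-1)*0.5 - 1) + 0.8 when sigma is not specified, so that common kernel sizes get sensible widths. A small sketch of the same formula:

```python
def default_sigma(kernel):
    """Heuristic sigma for a Gaussian kernel of size `kernel`.
    Same formula OpenCV applies in getGaussianKernel() when
    sigma <= 0: larger kernels get proportionally wider Gaussians."""
    return ((kernel - 1) * 0.5 - 1) * 0.3 + 0.8

print(default_sigma(3))   # 0.8
print(default_sigma(25))  # ~4.1
```

In other words, 0.3 and 0.8 are empirical OpenCV defaults, not values derived for pedestrian detection specifically.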
Hi, Liu Wei. I have mastered the use of CSP, but I can't find a way to train it on multi-category datasets (such as VOC). Can you help me? @liuwei16
When I install with pip install -r requirements.txt,
I get this error: No matching distribution found for tensorflow-gpu==1.4.1 (from -r requirements.txt (line 1))
I'm using Ubuntu 18.04.
Could you please release the test code for FDDB, or share some hyperparameters?
Thanks for sharing your excellent work. Could you share your training log? Thanks.
I ran this code on Python 3.5 and the training process completed.
But when I test the model, I can't get past NMS.
I found that this part is built with Cython, which I'm not familiar with.
Does anybody know how to compile it?
Hello @liuwei16, thank you for the excellent work!
I'm trying to reproduce your results from the paper, and would like to try out the parameters you have uploaded to Baidu. However, downloading from Baidu from outside China seems to be almost impossible, and I haven't been able to obtain the files.
Would it be possible for you to host the network weight files somewhere other than Baidu?
Best regards!
When I run extract_img_anno.m, I only get the training dataset. I find that this line
https://github.com/liuwei16/CSP/blob/master/eval_caltech/extract_img_anno.m#L12
only extracts the training set; is that a bug?
Hello, why is there no code?
Using TensorFlow backend.
num of training samples: 1
WARNING:tensorflow:From /home/lulu/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /home/lulu/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:2893: calling l2_normalize (from tensorflow.python.ops.nn_impl) with dim is deprecated and will be removed in a future version.
Instructions for updating:
dim is deprecated, use axis instead
load weights from data/models/resnet50_weights_tf_dim_ordering_tf_kernels.h5
Starting training with lr 0.0002 and alpha 0.999
Epoch 1/150
WARNING:tensorflow:From /home/lulu/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
I can't find the detection mat file.
mobilenet_v1 cannot be downloaded.
I cannot reproduce the results in the paper with resnet-50. I loaded the weights provided by the author (net_e110_l0.0082234138241.hdf5); in the parse_det_offset function, with the seman threshold at 0.01, there are countless false detections, and small pedestrians are hardly detected at all. Can anyone tell me how to reproduce the results in the author's paper?
Can I run this program with just the CPU? I plan to use my laptop to try out this model.
I've got the pretrained models from Google Drive,
but I can't figure out how to actually run them.
Which parts of the code should I modify, and what command lines should I use to run them?
@liuwei16 Thanks for sharing your excellent work.
But when I use two GPUs with a batch size of 4 images and a learning rate of 2e-4 or 4e-4, I cannot reproduce the paper's results. Can you provide some advice? Any advice will be appreciated, thanks.
Hi, thanks for sharing the code.
I have trained on both the Caltech and CityPersons datasets with this code; the results are so unstable that they vary a lot between epochs, even with the student-teacher weight averaging.
Is this due to the datasets or to the method itself?
Hi, thanks for sharing the code.
I have a question about seman_map, defined in calc_gt_center().
It seems to be used to classify foreground versus background. Could anyone explain its meaning and usage further? What is the intuition behind it, and why use a Gaussian function?
Thanks!
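As I understand it (this is a sketch of the idea, not the repository's exact code), seman_map is a per-pixel classification target: it is 1 at object centers and decays as a Gaussian inside each box, so pixels near a true center are penalized less when the network fires there. A minimal version of such a center heatmap:

```python
import numpy as np

def gaussian_heatmap(h, w, centers, sigma=2.0):
    """Build a center heatmap: each ground-truth center contributes a
    2-D Gaussian bump; overlapping objects keep the element-wise max,
    so nearby centers do not cancel each other out."""
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w), dtype=np.float32)
    for (cx, cy) in centers:
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        heat = np.maximum(heat, g)
    return heat
```

The Gaussian soft-labels the neighborhood of a center rather than treating a one-pixel-off prediction as a hard negative, which is why it is preferred over a binary mask.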
I am currently working on Caltech dataset, and have completed upto the evaluation stage by running ./eval_caltech/dbEval.m file. I have gotten the eval-newReasonable.txt file as well.
The question is, where and how can I view the final image with bounding boxes?
Please help.
Thanks!!
Cannot download the models.
What is the meaning of the data in the txt files under the CSP-master/output/valresults/h/off/ path?
How do I use this data?
Thanks.
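A hedged sketch for working with such result files. The column layout below ([image_index, x, y, w, h, score]) is an assumption on my part; verify the actual order against the evaluation script before relying on it:

```python
def parse_detections(lines):
    """Group detection rows by image index.
    Assumed layout per row: image_index, x, y, w, h, score
    (comma- or whitespace-separated) -- check against the eval code."""
    dets = {}
    for line in lines:
        vals = [float(v) for v in line.replace(',', ' ').split()]
        if not vals:
            continue
        dets.setdefault(int(vals[0]), []).append(vals[1:])
    return dets
```

Once parsed, the boxes can be drawn onto the corresponding images (e.g. with cv2.rectangle) or fed to the Caltech/CityPersons evaluation tools.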
Does running the code place very high demands on the CPU?
When I run the code, I find GPU utilization is very low. Thanks.
Great work! Thank you for your code. I have a question about seman_map: at
https://github.com/liuwei16/CSP/blob/785bc4c5f956860116d8d51754fd76202afe4bcb/keras_csp/data_generators.py#L23
seman_map[:,:,1] = 1 initializes the channel, but at
https://github.com/liuwei16/CSP/blob/785bc4c5f956860116d8d51754fd76202afe4bcb/keras_csp/data_generators.py#L39
seman_map[y1:y2, x1:x2, 1] = 1 assigns it again.
Is that a bug?
Thank you for sharing your code and the trained models, but for CityPersons I only found the weights of epoch 121; there are no weights for other epochs. Where can I find them, or could you share a copy? Thank you very much.
Using TensorFlow backend.
num of training samples: 2975
WARNING:tensorflow:From /home/mingzhi/anaconda3/envs/py27/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:2893: calling l2_normalize (from tensorflow.python.ops.nn_impl) with dim is deprecated and will be removed in a future version.
Instructions for updating:
dim is deprecated, use axis instead
2019-05-07 19:46:42.470193: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-05-07 19:46:42.547037: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-05-07 19:46:42.547592: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
totalMemory: 10.91GiB freeMemory: 7.22GiB
2019-05-07 19:46:42.547603: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0
2019-05-07 19:46:42.729170: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-07 19:46:42.729195: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] 0
2019-05-07 19:46:42.729199: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0: N
2019-05-07 19:46:42.729394: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6964 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
load weights from data/models/resnet50_weights_tf_dim_ordering_tf_kernels.h5
WARNING:tensorflow:From /home/mingzhi/anaconda3/envs/py27/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:1299: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
Starting training with lr 0.0002 and alpha 0.999
Epoch 1/150
('get_batch_gt:', AttributeError("'NoneType' object has no attribute 'shape'",))
('get_batch_gt:', AttributeError("'NoneType' object has no attribute 'shape'",))
('get_batch_gt:', AttributeError("'NoneType' object has no attribute 'shape'",))
('get_batch_gt:', AttributeError("'NoneType' object has no attribute 'shape'",))
('get_batch_gt:', AttributeError("'NoneType' object has no attribute 'shape'",))
('get_batch_gt:', AttributeError("'NoneType' object has no attribute 'shape'",))
('get_batch_gt:', AttributeError("'NoneType' object has no attribute 'shape'",))
('get_batch_gt:', AttributeError("'NoneType' object has no attribute 'shape'",))
Exception: need at least one array to concatenate
('epoch', 0, 'takes', 0.0, 'mins')
Epoch 2/150
Exception:
('epoch', 1, 'takes', 0.0, 'mins')
Epoch 3/150
Exception:
('epoch', 2, 'takes', 0.0, 'mins')
Epoch 4/150
Exception:
('epoch', 3, 'takes', 0.0, 'mins')
Epoch 5/150
Exception:
('epoch', 4, 'takes', 0.0, 'mins')
Epoch 6/150
Exception:
('epoch', 5, 'takes', 0.0, 'mins')
Epoch 7/150
Exception:
('epoch', 6, 'takes', 0.0, 'mins')
Epoch 8/150
Exception:
('epoch', 7, 'takes', 0.0, 'mins')
Epoch 9/150
Exception:
('epoch', 8, 'takes', 0.0, 'mins')
Epoch 10/150
Exception:
('epoch', 9, 'takes', 0.0, 'mins')
Epoch 11/150
Exception:
('epoch', 10, 'takes', 0.0, 'mins')
Epoch 12/150
Exception:
('epoch', 11, 'takes', 0.0, 'mins')
Epoch 13/150
Exception:
('epoch', 12, 'takes', 0.0, 'mins')
Epoch 14/150
Exception:
('epoch', 13, 'takes', 0.0, 'mins')
Epoch 15/150
Exception:
('epoch', 14, 'takes', 0.0, 'mins')
Epoch 16/150
Exception:
('epoch', 15, 'takes', 0.0, 'mins')
Epoch 17/150
Exception:
('epoch', 16, 'takes', 0.0, 'mins')
Epoch 18/150
Exception:
('epoch', 17, 'takes', 0.0, 'mins')
Epoch 19/150
Exception:
I used Matlab to run extract_img_anno.m:
extract_img_anno
Undefined function or variable 'dbExtract'.