
relation-networks-for-object-detection's Introduction

Relation Networks for Object Detection

The major contributors of this repository include Dazhi Cheng, Jiayuan Gu, Han Hu and Zheng Zhang.

Introduction

Relation Networks for Object Detection is described in a CVPR 2018 oral paper.

Disclaimer

This is an official implementation for Relation Networks for Object Detection based on MXNet. It is worth noting that:

  • This repository is tested on official MXNet v1.1.0@(commit 629bb6). You should be able to use it with any version of MXNet that contains the required operators, such as Deformable Convolution.
  • We trained our model based on the ImageNet pre-trained ResNet-v1-101 using a model converter. The converted model produces slightly lower accuracy (Top-1 Error on ImageNet val: 24.0% vs. 23.6%).
  • This repository is based on Deformable ConvNets.

License

© Microsoft, 2018. Licensed under the MIT License.

Citing Relation Networks for Object Detection

If you find Relation Networks for Object Detection useful in your research, please consider citing:

@article{hu2017relation,
  title={Relation Networks for Object Detection},
  author={Hu, Han and Gu, Jiayuan and Zhang, Zheng and Dai, Jifeng and Wei, Yichen},
  journal={arXiv preprint arXiv:1711.11575},
  year={2017}
} 

Main Results

Faster RCNN

| Method | Backbone | Training Data | Testing Data | mAP | mAP@0.5 | mAP@0.75 | mAP@S | mAP@M | mAP@L | Inference Time | Post Processing Time |
|--------|----------|---------------|--------------|-----|---------|----------|-------|-------|-------|----------------|----------------------|
| 2FC + nms(0.5) | ResNet-101 | coco trainval35k | coco minival | 31.8 | 53.9 | 32.2 | 10.5 | 35.2 | 51.5 | 0.168s | 0.025s |
| 2FC + softnms(0.6) | ResNet-101 | coco trainval35k | coco minival | 32.3 | 52.8 | 34.1 | 11.1 | 35.9 | 51.8 | 0.200s | 0.060s |
| 2FC + Relation Module + softnms | ResNet-101 | coco trainval35k | coco minival | 34.7 | 55.3 | 37.2 | 13.7 | 38.8 | 53.6 | 0.211s | 0.059s |
| 2FC + Learn NMS | ResNet-101 | coco trainval35k | coco minival | 32.6 | 51.8 | 35.0 | 11.8 | 36.6 | 52.1 | 0.162s | 0.020s |
| 2FC + Relation Module + Learn NMS(e2e) | ResNet-101 | coco trainval35k | coco minival | 35.2 | 55.5 | 38.0 | 15.2 | 39.2 | 54.1 | 0.175s | 0.022s |

Deformable Faster RCNN

| Method | Backbone | Training Data | Testing Data | mAP | mAP@0.5 | mAP@0.75 | mAP@S | mAP@M | mAP@L | Inference Time | NMS Time |
|--------|----------|---------------|--------------|-----|---------|----------|-------|-------|-------|----------------|----------|
| 2FC + nms(0.5) | ResNet-101 | coco trainval35k | coco minival | 37.2 | 58.1 | 40.0 | 16.4 | 41.3 | 55.5 | 0.180s | 0.022s |
| 2FC + softnms(0.6) | ResNet-101 | coco trainval35k | coco minival | 37.5 | 57.3 | 41.0 | 16.6 | 41.7 | 55.8 | 0.208s | 0.052s |
| 2FC + Relation Module + Learn NMS(e2e) | ResNet-101 | coco trainval35k | coco minival | 38.4 | 57.6 | 41.6 | 18.2 | 43.1 | 56.6 | 0.188s | 0.023s |

FPN

| Method | Backbone | Training Data | Testing Data | mAP | mAP@0.5 | mAP@0.75 | mAP@S | mAP@M | mAP@L | Inference Time | NMS Time |
|--------|----------|---------------|--------------|-----|---------|----------|-------|-------|-------|----------------|----------|
| 2FC + nms(0.5) | ResNet-101 | coco trainval35k | coco minival | 36.6 | 59.3 | 39.3 | 20.3 | 40.5 | 49.4 | 0.196s | 0.037s |
| 2FC + softnms(0.6) | ResNet-101 | coco trainval35k | coco minival | 36.8 | 57.8 | 40.7 | 20.4 | 40.8 | 49.7 | 0.323s | 0.167s |
| 2FC + Relation Module + Learn NMS(e2e) | ResNet-101 | coco trainval35k | coco minival | 38.6 | 59.9 | 43.0 | 22.1 | 42.3 | 52.8 | 0.232s | 0.022s |

Running time is counted on a single Maxwell Titan X GPU (mini-batch size is 1 in inference).

Requirements: Software

  1. MXNet from the official repository. We tested our code on MXNet v1.1.0@(commit 629bb6). Due to the rapid development of MXNet, it is recommended to check out this version if you encounter any issues. We may update this repository periodically if MXNet adds important features in future releases.

  2. Python 2.7. We recommend using Anaconda2 as it already includes many common packages. We do not support Python 3 yet; if you want to use Python 3, you need to modify the code yourself.

  3. The following Python packages:

Cython
EasyDict
mxnet-cu80
opencv-python >= 3.2.0

Requirements: Hardware

Any NVIDIA GPU with at least 6GB of memory should be OK.

Installation

  1. Clone the Relation Networks for Object Detection repository.
git clone https://github.com/msracver/Relation-Networks-for-Object-Detection.git
cd Relation-Networks-for-Object-Detection
  2. Run sh ./init.sh. The script will build the cython modules automatically and create some folders.

  3. Install MXNet:

Quick start

3.1 Install MXNet and all dependencies by

pip install -r requirements.txt

If there is no error message, MXNet should be installed successfully.
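
You can additionally verify the Python binding (a minimal check; the exact version string depends on the build you installed):

import mxnet
print(mxnet.__version__)  # e.g. '1.1.0' for the tested release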

Build from source (alternative way)

3.2 Clone MXNet v1.1.0 by

git clone -b v1.1.0 --recursive https://github.com/apache/incubator-mxnet.git

3.3 Compile MXNet

cd ${MXNET_ROOT}
make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1

3.4 Install the MXNet Python binding by

Note: If you will actively switch between different versions of MXNet, please follow 3.5 instead of 3.4

cd python
sudo python setup.py install

3.5 For advanced users: you may put your Python package into ./external/mxnet/$(YOUR_MXNET_PACKAGE)/mxnet and modify MXNET_VERSION in ./experiments/relation_rcnn/cfgs/*.yaml to $(YOUR_MXNET_PACKAGE). This lets you switch among different versions of MXNet quickly.

Preparation for Training & Testing

  1. Please download the COCO dataset, and make sure it looks like this:
./data/coco/
  2. Please download the ImageNet-pretrained ResNet-v1-101 backbone model and the Faster RCNN ResNet-v1-101 model manually from OneDrive or Baiduyun (password:pech), and put them under the folder ./model/pretrained_model. Make sure it looks like this:
./model/pretrained_model/resnet_v1_101-0000.params

We use a pretrained Faster RCNN and fix its params when training the Faster RCNN with Learn NMS head. If you are trying to conduct such experiments, please also include the pretrained Faster RCNN model from OneDrive and put it under the folder ./model/pretrained_model. Make sure it looks like this:

./model/pretrained_model/coco_resnet_v1_101_rcnn-0008.params
  3. For FPN-related experiments, we use proposals generated by a pretrained RPN to speed up our experiments. Please download the proposals from OneDrive or Baiduyun (split, due to a size constraint, into part1 (password:g24u) and part2 (password:ipa8)), and put them under the folder ./proposal/resnet_v1_101_fpn/rpn_data. Make sure it looks like this (a quick sanity-check sketch follows the file list):

    ./proposal/resnet_v1_101_fpn/rpn_data/COCO_minival2014_rpn.pkl
    ./proposal/resnet_v1_101_fpn/rpn_data/COCO_train2014_rpn.pkl
    ./proposal/resnet_v1_101_fpn/rpn_data/COCO_valminusminival2014_rpn.pkl
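
If you want to sanity-check a downloaded proposal file, the .pkl files load with the standard pickle module. Below is a minimal sketch; the internal schema is not documented here, so it prints the container's shape rather than assuming specific keys.

    import cPickle as pickle  # Python 2; use `import pickle` on Python 3

    with open('./proposal/resnet_v1_101_fpn/rpn_data/COCO_minival2014_rpn.pkl', 'rb') as f:
        proposals = pickle.load(f)

    # Report the container type and a small sample instead of assuming a layout.
    print(type(proposals))
    if isinstance(proposals, dict):
        print(list(proposals)[:5])
    elif isinstance(proposals, (list, tuple)):
        print(len(proposals), type(proposals[0]))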
    

Demo Models

We provide trained relation network models covering all settings in the Main Results table above.

  1. To try out our pre-trained relation network models, please download them manually from OneDrive or Baiduyun (password:9q6i), and put them under the folder ./output/.

    Make sure it looks like this:

    ./output/rcnn/coco/resnet_v1_101_coco_trainvalminus_rcnn_end2end_8epoch/train2014_valminusminival2014/rcnn_coco-0008.params
    ./output/rcnn/coco/resnet_v1_101_coco_trainvalminus_rcnn_end2end_relation_8epoch/train2014_valminusminival2014/rcnn_coco-0008.params
    ./output/rcnn/coco/resnet_v1_101_coco_trainvalminus_rcnn_end2end_learn_nms_3epoch/train2014_valminusminival2014/rcnn_coco-0003.params
    ./output/rcnn/coco/resnet_v1_101_coco_trainvalminus_rcnn_end2end_relation_learn_nms_8epoch/train2014_valminusminival2014/rcnn_coco-0008.params
    ./output/rcnn/coco/resnet_v1_101_coco_trainvalminus_rcnn_dcn_end2end_8epoch/train2014_valminusminival2014/rcnn_coco-0008.params
    ./output/rcnn/coco/resnet_v1_101_coco_trainvalminus_rcnn_dcn_end2end_relation_learn_nms_8epoch/train2014_valminusminival2014/rcnn_coco-0008.params
    ./output/rcnn/coco/resnet_v1_101_coco_trainvalminus_rcnn_fpn_8epoch/train2014_valminusminival2014/rcnn_fpn_coco-0008.params
    ./output/rcnn/coco/resnet_v1_101_coco_trainvalminus_rcnn_fpn_relation_learn_nms_8epoch/train2014_valminusminival2014/rcnn_fpn_coco-0008.params
    
  2. To run the Faster RCNN with Relation Module and Learn NMS model, run

    python experiments/relation_rcnn/rcnn_test.py --cfg experiments/relation_rcnn/cfgs/resnet_v1_101_coco_trainvalminus_rcnn_end2end_relation_learn_nms_8epoch.yaml --ignore_cache
    

    If you want to try other models, just change the config file. There are ten config files in the ./experiments/relation_rcnn/cfgs folder, eight of which are provided with pretrained models.

Usage

  1. All of our experiment settings (GPU #, dataset, etc.) are kept in yaml config files in the folder ./experiments/relation_rcnn/cfgs.

  2. Ten config files have been provided so far: Faster RCNN, Deformable Faster RCNN, and FPN, each with a 2FC head, a 2FC + Relation head, and a 2FC + Relation + Learn NMS(e2e) head, plus an additional Faster RCNN with a 2FC + Learn NMS head. We use 4 GPUs to train our models.

  3. To perform experiments, run the python scripts with the corresponding config file as input. For example, to train and test Faster RCNN with Relation Module and Learn NMS(e2e), use the following command:

    python experiments/relation_rcnn/rcnn_end2end_train_test.py --cfg experiments/relation_rcnn/cfgs/resnet_v1_101_coco_trainvalminus_rcnn_end2end_relation_learn_nms_8epoch.yaml
    

    A cache folder will be created automatically under output/rcnn/ to save the model and the log.

    The rcnn_end2end_train_test.py script is for Faster RCNN and Deformable Faster RCNN experiments, which train the RPN together with the RCNN. To train and test FPN, which uses previously generated proposals, use the following command:

    python experiments/relation_rcnn/rcnn_train_test.py --cfg experiments/relation_rcnn/cfgs/resnet_v1_101_coco_trainvalminus_fpn_relation_learn_nms_8epoch.yaml
    
  4. Please find more details in the config files and in our code.

FAQ

Q: I encounter a segmentation fault at the beginning.

A: A compatibility issue has been identified between MXNet and opencv-python 3.0+. We suggest you always import cv2 before importing mxnet in the entry script.
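
In practice, the top of your entry script would look like this (a minimal sketch of the workaround):

import cv2  # import cv2 first to avoid the MXNet / opencv-python segmentation fault
import mxnet as mx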



relation-networks-for-object-detection's Issues

Question about "nongt_dim"

Hi! Does anyone know how 'nongt_dim' works?

For example, with nongt_dim = 2000 (ROIs per image) and 2 images per batch during training, there are 2000*2 ROIs. The relation is computed between every ROI and the first 2000 ROIs, so the ROIs of the 2nd image also attend to the ROIs of the 1st image, which seems unreasonable.
Thanks!
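
For concreteness, here is a minimal NumPy sketch of the pattern described above; the dot-product affinity and the shapes are illustrative, not the repository's exact multi-head operator:

import numpy as np

num_images, rois_per_image, feat_dim = 2, 2000, 1024
nongt_dim = 2000
num_rois = num_images * rois_per_image
roi_feat = np.random.randn(num_rois, feat_dim).astype(np.float32)

# Keys are taken only from the first nongt_dim ROIs, so every ROI in the
# batch (including those of the 2nd image) attends to the 1st image's ROIs.
key = roi_feat[:nongt_dim]                          # (nongt_dim, feat_dim)
affinity = roi_feat.dot(key.T) / np.sqrt(feat_dim)  # (num_rois, nongt_dim)
affinity -= affinity.max(axis=1, keepdims=True)     # numerical stability
weight = np.exp(affinity)
weight /= weight.sum(axis=1, keepdims=True)         # softmax over the keys
relation = weight.dot(key)                          # (num_rois, feat_dim)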

No results stored in the result JSON file after running rcnn_test.py?

I have trained on my own dataset after changing the config file, and everything looks good. Here is a snapshot of the training log:
[training-log screenshots omitted]

Here is a snapshot of the test log:
[screenshot omitted]

I have two questions to ask.

  1. From the training log, Train-RPNAcc and Train-RPNLogLoss are getting worse. Does that mean I need to change the hyperparameters? If so, how do I change them?
  2. There is no output in the result JSON file after testing finishes. What happened?

Thanks in advance.

How to use my own COCO-like data? I get an error about num_classes

I set the YAML as NUM_CLASSES: 189.

I used the tool https://raw.githubusercontent.com/withyou1771/Detectron_FocalLoss/master/tools/xml_to_json.py to convert VOC-like data to COCO-like data.

Then I run

python experiments/relation_rcnn/rcnn_end2end_train_test.py --cfg experiments/relation_rcnn/cfgs/epoch8.yaml

Here is the error info:

('lr', 0.0005, 'lr_epoch_diff', [5.33], 'lr_iters', [79950])
Epoch[0] Batch [100]    Speed: 3.82 samples/sec Train-RPNAcc=0.964979,  RPNLogLoss=0.105434,    RPNL1Loss=0.111539,     RCNNAcc=0.837523,       RCNNLogLoss=1.683923,   RCNNL1Loss=0.280678,    NMSLoss_pos=0.052915,      NMSLoss_neg=0.014347,   NMSAcc_pos=0.000000,    NMSAcc_neg=1.000000,    
Epoch[0] Batch [200]    Speed: 3.83 samples/sec Train-RPNAcc=0.971723,  RPNLogLoss=0.088618,    RPNL1Loss=0.084358,     RCNNAcc=0.865341,       RCNNLogLoss=1.241336,   RCNNL1Loss=0.223327,    NMSLoss_pos=0.056996,      NMSLoss_neg=0.010787,   NMSAcc_pos=0.000000,    NMSAcc_neg=1.000000,    
Error in CustomOp.forward: Traceback (most recent call last):
  File "/home/hri/anaconda3/envs/relations/lib/python2.7/site-packages/mxnet/operator.py", line 987, in forward_entry
    aux=tensors[4])
  File "experiments/relation_rcnn/../../relation_rcnn/operator_py/box_annotator_ohem.py", line 36, in forward
    per_roi_loss_cls = per_roi_loss_cls[np.arange(per_roi_loss_cls.shape[0], dtype='int'), labels.astype('int')]
IndexError: index 189 is out of bounds for axis 1 with size 189

terminate called after throwing an instance of 'dmlc::Error'
  what():  [21:01:01] src/operator/custom/custom.cc:347: Check failed: reinterpret_cast<CustomOpFBFunc>( params.info->callbacks[kCustomOpForward])( ptrs.size(), const_cast<void**>(ptrs.data()), const_cast<int*>(tags.data()), reinterpret_cast<const int*>(req.data()), static_cast<int>(ctx.is_train), params.info->contexts[kCustomOpForward]) 

Stack trace returned 7 entries:
[bt] (0) /home/hri/anaconda3/envs/relations/lib/python2.7/site-packages/mxnet/libmxnet.so(+0x30756a) [0x7f50471e156a]
[bt] (1) /home/hri/anaconda3/envs/relations/lib/python2.7/site-packages/mxnet/libmxnet.so(+0x307b91) [0x7f50471e1b91]
[bt] (2) /home/hri/anaconda3/envs/relations/lib/python2.7/site-packages/mxnet/libmxnet.so(+0x4853f7) [0x7f504735f3f7]
[bt] (3) /home/hri/anaconda3/envs/relations/lib/python2.7/site-packages/mxnet/libmxnet.so(+0x46b128) [0x7f5047345128]
[bt] (4) /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xb8c80) [0x7f50b2816c80]
[bt] (5) /lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7f50b903e6ba]
[bt] (6) /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f50b866441d]

Any help? Thanks
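
For reference, a minimal NumPy repro of the indexing failure in the traceback (shapes are illustrative; the point is that a label value of 189 is out of range for a class axis of size 189):

import numpy as np

per_roi_loss_cls = np.zeros((4, 189))  # class axis sized NUM_CLASSES = 189
labels = np.array([0, 5, 188, 189])    # 189 exceeds the last valid index, 188
# Raises: IndexError: index 189 is out of bounds for axis 1 with size 189
per_roi_loss_cls[np.arange(4, dtype='int'), labels.astype('int')]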

Question about "nongt_dim"

I don't know what "nongt_dim" means in extract_position_embedding and attention_module_embedding. If it means "the number of ROIs that are not ground truth", how do we determine which ROI is not ground truth during training? Maybe I misunderstand it. Could anyone help me?

error: maskApi.c: No such file or directory

When I run sh ./init.sh, an error occurs:
x86_64-linux-gnu-gcc: error: maskApi.c: No such file or directory
I have installed Cython.
It seems to happen when running
python setup_linux.py build_ext --inplace
in lib/dataset/pycocotools/.

NotImplementedError

/home/dlc/anaconda3/envs/torch3_py27/bin/python2.7 /home/dlc/sll/Relation-Networks-for-Object-Detection-master/experiments/relation_rcnn/rcnn_end2end_train_test.py --cfg experiments/relation_rcnn/cfgs/resnet_v1_101_coco_trainvalminus_rcnn_fpn_relation_learn_nms_8epoch.yaml
('Called with argument:', Namespace(cfg='experiments/relation_rcnn/cfgs/resnet_v1_101_coco_trainvalminus_rcnn_fpn_relation_learn_nms_8epoch.yaml', frequent=100))
Traceback (most recent call last):
File "/home/dlc/sll/Relation-Networks-for-Object-Detection-master/experiments/relation_rcnn/rcnn_end2end_train_test.py", line 21, in
train_end2end.main()
File "/home/dlc/sll/Relation-Networks-for-Object-Detection-master/experiments/relation_rcnn/../../relation_rcnn/train_end2end.py", line 184, in main
config.TRAIN.model_prefix,config.TRAIN.begin_epoch, config.TRAIN.end_epoch, config.TRAIN.lr, config.TRAIN.lr_step)
File "/home/dlc/sll/Relation-Networks-for-Object-Detection-master/experiments/relation_rcnn/../../relation_rcnn/train_end2end.py", line 66, in train_net
sym = sym_instance.get_symbol(config, is_train=True)
File "/home/dlc/sll/Relation-Networks-for-Object-Detection-master/experiments/relation_rcnn/../../relation_rcnn/../lib/utils/symbol.py", line 25, in get_symbol
raise NotImplementedError()
NotImplementedError
Begging for a solution, please help!

mxnet.base.MXNetError: NaiveEngine only support synchronize Push so far

After training for several epochs, it raised the following error:

Epoch[7] Batch [2280]	Speed: 3.99 samples/sec	Train-RPNAcc=0.995009,	RPNLogLoss=0.016979,	RPNL1Loss=0.034412,	RCNNAcc=0.797916,	RCNNLogLoss=0.433694,	RCNNL1Loss=0.423676,	
Epoch[7] Batch [2300]	Speed: 3.96 samples/sec	Train-RPNAcc=0.995029,	RPNLogLoss=0.016926,	RPNL1Loss=0.034388,	RCNNAcc=0.797866,	RCNNLogLoss=0.433422,	RCNNL1Loss=0.423580,	
Epoch[7] Batch [2320]	Speed: 3.69 samples/sec	Train-RPNAcc=0.995011,	RPNLogLoss=0.017027,	RPNL1Loss=0.034292,	RCNNAcc=0.798346,	RCNNLogLoss=0.432565,	RCNNL1Loss=0.422573,
[17:51:53] /home/fallingdust/workspace/mxnet/dmlc-core/include/dmlc/./logging.h:308: [17:51:53] src/engine/naive_engine.cc:168: Check failed: this->req_completed_ NaiveEngine only support synchronize Push so far

Stack trace returned 10 entries:
[bt] (0) /usr/local/lib/python2.7/dist-packages/mxnet-1.0.0-py2.7.egg/mxnet/libmxnet.so(_ZN5mxnet6engine11NaiveEngine9PushAsyncESt8functionIFvNS_10RunContextENS0_18CallbackOnCompleteEEENS_7ContextERKSt6vectorIPNS0_3VarESaISA_EESE_NS_10FnPropertyEiPKc+0x3b3) [0x7f921cb635a3]
[bt] (1) /usr/local/lib/python2.7/dist-packages/mxnet-1.0.0-py2.7.egg/mxnet/libmxnet.so(_ZN5mxnet6engine11NaiveEngine4PushEPNS0_3OprENS_7ContextEib+0x8f) [0x7f921cb644af]
[bt] (2) /usr/local/lib/python2.7/dist-packages/mxnet-1.0.0-py2.7.egg/mxnet/libmxnet.so(_ZN5mxnet4exec13GraphExecutor6RunOpsEbmm+0x724) [0x7f921cc08e84]
[bt] (3) /usr/local/lib/python2.7/dist-packages/mxnet-1.0.0-py2.7.egg/mxnet/libmxnet.so(MXExecutorForward+0x11) [0x7f921cb9ab81]
[bt] (4) /usr/lib/x86_64-linux-gnu/libffi.so.6(ffi_call_unix64+0x4c) [0x7f9230c02e40]
[bt] (5) /usr/lib/x86_64-linux-gnu/libffi.so.6(ffi_call+0x2eb) [0x7f9230c028ab]
[bt] (6) /usr/lib/python2.7/lib-dynload/_ctypes.x86_64-linux-gnu.so(_ctypes_callproc+0x48f) [0x7f9230e123df]
[bt] (7) /usr/lib/python2.7/lib-dynload/_ctypes.x86_64-linux-gnu.so(+0x11d82) [0x7f9230e16d82]
[bt] (8) python(PyObject_Call+0x43) [0x4b0c93]
[bt] (9) python(PyEval_EvalFrameEx+0x602f) [0x4c9f9f]

Traceback (most recent call last):
  File "experiments/relation_rcnn/rcnn_end2end_train_test.py", line 21, in <module>
    train_end2end.main()
  File "experiments/relation_rcnn/../../relation_rcnn/train_end2end.py", line 193, in main
    config.TRAIN.begin_epoch, config.TRAIN.end_epoch, config.TRAIN.lr, config.TRAIN.lr_step)
  File "experiments/relation_rcnn/../../relation_rcnn/train_end2end.py", line 186, in train_net
    arg_params=arg_params, aux_params=aux_params, begin_epoch=begin_epoch, num_epoch=end_epoch)
  File "experiments/relation_rcnn/../../relation_rcnn/core/module.py", line 999, in fit
    self.forward_backward(data_batch)
  File "/usr/local/lib/python2.7/dist-packages/mxnet-1.0.0-py2.7.egg/mxnet/module/base_module.py", line 191, in forward_backward
    self.forward(data_batch, is_train=True)
  File "experiments/relation_rcnn/../../relation_rcnn/core/module.py", line 1074, in forward
    self._curr_module.forward(data_batch, is_train=is_train)
  File "experiments/relation_rcnn/../../relation_rcnn/core/module.py", line 554, in forward
    self._exec_group.forward(data_batch, is_train)
  File "experiments/relation_rcnn/../../relation_rcnn/core/DataParallelExecutorGroup.py", line 360, in forward
    exec_.forward(is_train=is_train)
  File "/usr/local/lib/python2.7/dist-packages/mxnet-1.0.0-py2.7.egg/mxnet/executor.py", line 150, in forward
    ctypes.c_int(int(is_train))))
  File "/usr/local/lib/python2.7/dist-packages/mxnet-1.0.0-py2.7.egg/mxnet/base.py", line 146, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [17:51:53] src/engine/naive_engine.cc:168: Check failed: this->req_completed_ NaiveEngine only support synchronize Push so far

Stack trace returned 10 entries:
[bt] (0) /usr/local/lib/python2.7/dist-packages/mxnet-1.0.0-py2.7.egg/mxnet/libmxnet.so(_ZN5mxnet6engine11NaiveEngine9PushAsyncESt8functionIFvNS_10RunContextENS0_18CallbackOnCompleteEEENS_7ContextERKSt6vectorIPNS0_3VarESaISA_EESE_NS_10FnPropertyEiPKc+0x3b3) [0x7f921cb635a3]
[bt] (1) /usr/local/lib/python2.7/dist-packages/mxnet-1.0.0-py2.7.egg/mxnet/libmxnet.so(_ZN5mxnet6engine11NaiveEngine4PushEPNS0_3OprENS_7ContextEib+0x8f) [0x7f921cb644af]
[bt] (2) /usr/local/lib/python2.7/dist-packages/mxnet-1.0.0-py2.7.egg/mxnet/libmxnet.so(_ZN5mxnet4exec13GraphExecutor6RunOpsEbmm+0x724) [0x7f921cc08e84]
[bt] (3) /usr/local/lib/python2.7/dist-packages/mxnet-1.0.0-py2.7.egg/mxnet/libmxnet.so(MXExecutorForward+0x11) [0x7f921cb9ab81]
[bt] (4) /usr/lib/x86_64-linux-gnu/libffi.so.6(ffi_call_unix64+0x4c) [0x7f9230c02e40]
[bt] (5) /usr/lib/x86_64-linux-gnu/libffi.so.6(ffi_call+0x2eb) [0x7f9230c028ab]
[bt] (6) /usr/lib/python2.7/lib-dynload/_ctypes.x86_64-linux-gnu.so(_ctypes_callproc+0x48f) [0x7f9230e123df]
[bt] (7) /usr/lib/python2.7/lib-dynload/_ctypes.x86_64-linux-gnu.so(+0x11d82) [0x7f9230e16d82]
[bt] (8) python(PyObject_Call+0x43) [0x4b0c93]
[bt] (9) python(PyEval_EvalFrameEx+0x602f) [0x4c9f9f]

[17:51:53] src/engine/naive_engine.cc:55: Engine shutdown

I trained the network on just one GPU using CUDA_VISIBLE_DEVICES=0 although I have two GPUs.
Please give me some advice or help to fix it. Thank you.

How to generate "previously generated proposals" for a dataset other than COCO to train with RPN

Hello, I am Chaojie from Renmin University of China, Beijing.
Thanks for your excellent work on object detection.
But I have a problem when trying it with my own data:
training and testing work fine for the rcnn_attention and rcnn_dcn symbols, but when it comes to the rcnn_fpn symbol,
I have to provide the previously generated proposals in a directory like "./proposal/resnet_v1_101_fpn/rpn_data/{}_{}_rpn.pkl". Could you please give me tips on how to generate this file for another dataset with COCO-like annotations? I am dying to know, thank you very much!

relation code

Has anyone run the code? Where is the relation module code?

In the function train_net(), sym_instance.infer_shape() has a problem with the function slice_axis()

When try to run this network on the VOC datasets, I use the VOC datasets IO function from the Deformable ConvNets (https://github.com/msracver/Deformable-ConvNets)
but always find the problem with the function sys_instance.infer_shape() which in the ./relation_rcnn/train_end2end.py line 99 ,I track this problem to the file "my_mxnet_root/symbol/symbol.py", line 1119, in _infer_shape_impl ctypes.byref(complete), where check_call find error in operator slice_axis1: check failed: (*end <= axis_size) && (*end >=0) invalid begin, end, get begin=0, end =300

If someone knows how to solve this problem, or has implemented this algorithm on the VOC dataset, please help me.

Validation

Hello,
Does anyone know how to add validation at the end of each epoch?
Thanks!!

Training on custom dataset?

I'd like to train a Faster R-CNN + Relation module (as in the README, with COCO) on my own dataset. How do I go about it?

Usage of NMS

Hi, thanks for your work! I have a question about greedy-NMS usage: do you completely replace NMS with your duplicate removal network, or add it on top of greedy-NMS? From Table 1 it seems that greedy-NMS is still part of the pipeline...

COCO dataset directory layout

Could you please show us how to arrange the COCO dataset's directory layout (including train/val/test)?

I found differences between the specification in your cfg file and the official COCO dataset, e.g. valminusminival2014 and minival2014.

The annotation files I pulled from COCO's official site include annotations_trainval2014.zip and image_info_test2015.zip. The contents of the zip files look like this for trainval2014 and test2015, respectively:
[screenshots omitted]

Therefore, I got an error message like this:
[screenshot omitted]

@chengdazhi

why adding nms_embedding_feat and nms_attention_1 together

I don't understand the code below, in
resnet_v1_101_rcnn_attention_1024_pairwise_position_multi_head_16_learn_nms.py.

You add one of the Relation Module's inputs and the Relation Module's output together as the input of a later layer, but you did not explain why you do this in your paper.

nms_attention_1, nms_softmax_1 = self.attention_module_nms_multi_head(
                nms_embedding_feat, nms_position_matrix,
                num_rois=first_n, index=1, group=16,
                dim=(1024, 1024, 128), fc_dim=(64, 16), feat_dim=128
)
nms_all_feat_1 = nms_embedding_feat + nms_attention_1
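# Note: the line above is a residual (skip) connection -- the relation
# module's output is added back onto its own input embedding.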

I think the network figure should add the green path:
[image omitted]

By the way, nms_softmax_1 is redundant; I personally recommend deleting it in your released code.

FPN baseline

Hi! This is great work!
But I wonder why the FPN baseline (2FC + softnms(0.6), ResNet-101) at 36.8 mAP is lower than the Detectron FPN baseline (R-101-FPN) at 38.5 mAP. You both use ResNet-101 and pre-computed proposals. Is there anything different in the implementation?

cannot run script due to package compatibility

Hi, I have prepared all the data, pretrained models, and environment settings, then I run the command below:
python experiments/relation_rcnn/rcnn_end2end_train_test.py --cfg experiments/relation_rcnn/cfgs/resnet_v1_101_coco_trainvalminus_rcnn_end2end_relation_learn_nms_8epoch.yaml
However, I got the error message below:
[screenshot omitted]

After I uninstalled and reinstalled numpy, the error moved to skimage.

errors when calling output_shapes

Hi,

There is an error when calling output_shapes (https://github.com/msracver/Relation-Networks-for-Object-Detection/blob/master/relation_rcnn/core/module.py#L216):

  File "experiments/relation_rcnn/rcnn_end2end_train_test.py", line 21, in <module>
    train_end2end.main()
  File "experiments/relation_rcnn/../../relation_rcnn/train_end2end.py", line 184, in main
    config.TRAIN.begin_epoch, config.TRAIN.end_epoch, config.TRAIN.lr, config.TRAIN.lr_step)
  File "experiments/relation_rcnn/../../relation_rcnn/train_end2end.py", line 177, in train_net
    arg_params=arg_params, aux_params=aux_params, begin_epoch=begin_epoch, num_epoch=end_epoch)
  File "experiments/relation_rcnn/../../relation_rcnn/core/module.py", line 1001, in fit
    print ('output shape {}'.format(self.output_shapes))
  File "experiments/relation_rcnn/../../relation_rcnn/core/module.py", line 801, in output_shapes
    return self._curr_module.output_shapes
  File "experiments/relation_rcnn/../../relation_rcnn/core/module.py", line 223, in output_shapes
    return self._exec_group.get_output_shapes()
AttributeError: 'DataParallelExecutorGroup' object has no attribute 'get_output_shapes'

Any advice?

one problem in the network training

When I'm training on COCO as the README describes, I meet the problem shown in the bold part of the log below, and then NMSLoss_pos and NMSLoss_neg become nan. Has anyone met the same problem and can offer some help?


('lr', 0.0005, 'lr_epoch_diff', [5.33], 'lr_iters', [625027])
Epoch[0] Batch [100] Speed: 5.08 samples/sec Train-RPNAcc=0.847250, RPNLogLoss=0.376764, RPNL1Loss=0.187504, RCNNAcc=0.801361, RCNNLogLoss=1.674762, RCNNL1Loss=0.311297, NMSLoss_pos=0.035744, NMSLoss_neg=0.016391, NMSAcc_pos=0.000000, NMSAcc_neg=1.000000,
Epoch[0] Batch [200] Speed: 5.10 samples/sec Train-RPNAcc=0.865089, RPNLogLoss=0.328289, RPNL1Loss=0.176516, RCNNAcc=0.811237, RCNNLogLoss=1.380794, RCNNL1Loss=0.316205, NMSLoss_pos=0.048681, NMSLoss_neg=0.013534, NMSAcc_pos=0.000000, NMSAcc_neg=1.000000,
Epoch[0] Batch [300] Speed: 5.11 samples/sec Train-RPNAcc=0.874916, RPNLogLoss=0.302038, RPNL1Loss=0.159570, RCNNAcc=0.802546, RCNNLogLoss=1.319950, RCNNL1Loss=0.352934, NMSLoss_pos=0.057433, NMSLoss_neg=0.013499, NMSAcc_pos=0.000000, NMSAcc_neg=1.000000,
experiments/relation_rcnn/../../relation_rcnn/../lib/bbox/bbox_transform.py:128: RuntimeWarning: overflow encountered in exp
pred_w = np.exp(dw) * widths[:, np.newaxis]
experiments/relation_rcnn/../../relation_rcnn/../lib/bbox/bbox_transform.py:129: RuntimeWarning: overflow encountered in exp
pred_h = np.exp(dh) * heights[:, np.newaxis]
experiments/relation_rcnn/../../relation_rcnn/../lib/bbox/bbox_transform.py:133: RuntimeWarning: invalid value encountered in subtract
pred_boxes[:, 0::4] = pred_ctr_x - 0.5 * (pred_w - 1.0)
experiments/relation_rcnn/../../relation_rcnn/../lib/bbox/bbox_transform.py:135: RuntimeWarning: invalid value encountered in subtract
pred_boxes[:, 1::4] = pred_ctr_y - 0.5 * (pred_h - 1.0)
experiments/relation_rcnn/../../relation_rcnn/../lib/bbox/bbox_transform.py:137: RuntimeWarning: invalid value encountered in add
pred_boxes[:, 2::4] = pred_ctr_x + 0.5 * (pred_w - 1.0)
experiments/relation_rcnn/../../relation_rcnn/../lib/bbox/bbox_transform.py:139: RuntimeWarning: invalid value encountered in add
pred_boxes[:, 3::4] = pred_ctr_y + 0.5 * (pred_h - 1.0)
experiments/relation_rcnn/../../relation_rcnn/operator_py/proposal.py:180: RuntimeWarning: invalid value encountered in greater_equal
keep = np.where((ws >= min_size) & (hs >= min_size))[0]

Epoch[0] Batch [400] Speed: 5.02 samples/sec Train-RPNAcc=0.871289, RPNLogLoss=nan, RPNL1Loss=nan, RCNNAcc=0.810123, RCNNLogLoss=1.576645, RCNNL1Loss=0.334166, NMSLoss_pos=0.054120, NMSLoss_neg=nan, NMSAcc_pos=0.000000, NMSAcc_neg=0.999650,
Epoch[0] Batch [500] Speed: 4.91 samples/sec Train-RPNAcc=0.859804, RPNLogLoss=nan, RPNL1Loss=nan, RCNNAcc=0.836702, RCNNLogLoss=1.888214, RCNNL1Loss=0.267614, NMSLoss_pos=nan, NMSLoss_neg=nan, NMSAcc_pos=0.000000, NMSAcc_neg=0.999720,
Epoch[0] Batch [600] Speed: 4.99 samples/sec Train-RPNAcc=0.850682, RPNLogLoss=nan, RPNL1Loss=nan, RCNNAcc=0.853031, RCNNLogLoss=1.725999, RCNNL1Loss=0.223882, NMSLoss_pos=nan, NMSLoss_neg=nan, NMSAcc_pos=0.000000, NMSAcc_neg=0.999767,
Epoch[0] Batch [700] Speed: 4.98 samples/sec Train-RPNAcc=0.844466, RPNLogLoss=nan, RPNL1Loss=nan, RCNNAcc=0.865544, RCNNLogLoss=1.547918, RCNNL1Loss=0.192278, NMSLoss_pos=nan, NMSLoss_neg=nan, NMSAcc_pos=0.000000, NMSAcc_neg=0.999800,

does it work based on vgg16?

I reran the code based on VGG-16 and found that the mAP is the same as Faster R-CNN's. Does the relation module work based on VGG-16? Thank you!

mAP Issue

There are several mAP metrics in the paper, for example mAP@0.5, mAP@0.75, and mAP@S; does anyone know what they mean? Is it mean average precision? If so, isn't it too low for a detection result? The mAP for Faster RCNN is more than 70%, but the best result in this paper is just around 50%. So I think it has a different meaning. Many thanks.

Using rcnn_end2end_train_test.py to train my own dataset

2018-06-26 10:15:50,538 {'bbox_pred_bias': (8L,),
'bbox_pred_weight': (8L, 1024L),
'bbox_target': (1L, 48L, 38L, 38L),
'bbox_weight': (1L, 48L, 38L, 38L),
'bn2a_branch1_beta': (256L,),
'bn2a_branch1_gamma': (256L,),
'bn2a_branch2a_beta': (64L,),
'bn2a_branch2a_gamma': (64L,),
'bn2a_branch2b_beta': (64L,),
'bn2a_branch2b_gamma': (64L,),
'bn2a_branch2c_beta': (256L,),
'bn2a_branch2c_gamma': (256L,),
'bn2b_branch2a_beta': (64L,),
'bn2b_branch2a_gamma': (64L,),
'bn2b_branch2b_beta': (64L,),
'bn2b_branch2b_gamma': (64L,),
'bn2b_branch2c_beta': (256L,),
'bn2b_branch2c_gamma': (256L,),
'bn2c_branch2a_beta': (64L,),
'bn2c_branch2a_gamma': (64L,),
'bn2c_branch2b_beta': (64L,),
'bn2c_branch2b_gamma': (64L,),
'bn2c_branch2c_beta': (256L,),
'bn2c_branch2c_gamma': (256L,),
'bn3a_branch1_beta': (512L,),
'bn3a_branch1_gamma': (512L,),
'bn3a_branch2a_beta': (128L,),
'bn3a_branch2a_gamma': (128L,),
'bn3a_branch2b_beta': (128L,),
'bn3a_branch2b_gamma': (128L,),
'bn3a_branch2c_beta': (512L,),
'bn3a_branch2c_gamma': (512L,),
'bn3b1_branch2a_beta': (128L,),
'bn3b1_branch2a_gamma': (128L,),
'bn3b1_branch2b_beta': (128L,),
'bn3b1_branch2b_gamma': (128L,),
'bn3b1_branch2c_beta': (512L,),
'bn3b1_branch2c_gamma': (512L,),
'bn3b2_branch2a_beta': (128L,),
'bn3b2_branch2a_gamma': (128L,),
'bn3b2_branch2b_beta': (128L,),
'bn3b2_branch2b_gamma': (128L,),
'bn3b2_branch2c_beta': (512L,),
'bn3b2_branch2c_gamma': (512L,),
'bn3b3_branch2a_beta': (128L,),
'bn3b3_branch2a_gamma': (128L,),
'bn3b3_branch2b_beta': (128L,),
'bn3b3_branch2b_gamma': (128L,),
'bn3b3_branch2c_beta': (512L,),
'bn3b3_branch2c_gamma': (512L,),
'bn4a_branch1_beta': (1024L,),
'bn4a_branch1_gamma': (1024L,),
'bn4a_branch2a_beta': (256L,),
'bn4a_branch2a_gamma': (256L,),
'bn4a_branch2b_beta': (256L,),
'bn4a_branch2b_gamma': (256L,),
'bn4a_branch2c_beta': (1024L,),
'bn4a_branch2c_gamma': (1024L,),
'bn4b10_branch2a_beta': (256L,),
'bn4b10_branch2a_gamma': (256L,),
'bn4b10_branch2b_beta': (256L,),
'bn4b10_branch2b_gamma': (256L,),
'bn4b10_branch2c_beta': (1024L,),
'bn4b10_branch2c_gamma': (1024L,),
'bn4b11_branch2a_beta': (256L,),
'bn4b11_branch2a_gamma': (256L,),
'bn4b11_branch2b_beta': (256L,),
'bn4b11_branch2b_gamma': (256L,),
'bn4b11_branch2c_beta': (1024L,),
'bn4b11_branch2c_gamma': (1024L,),
'bn4b12_branch2a_beta': (256L,),
'bn4b12_branch2a_gamma': (256L,),
'bn4b12_branch2b_beta': (256L,),
'bn4b12_branch2b_gamma': (256L,),
'bn4b12_branch2c_beta': (1024L,),
'bn4b12_branch2c_gamma': (1024L,),
'bn4b13_branch2a_beta': (256L,),
'bn4b13_branch2a_gamma': (256L,),
'bn4b13_branch2b_beta': (256L,),
'bn4b13_branch2b_gamma': (256L,),
'bn4b13_branch2c_beta': (1024L,),
'bn4b13_branch2c_gamma': (1024L,),
'bn4b14_branch2a_beta': (256L,),
'bn4b14_branch2a_gamma': (256L,),
'bn4b14_branch2b_beta': (256L,),
'bn4b14_branch2b_gamma': (256L,),
'bn4b14_branch2c_beta': (1024L,),
'bn4b14_branch2c_gamma': (1024L,),
'bn4b15_branch2a_beta': (256L,),
'bn4b15_branch2a_gamma': (256L,),
'bn4b15_branch2b_beta': (256L,),
'bn4b15_branch2b_gamma': (256L,),
'bn4b15_branch2c_beta': (1024L,),
'bn4b15_branch2c_gamma': (1024L,),
'bn4b16_branch2a_beta': (256L,),
'bn4b16_branch2a_gamma': (256L,),
'bn4b16_branch2b_beta': (256L,),
'bn4b16_branch2b_gamma': (256L,),
'bn4b16_branch2c_beta': (1024L,),
'bn4b16_branch2c_gamma': (1024L,),
'bn4b17_branch2a_beta': (256L,),
'bn4b17_branch2a_gamma': (256L,),
'bn4b17_branch2b_beta': (256L,),
'bn4b17_branch2b_gamma': (256L,),
'bn4b17_branch2c_beta': (1024L,),
'bn4b17_branch2c_gamma': (1024L,),
'bn4b18_branch2a_beta': (256L,),
'bn4b18_branch2a_gamma': (256L,),
'bn4b18_branch2b_beta': (256L,),
'bn4b18_branch2b_gamma': (256L,),
'bn4b18_branch2c_beta': (1024L,),
'bn4b18_branch2c_gamma': (1024L,),
'bn4b19_branch2a_beta': (256L,),
'bn4b19_branch2a_gamma': (256L,),
'bn4b19_branch2b_beta': (256L,),
'bn4b19_branch2b_gamma': (256L,),
'bn4b19_branch2c_beta': (1024L,),
'bn4b19_branch2c_gamma': (1024L,),
'bn4b1_branch2a_beta': (256L,),
'bn4b1_branch2a_gamma': (256L,),
'bn4b1_branch2b_beta': (256L,),
'bn4b1_branch2b_gamma': (256L,),
'bn4b1_branch2c_beta': (1024L,),
'bn4b1_branch2c_gamma': (1024L,),
'bn4b20_branch2a_beta': (256L,),
'bn4b20_branch2a_gamma': (256L,),
'bn4b20_branch2b_beta': (256L,),
'bn4b20_branch2b_gamma': (256L,),
'bn4b20_branch2c_beta': (1024L,),
'bn4b20_branch2c_gamma': (1024L,),
'bn4b21_branch2a_beta': (256L,),
'bn4b21_branch2a_gamma': (256L,),
'bn4b21_branch2b_beta': (256L,),
'bn4b21_branch2b_gamma': (256L,),
'bn4b21_branch2c_beta': (1024L,),
'bn4b21_branch2c_gamma': (1024L,),
'bn4b22_branch2a_beta': (256L,),
'bn4b22_branch2a_gamma': (256L,),
'bn4b22_branch2b_beta': (256L,),
'bn4b22_branch2b_gamma': (256L,),
'bn4b22_branch2c_beta': (1024L,),
'bn4b22_branch2c_gamma': (1024L,),
'bn4b2_branch2a_beta': (256L,),
'bn4b2_branch2a_gamma': (256L,),
'bn4b2_branch2b_beta': (256L,),
'bn4b2_branch2b_gamma': (256L,),
'bn4b2_branch2c_beta': (1024L,),
'bn4b2_branch2c_gamma': (1024L,),
'bn4b3_branch2a_beta': (256L,),
'bn4b3_branch2a_gamma': (256L,),
'bn4b3_branch2b_beta': (256L,),
'bn4b3_branch2b_gamma': (256L,),
'bn4b3_branch2c_beta': (1024L,),
'bn4b3_branch2c_gamma': (1024L,),
'bn4b4_branch2a_beta': (256L,),
'bn4b4_branch2a_gamma': (256L,),
'bn4b4_branch2b_beta': (256L,),
'bn4b4_branch2b_gamma': (256L,),
'bn4b4_branch2c_beta': (1024L,),
'bn4b4_branch2c_gamma': (1024L,),
'bn4b5_branch2a_beta': (256L,),
'bn4b5_branch2a_gamma': (256L,),
'bn4b5_branch2b_beta': (256L,),
'bn4b5_branch2b_gamma': (256L,),
'bn4b5_branch2c_beta': (1024L,),
'bn4b5_branch2c_gamma': (1024L,),
'bn4b6_branch2a_beta': (256L,),
'bn4b6_branch2a_gamma': (256L,),
'bn4b6_branch2b_beta': (256L,),
'bn4b6_branch2b_gamma': (256L,),
'bn4b6_branch2c_beta': (1024L,),
'bn4b6_branch2c_gamma': (1024L,),
'bn4b7_branch2a_beta': (256L,),
'bn4b7_branch2a_gamma': (256L,),
'bn4b7_branch2b_beta': (256L,),
'bn4b7_branch2b_gamma': (256L,),
'bn4b7_branch2c_beta': (1024L,),
'bn4b7_branch2c_gamma': (1024L,),
'bn4b8_branch2a_beta': (256L,),
'bn4b8_branch2a_gamma': (256L,),
'bn4b8_branch2b_beta': (256L,),
'bn4b8_branch2b_gamma': (256L,),
'bn4b8_branch2c_beta': (1024L,),
'bn4b8_branch2c_gamma': (1024L,),
'bn4b9_branch2a_beta': (256L,),
'bn4b9_branch2a_gamma': (256L,),
'bn4b9_branch2b_beta': (256L,),
'bn4b9_branch2b_gamma': (256L,),
'bn4b9_branch2c_beta': (1024L,),
'bn4b9_branch2c_gamma': (1024L,),
'bn5a_branch1_beta': (2048L,),
'bn5a_branch1_gamma': (2048L,),
'bn5a_branch2a_beta': (512L,),
'bn5a_branch2a_gamma': (512L,),
'bn5a_branch2b_beta': (512L,),
'bn5a_branch2b_gamma': (512L,),
'bn5a_branch2c_beta': (2048L,),
'bn5a_branch2c_gamma': (2048L,),
'bn5b_branch2a_beta': (512L,),
'bn5b_branch2a_gamma': (512L,),
'bn5b_branch2b_beta': (512L,),
'bn5b_branch2b_gamma': (512L,),
'bn5b_branch2c_beta': (2048L,),
'bn5b_branch2c_gamma': (2048L,),
'bn5c_branch2a_beta': (512L,),
'bn5c_branch2a_gamma': (512L,),
'bn5c_branch2b_beta': (512L,),
'bn5c_branch2b_gamma': (512L,),
'bn5c_branch2c_beta': (2048L,),
'bn5c_branch2c_gamma': (2048L,),
'bn_conv1_beta': (64L,),
'bn_conv1_gamma': (64L,),
'cls_score_bias': (11L,),
'cls_score_weight': (11L, 1024L),
'conv1_weight': (64L, 3L, 7L, 7L),
'conv_new_1_bias': (256L,),
'conv_new_1_weight': (256L, 2048L, 1L, 1L),
'data': (1L, 3L, 600L, 600L),
'fc_new_1_bias': (1024L,),
'fc_new_1_weight': (1024L, 12544L),
'fc_new_2_bias': (1024L,),
'fc_new_2_weight': (1024L, 1024L),
'gt_boxes': (1L, 19L, 5L),
'im_info': (1L, 3L),
'key_1_bias': (1024L,),
'key_1_weight': (1024L, 1024L),
'key_2_bias': (1024L,),
'key_2_weight': (1024L, 1024L),
'label': (1L, 17328L),
'linear_out_1_bias': (1024L,),
'linear_out_1_weight': (1024L, 1024L, 1L, 1L),
'linear_out_2_bias': (1024L,),
'linear_out_2_weight': (1024L, 1024L, 1L, 1L),
'nms_key_1_bias': (1024L,),
'nms_key_1_weight': (1024L, 128L),
'nms_linear_out_1_bias': (128L,),
'nms_linear_out_1_weight': (128L, 128L, 1L, 1L),
'nms_logit_bias': (5L,),
'nms_logit_weight': (5L, 128L),
'nms_pair_pos_fc1_1_bias': (16L,),
'nms_pair_pos_fc1_1_weight': (16L, 64L),
'nms_query_1_bias': (1024L,),
'nms_query_1_weight': (1024L, 128L),
'nms_rank_bias': (128L,),
'nms_rank_weight': (128L, 1024L),
'pair_pos_fc1_1_bias': (16L,),
'pair_pos_fc1_1_weight': (16L, 64L),
'pair_pos_fc1_2_bias': (16L,),
'pair_pos_fc1_2_weight': (16L, 64L),
'query_1_bias': (1024L,),
'query_1_weight': (1024L, 1024L),
'query_2_bias': (1024L,),
'query_2_weight': (1024L, 1024L),
'res2a_branch1_weight': (256L, 64L, 1L, 1L),
'res2a_branch2a_weight': (64L, 64L, 1L, 1L),
'res2a_branch2b_weight': (64L, 64L, 3L, 3L),
'res2a_branch2c_weight': (256L, 64L, 1L, 1L),
'res2b_branch2a_weight': (64L, 256L, 1L, 1L),
'res2b_branch2b_weight': (64L, 64L, 3L, 3L),
'res2b_branch2c_weight': (256L, 64L, 1L, 1L),
'res2c_branch2a_weight': (64L, 256L, 1L, 1L),
'res2c_branch2b_weight': (64L, 64L, 3L, 3L),
'res2c_branch2c_weight': (256L, 64L, 1L, 1L),
'res3a_branch1_weight': (512L, 256L, 1L, 1L),
'res3a_branch2a_weight': (128L, 256L, 1L, 1L),
'res3a_branch2b_weight': (128L, 128L, 3L, 3L),
'res3a_branch2c_weight': (512L, 128L, 1L, 1L),
'res3b1_branch2a_weight': (128L, 512L, 1L, 1L),
'res3b1_branch2b_weight': (128L, 128L, 3L, 3L),
'res3b1_branch2c_weight': (512L, 128L, 1L, 1L),
'res3b2_branch2a_weight': (128L, 512L, 1L, 1L),
'res3b2_branch2b_weight': (128L, 128L, 3L, 3L),
'res3b2_branch2c_weight': (512L, 128L, 1L, 1L),
'res3b3_branch2a_weight': (128L, 512L, 1L, 1L),
'res3b3_branch2b_weight': (128L, 128L, 3L, 3L),
'res3b3_branch2c_weight': (512L, 128L, 1L, 1L),
'res4a_branch1_weight': (1024L, 512L, 1L, 1L),
'res4a_branch2a_weight': (256L, 512L, 1L, 1L),
'res4a_branch2b_weight': (256L, 256L, 3L, 3L),
'res4a_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b10_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b10_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b10_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b11_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b11_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b11_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b12_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b12_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b12_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b13_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b13_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b13_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b14_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b14_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b14_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b15_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b15_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b15_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b16_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b16_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b16_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b17_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b17_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b17_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b18_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b18_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b18_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b19_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b19_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b19_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b1_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b1_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b1_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b20_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b20_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b20_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b21_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b21_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b21_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b22_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b22_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b22_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b2_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b2_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b2_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b3_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b3_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b3_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b4_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b4_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b4_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b5_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b5_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b5_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b6_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b6_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b6_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b7_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b7_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b7_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b8_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b8_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b8_branch2c_weight': (1024L, 256L, 1L, 1L),
'res4b9_branch2a_weight': (256L, 1024L, 1L, 1L),
'res4b9_branch2b_weight': (256L, 256L, 3L, 3L),
'res4b9_branch2c_weight': (1024L, 256L, 1L, 1L),
'res5a_branch1_weight': (2048L, 1024L, 1L, 1L),
'res5a_branch2a_weight': (512L, 1024L, 1L, 1L),
'res5a_branch2b_weight': (512L, 512L, 3L, 3L),
'res5a_branch2c_weight': (2048L, 512L, 1L, 1L),
'res5b_branch2a_weight': (512L, 2048L, 1L, 1L),
'res5b_branch2b_weight': (512L, 512L, 3L, 3L),
'res5b_branch2c_weight': (2048L, 512L, 1L, 1L),
'res5c_branch2a_weight': (512L, 2048L, 1L, 1L),
'res5c_branch2b_weight': (512L, 512L, 3L, 3L),
'res5c_branch2c_weight': (2048L, 512L, 1L, 1L),
'roi_feat_embedding_bias': (128L,),
'roi_feat_embedding_weight': (128L, 1024L),
'rpn_bbox_pred_bias': (48L,),
'rpn_bbox_pred_weight': (48L, 512L, 1L, 1L),
'rpn_cls_score_bias': (24L,),
'rpn_cls_score_weight': (24L, 512L, 1L, 1L),
'rpn_conv_3x3_bias': (512L,),
'rpn_conv_3x3_weight': (512L, 1024L, 3L, 3L)}
2018-06-26 10:15:51,964 conv1_weight is fixed.
2018-06-26 10:15:51,965 bn_conv1_gamma is fixed.
2018-06-26 10:15:51,966 bn_conv1_beta is fixed.
2018-06-26 10:15:51,966 res2a_branch1_weight is fixed.
2018-06-26 10:15:51,966 bn2a_branch1_gamma is fixed.
2018-06-26 10:15:51,966 bn2a_branch1_beta is fixed.
2018-06-26 10:15:51,967 res2a_branch2a_weight is fixed.
2018-06-26 10:15:51,967 bn2a_branch2a_gamma is fixed.
2018-06-26 10:15:51,967 bn2a_branch2a_beta is fixed.
2018-06-26 10:15:51,967 res2a_branch2b_weight is fixed.
2018-06-26 10:15:51,967 bn2a_branch2b_gamma is fixed.
2018-06-26 10:15:51,967 bn2a_branch2b_beta is fixed.
2018-06-26 10:15:51,967 res2a_branch2c_weight is fixed.
2018-06-26 10:15:51,967 bn2a_branch2c_gamma is fixed.
2018-06-26 10:15:51,967 bn2a_branch2c_beta is fixed.
2018-06-26 10:15:51,968 res2b_branch2a_weight is fixed.
2018-06-26 10:15:51,968 bn2b_branch2a_gamma is fixed.
2018-06-26 10:15:51,968 bn2b_branch2a_beta is fixed.
2018-06-26 10:15:51,968 res2b_branch2b_weight is fixed.
2018-06-26 10:15:51,968 bn2b_branch2b_gamma is fixed.
2018-06-26 10:15:51,968 bn2b_branch2b_beta is fixed.
2018-06-26 10:15:51,968 res2b_branch2c_weight is fixed.
2018-06-26 10:15:51,968 bn2b_branch2c_gamma is fixed.
2018-06-26 10:15:51,969 bn2b_branch2c_beta is fixed.
2018-06-26 10:15:51,969 res2c_branch2a_weight is fixed.
2018-06-26 10:15:51,969 bn2c_branch2a_gamma is fixed.
2018-06-26 10:15:51,969 bn2c_branch2a_beta is fixed.
2018-06-26 10:15:51,969 res2c_branch2b_weight is fixed.
2018-06-26 10:15:51,969 bn2c_branch2b_gamma is fixed.
2018-06-26 10:15:51,969 bn2c_branch2b_beta is fixed.
2018-06-26 10:15:51,969 res2c_branch2c_weight is fixed.
2018-06-26 10:15:51,969 bn2c_branch2c_gamma is fixed.
2018-06-26 10:15:51,970 bn2c_branch2c_beta is fixed.
2018-06-26 10:15:51,970 bn3a_branch1_gamma is fixed.
2018-06-26 10:15:51,970 bn3a_branch1_beta is fixed.
2018-06-26 10:15:51,970 bn3a_branch2a_gamma is fixed.
2018-06-26 10:15:51,970 bn3a_branch2a_beta is fixed.
2018-06-26 10:15:51,970 bn3a_branch2b_gamma is fixed.
2018-06-26 10:15:51,970 bn3a_branch2b_beta is fixed.
2018-06-26 10:15:51,970 bn3a_branch2c_gamma is fixed.
2018-06-26 10:15:51,970 bn3a_branch2c_beta is fixed.
2018-06-26 10:15:51,970 bn3b1_branch2a_gamma is fixed.
2018-06-26 10:15:51,970 bn3b1_branch2a_beta is fixed.
2018-06-26 10:15:51,970 bn3b1_branch2b_gamma is fixed.
2018-06-26 10:15:51,970 bn3b1_branch2b_beta is fixed.
2018-06-26 10:15:51,970 bn3b1_branch2c_gamma is fixed.
2018-06-26 10:15:51,970 bn3b1_branch2c_beta is fixed.
2018-06-26 10:15:51,970 bn3b2_branch2a_gamma is fixed.
2018-06-26 10:15:51,970 bn3b2_branch2a_beta is fixed.
2018-06-26 10:15:51,970 bn3b2_branch2b_gamma is fixed.
2018-06-26 10:15:51,971 bn3b2_branch2b_beta is fixed.
2018-06-26 10:15:51,971 bn3b2_branch2c_gamma is fixed.
2018-06-26 10:15:51,971 bn3b2_branch2c_beta is fixed.
2018-06-26 10:15:51,971 bn3b3_branch2a_gamma is fixed.
2018-06-26 10:15:51,971 bn3b3_branch2a_beta is fixed.
2018-06-26 10:15:51,971 bn3b3_branch2b_gamma is fixed.
2018-06-26 10:15:51,971 bn3b3_branch2b_beta is fixed.
2018-06-26 10:15:51,971 bn3b3_branch2c_gamma is fixed.
2018-06-26 10:15:51,971 bn3b3_branch2c_beta is fixed.
2018-06-26 10:15:51,971 bn4a_branch1_gamma is fixed.
2018-06-26 10:15:51,971 bn4a_branch1_beta is fixed.
2018-06-26 10:15:51,971 bn4a_branch2a_gamma is fixed.
2018-06-26 10:15:51,971 bn4a_branch2a_beta is fixed.
2018-06-26 10:15:51,971 bn4a_branch2b_gamma is fixed.
2018-06-26 10:15:51,972 bn4a_branch2b_beta is fixed.
2018-06-26 10:15:51,972 bn4a_branch2c_gamma is fixed.
2018-06-26 10:15:51,972 bn4a_branch2c_beta is fixed.
2018-06-26 10:15:51,972 bn4b1_branch2a_gamma is fixed.
2018-06-26 10:15:51,972 bn4b1_branch2a_beta is fixed.
2018-06-26 10:15:51,972 bn4b1_branch2b_gamma is fixed.
2018-06-26 10:15:51,972 bn4b1_branch2b_beta is fixed.
2018-06-26 10:15:51,972 bn4b1_branch2c_gamma is fixed.
2018-06-26 10:15:51,972 bn4b1_branch2c_beta is fixed.
2018-06-26 10:15:51,972 bn4b2_branch2a_gamma is fixed.
2018-06-26 10:15:51,972 bn4b2_branch2a_beta is fixed.
2018-06-26 10:15:51,972 bn4b2_branch2b_gamma is fixed.
2018-06-26 10:15:51,972 bn4b2_branch2b_beta is fixed.
2018-06-26 10:15:51,972 bn4b2_branch2c_gamma is fixed.
2018-06-26 10:15:51,973 bn4b2_branch2c_beta is fixed.
2018-06-26 10:15:51,973 bn4b3_branch2a_gamma is fixed.
2018-06-26 10:15:51,973 bn4b3_branch2a_beta is fixed.
2018-06-26 10:15:51,973 bn4b3_branch2b_gamma is fixed.
2018-06-26 10:15:51,973 bn4b3_branch2b_beta is fixed.
2018-06-26 10:15:51,973 bn4b3_branch2c_gamma is fixed.
2018-06-26 10:15:51,973 bn4b3_branch2c_beta is fixed.
2018-06-26 10:15:51,973 bn4b4_branch2a_gamma is fixed.
2018-06-26 10:15:51,973 bn4b4_branch2a_beta is fixed.
2018-06-26 10:15:51,973 bn4b4_branch2b_gamma is fixed.
2018-06-26 10:15:51,973 bn4b4_branch2b_beta is fixed.
2018-06-26 10:15:51,973 bn4b4_branch2c_gamma is fixed.
2018-06-26 10:15:51,975 bn4b4_branch2c_beta is fixed.
2018-06-26 10:15:51,976 bn4b5_branch2a_gamma is fixed.
2018-06-26 10:15:51,976 bn4b5_branch2a_beta is fixed.
2018-06-26 10:15:51,976 bn4b5_branch2b_gamma is fixed.
2018-06-26 10:15:51,976 bn4b5_branch2b_beta is fixed.
2018-06-26 10:15:51,976 bn4b5_branch2c_gamma is fixed.
2018-06-26 10:15:51,978 bn4b5_branch2c_beta is fixed.
2018-06-26 10:15:51,979 bn4b6_branch2a_gamma is fixed.
2018-06-26 10:15:51,979 bn4b6_branch2a_beta is fixed.
2018-06-26 10:15:51,979 bn4b6_branch2b_gamma is fixed.
2018-06-26 10:15:51,979 bn4b6_branch2b_beta is fixed.
2018-06-26 10:15:51,979 bn4b6_branch2c_gamma is fixed.
2018-06-26 10:15:51,979 bn4b6_branch2c_beta is fixed.
2018-06-26 10:15:51,979 bn4b7_branch2a_gamma is fixed.
2018-06-26 10:15:51,980 bn4b7_branch2a_beta is fixed.
2018-06-26 10:15:51,980 bn4b7_branch2b_gamma is fixed.
2018-06-26 10:15:51,980 bn4b7_branch2b_beta is fixed.
2018-06-26 10:15:51,980 bn4b7_branch2c_gamma is fixed.
2018-06-26 10:15:51,980 bn4b7_branch2c_beta is fixed.
2018-06-26 10:15:51,980 bn4b8_branch2a_gamma is fixed.
2018-06-26 10:15:51,981 bn4b8_branch2a_beta is fixed.
2018-06-26 10:15:51,981 bn4b8_branch2b_gamma is fixed.
2018-06-26 10:15:51,981 bn4b8_branch2b_beta is fixed.
2018-06-26 10:15:51,981 bn4b8_branch2c_gamma is fixed.
2018-06-26 10:15:51,981 bn4b8_branch2c_beta is fixed.
2018-06-26 10:15:51,981 bn4b9_branch2a_gamma is fixed.
2018-06-26 10:15:51,981 bn4b9_branch2a_beta is fixed.
2018-06-26 10:15:51,982 bn4b9_branch2b_gamma is fixed.
2018-06-26 10:15:51,982 bn4b9_branch2b_beta is fixed.
2018-06-26 10:15:51,982 bn4b9_branch2c_gamma is fixed.
2018-06-26 10:15:51,982 bn4b9_branch2c_beta is fixed.
2018-06-26 10:15:51,982 bn4b10_branch2a_gamma is fixed.
2018-06-26 10:15:51,982 bn4b10_branch2a_beta is fixed.
2018-06-26 10:15:51,982 bn4b10_branch2b_gamma is fixed.
2018-06-26 10:15:51,982 bn4b10_branch2b_beta is fixed.
2018-06-26 10:15:51,983 bn4b10_branch2c_gamma is fixed.
2018-06-26 10:15:51,983 bn4b10_branch2c_beta is fixed.
2018-06-26 10:15:51,983 bn4b11_branch2a_gamma is fixed.
2018-06-26 10:15:51,983 bn4b11_branch2a_beta is fixed.
2018-06-26 10:15:51,983 bn4b11_branch2b_gamma is fixed.
2018-06-26 10:15:51,983 bn4b11_branch2b_beta is fixed.
2018-06-26 10:15:51,983 bn4b11_branch2c_gamma is fixed.
2018-06-26 10:15:51,983 bn4b11_branch2c_beta is fixed.
2018-06-26 10:15:51,983 bn4b12_branch2a_gamma is fixed.
2018-06-26 10:15:51,984 bn4b12_branch2a_beta is fixed.
2018-06-26 10:15:51,984 bn4b12_branch2b_gamma is fixed.
2018-06-26 10:15:51,984 bn4b12_branch2b_beta is fixed.
2018-06-26 10:15:51,984 bn4b12_branch2c_gamma is fixed.
2018-06-26 10:15:51,984 bn4b12_branch2c_beta is fixed.
2018-06-26 10:15:51,986 bn4b13_branch2a_gamma is fixed.
2018-06-26 10:15:51,987 bn4b13_branch2a_beta is fixed.
2018-06-26 10:15:51,987 bn4b13_branch2b_gamma is fixed.
2018-06-26 10:15:51,987 bn4b13_branch2b_beta is fixed.
2018-06-26 10:15:51,987 bn4b13_branch2c_gamma is fixed.
2018-06-26 10:15:51,987 bn4b13_branch2c_beta is fixed.
2018-06-26 10:15:51,987 bn4b14_branch2a_gamma is fixed.
2018-06-26 10:15:51,988 bn4b14_branch2a_beta is fixed.
2018-06-26 10:15:51,988 bn4b14_branch2b_gamma is fixed.
2018-06-26 10:15:51,988 bn4b14_branch2b_beta is fixed.
2018-06-26 10:15:51,988 bn4b14_branch2c_gamma is fixed.
2018-06-26 10:15:51,988 bn4b14_branch2c_beta is fixed.
2018-06-26 10:15:51,988 bn4b15_branch2a_gamma is fixed.
2018-06-26 10:15:51,988 bn4b15_branch2a_beta is fixed.
2018-06-26 10:15:51,988 bn4b15_branch2b_gamma is fixed.
2018-06-26 10:15:51,989 bn4b15_branch2b_beta is fixed.
2018-06-26 10:15:51,989 bn4b15_branch2c_gamma is fixed.
2018-06-26 10:15:51,989 bn4b15_branch2c_beta is fixed.
2018-06-26 10:15:51,989 bn4b16_branch2a_gamma is fixed.
2018-06-26 10:15:51,989 bn4b16_branch2a_beta is fixed.
2018-06-26 10:15:51,997 bn4b16_branch2b_gamma is fixed.
2018-06-26 10:15:51,998 bn4b16_branch2b_beta is fixed.
[... the remaining batch-norm gamma/beta parameters of stage 4 (bn4b16-bn4b22) and stage 5 (bn5a-bn5c) are likewise reported as fixed ...]
2018-06-26 10:15:52,002 bn5c_branch2c_beta is fixed.
2018-06-26 10:15:52,002 data is not fixed.
[... all res3-res5 convolution weights, the RPN parameters (rpn_conv_3x3, rpn_cls_score, rpn_bbox_pred), the data/label entries (label, bbox_weight, bbox_target, im_info, gt_boxes), the detection-head parameters (conv_new_1, fc_new_1/2, cls_score, bbox_pred), the relation-module parameters (pair_pos_fc1_1/2, query_1/2, key_1/2, linear_out_1/2, roi_feat_embedding) and the learn-NMS parameters (nms_rank, nms_pair_pos_fc1_1, nms_query_1, nms_key_1, nms_linear_out_1, nms_logit) are likewise reported as not fixed ...]
2018-06-26 10:15:52,021 nms_logit_bias is not fixed.

When I train the model on my own dataset by running python experiments/relation_rcnn/rcnn_end2end_train_test.py --cfg experiments/relation_rcnn/cfgs/resnet_v1_101_coco_trainvalminus_rcnn_end2end_relation_learn_nms_8epoch.yaml --ignore_cache in the terminal, I run into the problems below. Can you tell me the reason? Thanks in advance.

Compile MXNet

compilation terminated.
Makefile:393: recipe for target 'build/src/operator/contrib/multibox_detection.o' failed
make: *** [build/src/operator/contrib/multibox_detection.o] Error 1
In file included from /home/fsr/Relation-Networks-for-Object-Detection-master/incubator-mxnet/mshadow/mshadow/tensor.h:16:0,
from include/mxnet/./base.h:32,
from include/mxnet/operator.h:38,
from src/operator/contrib/./deformable_psroi_pooling-inl.h:32,
from src/operator/contrib/deformable_psroi_pooling.cc:27:
/home/fsr/Relation-Networks-for-Object-Detection-master/incubator-mxnet/mshadow/mshadow/./base.h:147:23: fatal error: cblas.h: No such file or directory
compilation terminated.
Makefile:393: recipe for target 'build/src/operator/contrib/deformable_psroi_pooling.o' failed
make: *** [build/src/operator/contrib/deformable_psroi_pooling.o] Error 1
In file included from /home/fsr/Relation-Networks-for-Object-Detection-master/incubator-mxnet/mshadow/mshadow/tensor.h:16:0,
from include/mxnet/./base.h:32,
from include/mxnet/operator.h:38,
from src/operator/contrib/./psroi_pooling-inl.h:14,
from src/operator/contrib/psroi_pooling.cc:28:
/home/fsr/Relation-Networks-for-Object-Detection-master/incubator-mxnet/mshadow/mshadow/./base.h:147:23: fatal error: cblas.h: No such file or directory
compilation terminated.
Makefile:393: recipe for target 'build/src/operator/contrib/psroi_pooling.o' failed
make: *** [build/src/operator/contrib/psroi_pooling.o] Error 1
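
The two fatal errors above mean the compiler cannot find the CBLAS header at all, so this points to a missing build dependency rather than a problem in this repository's code. Assuming a Debian/Ubuntu system, cblas.h is typically provided by a BLAS development package such as libopenblas-dev (or libatlas-base-dev); after installing one, MXNet needs to be rebuilt with the matching flag (e.g. USE_BLAS=openblas passed to make). Package names differ on other distributions.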

Hello, I converted the scripts to Python 3. My MXNet is mxnet-cu101, version 1.6.0 for CUDA 10.1. But when I ran train_end2end.py in ./relation_rcnn, I ran into the following error after the config file is read:

'''
'symbol': 'resnet_v1_101_rcnn_dcn_attention_1024_pairwise_position_multi_head_16_learn_nms'}
loading annotations into memory...
Done (t=0.81s)
creating index...
index created!
num_images 7017
wrote gt roidb to ./cache/COCO_train_800x800_gt_roidb.pkl
filtered 0 roidb entries: 7017 -> 7017
[('data', (1, 3, 800, 800))]
[('data', (1, 3, 800, 800))]
[('label', (1, 30000)), ('bbox_target', (1, 48, 50, 50)), ('bbox_weight', (1, 48, 50, 50))]
providing maximum shape [('data', (1, 3, 800, 800)), ('gt_boxes', (1, 100, 5))] [('label', (1, 30000)), ('bbox_target', (1, 48, 50, 50)), ('bbox_weight', (1, 48, 50, 50))]
*********************Input Dictionary *********************
{'data': (1, 3, 800, 800), 'im_info': (1, 3), 'gt_boxes': (1, 13, 5), 'label': (1, 30000), 'bbox_target': (1, 48, 50, 50), 'bbox_weight': (1, 48, 50, 50)}
infer_shape error. Arguments:
data: (1, 3, 800, 800)
im_info: (1, 3)
gt_boxes: (1, 13, 5)
label: (1, 30000)
bbox_target: (1, 48, 50, 50)
bbox_weight: (1, 48, 50, 50)
Traceback (most recent call last):
File "rcnn_end2end_train_test.py", line 23, in
train_end2end.main()
File "../../relation_rcnn/train_end2end.py", line 188, in main
config.TRAIN.begin_epoch, config.TRAIN.end_epoch, config.TRAIN.lr, config.TRAIN.lr_step)
File "../../relation_rcnn/train_end2end.py", line 101, in train_net
sym_instance.infer_shape(data_shape_dict)
File "../../relation_rcnn/../lib/utils/symbol.py", line 39, in infer_shape
arg_shape, out_shape, aux_shape = self.sym.infer_shape(**data_shape_dict)
File "/mnt/keshav/relnet_python3/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1103, in infer_shape
res = self._infer_shape_impl(False, *args, **kwargs)
File "/mnt/keshav/relnet_python3/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1267, in _infer_shape_impl
ctypes.byref(complete)))
File "/mnt/keshav/relnet_python3/lib/python3.6/site-packages/mxnet/base.py", line 255, in check_call
raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: Error in operator _plus2: [06:13:23] src/operator/contrib/./../elemwise_op_common.h:135: Check failed: assign(&dattr, vec.at(i)): Incompatible attr in node _plus2 at 1-th input: expected [313,16,300], got [19,16,18]
Stack trace:
[bt] (0) /mnt/keshav/relnet_python3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x6b8b5b) [0x7f1934352b5b]
[bt] (1) /mnt/keshav/relnet_python3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x878f39) [0x7f1934512f39]
[bt] (2) /mnt/keshav/relnet_python3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x8797db) [0x7f19345137db]
[bt] (3) /mnt/keshav/relnet_python3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0xb48036) [0x7f19347e2036]
[bt] (4) /mnt/keshav/relnet_python3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x382fe3c) [0x7f19374c9e3c]
[bt] (5) /mnt/keshav/relnet_python3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x38336a8) [0x7f19374cd6a8]
[bt] (6) /mnt/keshav/relnet_python3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x377bc31) [0x7f1937415c31]
[bt] (7) /mnt/keshav/relnet_python3/lib/python3.6/site-packages/mxnet/libmxnet.so(MXSymbolInferShapeEx+0xc1) [0x7f19374162c1]
[bt] (8) /usr/lib/x86_64-linux-gnu/libffi.so.6(ffi_call_unix64+0x4c) [0x7f1973c75dae]
'''

Can someone please explain what exactly the issue might be?
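
The failure is raised during shape inference at the first graph node whose inputs disagree, here an elementwise add (_plus2) inside the multi-head attention (the 16 in both shapes matches the 16 heads in the symbol name above). The snippet below is a minimal sketch, not this repository's code, that reproduces the same class of MXNetError directly; the variable names a and b are hypothetical and the shapes are copied from the error message purely for illustration.

import mxnet as mx

# Minimal sketch: an elementwise add whose two inputs are declared
# with incompatible shapes, mirroring the _plus2 failure above.
a = mx.sym.Variable('a')
b = mx.sym.Variable('b')
s = a + b  # '+' on symbols creates an elementwise _plus node

try:
    s.infer_shape(a=(313, 16, 300), b=(19, 16, 18))
except mx.base.MXNetError as e:
    # Prints the same "Incompatible attr in node ... expected ..., got ..."
    # shape-check failure seen in the traceback.
    print(e)

Since both operands come out of the attention module, the mismatch most likely means the custom config feeds the symbol different proposal/ROI counts than the learn-NMS symbol expects; comparing the modified yaml (proposal numbers, class count, and similar entries) against the shipped COCO config is a reasonable first check, though that is a guess rather than a confirmed diagnosis.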
