
josephkj / iod


(TPAMI 2021) iOD: Incremental Object Detection via Meta-Learning

Home Page: https://josephkj.in

License: Apache License 2.0

Shell 0.54% Python 87.93% C++ 3.86% Cuda 7.57% Dockerfile 0.10%
incremental-learning object-detection continual-learning meta-learning detectron2 pami-2021 tpami

iod's Introduction

Incremental Object Detection via Meta-Learning

Published in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)

DOI 10.1109/TPAMI.2021.3124133

Early access on IEEE Xplore: https://ieeexplore.ieee.org/document/9599446

arXiv paper: https://arxiv.org/abs/2003.08798

Abstract

In a real-world setting, object instances from new classes can be continuously encountered by object detectors. When existing object detectors are applied to such scenarios, their performance on old classes deteriorates significantly. A few efforts have been reported to address this limitation, all of which apply variants of knowledge distillation to avoid catastrophic forgetting.

We note that although distillation helps to retain previous learning, it obstructs fast adaptability to new tasks, which is a critical requirement for incremental learning. In this pursuit, we propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared. This ensures a seamless information transfer via a meta-learned gradient preconditioning that minimizes forgetting and maximizes knowledge transfer. In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new-classes and scales to high-capacity models for object detection.

We evaluate our approach on a variety of incremental learning settings defined on the PASCAL VOC and MS COCO datasets, where our approach performs favourably against state-of-the-art methods.

Figure: Qualitative results of our incremental object detector trained in a 10+10 setting, where the first task contains instances of aeroplane, bicycle, bird, boat, bottle, bus, car, cat, chair and cow, while the second task learns instances of diningtable, dog, horse, motorbike, person, pottedplant, sheep, sofa, train and tvmonitor. Our model is able to detect instances from both tasks alike, without forgetting.

Installation and setup

  • Install the Detectron2 library that is packaged along with this codebase. See INSTALL.md.
  • Download and extract Pascal VOC 2007 to ./datasets/VOC2007/
  • Use the starter script: run.sh (a condensed sketch of these steps follows below)
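The list above condenses roughly to the following commands. This is a minimal sketch: the repository URL is taken from the page header, and the VOC mirror URLs and extraction layout are assumptions, so defer to INSTALL.md and run.sh where they differ.

git clone https://github.com/josephkj/iod.git
cd iod
pip install -e .   # builds the bundled Detectron2 extension (detectron2._C)

# Pascal VOC 2007 (mirror URLs assumed; any VOC 2007 copy extracted to ./datasets/VOC2007/ works)
mkdir -p datasets
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
tar -xf VOCtrainval_06-Nov-2007.tar && tar -xf VOCtest_06-Nov-2007.tar
mv VOCdevkit/VOC2007 datasets/VOC2007

bash run.sh        # starter script with the training commands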

Trained Models and Logs

Setting | Reported mAP | Reproduced mAP | Commands | Models and logs
19+1    | 70.2         | 70.4           | run.sh   | Google Drive
15+5    | 67.8         | 69.6           | run.sh   | Google Drive
10+10   | 66.3         | 67.3           | run.sh   | Google Drive
Configurations with which the above results were reproduced (an environment sketch follows this list):
  • Python version: 3.6.7
  • PyTorch version: 1.3.0
  • CUDA version: 11.0
  • GPUs: 4 × NVIDIA GTX 1080 Ti
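For reference, an environment matching the versions listed above could be created roughly as follows. This is an untested sketch: the torchvision pairing for PyTorch 1.3.0 and the use of pip instead of conda channels are assumptions.

conda create -n iod python=3.6.7 -y
conda activate iod
pip install torch==1.3.0 torchvision==0.4.1   # torchvision version assumed to match PyTorch 1.3.0
pip install -e .                              # rebuild the bundled Detectron2 against this environment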

Acknowledgement

The code is built on top of the Detectron2 library.

Citation

If you find our research useful, please consider citing us:

@ARTICLE {joseph2021incremental,
author = {Joseph KJ and Jathushan Rajasegaran and Salman Khan and Fahad Khan and Vineeth N Balasubramanian},
journal = {IEEE Transactions on Pattern Analysis & Machine Intelligence},
title = {Incremental Object Detection via Meta-Learning},
year = {2021},
issn = {1939-3539},
doi = {10.1109/TPAMI.2021.3124133},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
month = {nov}
}

iod's People

Contributors

josephkj


iod's Issues

How to do incremental learning multiple times

Thank you for your good work! I tried it on my own dataset with 2 classes + 5 classes, and it works. But when I use finetuning after the 2+5 step to train 4 more classes, the previous 7 classes no longer work (AP = 0). So I would like to know how to train ((2 + 5) + 4) + 6 and so on.
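For context, repeated increments would presumably be run as one training command per task, each with its own config authored after the provided ones (for example 10_p_10.yaml / ft_10_p_10.yaml). The config names below are hypothetical and do not ship with the repository; this is only a sketch of the intended call pattern.

python tools/train_net.py --num-gpus 4 --config-file ./configs/PascalVOC-Detection/iOD/base_2.yaml     # task 1: 2 classes (hypothetical config)
python tools/train_net.py --num-gpus 4 --config-file ./configs/PascalVOC-Detection/iOD/2_p_5.yaml      # task 2: +5 classes (hypothetical config)
python tools/train_net.py --num-gpus 4 --config-file ./configs/PascalVOC-Detection/iOD/7_p_4.yaml      # task 3: +4 classes (hypothetical config)
python tools/train_net.py --num-gpus 4 --config-file ./configs/PascalVOC-Detection/iOD/11_p_6.yaml     # task 4: +6 classes (hypothetical config)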

What does the output model mean after each script

Hi, thanks for your amazing work!

I'd like to reproduce the 10+10 experiment according to Table 2 in your paper, which means I am running the following scripts:

(Base 10)
python tools/train_net.py --num-gpus 1 --config-file ./configs/PascalVOC-Detection/iOD/base_10.yaml SOLVER.IMS_PER_BATCH 1 SOLVER.BASE_LR 0.005

(10 + 10)
python tools/train_net.py --num-gpus 1 --config-file ./configs/PascalVOC-Detection/iOD/10_p_10.yaml SOLVER.IMS_PER_BATCH 1 SOLVER.BASE_LR 0.005

(10 + 10 _ ft)
python tools/train_net.py --num-gpus 1 --config-file ./configs/PascalVOC-Detection/iOD/ft_10_p_10.yaml SOLVER.IMS_PER_BATCH 1 SOLVER.BASE_LR 0.005

(Because of limited devices, I set --num-gpus and the batch size to 1.)
According to my understanding, the Base 10 script performs the training on the first 10 classes, corresponding to the third row of the table. The trained model can only detect the first 10 classes.

However, the 10+10 script outputs a model that can only detect objects of the last 10 classes; is this not the final model after continual learning?
Also, what does the 10+10_ft step do? I am running this script anyway.

Looking forward to your reply!

No detailed explanation on running the code

I want to load a pretrained model trained on COCO and then, using only the new object classes, train the model to detect the old + new object classes. Can you please let me know how to do this?

KeyError problem

python3 tools/train_net.py --config-file ./configs/PascalVOC-Detection/iOD/base_19.yaml SOLVER.IMS_PER_BATCH 8 SOLVER.BASE_LR 0.005
The following error is raised:
File "tools/train_net.py", line 169, in
args=(args,),
File "/workspace/detectron2_repo/detectron2/engine/launch.py", line 82, in launch
main_func(*args)
File "tools/train_net.py", line 132, in main
cfg = setup(args)
File "tools/train_net.py", line 124, in setup
cfg.merge_from_file(args.config_file)
File "/workspace/detectron2_repo/detectron2/config/config.py", line 69, in merge_from_file
self.merge_from_other_cfg(loaded_cfg)
File "/usr/local/lib/python3.6/dist-packages/fvcore/common/config.py", line 123, in merge_from_other_cfg
return super().merge_from_other_cfg(cfg_other)
File "/usr/local/lib/python3.6/dist-packages/yacs-0.1.8-py3.6.egg/yacs/config.py", line 217, in merge_from_other_cfg
_merge_a_into_b(cfg_other, self, self, [])
File "/usr/local/lib/python3.6/dist-packages/yacs-0.1.8-py3.6.egg/yacs/config.py", line 478, in _merge_a_into_b
_merge_a_into_b(v, b[k], root, key_list + [k])
File "/usr/local/lib/python3.6/dist-packages/yacs-0.1.8-py3.6.egg/yacs/config.py", line 478, in _merge_a_into_b
_merge_a_into_b(v, b[k], root, key_list + [k])
File "/usr/local/lib/python3.6/dist-packages/yacs-0.1.8-py3.6.egg/yacs/config.py", line 491, in _merge_a_into_b
raise KeyError("Non-existent config key: {}".format(full_key))
KeyError: 'Non-existent config key: MODEL.RPN.FREEZE_WEIGHTS'
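The frames in the traceback resolve Detectron2 from /workspace/detectron2_repo rather than from this repository, and MODEL.RPN.FREEZE_WEIGHTS appears to be a key defined only in the Detectron2 copy bundled here. A generic way to check which copy Python imports (a diagnostic sketch, not repo-specific tooling):

python -c "import detectron2; print(detectron2.__file__)"   # should point inside the iOD checkout
# If it points elsewhere, remove the stock install and rebuild the bundled one:
pip uninstall -y detectron2
pip install -e .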

Clarification regarding environment setup

Thank you for your amazing work!

In my Docker container I can set up the official Detectron2 and run the demo test.

I can also run your other great work, OWOD.

But I cannot build from this repo. Any help would be great, thanks.

running build_ext
building 'detectron2._C' extension
creating /mnt/iOD-main/build/temp.linux-x86_64-3.8
creating /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt
creating /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main
creating /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2
creating /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers
creating /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc
creating /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/nms_rotated
creating /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/box_iou_rotated
creating /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/ROIAlign
creating /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/ROIAlignRotated
creating /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/deformable
Emitting ninja build file /mnt/iOD-main/build/temp.linux-x86_64-3.8/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
1.10.2
g++ -pthread -shared -B /opt/conda/compiler_compat -L/opt/conda/lib -Wl,-rpath=/opt/conda/lib -Wl,--no-as-needed -Wl,--sysroot=/ /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/vision.o /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/nms_rotated/nms_rotated_cpu.o /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cpu.o /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/ROIAlign/ROIAlign_cpu.o /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated_cpu.o /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/nms_rotated/nms_rotated_cuda.o /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/deformable/deform_conv_cuda.o /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/deformable/deform_conv_cuda_kernel.o /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cuda.o /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/ROIAlign/ROIAlign_cuda.o /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated_cuda.o /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/cuda_version.o -L/opt/conda/lib/python3.8/site-packages/torch/lib -L/usr/local/cuda/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/detectron2/_C.cpython-38-x86_64-linux-gnu.so
g++: error: /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/vision.o: No such file or directory
g++: error: /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/nms_rotated/nms_rotated_cpu.o: No such file or directory
g++: error: /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cpu.o: No such file or directory
g++: error: /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/ROIAlign/ROIAlign_cpu.o: No such file or directory
g++: error: /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated_cpu.o: No such file or directory
g++: error: /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/nms_rotated/nms_rotated_cuda.o: No such file or directory
g++: error: /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/deformable/deform_conv_cuda.o: No such file or directory
g++: error: /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/deformable/deform_conv_cuda_kernel.o: No such file or directory
g++: error: /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cuda.o: No such file or directory
g++: error: /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/ROIAlign/ROIAlign_cuda.o: No such file or directory
g++: error: /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated_cuda.o: No such file or directory
g++: error: /mnt/iOD-main/build/temp.linux-x86_64-3.8/mnt/iOD-main/detectron2/layers/csrc/cuda_version.o: No such file or directory
error: command 'g++' failed with exit status 1
root@aa5c88906608:/mnt/iOD-main#
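In the log above the g++ link step runs although no .o object files were produced, which usually indicates a stale or interrupted build rather than a code problem. A generic clean rebuild (a sketch, not repo-specific advice):

rm -rf build/ detectron2/_C*.so detectron2.egg-info   # drop stale build artifacts
python -m pip install -e . -v                         # rebuild verbosely and look for the first real compiler error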

Evaluate problem

May I ask: if the results are not evaluated after training, can I directly run the evaluation afterwards?
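Detectron2's tools/train_net.py can evaluate a saved checkpoint without retraining via --eval-only; a sketch, with the config file and weight path used here as placeholders:

python tools/train_net.py --num-gpus 1 --eval-only \
    --config-file ./configs/PascalVOC-Detection/iOD/ft_10_p_10.yaml \
    MODEL.WEIGHTS ./output/model_final.pth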

Some question about configs and datasets

First of all, thank you for this great work, but I still have some doubts about the configs and datasets; I hope you can give me some suggestions. I plan to use my own datasets for training, so I first looked at base_19.yaml and 19_p_1.yaml and have the following questions:

  1. I found that in base_19.yaml, LEARN_INCREMENTALLY is set to True; shouldn't it be set to False in the first base-training stage?
  2. NUM_CLASSES is set to 20, so when doing the first step of training, do I have to fix the total number of classes (19+1) before doing the incremental learning?
  3. If I want to use a customized dataset, do you have any suggestions on what needs to be changed?
    Looking forward to your reply.

COCO results not good

Would you offer some details about iOD on the COCO dataset? It seems that iOD does not perform very well on the COCO dataset. I wonder whether I have to adjust some hyper-parameters (for example, steps, iterations, and so on) for better performance?

The problem of maximum of classes

Thank you for this great work!
I have some questions about configs and datasets:
I added my own data on top of VOC, so in the end there are 23 classes. Here are my settings:

For learning the base (20 classes) use:

NUM_CLASSES: 50
NUM_BASE_CLASSES: 20
NUM_NOVEL_CLASSES: 30
TRAIN_ON_BASE_CLASSES: True

For an incremental step with 3 classes:

NUM_CLASSES: 50
NUM_BASE_CLASSES: 20
NUM_NOVEL_CLASSES: 3
TRAIN_ON_BASE_CLASSES: False

But when I reached the second stage of training, the program raised the following error:

Traceback (most recent call last):
File "tools/train_net.py", line 161, in
args=(args,),
File "/home/yff/Desktop/iOD/detectron2/engine/launch.py", line 52, in launch
main_func(*args)
File "tools/train_net.py", line 149, in main
return trainer.train()
File "/home/yff/Desktop/iOD/detectron2/engine/defaults.py", line 407, in train
super().train(self.start_iter, self.max_iter)
File "/home/yff/Desktop/iOD/detectron2/engine/train_loop.py", line 152, in train
self.run_step()
File "/home/yff/Desktop/iOD/detectron2/engine/train_loop.py", line 294, in run_step
self.update_image_store(data)
File "/home/yff/Desktop/iOD/detectron2/engine/train_loop.py", line 235, in update_image_store
self.image_store.add((image,), (cls,))
File "/home/yff/Desktop/iOD/detectron2/utils/store.py", line 16, in add
self.store[class_id].append(items[idx])
IndexError: list index out of range

Can I do more than 20 classes of incremental training?

Warp training

Is warp training used only in the incremental phase, or does it also take part when training the old classes?
In other words, does the IStore of the feature store hold the new classes plus the old classes, or only the new classes of the t-th task?

Lightweight

Can the PB model finally generated by this framework be converted to ONNX for lightweight deployment with TensorRT?

How to compile detectron2 version 0.1 under RTX3090?

This is the output log when I compile detectron2
(iOD) yu@jinx:/data/yu/code/iOD-main$ pip install -e .
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Obtaining file:///data/yu/code/iOD-main
Requirement already satisfied: termcolor>=1.1 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from detectron2==0.1) (1.1.0)
Requirement already satisfied: Pillow>=6.0 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from detectron2==0.1) (8.4.0)
Requirement already satisfied: yacs>=0.1.6 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from detectron2==0.1) (0.1.8)
Requirement already satisfied: tabulate in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from detectron2==0.1) (0.8.9)
Requirement already satisfied: cloudpickle in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from detectron2==0.1) (2.0.0)
Requirement already satisfied: matplotlib in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from detectron2==0.1) (3.3.4)
Requirement already satisfied: tqdm>4.29.0 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from detectron2==0.1) (4.63.1)
Requirement already satisfied: tensorboard in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from detectron2==0.1) (2.8.0)
Requirement already satisfied: importlib-resources in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from tqdm>4.29.0->detectron2==0.1) (5.4.0)
Requirement already satisfied: PyYAML in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from yacs>=0.1.6->detectron2==0.1) (6.0)
Requirement already satisfied: zipp>=3.1.0 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from importlib-resources->tqdm>4.29.0->detectron2==0.1) (3.6.0)
Requirement already satisfied: numpy>=1.15 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from matplotlib->detectron2==0.1) (1.19.5)
Requirement already satisfied: python-dateutil>=2.1 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from matplotlib->detectron2==0.1) (2.8.2)
Requirement already satisfied: cycler>=0.10 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from matplotlib->detectron2==0.1) (0.11.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from matplotlib->detectron2==0.1) (1.3.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from matplotlib->detectron2==0.1) (3.0.7)
Requirement already satisfied: six>=1.5 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from python-dateutil>=2.1->matplotlib->detectron2==0.1) (1.16.0)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from tensorboard->detectron2==0.1) (0.6.1)
Requirement already satisfied: protobuf>=3.6.0 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from tensorboard->detectron2==0.1) (3.19.4)
Requirement already satisfied: wheel>=0.26 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from tensorboard->detectron2==0.1) (0.37.1)
Requirement already satisfied: grpcio>=1.24.3 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from tensorboard->detectron2==0.1) (1.45.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from tensorboard->detectron2==0.1) (1.8.1)
Requirement already satisfied: werkzeug>=0.11.15 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from tensorboard->detectron2==0.1) (2.0.3)
Requirement already satisfied: google-auth<3,>=1.6.3 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from tensorboard->detectron2==0.1) (2.6.2)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from tensorboard->detectron2==0.1) (0.4.6)
Requirement already satisfied: markdown>=2.6.8 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from tensorboard->detectron2==0.1) (3.3.6)
Requirement already satisfied: requests<3,>=2.21.0 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from tensorboard->detectron2==0.1) (2.27.1)
Requirement already satisfied: setuptools>=41.0.0 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from tensorboard->detectron2==0.1) (58.0.4)
Requirement already satisfied: absl-py>=0.4 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from tensorboard->detectron2==0.1) (1.0.0)
Requirement already satisfied: cachetools<6.0,>=2.0.0 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from google-auth<3,>=1.6.3->tensorboard->detectron2==0.1) (4.2.4)
Requirement already satisfied: rsa<5,>=3.1.4 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from google-auth<3,>=1.6.3->tensorboard->detectron2==0.1) (4.8)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from google-auth<3,>=1.6.3->tensorboard->detectron2==0.1) (0.2.8)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard->detectron2==0.1) (1.3.1)
Requirement already satisfied: importlib-metadata>=4.4 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from markdown>=2.6.8->tensorboard->detectron2==0.1) (4.8.3)
Requirement already satisfied: typing-extensions>=3.6.4 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard->detectron2==0.1) (4.1.1)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard->detectron2==0.1) (0.4.8)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from requests<3,>=2.21.0->tensorboard->detectron2==0.1) (1.26.9)
Requirement already satisfied: certifi>=2017.4.17 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from requests<3,>=2.21.0->tensorboard->detectron2==0.1) (2020.6.20)
Requirement already satisfied: charset-normalizer~=2.0.0 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from requests<3,>=2.21.0->tensorboard->detectron2==0.1) (2.0.12)
Requirement already satisfied: idna<4,>=2.5 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from requests<3,>=2.21.0->tensorboard->detectron2==0.1) (3.3)
Requirement already satisfied: oauthlib>=3.0.0 in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard->detectron2==0.1) (3.2.0)
Requirement already satisfied: dataclasses in /home/yu/.conda/envs/iOD/lib/python3.6/site-packages (from werkzeug>=0.11.15->tensorboard->detectron2==0.1) (0.8)
Installing collected packages: detectron2
Running setup.py develop for detectron2
ERROR: Command errored out with exit status 1:
command: /home/yu/.conda/envs/iOD/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/data/yu/code/iOD-main/setup.py'"'"'; file='"'"'/data/yu/code/iOD-main/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' develop --no-deps
cwd: /data/yu/code/iOD-main/
Complete output (521 lines):
running develop
running egg_info
writing detectron2.egg-info/PKG-INFO
writing dependency_links to detectron2.egg-info/dependency_links.txt
writing requirements to detectron2.egg-info/requires.txt
writing top-level names to detectron2.egg-info/top_level.txt
reading manifest file 'detectron2.egg-info/SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'detectron2.egg-info/SOURCES.txt'
running build_ext
building 'detectron2._C' extension
creating build
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/data
creating build/temp.linux-x86_64-3.6/data/yu
creating build/temp.linux-x86_64-3.6/data/yu/code
creating build/temp.linux-x86_64-3.6/data/yu/code/iOD-main
creating build/temp.linux-x86_64-3.6/data/yu/code/iOD-main/detectron2
creating build/temp.linux-x86_64-3.6/data/yu/code/iOD-main/detectron2/layers
creating build/temp.linux-x86_64-3.6/data/yu/code/iOD-main/detectron2/layers/csrc
creating build/temp.linux-x86_64-3.6/data/yu/code/iOD-main/detectron2/layers/csrc/ROIAlignRotated
creating build/temp.linux-x86_64-3.6/data/yu/code/iOD-main/detectron2/layers/csrc/ROIAlign
creating build/temp.linux-x86_64-3.6/data/yu/code/iOD-main/detectron2/layers/csrc/nms_rotated
creating build/temp.linux-x86_64-3.6/data/yu/code/iOD-main/detectron2/layers/csrc/box_iou_rotated
creating build/temp.linux-x86_64-3.6/data/yu/code/iOD-main/detectron2/layers/csrc/deformable
gcc -pthread -B /home/yu/.conda/envs/iOD/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/data/yu/code/iOD-main/detectron2/layers/csrc -I/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include -I/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/TH -I/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-11.3/include -I/home/yu/.conda/envs/iOD/include/python3.6m -c /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp -o build/temp.linux-x86_64-3.6/data/yu/code/iOD-main/detectron2/layers/csrc/vision.o -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Parallel.h:140:0,
from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ParallelOpenMP.h:87:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)

In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:4:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/ROIAlign/ROIAlign.h: In function ‘at::Tensor detectron2::ROIAlign_forward(const at::Tensor&, const at::Tensor&, float, int, int, int, bool)’:
/data/yu/code/iOD-main/detectron2/layers/csrc/ROIAlign/ROIAlign.h:62:18: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   if (input.type().is_cuda()) {
                  ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:4:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/ROIAlign/ROIAlign.h: In function ‘at::Tensor detectron2::ROIAlign_backward(const at::Tensor&, const at::Tensor&, float, int, int, int, int, int, int, int, bool)’:
/data/yu/code/iOD-main/detectron2/layers/csrc/ROIAlign/ROIAlign.h:98:17: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   if (grad.type().is_cuda()) {
                 ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:5:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h: In function ‘at::Tensor detectron2::ROIAlignRotated_forward(const at::Tensor&, const at::Tensor&, float, int, int, int)’:
/data/yu/code/iOD-main/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h:57:18: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   if (input.type().is_cuda()) {
                  ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:5:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h: In function ‘at::Tensor detectron2::ROIAlignRotated_backward(const at::Tensor&, const at::Tensor&, float, int, int, int, int, int, int, int)’:
/data/yu/code/iOD-main/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h:85:17: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   if (grad.type().is_cuda()) {
                 ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h: In function ‘int detectron2::deform_conv_forward(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int, int, int, int, int, int, int, int, int, int)’:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:134:18: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   if (input.type().is_cuda()) {
                  ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:136:26: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(weight.type().is_cuda(), "weight tensor is not on GPU!");
                          ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:136:5: error: ‘AT_CHECK’ was not declared in this scope
     AT_CHECK(weight.type().is_cuda(), "weight tensor is not on GPU!");
     ^~~~~~~~
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:136:5: note: suggested alternative: ‘DCHECK’
     AT_CHECK(weight.type().is_cuda(), "weight tensor is not on GPU!");
     ^~~~~~~~
     DCHECK
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:137:26: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(offset.type().is_cuda(), "offset tensor is not on GPU!");
                          ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h: In function ‘int detectron2::deform_conv_backward_input(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int, int, int, int, int, int, int, int, int, int)’:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:182:23: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   if (gradOutput.type().is_cuda()) {
                       ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:184:25: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(input.type().is_cuda(), "input tensor is not on GPU!");
                         ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:184:5: error: ‘AT_CHECK’ was not declared in this scope
     AT_CHECK(input.type().is_cuda(), "input tensor is not on GPU!");
     ^~~~~~~~
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:184:5: note: suggested alternative: ‘DCHECK’
     AT_CHECK(input.type().is_cuda(), "input tensor is not on GPU!");
     ^~~~~~~~
     DCHECK
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:185:26: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(weight.type().is_cuda(), "weight tensor is not on GPU!");
                          ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:186:26: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(offset.type().is_cuda(), "offset tensor is not on GPU!");
                          ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h: In function ‘int detectron2::deform_conv_backward_filter(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int, int, int, int, int, int, int, int, int, float, int)’:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:232:23: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   if (gradOutput.type().is_cuda()) {
                       ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:234:25: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(input.type().is_cuda(), "input tensor is not on GPU!");
                         ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:234:5: error: ‘AT_CHECK’ was not declared in this scope
     AT_CHECK(input.type().is_cuda(), "input tensor is not on GPU!");
     ^~~~~~~~
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:234:5: note: suggested alternative: ‘DCHECK’
     AT_CHECK(input.type().is_cuda(), "input tensor is not on GPU!");
     ^~~~~~~~
     DCHECK
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:235:26: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(offset.type().is_cuda(), "offset tensor is not on GPU!");
                          ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h: In function ‘void detectron2::modulated_deform_conv_forward(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int, int, int, int, int, int, int, int, int, bool)’:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:282:18: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   if (input.type().is_cuda()) {
                  ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:284:26: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(weight.type().is_cuda(), "weight tensor is not on GPU!");
                          ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:284:5: error: ‘AT_CHECK’ was not declared in this scope
     AT_CHECK(weight.type().is_cuda(), "weight tensor is not on GPU!");
     ^~~~~~~~
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:284:5: note: suggested alternative: ‘DCHECK’
     AT_CHECK(weight.type().is_cuda(), "weight tensor is not on GPU!");
     ^~~~~~~~
     DCHECK
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:285:24: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(bias.type().is_cuda(), "bias tensor is not on GPU!");
                        ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:286:26: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(offset.type().is_cuda(), "offset tensor is not on GPU!");
                          ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h: In function ‘void detectron2::modulated_deform_conv_backward(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int, int, int, int, int, int, int, int, int, bool)’:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:339:24: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   if (grad_output.type().is_cuda()) {
                        ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:341:25: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(input.type().is_cuda(), "input tensor is not on GPU!");
                         ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:341:5: error: ‘AT_CHECK’ was not declared in this scope
     AT_CHECK(input.type().is_cuda(), "input tensor is not on GPU!");
     ^~~~~~~~
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:341:5: note: suggested alternative: ‘DCHECK’
     AT_CHECK(input.type().is_cuda(), "input tensor is not on GPU!");
     ^~~~~~~~
     DCHECK
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:342:26: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(weight.type().is_cuda(), "weight tensor is not on GPU!");
                          ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:343:24: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(bias.type().is_cuda(), "bias tensor is not on GPU!");
                        ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:7:0:
/data/yu/code/iOD-main/detectron2/layers/csrc/deformable/deform_conv.h:344:26: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     AT_CHECK(offset.type().is_cuda(), "offset tensor is not on GPU!");
                          ^
In file included from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
                 from /data/yu/code/iOD-main/detectron2/layers/csrc/vision.cpp:3:
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:338:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
/home/yu/.conda/envs/iOD/lib/python3.6/site-packages/torch/utils/cpp_extension.py:370: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
  warnings.warn(msg.format('we could not find ninja.'))
error: command 'gcc' failed with exit status 1
----------------------------------------

ERROR: Command errored out with exit status 1: /home/yu/.conda/envs/iOD/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/data/yu/code/iOD-main/setup.py'"'"'; __file__='"'"'/data/yu/code/iOD-main/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.
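The `AT_CHECK` errors above come from compiling the bundled detectron2 C++/CUDA extensions against a PyTorch release newer than the one this code targets: `AT_CHECK` was removed in PyTorch 1.5 and replaced by `TORCH_CHECK`, and `Tensor::type()` was deprecated in favour of methods on the tensor itself (which is what all the warnings are about). Installing a matching older PyTorch avoids the problem entirely (the environment info in the later log shows PyTorch 1.3.0). If you would rather stay on a newer PyTorch, below is a minimal sketch of the kind of patch that could be applied to detectron2/layers/csrc/deformable/deform_conv.h and the other csrc files that fail the same way; it is a workaround sketch, not a change shipped with this repository.

// Compatibility sketch assuming PyTorch >= 1.5, where the AT_CHECK macro no longer exists.
// Mapping the removed macro onto TORCH_CHECK lets the existing checks compile unchanged.
#ifndef AT_CHECK
#define AT_CHECK TORCH_CHECK
#endif

// The deprecation warnings can be silenced the same way, e.g. instead of
//   AT_CHECK(input.type().is_cuda(), "input tensor is not on GPU!");
// write
//   TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!");

After patching, rerun the failing pip/setup.py develop command so the extension is rebuilt.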

RuntimeError: unexpected EOF, expected 8 more bytes. The file might be corrupted.

I tried to run the commands in run.sh:

# Base 15
sleep 10
python tools/train_net.py --num-gpus 4 --config-file ./configs/PascalVOC-Detection/iOD/base_15.yaml SOLVER.IMS_PER_BATCH 8 SOLVER.BASE_LR 0.005
# 15 + 5
sleep 10
python tools/train_net.py --num-gpus 4 --config-file ./configs/PascalVOC-Detection/iOD/15_p_5.yaml SOLVER.IMS_PER_BATCH 8 SOLVER.BASE_LR 0.005

The first command (base 15) runs fine, but something went wrong with the second one.
Here is my log:

(IODML) yupeng@compute01:~/IODML/iOD$ 
(IODML) yupeng@compute01:~/IODML/iOD$ python tools/train_net.py --num-gpus 4 --config-file ./configs/PascalVOC-Detection/iOD/15_p_5.yaml SOLVER.IMS_PER_BATCH 8 SOLVER.BASE_LR 0.005
Command Line Args: Namespace(config_file='./configs/PascalVOC-Detection/iOD/15_p_5.yaml', dist_url='tcp://127.0.0.1:50252', eval_only=False, machine_rank=0, num_gpus=4, num_machines=1, opts=['SOLVER.IMS_PER_BATCH', '8', 'SOLVER.BASE_LR', '0.005'], resume=False)
[01/22 20:49:55 detectron2]: Rank of current process: 0. World size: 4
[01/22 20:49:55 detectron2]: Environment info:
------------------------  --------------------------------------------------------------------
sys.platform              linux
Python                    3.6.13 |Anaconda, Inc.| (default, Jun  4 2021, 14:25:59) [GCC 7.5.0]
Numpy                     1.19.5
Detectron2 Compiler       GCC 7.5
Detectron2 CUDA Compiler  10.1
DETECTRON2_ENV_MODULE     <not set>
PyTorch                   1.3.0
PyTorch Debug Build       False
torchvision               0.4.1
CUDA available            True
GPU 0,1,2,3               GeForce RTX 2080 Ti
CUDA_HOME                 /home/yupeng/zzy/cuda-10.1
NVCC                      Cuda compilation tools, release 10.1, V10.1.105
Pillow                    8.4.0
cv2                       4.4.0
------------------------  --------------------------------------------------------------------
PyTorch built with:
  - GCC 7.3
  - Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v0.20.5 (Git Hash 0125f28c61c1f822fd48570b4c1066f96fcb9b2e)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CUDA Runtime 10.1
  - NVCC architecture flags: -gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_50,code=compute_50
  - CuDNN 7.6.3
  - Magma 2.5.1
  - Build settings: BLAS=MKL, BUILD_NAMEDTENSOR=OFF, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow, DISABLE_NUMA=1, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=True, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, 

[01/22 20:49:55 detectron2]: Command line arguments: Namespace(config_file='./configs/PascalVOC-Detection/iOD/15_p_5.yaml', dist_url='tcp://127.0.0.1:50252', eval_only=False, machine_rank=0, num_gpus=4, num_machines=1, opts=['SOLVER.IMS_PER_BATCH', '8', 'SOLVER.BASE_LR', '0.005'], resume=False)
[01/22 20:49:55 detectron2]: Contents of args.config_file=./configs/PascalVOC-Detection/iOD/15_p_5.yaml:
_BASE_: "../../Base-RCNN-C4.yaml"
MODEL:
  WEIGHTS: "./output/first_15/model_final.pth"
  BASE_WEIGHTS: "./output/first_15/model_final.pth"
  MASK_ON: False
  RESNETS:
    DEPTH: 50
  ROI_HEADS:
    # Maximum number of foreground classes to expect
    NUM_CLASSES: 20
    # Flag to turn on/off Incremental Learning
    LEARN_INCREMENTALLY: True
    # Flag to select whether to learn base classes or iOD expanded classes
    TRAIN_ON_BASE_CLASSES: False
    # Number of base classes; these classes would be trained if TRAIN_ON_BASE_CLASSES is set to True
    NUM_BASE_CLASSES: 15
    # Number of novel classes; these classes would be trained if TRAIN_ON_BASE_CLASSES is set to False
    NUM_NOVEL_CLASSES: 5
    POSITIVE_FRACTION: 0.25
    NMS_THRESH_TEST: 0.3
  RPN:
    FREEZE_WEIGHTS: False
  ROI_BOX_HEAD:
    CLS_AGNOSTIC_BBOX_REG: True
INPUT:
  MIN_SIZE_TRAIN: (480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800)
  MIN_SIZE_TEST: 800
DATASETS:
  TRAIN: ('voc_2007_trainval',)
  TEST: ('voc_2007_test',)
SOLVER:
  STEPS: (30000, 34000) # 21000, 22000
  MAX_ITER: 20000  # 36000
  WARMUP_ITERS: 100 # 100
  LR_SCHEDULER_NAME: WarmupMultiStepLR
OUTPUT_DIR: ./output/15_p_5
VIS_PERIOD: 17000
DISTILL:
  ENABLE: True
  BACKBONE: True
  RPN: False
  ROI_HEADS: True
  ONLY_FG_ROIS: False
  # (1-LOSS_WEIGHT) (CLF / REG loss) + (LOSS_WEIGHT) ROI-Distillation
  LOSS_WEIGHT: 0.2
# Warp Grad
WG:
  ENABLE: True
  TRAIN_WARP_AT_ITR_NO: 20
  WARP_LAYERS: ("module.roi_heads.res5.2.conv3.weight",)
  NUM_FEATURES_PER_CLASS: 100
  NUM_IMAGES_PER_CLASS: 10
  BATCH_SIZE: 2
  USE_FEATURE_STORE: True
  IMAGE_STORE_LOC: './15_p_5.pth'

SEED: 9999
VERSION: 2
[01/22 20:49:55 detectron2]: Running with full config:
CUDNN_BENCHMARK: False
DATALOADER:
  ASPECT_RATIO_GROUPING: True
  FILTER_EMPTY_ANNOTATIONS: True
  NUM_WORKERS: 4
  REPEAT_THRESHOLD: 0.0
  SAMPLER_TRAIN: TrainingSampler
DATASETS:
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
  PRECOMPUTED_PROPOSAL_TOPK_TRAIN: 2000
  PROPOSAL_FILES_TEST: ()
  PROPOSAL_FILES_TRAIN: ()
  TEST: ('voc_2007_test',)
  TRAIN: ('voc_2007_trainval',)
DISTILL:
  BACKBONE: True
  ENABLE: True
  LOSS_WEIGHT: 0.2
  MEAN_TEACHER: False
  MEAN_TEACHER_ALPHA: 0.9
  ONLY_FG_ROIS: False
  ROI_HEADS: True
  RPN: False
FINETUNE:
  BATCH_SIZE: 2
  ENABLE: False
  MIN_NUM_IMG_PER_CLASS: -1
  USE_IMAGE_STORE: False
GLOBAL:
  HACK: 1.0
INPUT:
  CROP:
    ENABLED: False
    SIZE: [0.9, 0.9]
    TYPE: relative_range
  FORMAT: BGR
  MASK_FORMAT: polygon
  MAX_SIZE_TEST: 1333
  MAX_SIZE_TRAIN: 1333
  MIN_SIZE_TEST: 800
  MIN_SIZE_TRAIN: (480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800)
  MIN_SIZE_TRAIN_SAMPLING: choice
MODEL:
  ANCHOR_GENERATOR:
    ANGLES: [[-90, 0, 90]]
    ASPECT_RATIOS: [[0.5, 1.0, 2.0]]
    NAME: DefaultAnchorGenerator
    OFFSET: 0.0
    SIZES: [[32, 64, 128, 256, 512]]
  BACKBONE:
    FREEZE_AT: 2
    NAME: build_resnet_backbone
  BASE_WEIGHTS: ./output/first_15/model_final.pth
  DEVICE: cuda
  FPN:
    FUSE_TYPE: sum
    IN_FEATURES: []
    NORM: 
    OUT_CHANNELS: 256
  KEYPOINT_ON: False
  LOAD_PROPOSALS: False
  MASK_ON: False
  META_ARCHITECTURE: GeneralizedRCNN
  PANOPTIC_FPN:
    COMBINE:
      ENABLED: True
      INSTANCES_CONFIDENCE_THRESH: 0.5
      OVERLAP_THRESH: 0.5
      STUFF_AREA_LIMIT: 4096
    INSTANCE_LOSS_WEIGHT: 1.0
  PIXEL_MEAN: [103.53, 116.28, 123.675]
  PIXEL_STD: [1.0, 1.0, 1.0]
  PROPOSAL_GENERATOR:
    MIN_SIZE: 0
    NAME: RPN
  RESNETS:
    DEFORM_MODULATED: False
    DEFORM_NUM_GROUPS: 1
    DEFORM_ON_PER_STAGE: [False, False, False, False]
    DEPTH: 50
    NORM: FrozenBN
    NUM_GROUPS: 1
    OUT_FEATURES: ['res4']
    RES2_OUT_CHANNELS: 256
    RES5_DILATION: 1
    STEM_OUT_CHANNELS: 64
    STRIDE_IN_1X1: True
    WIDTH_PER_GROUP: 64
  RETINANET:
    BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
    FOCAL_LOSS_ALPHA: 0.25
    FOCAL_LOSS_GAMMA: 2.0
    IN_FEATURES: ['p3', 'p4', 'p5', 'p6', 'p7']
    IOU_LABELS: [0, -1, 1]
    IOU_THRESHOLDS: [0.4, 0.5]
    NMS_THRESH_TEST: 0.5
    NUM_CLASSES: 80
    NUM_CONVS: 4
    PRIOR_PROB: 0.01
    SCORE_THRESH_TEST: 0.05
    SMOOTH_L1_LOSS_BETA: 0.1
    TOPK_CANDIDATES_TEST: 1000
  ROI_BOX_CASCADE_HEAD:
    BBOX_REG_WEIGHTS: ((10.0, 10.0, 5.0, 5.0), (20.0, 20.0, 10.0, 10.0), (30.0, 30.0, 15.0, 15.0))
    IOUS: (0.5, 0.6, 0.7)
  ROI_BOX_HEAD:
    BBOX_REG_WEIGHTS: (10.0, 10.0, 5.0, 5.0)
    CLS_AGNOSTIC_BBOX_REG: True
    CONV_DIM: 256
    FC_DIM: 1024
    NAME: 
    NORM: 
    NUM_CONV: 0
    NUM_FC: 0
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
    SMOOTH_L1_BETA: 0.0
  ROI_HEADS:
    BATCH_SIZE_PER_IMAGE: 512
    IN_FEATURES: ['res4']
    IOU_LABELS: [0, 1]
    IOU_THRESHOLDS: [0.5]
    LEARN_INCREMENTALLY: True
    NAME: Res5ROIHeads
    NMS_THRESH_TEST: 0.3
    NUM_BASE_CLASSES: 15
    NUM_CLASSES: 20
    NUM_NOVEL_CLASSES: 5
    POSITIVE_FRACTION: 0.25
    PROPOSAL_APPEND_GT: True
    SCORE_THRESH_TEST: 0.05
    TRAIN_ON_BASE_CLASSES: False
  ROI_KEYPOINT_HEAD:
    CONV_DIMS: (512, 512, 512, 512, 512, 512, 512, 512)
    LOSS_WEIGHT: 1.0
    MIN_KEYPOINTS_PER_IMAGE: 1
    NAME: KRCNNConvDeconvUpsampleHead
    NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS: True
    NUM_KEYPOINTS: 17
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  ROI_MASK_HEAD:
    CLS_AGNOSTIC_MASK: False
    CONV_DIM: 256
    NAME: MaskRCNNConvUpsampleHead
    NORM: 
    NUM_CONV: 0
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  RPN:
    BATCH_SIZE_PER_IMAGE: 256
    BBOX_REG_WEIGHTS: (1.0, 1.0, 1.0, 1.0)
    BOUNDARY_THRESH: -1
    FREEZE_WEIGHTS: False
    HEAD_NAME: StandardRPNHead
    IN_FEATURES: ['res4']
    IOU_LABELS: [0, -1, 1]
    IOU_THRESHOLDS: [0.3, 0.7]
    LOSS_WEIGHT: 1.0
    NMS_THRESH: 0.7
    POSITIVE_FRACTION: 0.5
    POST_NMS_TOPK_TEST: 1000
    POST_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 6000
    PRE_NMS_TOPK_TRAIN: 12000
    SMOOTH_L1_BETA: 0.0
  SEM_SEG_HEAD:
    COMMON_STRIDE: 4
    CONVS_DIM: 128
    IGNORE_VALUE: 255
    IN_FEATURES: ['p2', 'p3', 'p4', 'p5']
    LOSS_WEIGHT: 1.0
    NAME: SemSegFPNHead
    NORM: GN
    NUM_CLASSES: 54
  WEIGHTS: ./output/first_15/model_final.pth
OUTPUT_DIR: ./output/15_p_5
SEED: 9999
SOLVER:
  BASE_LR: 0.005
  BIAS_LR_FACTOR: 1.0
  CHECKPOINT_PERIOD: 5000
  EXPLICIT_LR: 0.0
  GAMMA: 0.1
  IMS_PER_BATCH: 8
  LR_SCHEDULER_NAME: WarmupMultiStepLR
  MAX_ITER: 20000
  MOMENTUM: 0.9
  STEPS: (30000, 34000)
  WARMUP_FACTOR: 0.001
  WARMUP_ITERS: 100
  WARMUP_METHOD: linear
  WEIGHT_DECAY: 0.0001
  WEIGHT_DECAY_BIAS: 0.0001
  WEIGHT_DECAY_NORM: 0.0
TEST:
  AUG:
    ENABLED: False
    FLIP: True
    MAX_SIZE: 4000
    MIN_SIZES: (400, 500, 600, 700, 800, 900, 1000, 1100, 1200)
  DETECTIONS_PER_IMAGE: 100
  EVAL_PERIOD: 0
  EXPECTED_RESULTS: []
  KEYPOINT_OKS_SIGMAS: []
  PRECISE_BN:
    ENABLED: False
    NUM_ITER: 200
VERSION: 2
VIS_PERIOD: 17000
WG:
  BATCH_SIZE: 2
  ENABLE: True
  IMAGE_STORE_LOC: ./15_p_5.pth
  NUM_FEATURES_PER_CLASS: 100
  NUM_IMAGES_PER_CLASS: 10
  TRAIN_WARP: False
  TRAIN_WARP_AT_ITR_NO: 20
  USE_FEATURE_STORE: True
  WARP_LAYERS: ('module.roi_heads.res5.2.conv3.weight',)
[01/22 20:49:55 detectron2]: Full config saved to /home/yupeng/IODML/iOD/output/15_p_5/config.yaml
[01/22 20:49:56 d2.modeling.roi_heads.roi_heads]: Invalid class range: []
[01/22 20:49:56 d2.engine.defaults]: Model:
GeneralizedRCNN(
  (backbone): ResNet(
    (stem): BasicStem(
      (conv1): Conv2d(
        3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
        (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
      )
    )
    (res2): Sequential(
      (0): BottleneckBlock(
        (shortcut): Conv2d(
          64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv1): Conv2d(
          64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv2): Conv2d(
          64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv3): Conv2d(
          64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
      )
      (1): BottleneckBlock(
        (conv1): Conv2d(
          256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv2): Conv2d(
          64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv3): Conv2d(
          64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
      )
      (2): BottleneckBlock(
        (conv1): Conv2d(
          256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv2): Conv2d(
          64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv3): Conv2d(
          64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
      )
    )
    (res3): Sequential(
      (0): BottleneckBlock(
        (shortcut): Conv2d(
          256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv1): Conv2d(
          256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv2): Conv2d(
          128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv3): Conv2d(
          128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
      )
      (1): BottleneckBlock(
        (conv1): Conv2d(
          512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv2): Conv2d(
          128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv3): Conv2d(
          128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
      )
      (2): BottleneckBlock(
        (conv1): Conv2d(
          512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv2): Conv2d(
          128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv3): Conv2d(
          128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
      )
      (3): BottleneckBlock(
        (conv1): Conv2d(
          512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv2): Conv2d(
          128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv3): Conv2d(
          128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
      )
    )
    (res4): Sequential(
      (0): BottleneckBlock(
        (shortcut): Conv2d(
          512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
        (conv1): Conv2d(
          512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (1): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (2): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (3): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (4): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (5): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
    )
  )
  (proposal_generator): RPN(
    (anchor_generator): DefaultAnchorGenerator(
      (cell_anchors): BufferList()
    )
    (rpn_head): StandardRPNHead(
      (conv): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (objectness_logits): Conv2d(1024, 15, kernel_size=(1, 1), stride=(1, 1))
      (anchor_deltas): Conv2d(1024, 60, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (roi_heads): Res5ROIHeads(
    (pooler): ROIPooler(
      (level_poolers): ModuleList(
        (0): ROIAlign(output_size=(14, 14), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
      )
    )
    (res5): Sequential(
      (0): BottleneckBlock(
        (shortcut): Conv2d(
          1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        )
        (conv1): Conv2d(
          1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv2): Conv2d(
          512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv3): Conv2d(
          512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        )
      )
      (1): BottleneckBlock(
        (conv1): Conv2d(
          2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv2): Conv2d(
          512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv3): Conv2d(
          512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        )
      )
      (2): BottleneckBlock(
        (conv1): Conv2d(
          2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv2): Conv2d(
          512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv3): Conv2d(
          512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        )
      )
    )
    (box_predictor): FastRCNNOutputLayers(
      (cls_score): Linear(in_features=2048, out_features=21, bias=True)
      (bbox_pred): Linear(in_features=2048, out_features=4, bias=True)
    )
  )
)
[01/22 20:49:57 d2.data.build]: Removed 0 images with no usable annotations. 5011 images left.
[01/22 20:49:57 d2.data.build]: Distribution of instances among all 20 categories:
|  category   | #instances   |  category   | #instances   |  category  | #instances   |
|:-----------:|:-------------|:-----------:|:-------------|:----------:|:-------------|
|  aeroplane  | 331          |   bicycle   | 418          |    bird    | 599          |
|    boat     | 398          |   bottle    | 634          |    bus     | 272          |
|     car     | 1644         |     cat     | 389          |   chair    | 1432         |
|     cow     | 356          | diningtable | 310          |    dog     | 538          |
|    horse    | 406          |  motorbike  | 390          |   person   | 5447         |
| pottedplant | 625          |    sheep    | 353          |    sofa    | 425          |
|    train    | 328          |  tvmonitor  | 367          |            |              |
|    total    | 15662        |             |              |            |              |
[01/22 20:49:57 d2.data.build]: Number of images: 5011
[01/22 20:49:58 d2.data.build]: Distribution of instances among all 20 categories:
|  category   | #instances   |  category   | #instances   |  category  | #instances   |
|:-----------:|:-------------|:-----------:|:-------------|:----------:|:-------------|
|  aeroplane  | 0            |   bicycle   | 0            |    bird    | 0            |
|    boat     | 0            |   bottle    | 0            |    bus     | 0            |
|     car     | 0            |     cat     | 0            |   chair    | 0            |
|     cow     | 0            | diningtable | 0            |    dog     | 0            |
|    horse    | 0            |  motorbike  | 0            |   person   | 0            |
| pottedplant | 625          |    sheep    | 353          |    sofa    | 425          |
|    train    | 328          |  tvmonitor  | 367          |            |              |
|    total    | 2098         |             |              |            |              |
[01/22 20:49:58 d2.data.build]: Number of images: 1152
[01/22 20:49:58 d2.data.detection_utils]: TransformGens used in training: [ResizeShortestEdge(short_edge_length=(480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[01/22 20:49:58 d2.data.build]: Using training sampler TrainingSampler
[01/22 20:49:58 d2.engine.defaults]: Creating base model for distillation.
[01/22 20:49:58 d2.modeling.roi_heads.roi_heads]: Invalid class range: []
[01/22 20:49:58 d2.engine.defaults]: Model:
GeneralizedRCNN(
  (backbone): ResNet(
    (stem): BasicStem(
      (conv1): Conv2d(
        3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
        (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
      )
    )
    (res2): Sequential(
      (0): BottleneckBlock(
        (shortcut): Conv2d(
          64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv1): Conv2d(
          64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv2): Conv2d(
          64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv3): Conv2d(
          64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
      )
      (1): BottleneckBlock(
        (conv1): Conv2d(
          256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv2): Conv2d(
          64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv3): Conv2d(
          64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
      )
      (2): BottleneckBlock(
        (conv1): Conv2d(
          256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv2): Conv2d(
          64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv3): Conv2d(
          64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
      )
    )
    (res3): Sequential(
      (0): BottleneckBlock(
        (shortcut): Conv2d(
          256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv1): Conv2d(
          256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv2): Conv2d(
          128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv3): Conv2d(
          128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
      )
      (1): BottleneckBlock(
        (conv1): Conv2d(
          512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv2): Conv2d(
          128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv3): Conv2d(
          128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
      )
      (2): BottleneckBlock(
        (conv1): Conv2d(
          512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv2): Conv2d(
          128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv3): Conv2d(
          128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
      )
      (3): BottleneckBlock(
        (conv1): Conv2d(
          512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv2): Conv2d(
          128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv3): Conv2d(
          128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
      )
    )
    (res4): Sequential(
      (0): BottleneckBlock(
        (shortcut): Conv2d(
          512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
        (conv1): Conv2d(
          512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (1): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (2): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (3): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (4): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (5): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
    )
  )
  (proposal_generator): RPN(
    (anchor_generator): DefaultAnchorGenerator(
      (cell_anchors): BufferList()
    )
    (rpn_head): StandardRPNHead(
      (conv): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (objectness_logits): Conv2d(1024, 15, kernel_size=(1, 1), stride=(1, 1))
      (anchor_deltas): Conv2d(1024, 60, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (roi_heads): Res5ROIHeads(
    (pooler): ROIPooler(
      (level_poolers): ModuleList(
        (0): ROIAlign(output_size=(14, 14), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
      )
    )
    (res5): Sequential(
      (0): BottleneckBlock(
        (shortcut): Conv2d(
          1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        )
        (conv1): Conv2d(
          1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv2): Conv2d(
          512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv3): Conv2d(
          512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        )
      )
      (1): BottleneckBlock(
        (conv1): Conv2d(
          2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv2): Conv2d(
          512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv3): Conv2d(
          512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        )
      )
      (2): BottleneckBlock(
        (conv1): Conv2d(
          2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv2): Conv2d(
          512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv3): Conv2d(
          512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        )
      )
    )
    (box_predictor): FastRCNNOutputLayers(
      (cls_score): Linear(in_features=2048, out_features=21, bias=True)
      (bbox_pred): Linear(in_features=2048, out_features=4, bias=True)
    )
  )
)
Traceback (most recent call last):
  File "tools/train_net.py", line 161, in <module>
    args=(args,),
  File "/home/yupeng/IODML/iOD/detectron2/engine/launch.py", line 49, in launch
    daemon=False,
  File "/home/yupeng/anaconda3/envs/IODML/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
    while not spawn_context.join():
  File "/home/yupeng/anaconda3/envs/IODML/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 118, in join
    raise Exception(msg)
Exception: 

-- Process 2 terminated with the following error:
Traceback (most recent call last):
  File "/home/yupeng/anaconda3/envs/IODML/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/home/yupeng/IODML/iOD/detectron2/engine/launch.py", line 84, in _distributed_worker
    main_func(*args)
  File "/home/yupeng/IODML/iOD/tools/train_net.py", line 143, in main
    trainer = Trainer(cfg)
  File "/home/yupeng/IODML/iOD/detectron2/engine/defaults.py", line 296, in __init__
    self.image_store = torch.load(f)
  File "/home/yupeng/anaconda3/envs/IODML/lib/python3.6/site-packages/torch/serialization.py", line 426, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/home/yupeng/anaconda3/envs/IODML/lib/python3.6/site-packages/torch/serialization.py", line 620, in _load
    deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 8 more bytes. The file might be corrupted.

(IODML) yupeng@compute01:~/IODML/iOD$ 

The f in "self.image_store = torch.load(f)" refers to the file "./15_p_5.pth".
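
An "unexpected EOF" from torch.load almost always means the .pth file on disk is truncated, for example by an interrupted copy or download, or by a run that was killed while the image store was being written. A quick way to confirm this before relaunching multi-GPU training is to deserialize the file in isolation. The sketch below is not part of the repository; it simply reuses the path mentioned above and reports whether the file loads cleanly.

import os
import torch

# Path taken from the report above; adjust it if your image store lives elsewhere.
store_path = "./15_p_5.pth"
print("File size on disk:", os.path.getsize(store_path), "bytes")

try:
    # Same call the trainer makes; map_location avoids needing a GPU for this check.
    image_store = torch.load(store_path, map_location="cpu")
    print("Image store deserialized successfully:", type(image_store))
except RuntimeError as err:
    # "unexpected EOF" here confirms the file is truncated or corrupted;
    # regenerate (or re-download) it rather than retrying training.
    print("Image store appears to be corrupted:", err)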

Box regression deltas become infinite or NaN

[05/05 09:41:28 d2.engine.train_loop]: Starting training from iteration 0
[05/05 09:41:47 d2.utils.events]: eta: 4:25:15 iter: 19 total_loss: 1.521 loss_cls: 0.634 loss_box_reg: 0.112 loss_rpn_cls: 0.673 loss_rpn_loc: 0.145 time: 0.9152 data_time: 0.0076 lr: 0.000954 max_mem: 4685M size_of_ImageStore: N/A
[05/05 09:42:05 d2.utils.events]: eta: 4:28:50 iter: 39 total_loss: 0.875 loss_cls: 0.216 loss_box_reg: 0.119 loss_rpn_cls: 0.398 loss_rpn_loc: 0.139 time: 0.9152 data_time: 0.0133 lr: 0.001953 max_mem: 4685M size_of_ImageStore: N/A
[05/05 09:42:24 d2.utils.events]: eta: 4:30:12 iter: 59 total_loss: 0.744 loss_cls: 0.173 loss_box_reg: 0.110 loss_rpn_cls: 0.278 loss_rpn_loc: 0.170 time: 0.9189 data_time: 0.0046 lr: 0.002952 max_mem: 4685M size_of_ImageStore: N/A
[05/05 09:42:37 d2.engine.hooks]: Overall training speed: 72 iterations in 0:01:06 (0.9225 s / it)
[05/05 09:42:37 d2.engine.hooks]: Total training time: 0:01:07 (0:00:00 on hooks)
Traceback (most recent call last):
  File "tools/train_net.py", line 162, in <module>
    args=(args,),
  File "/data/yu/code/iOD/detectron2/engine/launch.py", line 49, in launch
    daemon=False,
  File "/root/anaconda3/envs/iOD/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
    while not spawn_context.join():
  File "/root/anaconda3/envs/iOD/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 118, in join
    raise Exception(msg)
Exception:

-- Process 3 terminated with the following error:
Traceback (most recent call last):
  File "/root/anaconda3/envs/iOD/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/data/yu/code/iOD/detectron2/engine/launch.py", line 84, in _distributed_worker
    main_func(*args)
  File "/data/yu/code/iOD/tools/train_net.py", line 150, in main
    return trainer.train()
  File "/data/yu/code/iOD/detectron2/engine/defaults.py", line 406, in train
    super().train(self.start_iter, self.max_iter)
  File "/data/yu/code/iOD/detectron2/engine/train_loop.py", line 152, in train
    self.run_step()
  File "/data/yu/code/iOD/detectron2/engine/train_loop.py", line 281, in run_step
    loss_dict = self.model(data)
  File "/root/anaconda3/envs/iOD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/anaconda3/envs/iOD/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 447, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/root/anaconda3/envs/iOD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/yu/code/iOD/detectron2/modeling/meta_arch/rcnn.py", line 179, in forward
    proposals, proposal_losses = self.proposal_generator(images, features, gt_instances)
  File "/root/anaconda3/envs/iOD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/yu/code/iOD/detectron2/modeling/proposal_generator/rpn.py", line 201, in forward
    outputs.predict_proposals(),
  File "/data/yu/code/iOD/detectron2/modeling/proposal_generator/rpn_outputs.py", line 422, in predict_proposals
    pred_anchor_deltas_i, anchors_i.tensor
  File "/data/yu/code/iOD/detectron2/modeling/box_regression.py", line 79, in apply_deltas
    assert torch.isfinite(deltas).all().item(), "Box regression deltas become infinite or NaN!"
AssertionError: Box regression deltas become infinite or NaN!
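
In most reports, this assertion fires because the loss diverged a few iterations earlier rather than because of a bug in apply_deltas itself; a learning rate that is too high for the effective batch size (for example, running the provided schedule on a different number of GPUs) or degenerate ground-truth boxes are the usual suspects, so lowering SOLVER.BASE_LR or extending SOLVER.WARMUP_ITERS is a reasonable first experiment. The snippet below is only an illustrative, self-contained reproduction of the finiteness check with synthetic tensors; it is not the repository's code.

import torch

def check_deltas(deltas: torch.Tensor) -> None:
    # Mirrors the guard in detectron2's box-delta decoding: once any predicted
    # delta is NaN/Inf, decoding proposals from it is meaningless.
    if not torch.isfinite(deltas).all():
        bad = (~torch.isfinite(deltas)).sum().item()
        raise AssertionError(
            f"{bad} box-regression deltas are NaN/Inf; "
            "the loss most likely diverged a few iterations earlier."
        )

check_deltas(torch.randn(8, 4))  # healthy deltas pass silently

try:
    check_deltas(torch.tensor([[0.1, 0.2, float("nan"), 0.4]]))
except AssertionError as err:
    print("Caught the same failure mode as the traceback above:", err)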

Base40 results not good

I cannot reopen issue #12.
When I train only the first 40 base classes with the config file "warp_faster_rcnn_R_50_C4_1x.yaml", the AP50 is only about 30%, which does not seem reasonable.
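
An AP50 of about 30% on the base classes is low enough that a configuration mismatch is worth ruling out before anything else; common culprits are a solver schedule tuned for a different GPU count (IMS_PER_BATCH, BASE_LR, MAX_ITER) or a wrong class range, as hinted by the "Invalid class range: []" message in the log above. The sketch below is not from the repository: it just dumps the relevant fields of whichever YAML was trained with, assuming the detectron2 packaged with this repo is importable; the config path shown is illustrative.

from detectron2.config import get_cfg

cfg = get_cfg()
# Illustrative path; point this at the warp_faster_rcnn_R_50_C4_1x.yaml actually used.
cfg.merge_from_file("configs/warp_faster_rcnn_R_50_C4_1x.yaml")

# Print the settings that most often explain a large AP50 gap.
for key in ("MODEL.ROI_HEADS.NUM_CLASSES",
            "SOLVER.IMS_PER_BATCH",
            "SOLVER.BASE_LR",
            "SOLVER.MAX_ITER"):
    node = cfg
    for part in key.split("."):
        node = node[part]
    print(key, "=", node)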
