
dronedetectron2's Introduction

Hi there 👋


dronedetectron2's People

Contributors

akhilpm, chihyaoma, ycliu93


dronedetectron2's Issues

FPS measuring

As I analyse your paper, your model should have a better FPS than other, more complex models such as GLSAN, but you didn't report any timing numbers. How can I measure your trained model's inference time and FPS?

I would appreciate any help. Thanks.
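In case it helps, this is the timing harness I tried, assuming a plain detectron2 DefaultPredictor (the repo's CropRCNN meta-architecture and its custom config keys would need to be registered first; the config and weight paths below are placeholders):

import time

import cv2
import torch
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file("configs/RCNN-FPN-CROP.yaml")  # placeholder; the repo's custom keys must be added to cfg first
cfg.MODEL.WEIGHTS = "outputs_FPN_CROP_VisDrone/model_final.pth"  # placeholder
predictor = DefaultPredictor(cfg)

image = cv2.imread("sample.jpg")
for _ in range(5):  # warm-up so CUDA kernels are compiled and cached
    predictor(image)

n_runs = 50
torch.cuda.synchronize()  # assumes a CUDA GPU
start = time.perf_counter()
for _ in range(n_runs):
    predictor(image)
torch.cuda.synchronize()  # wait for pending GPU work before stopping the clock
elapsed = time.perf_counter() - start
print(f"{elapsed / n_runs * 1000:.1f} ms/image, {n_runs / elapsed:.1f} FPS")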

Inquiry Regarding dataset_name + "_filenames.txt" Files in Your Code

I am currently in the process of reproducing your code. During this process, I encountered a question regarding the dataseed folder in your codebase, which contains files named dataset_name + "_filenames.txt".
I am unsure about the purpose and generation method of these files and would greatly appreciate your guidance on the matter.

1. Are these dataset_name + "_filenames.txt" files meant to be manually created by me, or does your code generate them automatically?
2. If I am to create them manually, what content should they contain? For instance, should they include image file names or file paths? What is the expected format of the content?
3. If they are generated by the code, could you please direct me to the script responsible for their generation? Additionally, how should I go about running this script?

Thank you very much for your patience and assistance. I am looking forward to your response.
Best regards,
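P.S. To clarify my question: this is what I would generate under my own guess that the files simply list one image file name per line, taken from the COCO-style annotation json (the format your code actually expects may differ):

import json

# Hypothetical generator; the paths and the one-name-per-line format are guesses.
with open("annotations_VisDrone_train.json") as f:
    coco = json.load(f)

with open("visdrone_2019_train_filenames.txt", "w") as out:
    for img in coco["images"]:
        out.write(img["file_name"] + "\n")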

train with other detectors

Hello,
as you mentioned in your paper, it is possible to train your method with any detector. How can I do that? Specifically, how would I train YOLO with your method?

Thank you.

requirements

Hello,

after installing PyTorch and building detectron2, which other packages should be installed? I got this error:
cannot import name 'bbox_inside_old' from 'utils.box_utils'

It seems there is no function called "bbox_inside_old" in box_utils.py. I also could not find a requirements.txt or a setup.py.
Thank you for your response.
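As a temporary workaround I wrote the stand-in below, guessing from the name that the function returns a mask of the boxes fully contained in a region; the real utils.box_utils function may well have different semantics:

import numpy as np

def bbox_inside_old(boxes, region):
    # Guessed stand-in: boolean mask of which (x1, y1, x2, y2) boxes lie
    # fully inside the (x1, y1, x2, y2) region. May not match the original.
    boxes = np.asarray(boxes)
    x1, y1, x2, y2 = region
    return ((boxes[:, 0] >= x1) & (boxes[:, 1] >= y1)
            & (boxes[:, 2] <= x2) & (boxes[:, 3] <= y2))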

result

Hello, I downloaded the R-50.pkl pre-trained weights (red box) from detectron2's MODEL_ZOO.md. My Base-RCNN-FPN.yaml, RCNN-FPN-CROP.yaml, and the experimental results are as follows:
[screenshot: R-50.pkl entry in MODEL_ZOO.md]

Base-RCNN-FPN.yaml:
MODEL:
  META_ARCHITECTURE: "CropRCNN"
  WEIGHTS: "/root/data/yhj/detectron2/pretrained_models/R-50.pkl"
  BACKBONE:
    NAME: "build_retinanet_resnet_fpn_backbone"
  RESNETS:
    OUT_FEATURES: ["res3", "res4", "res5"]
    DEPTH: 50
  FPN:
    IN_FEATURES: ["res3", "res4", "res5"]
  ANCHOR_GENERATOR:
    SIZES: [[32], [64], [128], [256], [512]]  # One size for each in feature map
    ASPECT_RATIOS: [[0.5, 1.0, 2.0]]  # Three aspect ratios (same for all in feature maps)
  RPN:
    IN_FEATURES: ["p3", "p4", "p5", "p6", "p7"]
    PRE_NMS_TOPK_TRAIN: 2000  # Per FPN level
    PRE_NMS_TOPK_TEST: 1000  # Per FPN level
    # Detectron1 uses 2000 proposals per-batch,
    # (See "modeling/rpn/rpn_outputs.py" for details of this legacy issue)
    # which is approximately 1000 proposals per-image since the default batch size for FPN is 2.
    POST_NMS_TOPK_TRAIN: 1000
    POST_NMS_TOPK_TEST: 1000
  ROI_HEADS:
    NAME: "StandardROIHeads"
    IN_FEATURES: ["p3", "p4", "p5", "p6", "p7"]
    NUM_CLASSES: 10
    SCORE_THRESH_TEST: 0.001
    NMS_THRESH_TEST: 0.5
    IOU_THRESHOLDS: [0.5]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_FC: 2
    POOLER_RESOLUTION: 7
  ROI_MASK_HEAD:
    NAME: "MaskRCNNConvUpsampleHead"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
DATASETS:
  TRAIN: ("visdrone_2019_train",)
  TEST: ("visdrone_2019_val",)
DATALOADER:
  NUM_WORKERS: 2
SOLVER:
  IMS_PER_BATCH: 8
  BASE_LR: 0.01
  STEPS: (50000, 70000)
  MAX_ITER: 90000
  CHECKPOINT_PERIOD: 3000
  CLIP_GRADIENTS:
    ENABLED: True
    CLIP_TYPE: "norm"
    CLIP_VALUE: 35.0
INPUT:
  MIN_SIZE_TRAIN: (800, 900, 1000, 1100, 1200)
  MAX_SIZE_TRAIN: 1999
  MIN_SIZE_TEST: 1200
  MAX_SIZE_TEST: 1999
VERSION: 2
TEST:
  EVAL_PERIOD: 3000
  DETECTIONS_PER_IMAGE: 800
CROPTRAIN:
  USE_CROPS: True

RCNN-FPN-CROP.yaml:
MODEL:
  META_ARCHITECTURE: "CropRCNN"
  WEIGHTS: "/root/data/yhj/detectron2/pretrained_models/R-50.pkl"
  BACKBONE:
    NAME: "build_retinanet_resnet_fpn_backbone"
  RESNETS:
    OUT_FEATURES: ["res3", "res4", "res5"]
    DEPTH: 50
  FPN:
    IN_FEATURES: ["res3", "res4", "res5"]
  ANCHOR_GENERATOR:
    SIZES: [[32], [64], [128], [256], [512]]  # One size for each in feature map
    ASPECT_RATIOS: [[0.5, 1.0, 2.0]]  # Three aspect ratios (same for all in feature maps)
  RPN:
    IN_FEATURES: ["p3", "p4", "p5", "p6", "p7"]
    PRE_NMS_TOPK_TRAIN: 2000  # Per FPN level
    PRE_NMS_TOPK_TEST: 1000  # Per FPN level
    # Detectron1 uses 2000 proposals per-batch,
    # (See "modeling/rpn/rpn_outputs.py" for details of this legacy issue)
    # which is approximately 1000 proposals per-image since the default batch size for FPN is 2.
    POST_NMS_TOPK_TRAIN: 1000
    POST_NMS_TOPK_TEST: 1000
  ROI_HEADS:
    NAME: "StandardROIHeads"
    IN_FEATURES: ["p3", "p4", "p5", "p6", "p7"]
    NUM_CLASSES: 10
    SCORE_THRESH_TEST: 0.001
    NMS_THRESH_TEST: 0.5
    IOU_THRESHOLDS: [0.5]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_FC: 2
    POOLER_RESOLUTION: 7
  ROI_MASK_HEAD:
    NAME: "MaskRCNNConvUpsampleHead"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
DATASETS:
  TRAIN: ("visdrone_2019_train",)
  TEST: ("visdrone_2019_val",)
DATALOADER:
  NUM_WORKERS: 2
SOLVER:
  IMS_PER_BATCH: 8
  BASE_LR: 0.01
  STEPS: (50000, 70000)
  MAX_ITER: 90000
  CHECKPOINT_PERIOD: 3000
  CLIP_GRADIENTS:
    ENABLED: True
    CLIP_TYPE: "norm"
    CLIP_VALUE: 35.0
INPUT:
  MIN_SIZE_TRAIN: (800, 900, 1000, 1100, 1200)
  MAX_SIZE_TRAIN: 1999
  MIN_SIZE_TEST: 1200
  MAX_SIZE_TEST: 1999
VERSION: 2
TEST:
  EVAL_PERIOD: 3000
  DETECTIONS_PER_IMAGE: 800
CROPTRAIN:
  USE_CROPS: True

Experimental results:
[screenshot of experimental results]

My experimental results are still about 2 points different from those reported in the original paper. What is the reason for this?

evaluation per epoch

Hello,
although I know this error may not be related to the implementation, after spending too much time on it I would appreciate your help if you know the cause.
I get this error at the per-epoch evaluation, after 3000 iterations:

Fast COCO eval is not built. Falling back to official COCO eval.
INFO:croptrain.engine.inference:Start inference on 274 batches
Traceback (most recent call last):
File "train_net.py", line 94, in
launch(
File "/media/2TB_1/Miniconda/conda/envs/Detectron2/lib/python3.8/site-packages/detectron2/engine/launch.py", line 69, in launch
mp.start_processes(
File "/home/moghaddami/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/home/moghaddami/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/home/moghaddami/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/media/2TB_1/Miniconda/conda/envs/Detectron2/lib/python3.8/site-packages/detectron2/engine/launch.py", line 123, in _distributed_worker
main_func(*args)
File "/media/2TB_1/CZDet/DroneDetectron2/train_net.py", line 85, in main
return trainer.train()
File "/media/2TB_1/Miniconda/conda/envs/Detectron2/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 484, in train
super().train(self.start_iter, self.max_iter)
File "/media/2TB_1/Miniconda/conda/envs/Detectron2/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 156, in train
self.after_step()
File "/media/2TB_1/Miniconda/conda/envs/Detectron2/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 190, in after_step
h.after_step()
File "/media/2TB_1/Miniconda/conda/envs/Detectron2/lib/python3.8/site-packages/detectron2/engine/hooks.py", line 556, in after_step
self._do_eval()
File "/media/2TB_1/Miniconda/conda/envs/Detectron2/lib/python3.8/site-packages/detectron2/engine/hooks.py", line 529, in _do_eval
results = self._func()
File "/media/2TB_1/CZDet/DroneDetectron2/croptrain/engine/trainer.py", line 233, in test_and_save_results
self._last_eval_results = self.test_crop(self.cfg, self.model, self.iter)
File "/media/2TB_1/CZDet/DroneDetectron2/croptrain/engine/trainer.py", line 277, in test_crop
results_i = inference_with_crops(model, data_loader, evaluator, cfg, iter)
File "/media/2TB_1/CZDet/DroneDetectron2/croptrain/engine/inference.py", line 67, in inference_with_crops
all_outputs = model(inputs, infer_on_crops=True, cfg=cfg)
File "/home/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 885, in forward
inputs, kwargs = self.to_kwargs(inputs, kwargs, self.device_ids[0])
File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 993, in to_kwargs
kwargs = self._recursive_to(kwargs, device_id) if kwargs else []
File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 986, in _recursive_to
res = to_map(inputs)
File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 981, in to_map
return [type(obj)(i) for i in zip(*map(to_map, obj.items()))]
File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 977, in to_map
return list(zip(*map(to_map, obj)))
File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 981, in to_map
return [type(obj)(i) for i in zip(*map(to_map, obj.items()))]
File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 977, in to_map
return list(zip(*map(to_map, obj)))
File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 981, in to_map
return [type(obj)(i) for i in zip(*map(to_map, obj.items()))]
File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 977, in to_map
return list(zip(*map(to_map, obj)))
File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 981, in to_map
return [type(obj)(i) for i in zip(*map(to_map, obj.items()))]
File "/home/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 981, in
return [type(obj)(i) for i in zip(*map(to_map, obj.items()))]
File "/media/2TB_1/Miniconda/conda/envs/Detectron2/lib/python3.8/site-packages/yacs/config.py", line 86, in init
init_dict = self._create_config_tree_from_dict(init_dict, key_list)
File "/media/2TB_1/Miniconda/conda/envs/Detectron2/lib/python3.8/site-packages/yacs/config.py", line 123, in _create_config_tree_from_dict
for k, v in dic.items():
AttributeError: 'tuple' object has no attribute 'items'
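Update: the failure is inside DDP's kwargs scattering (_recursive_to), which cannot handle the yacs CfgNode passed as cfg=cfg. A possible workaround I am testing (not verified against this repo) is to unwrap the DDP module before the crop-inference call:

from torch.nn.parallel import DistributedDataParallel

def unwrap_ddp(model):
    # Return the underlying module so forward kwargs (e.g. a CfgNode)
    # are not scattered by DDP during evaluation.
    return model.module if isinstance(model, DistributedDataParallel) else model

# Hypothetical use inside croptrain/engine/inference.py:
#     all_outputs = unwrap_ddp(model)(inputs, infer_on_crops=True, cfg=cfg)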

error while loading images

Hello,

I'm using your implementation to train on VisDrone with the RCNN-FPN-CROP.yaml config file.
After the annotations are read successfully, I get the error below, even though I'm sure my data directory is laid out as you describe in your repository. It happens just before training starts.
Please help me. Thank you.

Traceback (most recent call last):
File "train_net.py", line 94, in
launch(
File "../Detectron2/lib/python3.8/site-packages/detectron2/engine/launch.py", line 84, in launch
main_func(*args)
File "train_net.py", line 85, in main
return trainer.train()
File "../Detectron2/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 484, in train
super().train(self.start_iter, self.max_iter)
File "../Detectron2/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 155, in train
self.run_step()
File "../DroneDetectron2/croptrain/engine/trainer.py", line 135, in run_step
data = next(self._trainer._data_loader_iter)
File "../Detectron2/lib/python3.8/site-packages/detectron2/data/common.py", line 291, in iter
for d in self.dataset:
File "../.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in next
data = self._next_data()
File "/home/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/home/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/home/.local/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
raise exception
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/.local/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
data.append(next(self.dataset_iter))
File "../Detectron2/lib/python3.8/site-packages/detectron2/data/common.py", line 258, in iter
yield self.dataset[idx]
File "../Detectron2/lib/python3.8/site-packages/detectron2/data/common.py", line 95, in getitem
data = self._map_func(self._dataset[cur_idx])
File "../Detectron2/lib/python3.8/site-packages/detectron2/utils/serialize.py", line 26, in call
return self._obj(*args, **kwargs)
File "../DroneDetectron2/croptrain/data/dataset_mapper.py", line 62, in call
image = read_image(dataset_dict)
File "../DroneDetectron2/croptrain/data/detection_utils.py", line 18, in read_image
if len(image.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'

My dataset directory:
[screenshot of directory layout]
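Update: the 'NoneType' object has no attribute 'shape' error typically means the image file could not be read (e.g. cv2.imread returning None for a bad path). A quick sanity check like this (paths are examples) usually finds the culprit:

import json
import os

import cv2

with open("annotations_VisDrone_train.json") as f:  # example path
    coco = json.load(f)

image_root = "dataset/VisDrone/train/images"  # example root
for img in coco["images"]:
    path = os.path.join(image_root, img["file_name"])
    if cv2.imread(path) is None:  # None means missing or unreadable
        print("unreadable or missing:", path)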

train with RetinaNet

Hello,
is RetinaNet-ResNet.yaml the complete config file for cascade zoom-in training on RetinaNet?

Some issues with experimentation and visualization

Hello, excuse me: in your paper CZ Det., is the baseline in Table 1 the result with crops set to false? If I want to state it clearly, can I just write it as Faster R-CNN? If not, which algorithm should I write? Also, what command should I enter when I want to visualize the results? Thank you for your answer and for your hard work.

error

Hello, I haven't been able to solve this error for a long time. I used the DOTA dataset json file you provided for training, and the following error occurred. What is the cause of this? Thank you.
ERROR [03/20 19:26:58 d2.engine.train_loop]: Exception during training:
Traceback (most recent call last):
File "/root/siton-data-zhangmingData/YHJ/detectron2/detectron2/detectron2/engine/train_loop.py", line 156, in train
self.after_step()
File "/root/siton-data-zhangmingData/YHJ/detectron2/detectron2/detectron2/engine/train_loop.py", line 190, in after_step
h.after_step()
File "/root/siton-data-zhangmingData/YHJ/detectron2/detectron2/detectron2/engine/hooks.py", line 556, in after_step
self._do_eval()
File "/root/siton-data-zhangmingData/YHJ/detectron2/detectron2/detectron2/engine/hooks.py", line 529, in _do_eval
results = self._func()
File "/root/siton-data-zhangmingData/YHJ/detectron2/croptrain/engine/trainer.py", line 233, in test_and_save_results
self._last_eval_results = self.test_crop(self.cfg, self.model, self.iter)
File "/root/siton-data-zhangmingData/YHJ/detectron2/croptrain/engine/trainer.py", line 275, in test_crop
results_i = inference_dota(model, data_loader, evaluator, cfg, iter)
File "/root/siton-data-zhangmingData/YHJ/detectron2/croptrain/engine/inference_tile.py", line 66, in inference_dota
for idx, inputs in enumerate(data_loader):
File "/root/siton-data-zhangmingData/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in next
data = self._next_data()
File "/root/siton-data-zhangmingData/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1183, in _next_data
return self._process_data(data)
File "/root/siton-data-zhangmingData/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/root/siton-data-zhangmingData/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/root/siton-data-zhangmingData/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/root/siton-data-zhangmingData/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/root/siton-data-zhangmingData/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/root/siton-data-zhangmingData/YHJ/detectron2/detectron2/detectron2/data/common.py", line 95, in getitem
data = self._map_func(self._dataset[cur_idx])
File "/root/siton-data-zhangmingData/YHJ/detectron2/detectron2/detectron2/utils/serialize.py", line 26, in call
return self._obj(*args, **kwargs)
File "/root/siton-data-zhangmingData/YHJ/detectron2/croptrain/data/dataset_mapper.py", line 62, in call
image = read_image(dataset_dict)
File "/root/siton-data-zhangmingData/YHJ/detectron2/croptrain/data/detection_utils.py", line 18, in read_image
if len(image.shape) == 2:
AttributeError: 'NoneType' object has no attribute 'shape'

Problems during reimplementation

First of all, thank you very much for your excellent work. I've been trying to replicate your results, but I've encountered some issues, and I hope you can help me with them. Firstly, the results I obtained during replication did not match the results mentioned in the paper (AP 33.02, AP50 57.87, AP75 33.09), and there is a significant difference. Below are the training and testing commands I used:

Training:

python train_net.py --num-gpus 1 --config-file configs/RCNN-FPN-CROP.yaml OUTPUT_DIR outputs_FPN_CROP_VisDrone

Testing:

python train_net.py --eval-only --num-gpus 1 --config-file configs/RCNN-FPN-CROP.yaml MODEL.WEIGHTS <your weight>.pth

I hope you can point out any mistakes I may have made. Thanks again.

voc2coco

Hello, when I used the voc2coco.py file you provided to convert the dataset into a json file, three xml files (0000293_03401_d_0000939.xml, 9999985_00000_d_0000020.xml, and 9999999_00590_d_0000267.xml) trigger the assert xmax > xmin / assert ymax > ymin errors. How did you solve this? Thank you.
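One way to work around it (a hypothetical patch; the variable names in the real voc2coco.py may differ) is to skip degenerate boxes instead of asserting:

def is_valid_box(xmin, ymin, xmax, ymax):
    # Degenerate boxes (zero or negative width/height) are what trip the
    # assert xmax > xmin / assert ymax > ymin in voc2coco.py.
    return xmax > xmin and ymax > ymin

# Example: the second box has zero width and is dropped.
boxes = [(10, 10, 50, 50), (30, 30, 30, 80)]
print([b for b in boxes if is_valid_box(*b)])  # [(10, 10, 50, 50)]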

train FCOS

Hello
I tried to train FCOS but I get this error:

INFO:croptrain.data.datasets.visdrone:Loaded 548 images in COCO format from /home/mahilamoghadami.mut/CZDet/dataset/VisDrone/annotations_VisDrone_val.json
INFO:croptrain.engine.inference_fcos:Start inference on 548 batches
INFO:croptrain.engine.inference_fcos:Inference done 11/548. Dataloading: 0.0010 s/iter. Inference: 0.1016 s/iter. Eval: 0.0006 s/iter. Total: 0.1032 s/iter. ETA=0:00:55
INFO:croptrain.engine.inference_fcos:Inference done 59/548. Dataloading: 0.0017 s/iter. Inference: 0.1027 s/iter. Eval: 0.0006 s/iter. Total: 0.1051 s/iter. ETA=0:00:51
INFO:croptrain.engine.inference_fcos:Inference done 107/548. Dataloading: 0.0020 s/iter. Inference: 0.1026 s/iter. Eval: 0.0006 s/iter. Total: 0.1053 s/iter. ETA=0:00:46
INFO:croptrain.engine.inference_fcos:Inference done 155/548. Dataloading: 0.0020 s/iter. Inference: 0.1026 s/iter. Eval: 0.0006 s/iter. Total: 0.1052 s/iter. ETA=0:00:41
INFO:croptrain.engine.inference_fcos:Inference done 202/548. Dataloading: 0.0020 s/iter. Inference: 0.1027 s/iter. Eval: 0.0010 s/iter. Total: 0.1057 s/iter. ETA=0:00:36
INFO:croptrain.engine.inference_fcos:Inference done 249/548. Dataloading: 0.0020 s/iter. Inference: 0.1031 s/iter. Eval: 0.0009 s/iter. Total: 0.1061 s/iter. ETA=0:00:31
INFO:croptrain.engine.inference_fcos:Inference done 296/548. Dataloading: 0.0020 s/iter. Inference: 0.1032 s/iter. Eval: 0.0009 s/iter. Total: 0.1062 s/iter. ETA=0:00:26
INFO:croptrain.engine.inference_fcos:Inference done 344/548. Dataloading: 0.0020 s/iter. Inference: 0.1033 s/iter. Eval: 0.0008 s/iter. Total: 0.1061 s/iter. ETA=0:00:21
INFO:croptrain.engine.inference_fcos:Inference done 392/548. Dataloading: 0.0020 s/iter. Inference: 0.1033 s/iter. Eval: 0.0008 s/iter. Total: 0.1061 s/iter. ETA=0:00:16
INFO:croptrain.engine.inference_fcos:Inference done 439/548. Dataloading: 0.0020 s/iter. Inference: 0.1034 s/iter. Eval: 0.0011 s/iter. Total: 0.1066 s/iter. ETA=0:00:11
INFO:croptrain.engine.inference_fcos:Inference done 486/548. Dataloading: 0.0021 s/iter. Inference: 0.1035 s/iter. Eval: 0.0010 s/iter. Total: 0.1066 s/iter. ETA=0:00:06
INFO:croptrain.engine.inference_fcos:Inference done 533/548. Dataloading: 0.0021 s/iter. Inference: 0.1035 s/iter. Eval: 0.0010 s/iter. Total: 0.1067 s/iter. ETA=0:00:01
INFO:croptrain.engine.inference_fcos:Total inference time: 0:00:57.919741 (0.106666 s / iter per device, on 1 devices)
INFO:croptrain.engine.inference_fcos:Total inference pure compute time: 0:00:56 (0.103500 s / iter per device, on 1 devices)
Traceback (most recent call last):
File "train_fcos.py", line 89, in
launch(
File "/home/mahilamoghadami.mut/miniconda3/envs/CZDET/lib/python3.8/site-packages/detectron2/engine/launch.py", line 82, in launch
main_func(*args)
File "train_fcos.py", line 80, in main
return trainer.train()
File "/home/mahilamoghadami.mut/miniconda3/envs/CZDET/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 484, in train
super().train(self.start_iter, self.max_iter)
File "/home/mahilamoghadami.mut/miniconda3/envs/CZDET/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 150, in train
self.after_step()
File "/home/mahilamoghadami.mut/miniconda3/envs/CZDET/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 180, in after_step
h.after_step()
File "/home/mahilamoghadami.mut/miniconda3/envs/CZDET/lib/python3.8/site-packages/detectron2/engine/hooks.py", line 552, in after_step
self._do_eval()
File "/home/mahilamoghadami.mut/miniconda3/envs/CZDET/lib/python3.8/site-packages/detectron2/engine/hooks.py", line 525, in _do_eval
results = self._func()
File "/home/mahilamoghadami.mut/CZDet/DroneDetectron2/croptrain/engine/trainer_fcos.py", line 234, in test_and_save_results
self._last_eval_results = self.test_crop(self.cfg, self.model, self.iter)
File "/home/mahilamoghadami.mut/CZDet/DroneDetectron2/croptrain/engine/trainer_fcos.py", line 278, in test_crop
results_i = inference_fcos.inference_with_crops(model, data_loader, evaluator, cfg, iter)
File "/home/mahilamoghadami.mut/CZDet/DroneDetectron2/croptrain/engine/inference_fcos.py", line 124, in inference_with_crops
results = evaluator.evaluate()
File "/home/mahilamoghadami.mut/miniconda3/envs/CZDET/lib/python3.8/site-packages/detectron2/evaluation/coco_evaluation.py", line 194, in evaluate
self._eval_predictions(predictions, img_ids=img_ids)
File "/home/mahilamoghadami.mut/miniconda3/envs/CZDET/lib/python3.8/site-packages/detectron2/evaluation/coco_evaluation.py", line 228, in _eval_predictions
assert category_id < num_classes, (
AssertionError: A prediction has class=10, but the dataset only has 10 classes and predicted class id should be in [0, 9].

Meanwhile, I trained Faster R-CNN with your code without any error; the data structure, annotations, and classes are fine and were used without any changes compared to the Faster R-CNN training.

This is my training script:
python train_fcos.py --num-gpus 1 --config-file configs/FCOS-CROP.yaml OUTPUT_DIR outputs_FCOS_crop_pre

I would appreciate any help.
Thanks.

enquiry about stronger detector

@akhilpm Hi! Thanks for sharing this great work! I'm wondering whether you have tried using Cascade R-CNN as the detector to achieve stronger performance? I also note that some prior works, like FOCUS-AND-DETECT and AdaZoom, perform very well on VisDrone; have you compared against them, and could you please share some comments?

checkpoint

Please share your training checkpoints.
Thanks.

crop labeling and augmentation part

Hello Akhil,
based on your paper: "Then we augment the training set with the higher resolution version of the density crops, and the corresponding ground truth (GT) boxes of objects inside the crop."
I can't find this part of your pipeline in the released code. Could you please help me understand it?

I want to modify it; I have read and searched a lot in your code, but I didn't find it. I also didn't find the crop-labeling code; if it exists in this repository, please help me find it.

Thank you.

question

Hello, is there any difference between train_fcos.py and train_net.py in this repository?

DDP inference with crops

Hello, when I train the model with 'python train_net.py --num-gpus 4 --config-file configs/RCNN-FPN-CROP.yaml OUTPUT_DIR outputs_FPN_VisDrone', an error occurs!

Is it not possible to use multiple GPUs for parallel training?

Hello, every time I run the code there is a broken pipe error. What is the reason?

[07/19 10:41:47] fvcore.common.checkpoint INFO: Saving checkpoint to outputs_FPN_VisDrone/model_0009999.pth
[07/19 10:41:49] d2.data.common INFO: Serializing the dataset using: <class 'detectron2.data.common._TorchSerializedList'>
[07/19 10:41:49] d2.data.common INFO: Serializing 548 elements to byte tensors and concatenating them all ...
[07/19 10:41:49] d2.data.common INFO: Serialized dataset takes 1.44 MiB
[07/19 10:41:49] d2.data.dataset_mapper INFO: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(1200, 1200), max_size=1999, sample_style='choice')]
[07/19 10:41:49] d2.evaluation.evaluator INFO: Start inference on 548 batches
[07/19 10:41:51] d2.evaluation.evaluator INFO: Inference done 11/548. Dataloading: 0.0517 s/iter. Inference: 0.0492 s/iter. Eval: 0.0005 s/iter. Total: 0.1013 s/iter. ETA=0:00:54
[07/19 10:41:56] d2.evaluation.evaluator INFO: Inference done 71/548. Dataloading: 0.0457 s/iter. Inference: 0.0400 s/iter. Eval: 0.0004 s/iter. Total: 0.0861 s/iter. ETA=0:00:41
[07/19 10:42:01] d2.evaluation.evaluator INFO: Inference done 130/548. Dataloading: 0.0457 s/iter. Inference: 0.0394 s/iter. Eval: 0.0004 s/iter. Total: 0.0856 s/iter. ETA=0:00:35
[07/19 10:42:06] d2.evaluation.evaluator INFO: Inference done 191/548. Dataloading: 0.0452 s/iter. Inference: 0.0391 s/iter. Eval: 0.0004 s/iter. Total: 0.0848 s/iter. ETA=0:00:30
[07/19 10:42:11] d2.evaluation.evaluator INFO: Inference done 245/548. Dataloading: 0.0468 s/iter. Inference: 0.0395 s/iter. Eval: 0.0004 s/iter. Total: 0.0868 s/iter. ETA=0:00:26
[07/19 10:42:16] d2.evaluation.evaluator INFO: Inference done 297/548. Dataloading: 0.0481 s/iter. Inference: 0.0402 s/iter. Eval: 0.0004 s/iter. Total: 0.0888 s/iter. ETA=0:00:22
[07/19 10:42:21] d2.evaluation.evaluator INFO: Inference done 348/548. Dataloading: 0.0487 s/iter. Inference: 0.0407 s/iter. Eval: 0.0007 s/iter. Total: 0.0902 s/iter. ETA=0:00:18
[07/19 10:42:26] d2.evaluation.evaluator INFO: Inference done 399/548. Dataloading: 0.0492 s/iter. Inference: 0.0413 s/iter. Eval: 0.0007 s/iter. Total: 0.0912 s/iter. ETA=0:00:13
[07/19 10:42:31] d2.evaluation.evaluator INFO: Inference done 452/548. Dataloading: 0.0495 s/iter. Inference: 0.0416 s/iter. Eval: 0.0007 s/iter. Total: 0.0918 s/iter. ETA=0:00:08
[07/19 10:42:36] d2.evaluation.evaluator INFO: Inference done 503/548. Dataloading: 0.0498 s/iter. Inference: 0.0417 s/iter. Eval: 0.0010 s/iter. Total: 0.0925 s/iter. ETA=0:00:04
[07/19 10:42:40] d2.evaluation.evaluator INFO: Total inference time: 0:00:50.314425 (0.092660 s / iter per device, on 1 devices)
[07/19 10:42:40] d2.evaluation.evaluator INFO: Total inference pure compute time: 0:00:22 (0.041902 s / iter per device, on 1 devices)
[07/19 10:42:41] d2.evaluation.coco_evaluation INFO: Preparing results for COCO format ...
[07/19 10:42:41] d2.evaluation.coco_evaluation INFO: Saving results to outputs_FPN_VisDrone/inference/coco_instances_results.json
[07/19 10:42:42] d2.evaluation.coco_evaluation INFO: Evaluating predictions with unofficial COCO API...
[07/19 10:42:42] d2.engine.train_loop ERROR: Exception during training:
Traceback (most recent call last):
File "/root/detectron2/detectron2/engine/train_loop.py", line 156, in train
self.after_step()
File "/root/detectron2/detectron2/engine/train_loop.py", line 190, in after_step
h.after_step()
File "/root/detectron2/detectron2/engine/hooks.py", line 556, in after_step
self._do_eval()
File "/root/detectron2/detectron2/engine/hooks.py", line 529, in _do_eval
results = self._func()
File "/root/data/yhj/detectron2/croptrain/engine/trainer.py", line 238, in test_and_save_results
self._last_eval_results = self.test(self.cfg, self.model)
File "/root/detectron2/detectron2/engine/defaults.py", line 617, in test
results_i = inference_on_dataset(model, data_loader, evaluator)
File "/root/detectron2/detectron2/evaluation/evaluator.py", line 204, in inference_on_dataset
results = evaluator.evaluate()
File "/root/detectron2/detectron2/evaluation/coco_evaluation.py", line 206, in evaluate
self._eval_predictions(predictions, img_ids=img_ids)
File "/root/detectron2/detectron2/evaluation/coco_evaluation.py", line 266, in _eval_predictions
_evaluate_predictions_on_coco(
File "/root/detectron2/detectron2/evaluation/coco_evaluation.py", line 590, in _evaluate_predictions_on_coco
coco_dt = coco_gt.loadRes(coco_results)
File "/root/data/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/pycocotools/coco.py", line 316, in loadRes
print('Loading and preparing results...')
BrokenPipeError: [Errno 32] Broken pipe

dataset path for train

Hello,
thank you for sharing your implementation.
I want to train my model on my own data, which is a subset of the VisDrone dataset. Where should I set my dataset path?
I prepared the dataset with the structure you described, but I don't know where to set the path, and there is no field in the config file for it.

Thank you.
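In case it helps others: in detectron2-based code the dataset path is normally set in Python when the dataset is registered, not in the yaml config. If the repo does not already register these names for your paths, a sketch like this (paths are examples) makes the names in DATASETS.TRAIN/TEST resolvable:

from detectron2.data.datasets import register_coco_instances

# Example registration; adjust the json and image paths to your layout.
register_coco_instances(
    "visdrone_2019_train", {},
    "dataset/VisDrone/annotations_VisDrone_train.json",
    "dataset/VisDrone/train/images",
)
register_coco_instances(
    "visdrone_2019_val", {},
    "dataset/VisDrone/annotations_VisDrone_val.json",
    "dataset/VisDrone/val/images",
)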

dataset

Hello, the voc2coco file you provided can only convert xml files into json files, but the dataset only contains txt files and images after downloading. Do I need to convert the txt files into xml and then into json? If not, how should I solve this?
Also, how should I change the code if I want to use my own dataset?

voc2coco

Hello, could you share the json file of the DOTA dataset with me, or share your voc2coco file? I always get an error when converting the dataset using the link you gave me. Thank you for sharing.

reimplementation issues

Hello, when I use the checkpoint and datasets from this project and run the evaluation script, I only get 27.546 AP and 48.8 AP50; there is a certain gap with the paper. How can I solve this problem? Thanks.
[screenshot of evaluation output]

Some categories have no value:
[screenshot of per-category results]

NMS

Hello, I would like to ask: which part of the training process in your code involves NMS?

result question

[screenshot of results]
[screenshot of results]
Hello, I used the command python train_net.py --num-gpus 1 --config-file configs/RCNN-FPN-CROP.yaml OUTPUT_DIR outputs_FPN_CROP_VisDrone to reproduce the code, and the final result is quite different from the original paper; there is a big gap, and the AP obtained for the motor class is 0. What is the reason for this?

An error occurred during training

Thanks for your outstanding work and for open-sourcing it. When I tried to re-implement your work, an error occurred, shown below:


[08/17 16:19:02 d2.utils.env]: Using a generated random seed 2905847
Traceback (most recent call last):
File "train_net.py", line 95, in
launch(
File "/media/zz/CCD08CFED08CF04E/anaconda3/envs/detectron2/lib/python3.8/site-packages/detectron2/engine/launch.py", line 84, in launch
main_func(*args)
File "train_net.py", line 50, in main
data_dir = os.path.join(os.environ['SLURM_TMPDIR'], "VisDrone")
File "/media/zz/CCD08CFED08CF04E/anaconda3/envs/detectron2/lib/python3.8/os.py", line 675, in getitem
raise KeyError(key) from None
KeyError: 'SLURM_TMPDIR'


Could you please help me find why the error occurred and how to fix it? Looking forward to your reply!
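For reference, the KeyError comes from train_net.py reading the SLURM_TMPDIR environment variable, which only exists on SLURM clusters. Either export it before launching (export SLURM_TMPDIR=/path/to/data) or fall back to a local path, e.g. with a sketch like:

import os

# Hypothetical replacement for the failing line in train_net.py:
# use SLURM_TMPDIR when present, otherwise a local directory of your choice.
data_dir = os.path.join(os.environ.get("SLURM_TMPDIR", "./datasets"), "VisDrone")
print(data_dir)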

YOLO or Faster R-CNN config file

Hello,
I want to train a model with a YOLO or Faster R-CNN config file, but neither is in the configs folder.
Could you please help me, or point me to a guide for preparing such a config?
Thank you.

dataset annotation

Hello,
you have category-id = [0-11], which involves 12 classes. As far as I can tell, category-id = 0 is not related to the VisDrone dataset, so what is it? Is it the crowded-region bbox class?
In addition, there is no category name for class 0 in the categories part of the dataset annotation.
Thank you.

Pretrained model

Please provide the pretrained model (R-50.pkl) so that we can reproduce your results. Thanks.

checkpoint not found

Thank you for the great work! Can you provide the checkpoint file?

AssertionError: Checkpoint /home/akhil135/PhD/DroneDetectron2/pretrained_models/R-50.pkl not found!

number of epochs

Hello,
please tell me the number of iterations you set for training to achieve AP 58.3 on the VisDrone dataset.
I trained with the default number of epochs and got AP = 32.

This is my training command and the config file I used:
python train_net.py --num-gpus 1 --config-file configs/RCNN-FPN-CROP.yaml OUTPUT_DIR outputs_FPN_CROP_VisDrone

And this is the last part of my training log:
[screenshot of training log]
Thank you.

DOTA issue

Hello, after training on the DOTA dataset, why do the final results I get differ so much from the results in the paper? Can you help me analyze and answer this? If you need me to provide additional files, please let me know. Thank you.
[screenshot of results]

P2

Hello, in your experiments the best results were obtained by adding the P2 layer, but your configuration file gives:
RESNETS:
  OUT_FEATURES: ["res3", "res4", "res5"]
FPN:
  IN_FEATURES: ["res3", "res4", "res5"]
ANCHOR_GENERATOR:
  SIZES: [[32], [64], [128], [256], [512]]
RPN:
  IN_FEATURES: ["p3", "p4", "p5", "p6", "p7"]
ROI_HEADS:
  IN_FEATURES: ["p3", "p4", "p5", "p6", "p7"]
Do I need to change these to ["res2", "res3", "res4", "res5"], ["p2", "p3", "p4", "p5", "p6", "p7"], and [[16], [32], [64], [128], [256], [512]]? Thank you.
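For reference, the same edits expressed through detectron2's Python config API would look like the sketch below (my guess at the P2 variant; the anchor sizes are an assumption, the repo's extra config keys must already be added to cfg as train_net.py presumably does, and whether P6/P7 should be kept is for the author to confirm):

from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file("configs/RCNN-FPN-CROP.yaml")  # placeholder path
cfg.MODEL.RESNETS.OUT_FEATURES = ["res2", "res3", "res4", "res5"]
cfg.MODEL.FPN.IN_FEATURES = ["res2", "res3", "res4", "res5"]
cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[16], [32], [64], [128], [256], [512]]  # one size per level
cfg.MODEL.RPN.IN_FEATURES = ["p2", "p3", "p4", "p5", "p6", "p7"]
cfg.MODEL.ROI_HEADS.IN_FEATURES = ["p2", "p3", "p4", "p5", "p6", "p7"]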

Some questions about the results listed in Table 1 and Table 6 in the paper

Dear Author,

In the paper, Tables 1, 2, and 6 present the model’s test results on the VisDrone dataset. Regarding these results, I have two questions:

1. The paper mentions, "Following the existing works, we used the validation set for evaluating the performance." Does this imply that the results listed above are all based on the validation set, which consists of 548 pictures?
2. It is noted in the paper that a resolution of 1.5K pixels was adopted. Could you please clarify what specific resolution this refers to? Additionally, could you specify the input resolutions used for the results in Tables 1, 2, and 6?
I look forward to your response and appreciate your assistance. Best regards!

inference

Hello,
how can I run inference with the trained model on a sample image and visualize the output? Is this implemented in your GitHub repository?

Thank you.
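In the meantime, the plain detectron2 route works for a single image (a sketch with placeholder paths; the repo's crop-aware inference path, if exposed, may give better results):

import cv2
from detectron2.config import get_cfg
from detectron2.data import MetadataCatalog
from detectron2.engine import DefaultPredictor
from detectron2.utils.visualizer import Visualizer

cfg = get_cfg()
cfg.merge_from_file("configs/RCNN-FPN-CROP.yaml")  # the repo's custom keys must be registered first
cfg.MODEL.WEIGHTS = "outputs_FPN_CROP_VisDrone/model_final.pth"  # placeholder
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.3  # hide low-confidence boxes for display

predictor = DefaultPredictor(cfg)
image = cv2.imread("sample.jpg")
outputs = predictor(image)

# Visualizer expects RGB; OpenCV loads BGR, hence the channel flips.
vis = Visualizer(image[:, :, ::-1], MetadataCatalog.get("visdrone_2019_val"))
out = vis.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2.imwrite("prediction.jpg", out.get_image()[:, :, ::-1])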

DOTA datasets

Hello, when I run experiments on the DOTA dataset, the command I use is python train_net.py --num-gpus 1 --config-file configs/Dota-Base-RCNN-FPN.yaml OUTPUT_DIR outputs_FPN_DOTA. I first convert DOTA's txt annotations into xml, and then use voc2coco to convert them into a json file. The error after running is as follows:

[09/30 20:22:14 d2.data.common]: Serializing the dataset using: <class 'detectron2.data.common._TorchSerializedList'>
[09/30 20:22:14 d2.data.common]: Serializing 458 elements to byte tensors and concatenating them all ...
[09/30 20:22:14 d2.data.common]: Serialized dataset takes 1.10 MiB
[09/30 20:22:14 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(1200, 1200), max_size=1999, sample_style='choice')]
[09/30 20:22:14 d2.evaluation.coco_evaluation]: Fast COCO eval is not built. Falling back to official COCO eval.
INFO:croptrain.data.datasets.dota:Loaded 458 images in COCO format from /root/data/yhj/detectron2/croptrain/datasets/DOTA/annotations_DOTA_val.json
[09/30 20:22:15 d2.data.build]: Removed 2 images with no usable annotations. 456 images left.
INFO:croptrain.engine.inference_tile:Start inference on 458 batches
INFO:croptrain.engine.inference_tile:Inference done 11/458. Dataloading: 0.0843 s/iter. Inference: 0.8199 s/iter. Eval: 0.0012 s/iter. Total: 0.9055 s/iter. ETA=0:06:44
INFO:croptrain.engine.inference_tile:Inference done 77/458. Dataloading: 0.0894 s/iter. Inference: 0.8191 s/iter. Eval: 0.0032 s/iter. Total: 0.9117 s/iter. ETA=0:05:47
INFO:croptrain.engine.inference_tile:Inference done 152/458. Dataloading: 0.0912 s/iter. Inference: 0.7649 s/iter. Eval: 0.0031 s/iter. Total: 0.8594 s/iter. ETA=0:04:22
INFO:croptrain.engine.inference_tile:Inference done 185/458. Dataloading: 0.1015 s/iter. Inference: 0.9533 s/iter. Eval: 0.0029 s/iter. Total: 1.0578 s/iter. ETA=0:04:48
INFO:croptrain.engine.inference_tile:Inference done 193/458. Dataloading: 0.1113 s/iter. Inference: 1.2228 s/iter. Eval: 0.0035 s/iter. Total: 1.3377 s/iter. ETA=0:05:54
INFO:croptrain.engine.inference_tile:Inference done 197/458. Dataloading: 0.1228 s/iter. Inference: 1.5925 s/iter. Eval: 0.0035 s/iter. Total: 1.7188 s/iter. ETA=0:07:28
INFO:croptrain.engine.inference_tile:Inference done 207/458. Dataloading: 0.1301 s/iter. Inference: 1.8063 s/iter. Eval: 0.0034 s/iter. Total: 1.9399 s/iter. ETA=0:08:06
INFO:croptrain.engine.inference_tile:Inference done 217/458. Dataloading: 0.1366 s/iter. Inference: 2.0000 s/iter. Eval: 0.0033 s/iter. Total: 2.1400 s/iter. ETA=0:08:35
INFO:croptrain.engine.inference_tile:Inference done 227/458. Dataloading: 0.1438 s/iter. Inference: 2.2068 s/iter. Eval: 0.0032 s/iter. Total: 2.3539 s/iter. ETA=0:09:03
INFO:croptrain.engine.inference_tile:Inference done 231/458. Dataloading: 0.1502 s/iter. Inference: 2.4378 s/iter. Eval: 0.0032 s/iter. Total: 2.5913 s/iter. ETA=0:09:48
INFO:croptrain.engine.inference_tile:Inference done 235/458. Dataloading: 0.1589 s/iter. Inference: 2.7712 s/iter. Eval: 0.0032 s/iter. Total: 2.9334 s/iter. ETA=0:10:54
INFO:croptrain.engine.inference_tile:Inference done 251/458. Dataloading: 0.1621 s/iter. Inference: 2.8293 s/iter. Eval: 0.0031 s/iter. Total: 2.9945 s/iter. ETA=0:10:19
INFO:croptrain.engine.inference_tile:Inference done 263/458. Dataloading: 0.1663 s/iter. Inference: 2.9398 s/iter. Eval: 0.0030 s/iter. Total: 3.1092 s/iter. ETA=0:10:06
INFO:croptrain.engine.inference_tile:Inference done 272/458. Dataloading: 0.1709 s/iter. Inference: 3.0695 s/iter. Eval: 0.0030 s/iter. Total: 3.2435 s/iter. ETA=0:10:03
INFO:croptrain.engine.inference_tile:Inference done 282/458. Dataloading: 0.1748 s/iter. Inference: 3.1748 s/iter. Eval: 0.0035 s/iter. Total: 3.3532 s/iter. ETA=0:09:50
INFO:croptrain.engine.inference_tile:Inference done 292/458. Dataloading: 0.1786 s/iter. Inference: 3.2736 s/iter. Eval: 0.0035 s/iter. Total: 3.4557 s/iter. ETA=0:09:33
INFO:croptrain.engine.inference_tile:Inference done 302/458. Dataloading: 0.1820 s/iter. Inference: 3.3687 s/iter. Eval: 0.0034 s/iter. Total: 3.5541 s/iter. ETA=0:09:14
INFO:croptrain.engine.inference_tile:Inference done 309/458. Dataloading: 0.1829 s/iter. Inference: 3.4836 s/iter. Eval: 0.0033 s/iter. Total: 3.6699 s/iter. ETA=0:09:06
INFO:croptrain.engine.inference_tile:Inference done 310/458. Dataloading: 0.1868 s/iter. Inference: 3.7276 s/iter. Eval: 0.0033 s/iter. Total: 3.9178 s/iter. ETA=0:09:39
ERROR [09/30 20:42:56 d2.engine.train_loop]: Exception during training:
Traceback (most recent call last):
File "/root/data/yhj/detectron2/detectron2/detectron2/engine/train_loop.py", line 156, in train
self.after_step()
File "/root/data/yhj/detectron2/detectron2/detectron2/engine/train_loop.py", line 190, in after_step
h.after_step()
File "/root/data/yhj/detectron2/detectron2/detectron2/engine/hooks.py", line 556, in after_step
self._do_eval()
File "/root/data/yhj/detectron2/detectron2/detectron2/engine/hooks.py", line 529, in _do_eval
results = self._func()
File "/root/data/yhj/detectron2/croptrain/engine/trainer.py", line 236, in test_and_save_results
self._last_eval_results = self.test_crop(self.cfg, self.model, self.iter)
File "/root/data/yhj/detectron2/croptrain/engine/trainer.py", line 275, in test_crop
results_i = inference_dota(model, data_loader, evaluator, cfg, iter)
File "/root/data/yhj/detectron2/croptrain/engine/inference_tile.py", line 77, in inference_dota
new_data_dicts = get_dict_from_crops(new_boxes, inputs[0], cfg.INPUT.MIN_SIZE_TEST)
File "/root/data/yhj/detectron2/utils/crop_utils.py", line 32, in get_dict_from_crops
crop_region = transform(crop_region)
File "/root/data/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/root/data/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 297, in forward
return F.resize(img, self.size, self.interpolation, self.max_size, self.antialias)
File "/root/data/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torchvision/transforms/functional.py", line 403, in resize
return F_t.resize(img, size=size, interpolation=interpolation.value, max_size=max_size, antialias=antialias)
File "/root/data/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torchvision/transforms/functional_tensor.py", line 525, in resize
new_short, new_long = requested_new_short, int(requested_new_short * long / short)
ZeroDivisionError: division by zero
[09/30 20:42:56 d2.engine.hooks]: Overall training speed: 4997 iterations in 5:35:58 (4.0341 s / it)
[09/30 20:42:56 d2.engine.hooks]: Total training time: 5:56:43 (0:20:45 on hooks)
[09/30 20:42:56 d2.utils.events]: eta: 3 days, 1:25:44 iter: 4999 total_loss: 0.5234 loss_cls: 0.1695 loss_box_reg: 0.2638 loss_rpn_cls: 0.03474 loss_rpn_loc: 0.0603 time: 4.0332 last_time: 4.2979 data_time: 2.7496 last_data_time: 2.5427 lr: 0.01 max_mem: 13619M
Traceback (most recent call last):
File "train_net.py", line 96, in
launch(
File "/root/data/yhj/detectron2/detectron2/detectron2/engine/launch.py", line 84, in launch
main_func(*args)
File "train_net.py", line 87, in main
return trainer.train()
File "/root/data/yhj/detectron2/detectron2/detectron2/engine/defaults.py", line 484, in train
super().train(self.start_iter, self.max_iter)
File "/root/data/yhj/detectron2/detectron2/detectron2/engine/train_loop.py", line 156, in train
self.after_step()
File "/root/data/yhj/detectron2/detectron2/detectron2/engine/train_loop.py", line 190, in after_step
h.after_step()
File "/root/data/yhj/detectron2/detectron2/detectron2/engine/hooks.py", line 556, in after_step
self._do_eval()
File "/root/data/yhj/detectron2/detectron2/detectron2/engine/hooks.py", line 529, in _do_eval
results = self._func()
File "/root/data/yhj/detectron2/croptrain/engine/trainer.py", line 236, in test_and_save_results
self._last_eval_results = self.test_crop(self.cfg, self.model, self.iter)
File "/root/data/yhj/detectron2/croptrain/engine/trainer.py", line 275, in test_crop
results_i = inference_dota(model, data_loader, evaluator, cfg, iter)
File "/root/data/yhj/detectron2/croptrain/engine/inference_tile.py", line 77, in inference_dota
new_data_dicts = get_dict_from_crops(new_boxes, inputs[0], cfg.INPUT.MIN_SIZE_TEST)
File "/root/data/yhj/detectron2/utils/crop_utils.py", line 32, in get_dict_from_crops
crop_region = transform(crop_region)
File "/root/data/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/root/data/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 297, in forward
return F.resize(img, self.size, self.interpolation, self.max_size, self.antialias)
File "/root/data/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torchvision/transforms/functional.py", line 403, in resize
return F_t.resize(img, size=size, interpolation=interpolation.value, max_size=max_size, antialias=antialias)
File "/root/data/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/torchvision/transforms/functional_tensor.py", line 525, in resize
new_short, new_long = requested_new_short, int(requested_new_short * long / short)
ZeroDivisionError: division by zero

Thank you for your answer.
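If it helps with debugging: the ZeroDivisionError means a crop region reaching torchvision's resize has zero width or height. A guard like the hypothetical sketch below, applied to the boxes before get_dict_from_crops, would filter such degenerate crops:

import numpy as np

def drop_degenerate_boxes(boxes):
    # Remove (x1, y1, x2, y2) boxes with zero width or height; such crops
    # make torchvision's resize divide by a zero short side.
    boxes = np.asarray(boxes)
    keep = (boxes[:, 2] > boxes[:, 0]) & (boxes[:, 3] > boxes[:, 1])
    return boxes[keep]

print(drop_degenerate_boxes([[0, 0, 100, 80], [10, 10, 10, 40]]))  # keeps only the first box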
