
OW-DETR: Open-world Detection Transformer (CVPR 2022)

Paper · Video · Slides · Summary slide

Akshita Gupta:star2:, Sanath Narayan:star2:, K J Joseph, Salman Khan, Fahad Shahbaz Khan, Mubarak Shah

(:star2: denotes equal contribution)

Introduction

Open-world object detection (OWOD) is a challenging computer vision problem, where the task is to detect a known set of object categories while simultaneously identifying unknown objects. Additionally, the model must incrementally learn new classes that become known in subsequent training episodes. Distinct from standard object detection, the OWOD setting poses significant challenges: generating quality candidate proposals for potentially unknown objects, separating unknown objects from the background, and detecting diverse unknown objects. Here, we introduce a novel end-to-end transformer-based framework, OW-DETR, for open-world object detection. The proposed OW-DETR comprises three dedicated components, namely attention-driven pseudo-labeling, novelty classification, and objectness scoring, to explicitly address the aforementioned OWOD challenges. Our OW-DETR explicitly encodes multi-scale contextual information, possesses less inductive bias, enables knowledge transfer from known classes to the unknown class, and can better discriminate between unknown objects and the background. Comprehensive experiments are performed on two benchmarks: MS-COCO and PASCAL VOC. Extensive ablations reveal the merits of our proposed contributions. Further, our model outperforms the recently introduced OWOD approach, ORE, with absolute gains ranging from 1.8% to 3.3% in terms of unknown recall on MS-COCO. For incremental object detection, OW-DETR outperforms the state of the art in all settings on PASCAL VOC.
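
To make the attention-driven pseudo-labeling idea concrete, here is a minimal sketch under our own simplifying assumptions (the function name and signature are hypothetical, not the repository's API): queries left unmatched by the known-class matcher are scored by the mean feature activation inside their boxes, and the top k_u (set to 5 in the paper) are pseudo-labeled as unknown.

import torch

def select_unknown_pseudo_labels(feature_map, unmatched_boxes, k_u=5):
    """Illustrative sketch only, not the repository's implementation.
    feature_map: (H, W) activation magnitude of the backbone features.
    unmatched_boxes: (N, 4) query boxes (x1, y1, x2, y2) in feature-map coordinates."""
    scores = []
    for x1, y1, x2, y2 in unmatched_boxes.round().long().tolist():
        region = feature_map[y1:y2, x1:x2]
        # Mean activation inside the box acts as a crude objectness score.
        scores.append(region.mean() if region.numel() else feature_map.new_tensor(0.0))
    if not scores:
        return unmatched_boxes
    scores = torch.stack(scores)
    top = torch.topk(scores, min(k_u, len(scores))).indices
    return unmatched_boxes[top]  # pseudo-labeled as the single "unknown" class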


Installation

Requirements

We have trained and tested our models on Ubuntu 16.04 with CUDA 10.2, GCC 5.4, and Python 3.7.

conda create -n owdetr python=3.7 pip
conda activate owdetr
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt
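
After installation, an optional sanity check can confirm the interpreter sees the expected versions:

import torch, torchvision

print(torch.__version__)          # expect 1.8.0
print(torchvision.__version__)    # expect 0.9.0
print(torch.version.cuda)         # expect 10.2
print(torch.cuda.is_available())  # should be True on a GPU machine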

Backbone features

Download the self-supervised backbone from here and place it in the models folder.
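
For reference, the checkpoint is consumed in models/backbone.py roughly as below. This is a simplified sketch: the exact call site and path differ in the repository, and the file name is taken from the DINO release.

import torch
from torchvision.models import resnet50

# Build the backbone without ImageNet-supervised weights, then load the
# self-supervised DINO ResNet-50 checkpoint placed in the models folder.
backbone = resnet50(pretrained=False)
state_dict = torch.load("models/dino_resnet50_pretrain.pth", map_location="cpu")
# strict=False because the DINO checkpoint carries no classifier-head weights.
backbone.load_state_dict(state_dict, strict=False)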

Compiling CUDA operators

cd ./models/ops
sh ./make.sh
# unit test (all checks should print True)
python test.py
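
If the build succeeded, the compiled extension should also be importable directly. The module name below is the one Deformable DETR's ops setup builds; treat it as an assumption if your build differs.

# Sanity check that the CUDA extension compiled by make.sh can be loaded.
import MultiScaleDeformableAttention
print("deformable attention CUDA op loaded")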

Dataset & Results

OWOD proposed splits



The splits are present inside the data/VOC2007/OWOD/ImageSets/ folder. The remaining dataset can be downloaded using this link.

The files should be organized in the following structure:

OW-DETR/
└── data/
    └── VOC2007/
        └── OWOD/
            ├── JPEGImages
            ├── ImageSets
            └── Annotations

Results

            Task 1           Task 2           Task 3           Task 4
Method      U-Recall  mAP    U-Recall  mAP    U-Recall  mAP    mAP
ORE-EBUI    4.9       56.0   2.9       39.4   3.9       29.7   25.3
OW-DETR     7.5       59.2   6.2       42.9   5.7       30.8   27.8

Our proposed splits



Dataset Preparation

The splits are present inside the data/VOC2007/OWDETR/ImageSets/ folder.

  1. Create empty JPEGImages and Annotations directories:

mkdir data/VOC2007/OWDETR/JPEGImages/
mkdir data/VOC2007/OWDETR/Annotations/

  2. Download the COCO images and annotations from the COCO dataset.
  3. Unzip the train2017 and val2017 folders. The directory structure should now look like:

OW-DETR/
└── data/
    ├── coco/
    │   ├── annotations/
    │   ├── train2017/
    │   └── val2017/
    └── VOC2007/

  4. Move all images from train2017/ and val2017/ to the JPEGImages folder (paths below are relative to the repository root):

cd OW-DETR
mv data/coco/train2017/*.jpg data/VOC2007/OWDETR/JPEGImages/
mv data/coco/val2017/*.jpg data/VOC2007/OWDETR/JPEGImages/

  5. Use coco2voc.py to convert the JSON annotations to XML files (a minimal conversion sketch follows this list).
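
If coco2voc.py is missing from your checkout (an issue below reports this), a minimal stdlib-only sketch along these lines can serve as a starting point. It is our own assumption about the converter's behavior, not the repository's actual script; field handling (difficult flags, segmentation, etc.) is simplified.

import json, os
import xml.etree.ElementTree as ET

def coco2voc(json_path, out_dir):
    # Load COCO-style annotations and index them by image id.
    with open(json_path) as f:
        coco = json.load(f)
    cats = {c["id"]: c["name"] for c in coco["categories"]}
    anns = {}
    for a in coco["annotations"]:
        anns.setdefault(a["image_id"], []).append(a)
    os.makedirs(out_dir, exist_ok=True)
    # Write one Pascal VOC XML file per image.
    for img in coco["images"]:
        root = ET.Element("annotation")
        ET.SubElement(root, "filename").text = img["file_name"]
        size = ET.SubElement(root, "size")
        ET.SubElement(size, "width").text = str(img["width"])
        ET.SubElement(size, "height").text = str(img["height"])
        for a in anns.get(img["id"], []):
            x, y, w, h = a["bbox"]  # COCO boxes are [x, y, width, height]
            obj = ET.SubElement(root, "object")
            ET.SubElement(obj, "name").text = cats[a["category_id"]]
            box = ET.SubElement(obj, "bndbox")
            for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                                (x, y, x + w, y + h)):
                ET.SubElement(box, tag).text = str(int(round(val)))
        xml_name = os.path.splitext(img["file_name"])[0] + ".xml"
        ET.ElementTree(root).write(os.path.join(out_dir, xml_name))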

The files should be organized in the following structure:

OW-DETR/
└── data/
    └── VOC2007/
        └── OWDETR/
            ├── JPEGImages
            ├── ImageSets
            └── Annotations

Currently, the dataloader and evaluator for OW-DETR follow the VOC format.
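
Since the dataloader expects VOC-style XML, each converted annotation can be inspected with the standard library; the file path below is just an example:

import xml.etree.ElementTree as ET

# Read one VOC-style annotation and print its objects (example path).
root = ET.parse("data/VOC2007/OWDETR/Annotations/000000000139.xml").getroot()
for obj in root.iter("object"):
    name = obj.find("name").text
    box = obj.find("bndbox")
    coords = [int(box.find(t).text) for t in ("xmin", "ymin", "xmax", "ymax")]
    print(name, coords)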

Results

            Task 1           Task 2           Task 3           Task 4
Method      U-Recall  mAP    U-Recall  mAP    U-Recall  mAP    mAP
ORE-EBUI    1.5       61.4   3.9       40.6   3.6       33.7   31.8
OW-DETR     5.7       71.5   6.2       43.8   6.9       38.5   33.1

Training

Training on single node

To train OW-DETR on a single node with 8 GPUs, run

./run.sh

Training on slurm cluster

To train OW-DETR on a Slurm cluster with 2 nodes of 8 GPUs each, run

sbatch run_slurm.sh

Evaluation

To reproduce any of the results above, run run_eval.sh with the corresponding pretrained weights.

Note: For more training and evaluation details, please check the Deformable DETR repository.

License

This repository is released under the Apache 2.0 license as found in the LICENSE file.

Citation

If you use OW-DETR, please consider citing:

@inproceedings{gupta2021ow,
    title={OW-DETR: Open-world Detection Transformer}, 
    author={Gupta, Akshita and Narayan, Sanath and Joseph, KJ and 
    Khan, Salman and Khan, Fahad Shahbaz and Shah, Mubarak},
    booktitle={CVPR},
    year={2022}
}

Contact

Should you have any questions, please contact 📧 [email protected]

Acknowledgments

OW-DETR builds on the code bases of previous works such as Deformable DETR, DETReg, and OWOD. If you find OW-DETR useful, please consider citing these works as well.


ow-detr's Issues

The weights

It seems that you don't provide the model weights for the t4_ft task.

Objectness Related

I didn't find the objectness-related calculation in the code. Could you kindly point it out? Thanks.

About Visualize

Hello! @akshitac8 Could you tell me how to visualize the test results as in your paper (bounding box and label visualization)? Thanks a lot!

About the Objectness Scores

Where in the code are the objectness scores calculated? I don't see an objectness loss in the sum_loss calculation.

Foreground Objectness

Dear authors,

I compared your code with standard Deformable DETR and couldn't find the code corresponding to Section 2.5, Foreground Objectness. Could you please point it out for me? Thanks a lot!

Best regards

About ImageSets

In data/OWDETR/VOC2007/ImageSets/, the files are:
t1_train.txt t2_train.txt t2_ft.txt
t3_train.txt t3_ft.txt t4_train.txt t4_ft.txt
but in the OWOD_new_split.sh file, the --train_set parameters are:
t1_train_new_split t2_train_new_split t2_ft_new_split
t3_train_new_split t3_ft_new_split t4_train_new_split t4_ft_new_split

I guess we just need to add '_new_split' to the original file names in data/OWDETR/VOC2007/ImageSets/. Is that right?

iOD

What is the upper bound for the iOD task on PASCAL VOC? Have you tested it?

1

Dear author, how long will it take to upload all the code?

How do you select K from M=200?

In the implementation details, you mention M=200 object queries, of which K are used for known object detection. From the remaining M-K object queries, the top k_u (set to 5) are selected for detecting pseudo-unknown objects. How do you set K? Is K the number of object queries for which the novelty classifier outputs a non-zero class, or is it empirically set to a fixed number?

Where to find coco2voc.py

Hi, thanks for sharing your work!
Where can I get coco2voc.py? It doesn't seem to be in this repository.

No such file or directory

Thank you for your code, but I ran into some problems:

  1. No such file or directory: '/OW-DETR/data/OWDETR/VOC2007/ImageSets/t4_ft_new_split.txt'
  2. FileNotFoundError: [Errno 2] No such file or directory: '/proj/cvl/users/x_fahkh/akshita/Deformable-DETR/models/dino_resnet50_pretrain/dino_resnet50_pretrain.pth'
  3. The download link is still disabled.

I couldn't find these files in the code or in the information you provided. Can you please tell me where to find them? Thank you!

log

May I ask whether you could provide the training log for the data split proposed in your paper? I found during training that at the fifth epoch only the person class has mAP > 0 while the mAP of all other known classes is 0. Is this expected?

RuntimeError: CUDA out of memory.

When I execute python test.py, I run into the following problem:

True check_forward_equal_with_pytorch_double: max_abs_err 8.67e-19 max_rel_err 2.35e-16
True check_forward_equal_with_pytorch_float: max_abs_err 4.66e-10 max_rel_err 1.13e-07
True check_gradient_numerical(D=30)
True check_gradient_numerical(D=32)
True check_gradient_numerical(D=64)
True check_gradient_numerical(D=71)
Traceback (most recent call last):
  File "test.py", line 87, in <module>
    check_gradient_numerical(channels, True, True, True)
  File "test.py", line 77, in check_gradient_numerical
    gradok = gradcheck(func, (value.double(), shapes, level_start_index, sampling_locations.double(), attention_weights.double(), im2col_step))
  File "/home/long/miniconda3/envs/owdetr/lib/python3.7/site-packages/torch/autograd/gradcheck.py", line 423, in gradcheck
    nondet_tol=nondet_tol)
  File "/home/long/miniconda3/envs/owdetr/lib/python3.7/site-packages/torch/autograd/gradcheck.py", line 174, in get_analytical_jacobian
    jacobian_reentrant = make_jacobian(input, output.numel())
  File "/home/long/miniconda3/envs/owdetr/lib/python3.7/site-packages/torch/autograd/gradcheck.py", line 31, in make_jacobian
    lambda x: x is not None, (make_jacobian(elem, num_out) for elem in input)))
  File "/home/long/miniconda3/envs/owdetr/lib/python3.7/site-packages/torch/autograd/gradcheck.py", line 31, in
    lambda x: x is not None, (make_jacobian(elem, num_out) for elem in input)))
  File "/home/long/miniconda3/envs/owdetr/lib/python3.7/site-packages/torch/autograd/gradcheck.py", line 28, in make_jacobian
    return input.new_zeros((input.nelement(), num_out), dtype=input.dtype, layout=torch.strided)
RuntimeError: CUDA out of memory. Tried to allocate 1.88 GiB (GPU 0; 3.95 GiB total capacity; 1.88 GiB already allocated; 925.62 MiB free; 1.90 GiB reserved in total by PyTorch)

Even when I try the command on an RTX 3090 or an A100, the problem persists.
Does this problem have a bad influence on training and inference?

The training schedule

Dear Author,

In the paper, I see that every task is trained for 50 epochs and finetuned for 20 epochs (screenshot of the paper's training schedule attached).

However, in configs/OWOD_new_split.sh, I see the training schedule follows a different setting (screenshot of the config attached, relevant values highlighted in red boxes).

Is there anything I missed? Looking forward to your reply. Thanks.

19+1 setting

For this setting, could you send me the trained model? There is a gap between my results and the original ones.

OWOD data split doesn't work.

Hi,
So I have been trying to use the old OWOD data split. I found that the images specified there belong to the VOC2007 classes. However, there is a bug: the classes defined at the top of datasets/torchvision_datasets/open_world.py are not the VOC2007 classes:

VOC_CLASS_NAMES = [
    "aeroplane", "bicycle", "bird", "boat", "bus", "car",
    "cat", "cow", "dog", "horse", "motorbike", "sheep", "train",
    "elephant", "bear", "zebra", "giraffe", "truck", "person",
]

The list is missing bottle, chair, dining table, potted plant, sofa, and tv/monitor.

Did you train on these classes when reporting the Task 1 results in Table 1? Did you use the VOC2007 classes? If so, could you upload the scripts you used?

Thank you in advance

Deploying this model to an edge device.

Hi, I am looking to deploy this code on an edge device. Is there any way to convert a model trained with this framework to the TensorRT format?

the bug in models/deformable_detr.py

In the SetCriterion forward function of models/deformable_detr.py, both of the nested for loops use i as the iteration variable (lines 530 and 543), and after the inner loop ends, the code keeps using i. Isn't this a bug? Could you explain?

About the DINO ResNet-50 backbone and incremental learning

In backbone.py:

print("DINO resnet50")
backbone = resnet50(pretrained=False,
                    replace_stride_with_dilation=[False, False, dilation],
                    norm_layer=norm_layer)
if is_main_process():
    state_dict = torch.load("/proj/cvl/users/x_fahkh/akshita/Deformable-DETR/models/dino_resnet50_pretrain/dino_resnet50_pretrain.pth")
    backbone.load_state_dict(state_dict, strict=False)

I want to know how this pretrained model was obtained: was it trained on ImageNet or on the first task's images?
I would also like to ask whether you use any incremental training paradigm (e.g., knowledge distillation), because I don't see the relevant part in the code.
Thank you for such excellent work.

The results of Task 1

In Task 1, my current-class AP50 was 73.0, which is quite different from the results in the paper, and my U-Recall was only 4.4. I don't know why. Could you upload the trained weight file?

store exemplars

Where in the code do we store 50 samples per known category?

CUDA out of memory

I used an RTX 3070 to run test.py, but I got this error:
OutOfMemoryError: CUDA out of memory. Tried to allocate 1.88 GiB (GPU 0; 7.78 GiB total capacity; 5.65 GiB already allocated; 677.44 MiB free; 5.90 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
How can I solve this problem?

Problems of OWOD proposed splits in Task 1

Hello, when training with the OWOD proposed splits, why are there five '0.0' values in AP50?
Task 1
"AP50 ": "['64.0', '32.8', '30.0', '26.5', '57.1', '38.1', '78.3', '43.1', '60.7', '45.9', '46.3', '42.9', '69.9', '0.0', '0.0', '0.0', '0.0', '0.0', '54.3', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.3']"
