arunirc / detectron-self-train

119 stars · 6 watchers · 21 forks · 14.86 MB

A PyTorch Detectron codebase for domain adaptation of object detectors.

License: MIT License

Shell 19.26% Python 72.32% MATLAB 0.14% Cuda 4.42% C 3.67% C++ 0.19%
detectron pytorch object-detection domain-adaptation cvpr2019 pedestrian-detection faster-rcnn

detectron-self-train's People

Contributors

arunirc · jiasenlu · jwyang · pcjohn · roytseng-tw · yuliang-zou

detectron-self-train's Issues

Can you provide a separate evaluation script?

Hi,
As far as I understand, the evaluation is done on the fly when you run the detection.
That means we cannot evaluate a model from another framework (e.g., TensorFlow).
Can you provide an evaluation script that evaluates the box predictions only?

For example, I have a separate TensorFlow model that outputs 'bbox_bdd_peds_val_results.json',
and I want to evaluate this results file against the ground truth 'bdd_peds_val.json',
so that I do not have to run your detection script.
It would look something like: evaluate.py --gt bdd_peds_val.json --pred bbox_bdd_peds_val_results.json

Thank you
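
A minimal standalone evaluation sketch along those lines, assuming both files follow the COCO annotation/results format used by this repo (the evaluate.py name and --gt/--pred flags mirror the hypothetical command above; only pycocotools is needed):

# evaluate.py -- standalone COCO-style bbox evaluation (sketch)
import argparse
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

parser = argparse.ArgumentParser()
parser.add_argument('--gt', required=True, help='ground-truth json, e.g. bdd_peds_val.json')
parser.add_argument('--pred', required=True, help='detection results json')
args = parser.parse_args()

coco_gt = COCO(args.gt)               # load COCO-format ground truth
coco_dt = coco_gt.loadRes(args.pred)  # load detections produced elsewhere
coco_eval = COCOeval(coco_gt, coco_dt, 'bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()                 # prints the standard AP/AR table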

Images without pedestrian bbox

Hi,

As I have checked in bdd_peds_train.json and bdd_peds_val.json, there are a lot of images without bounding box annotations. How do you train/evaluate your model in this case?
For example, in the training file only 4428/12477 images have bbox annotations, and in the validation file only 628/1764 do.

Thanks a lot
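
For reference, those counts can be reproduced with a short check, assuming the files are standard COCO-format JSON (a sketch, not part of the repo):

import json

# Count images that have at least one bbox annotation in a COCO-format file.
with open('bdd_peds_train.json') as f:   # or bdd_peds_val.json
    data = json.load(f)

images_with_boxes = {ann['image_id'] for ann in data['annotations']}
print(f"{len(images_with_boxes)} / {len(data['images'])} images have bbox annotations")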

ImportError: cannot import name numpy_type_map

Hi, "Automatic adaptation of object detectors to new domains using self-training" is nice work, but when I run gypsum/scripts/demo/hp_cons_demo.sh with bdd_HP-cons_model_step29999.pth.pth, the following error occurs:

Traceback (most recent call last):
  File "tools/infer_demo.py", line 35, in <module>
    import nn as mynn
  File "/content/detectron-self-train/lib/nn/__init__.py", line 2, in <module>
    from .parallel import DataParallel
  File "/content/detectron-self-train/lib/nn/parallel/__init__.py", line 3, in <module>
    from .data_parallel import DataParallel, data_parallel
  File "/content/detectron-self-train/lib/nn/parallel/data_parallel.py", line 4, in <module>
    from .scatter_gather import scatter_kwargs, gather
  File "/content/detectron-self-train/lib/nn/parallel/scatter_gather.py", line 8, in <module>
    from torch.utils.data.dataloader import numpy_type_map
ImportError: cannot import name numpy_type_map

System information

  • Operating system: Ubuntu 18.04.5 LTS
  • CUDA version: 10.1
  • python version: 3.6.9
  • pytorch version: 1.7.0+cu101
  • torchvision version: 0.8.1+cu101

It does not seem to support torch >= 1.1.0, and I also found some related issues.

detectron2 supports PyTorch 1.3.0 and above, and CUDA 10.1 only supports PyTorch 1.4 and above.
Is it possible to move to detectron2?

Thanks
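
One possible workaround, if the repo is kept on the old import path: numpy_type_map was removed from torch.utils.data.dataloader in newer PyTorch releases, so lib/nn/parallel/scatter_gather.py can be patched to define the mapping locally instead of importing it (the dict below is copied from the pre-1.1 PyTorch source; an assumption, verify against your version):

import torch

# Drop-in replacement for the removed torch.utils.data.dataloader.numpy_type_map
numpy_type_map = {
    'float64': torch.DoubleTensor,
    'float32': torch.FloatTensor,
    'float16': torch.HalfTensor,
    'int64':   torch.LongTensor,
    'int32':   torch.IntTensor,
    'int16':   torch.ShortTensor,
    'int8':    torch.CharTensor,
    'uint8':   torch.ByteTensor,
}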

sh make.sh problem on Windows 10

First of all, thank you for providing your code.

I followed https://github.com/AruniRC/detectron-self-train/blob/master/INSTALL.md for installation.

But I ran into a problem at the "Compile Detectron-pytorch" step:
cd lib # please change to this directory
sh make.sh

Currently, I am using Windows 10, so I cannot run sh make.sh.
Could you suggest a solution to this problem?

System information

  • Operating system: Windows 10
  • CUDA version: 10.0
  • cuDNN version: ?
  • GPU models (for all devices if they are not all the same): Titan Xp
  • python version: 3.7
  • pytorch version: 1.1.0

initial weight download

When I try to train the model, I cannot find the file "/mnt/nfs/scratch1/pchakrabarty/bdd_recs/ped_models/bdd_peds.pth" to initialize the model. Could you let me know what it is and where to download it?

Width and Height of the images

Hi,

I am a little confused about your JSON data conversion.

bdd100k
image['width'] = 720
image['height'] = 1280

Wider
image['width'] = im.height
image['height'] = im.width

It seems that you have swapped the height and width of the images.
What does this mean?
Does it affect the training and testing of the model?
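
For context, PIL's Image.size is (width, height), so a converter that keeps the axes straight would look roughly like this (an illustration with a hypothetical path, not the repo's actual conversion code):

from PIL import Image

im = Image.open('example.jpg')   # hypothetical image path
width, height = im.size          # PIL returns (width, height)
image_record = {
    'width': width,              # i.e. im.width, not im.height
    'height': height,            # i.e. im.height, not im.width
}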

how to generate pseudo labels from baseline detection model

Hi
I think bdd_peds+DETS18k means using bboxes from the source dataset (bdd_peds) plus pseudo labels generated by the baseline detection model on the target dataset. Could you let me know how to generate the pseudo labels for this part? Do you run the baseline model on all the training samples in the target dataset and then filter the ~100000 images?
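
A rough sketch of that kind of pseudo-labeling loop, for reference (the detector callable, the 0.8 score threshold, and the single pedestrian category are assumptions, not values taken from the paper or this repo):

# Run a trained baseline detector over unlabeled target-domain images and
# keep high-confidence boxes as COCO-style pseudo-label annotations.
SCORE_THRESH = 0.8   # assumed confidence threshold

def generate_pseudo_labels(detector, image_paths):
    annotations, ann_id = [], 0
    for img_id, path in enumerate(image_paths):
        # detector(path) is assumed to yield ((x, y, w, h), score) pairs
        for box, score in detector(path):
            if score < SCORE_THRESH:
                continue
            annotations.append({
                'id': ann_id,
                'image_id': img_id,
                'category_id': 1,        # single pedestrian class assumed
                'bbox': list(box),
                'score': float(score),
            })
            ann_id += 1
    return annotations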

What if I'd like to use my own dataset

Thanks for the great source code.
I want to try this with my own dataset, so I'd like to ask which settings or configuration files I should change.

What I am thinking of are the config file and the dataset path (with annotations ideally in Pascal VOC format).
Is there anything else I should take care of?

Thank you.
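
Since the datasets used in this repo appear to be stored as COCO-style JSON, converting the Pascal VOC XML annotations is a reasonable first step. A minimal conversion sketch (hypothetical paths and a single 'person' class assumed; adapt the class list to your own labels):

import glob, json, os
import xml.etree.ElementTree as ET

# Minimal Pascal VOC XML -> COCO-style JSON conversion (sketch).
def voc_to_coco(xml_dir, out_json, class_names=('person',)):
    cat_ids = {name: i + 1 for i, name in enumerate(class_names)}
    images, annotations, ann_id = [], [], 0
    for img_id, xml_path in enumerate(sorted(glob.glob(os.path.join(xml_dir, '*.xml')))):
        root = ET.parse(xml_path).getroot()
        size = root.find('size')
        images.append({
            'id': img_id,
            'file_name': root.findtext('filename'),
            'width': int(size.findtext('width')),
            'height': int(size.findtext('height')),
        })
        for obj in root.findall('object'):
            name = obj.findtext('name')
            if name not in cat_ids:
                continue
            b = obj.find('bndbox')
            x1, y1 = float(b.findtext('xmin')), float(b.findtext('ymin'))
            x2, y2 = float(b.findtext('xmax')), float(b.findtext('ymax'))
            annotations.append({
                'id': ann_id,
                'image_id': img_id,
                'category_id': cat_ids[name],
                'bbox': [x1, y1, x2 - x1, y2 - y1],   # COCO boxes are [x, y, w, h]
                'area': (x2 - x1) * (y2 - y1),
                'iscrowd': 0,
            })
            ann_id += 1
    categories = [{'id': i, 'name': n} for n, i in cat_ids.items()]
    with open(out_json, 'w') as f:
        json.dump({'images': images, 'annotations': annotations,
                   'categories': categories}, f)

# Example usage (hypothetical paths):
# voc_to_coco('my_dataset/Annotations', 'my_dataset_train.json')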

Is it possible to convert to Caffe2?

Hello.

I have been impressed with your research.
I would like to test your trained model with Caffe2, and I want to know if that is possible.

Thank you :)

What if re-training on pseudo-labeled target images only?

Hi,

After pseudo-labels of unlabeled target images are generated, you re-train the baseline source model jointly on the combined set of source and target images. However, the source images might not always be available. Did you try re-training on pseudo-labeled target images only? What is the expected performance?
