dasguptar / bcnn.pytorch

Bilinear CNNs in PyTorch

License: MIT License

Python 82.07% Shell 17.93%
pytorch deep-learning deeplearning machine-learning machinelearning computer-vision computervision bilinear-cnn bilinear-pooling fine-grained-classification fine-grained-visual-categorization fine-grained-recognition

bcnn.pytorch's Introduction

Bilinear ConvNets for Fine-Grained Recognition

This is a PyTorch implementation of Bilinear CNNs as described in the paper Bilinear CNN Models for Fine-Grained Visual Recognition by Tsung-Yu Lin, Aruni Roy Chowdhury, and Subhransu Maji. On the Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset, for the task of 200-class fine-grained bird species classification, this implementation reaches:

  • Accuracy of 84.29% using the following training regime
    • Train only the new bilinear classifier, keeping the pre-trained layers frozen
      • Learning rate: 1e0, Weight Decay: 1e-8, Epochs: 55
    • Fine-tune all pre-trained layers as well as the bilinear layer jointly
      • Learning rate: 1e-2, Weight Decay: 1e-5, Epochs: 25
    • Common settings for both training runs
      • Optimizer: SGD, Momentum: 0.9, Batch Size: 64, GPUs: 4
  • These values are plugged into the config file as defaults
  • The original paper reports 84.00% accuracy on the CUB-200-2011 dataset using the pre-trained VGG-D model, which corresponds to the VGG-16 model that this implementation uses.
  • Minor differences exist, e.g. no SVM is used, and the L2 normalization is done differently (a minimal sketch of the bilinear pooling and training setup follows below).
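
As referenced above, here is a minimal sketch of symmetric bilinear pooling on VGG-16 features together with the two-stage training regime listed in this section. Class and variable names are illustrative, not necessarily those used in the bcnn package, and the actual implementation may differ in details such as where normalization is applied.

    # Sketch only (not the repo's actual code): symmetric bilinear pooling on
    # VGG-16 conv features, followed by the two-stage training setup above.
    import torch
    import torch.nn as nn
    import torchvision

    class BilinearVGG(nn.Module):
        def __init__(self, num_classes=200):
            super().__init__()
            # pretrained=True matches the torchvision API around PyTorch 1.1;
            # newer torchvision versions use the weights= argument instead.
            vgg = torchvision.models.vgg16(pretrained=True)
            # keep everything up to relu5_3, dropping the final max-pool
            self.features = nn.Sequential(*list(vgg.features.children())[:-1])
            self.classifier = nn.Linear(512 * 512, num_classes)

        def forward(self, x):
            feats = self.features(x)                 # (N, 512, H, W)
            n, c, h, w = feats.shape
            feats = feats.reshape(n, c, h * w)
            # outer product of the feature map with itself, averaged over locations
            bilinear = torch.bmm(feats, feats.transpose(1, 2)) / (h * w)
            bilinear = bilinear.reshape(n, c * c)
            # signed square root, then L2 normalization
            bilinear = torch.sign(bilinear) * torch.sqrt(torch.abs(bilinear) + 1e-10)
            bilinear = nn.functional.normalize(bilinear)
            return self.classifier(bilinear)

    # Stage 1: train only the new classifier with the backbone frozen.
    model = BilinearVGG()
    for p in model.features.parameters():
        p.requires_grad = False
    stage1 = torch.optim.SGD(model.classifier.parameters(),
                             lr=1e0, momentum=0.9, weight_decay=1e-8)
    # Stage 2 (after ~55 epochs): unfreeze and fine-tune everything jointly.
    for p in model.features.parameters():
        p.requires_grad = True
    stage2 = torch.optim.SGD(model.parameters(),
                             lr=1e-2, momentum=0.9, weight_decay=1e-5)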

Requirements

  • Python (tested on 3.6.9; should work on 3.5.0 onwards, since the code uses typing).
  • Other dependencies are listed in requirements.txt
  • Currently works with PyTorch 1.1.0, but should work fine with newer versions.

Usage

The model class, along with the relevant dataset class and a utility trainer class, is packaged in the bcnn subfolder, from which the relevant modules can be imported. Dataset downloading and preprocessing are handled by a shell script, and a Python driver script is provided to run the actual training/testing loop.

  • Use the script scripts/prepareData.sh, which does the following:
    • WARNING: Some of these steps require GNU Parallel, which can be installed via these methods
    • Download the CUB-200-2011 dataset and extract it.
    • Preprocess the dataset, i.e. resize the smaller edge of each image to 512 pixels while maintaining the aspect ratio (a minimal sketch of this step is shown after this list).
    • A copy of the dataset is also created in which images are cropped to their bounding boxes.
  • main.py is the actual driver script. It imports the relevant modules from the bcnn package, performs the pre-training and fine-tuning of the model, and tests it on the test splits. For a list of all command-line arguments, have a look at config.py.
    • Model checkpoints are saved to the ckpt/ directory with the name specified by the command line argument --savedir.
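
As referenced above, a minimal illustration of the resize rule follows. The actual preprocessing is done by scripts/prepareData.sh with its own tooling (including GNU Parallel); this snippet is only a sketch of the described behaviour, not the script itself.

    # Sketch only: resize the smaller edge to 512 px, keeping the aspect ratio.
    # The real preprocessing is done by scripts/prepareData.sh, not this snippet.
    from PIL import Image

    def resize_smaller_edge(path_in, path_out, target=512):
        img = Image.open(path_in).convert("RGB")
        w, h = img.size
        scale = target / min(w, h)                 # smaller edge becomes `target`
        new_size = (round(w * scale), round(h * scale))
        img.resize(new_size, Image.BILINEAR).save(path_out)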

If you have a working Python 3 environment, simply run the following sequence of steps:

- bash scripts/prepareData.sh
- pip install -r requirements.txt
- export CUDA_VISIBLE_DEVICES=0,1,2,3
- python main.py --gpus 1 2 3 4 --savedir ./ckpt/exp_test

Notes

  • (Oct 12, 2019) GPU memory consumption is not very high, which means the batch size can be increased. However, that also requires adjusting other hyperparameters such as the learning rate (one common heuristic is sketched below).
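
If you do increase the batch size, the linear scaling rule is one common heuristic for adjusting the learning rate. This repo does not prescribe it, so treat the snippet below as a rough starting point rather than a tuned setting.

    # Heuristic only (not part of this repo): scale the learning rate linearly
    # with the batch size, starting from the fine-tuning defaults above.
    base_lr, base_batch_size = 1e-2, 64
    new_batch_size = 128                      # e.g. after doubling the batch size
    scaled_lr = base_lr * new_batch_size / base_batch_size
    print(scaled_lr)                          # 0.02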

Acknowledgements

Tsung-Yu Lin and Aruni Roy Chowdhury released the original implementation, which was invaluable in understanding the model architecture.
Hao Mood also released a PyTorch implementation, which was critical for finding the right hyperparameters to reach the accuracy reported in the paper.
As usual, shout-out to the PyTorch team for the incredible library.

Contact

Riddhiman Dasgupta
Please create an issue or submit a PR if you find any bugs!

License

MIT


bcnn.pytorch's Issues

Can't reproduce the result of 84% accuracy

I referred to your code and Hao Mood's code and trained the BCNN model, fine-tuning all layers,
and the best test accuracy I could reach was ~73%/~61% with and without the pretrained VGG16.
Is it easy to reach the accuracy of 84% you report?

I used almost the same hyperparameter settings, except for the batch size.
Due to memory constraints, I can only set the batch size to 12.
I worry that the small batch size hurts training, but I have no evidence.
The VGG16 I used doesn't include BN layers, and people generally say that
a small batch size only adds noise to training, which helps prevent poor generalization.

Because a small batch size increases the variance of the gradient,
I also tried tuning the learning rate to compensate, but still couldn't improve the result.

Could you give me some advice on how to reach the 84% accuracy,
or confirm that it is not possible to reach 84% accuracy when the batch size is 12?

Shouldn't BCNN use two independent VGG models?

Thank you for providing the implementation of BCNN.
After reading your code, I'm confused by the model setup:
it trains just a single VGG model and takes the outer product of its output with its transpose.
Shouldn't BCNN train two independent VGG models and take the outer product of their outputs?
Am I misunderstanding something?
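
For readers following this question, the sketch below contrasts the symmetric form (one stream pooled with itself, which is what the issue describes this implementation doing) with the general two-stream bilinear form. Function and variable names here are illustrative, not the repo's API.

    # Illustrative only: the general bilinear pooling of two feature maps versus
    # the symmetric special case where both streams come from the same network.
    import torch

    def bilinear_pool(fa, fb):
        """Outer-product pooling of feature maps fa: (N, Ca, H, W), fb: (N, Cb, H, W)."""
        n, ca, h, w = fa.shape
        cb = fb.shape[1]
        fa = fa.reshape(n, ca, h * w)
        fb = fb.reshape(n, cb, h * w)
        return torch.bmm(fa, fb.transpose(1, 2)) / (h * w)   # (N, Ca, Cb)

    # Symmetric form (single stream): pooled = bilinear_pool(feats, feats)
    # Two-stream form: pooled = bilinear_pool(stream_a(x), stream_b(x))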
