First Open-Source Implementation of a Computation-Adaptive Siamese Network for Visual Tracking
This is an unofficial implementation of *Depth-Adaptive Computational Policies for Efficient Visual Tracking* by Chris Ying and Katerina Fragkiadaki.
The project covers the following topics:
- Data Preprocessing. Key- and search-frame extraction from the ImageNet 2017 VID dataset.
- Intermediate-Supervision VGG Model. Built with intermediate supervision as described in the paper.
- Budgeted Gating Loss. Implements the g* function from the paper with a shallow feature extractor.
- Hard Gating for Evaluation. Hard gating stops computation once the confidence score exceeds a threshold.
- Readability. The code is clear, well documented, and consistent.
*(Figure: search frame and its cross-correlation map)*
Model Structure
- Build key & search inputs
- Build a VGG net for each
- Build 5 blocks of cross-correlation & FLOP counts for each
- Build a non-differentiable shallow feature extractor from the cross-correlation maps
- Build the confidence score (g-function)
- Build the intermediate-supervision block loss
- Build the budgeted gates & gate loss
- Build hard gates for evaluation
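The cross-correlation step at the heart of each block can be sketched as follows. This is a minimal single-channel NumPy sketch (the real model correlates multi-channel VGG feature maps); the function name `cross_correlate` is mine, not from the repo:

```python
import numpy as np

def cross_correlate(search, key):
    """Slide the key (exemplar) feature map over the search feature map
    and record the similarity at every offset."""
    H, W = search.shape
    h, w = key.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product between the key and the current search window
            out[i, j] = np.sum(search[i:i + h, j:j + w] * key)
    return out
```

Peaks in the resulting map indicate likely target locations in the search frame.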
Block loss is implemented in model/compAdaptiveSiam/block_loss(). This loss can cause exploding gradients, so L2 regularization and gradient clipping are applied to prevent them.
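The two stabilization tricks can be sketched in NumPy as follows. This is a hedged illustration of the general technique, not the repo's exact code; `block_loss` here is a stand-in using a mean-squared-error term:

```python
import numpy as np

def block_loss(score_map, target_map, weights, l2=1e-4):
    """Per-block loss: data term plus an L2 penalty on the weights."""
    data = np.mean((score_map - target_map) ** 2)
    reg = l2 * sum(np.sum(w ** 2) for w in weights)
    return data + reg

def clip_by_global_norm(grads, max_norm):
    """Rescale all gradients jointly so their global norm is at most max_norm."""
    norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [g * scale for g in grads], norm
```

Clipping by the global norm (rather than per-tensor) preserves the relative direction of the update while bounding its magnitude.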
Budgeted gates are implemented in model/compAdaptiveSiam/gStarFunc().
Gate loss is implemented in model/compAdaptiveSiam/gateLoss.
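A rough sketch of how I understand the budgeted gating target and its loss: the oracle g* picks the earliest block whose tracking error is close enough to the deepest block's error, and the gate loss pushes each gate's confidence toward firing at (and after) that block. The function names and the exact tolerance/labeling scheme here are my assumptions, not taken from the paper or repo:

```python
import numpy as np

def g_star(block_errors, tolerance):
    """Oracle gate target: index of the earliest block whose error is
    within `tolerance` of the final (deepest) block's error."""
    final_err = block_errors[-1]
    for i, e in enumerate(block_errors):
        if e <= final_err + tolerance:
            return i
    return len(block_errors) - 1

def gate_loss(confidences, target_idx):
    """Binary cross-entropy: gates at or after the target should fire
    (label 1), earlier gates should stay closed (label 0)."""
    eps = 1e-12
    loss = 0.0
    for i, c in enumerate(confidences):
        y = 1.0 if i >= target_idx else 0.0
        loss += -(y * np.log(c + eps) + (1 - y) * np.log(1 - c + eps))
    return loss / len(confidences)
```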
Cropped Section of TensorBoard Graph
The main requirements can be installed with:
pip install -r requirements.txt
One can download the ImageNet VID dataset from the link
The data can be preprocessed into key frames and search frames with the script below. Change the dataset location in the main function of the file:
python scripts/preprocess_VID_data.py
Finally, the data can be split into training and validation sets and pickled with:
python scripts/build_VID2015_imdb.py
Credit for the dataset-preprocessing scripts goes to Huazhong University of Science and Technology.
The following command iteratively trains the VGG weights using intermediate supervision, then uses those weights to train the gate weights; the two phases alternate:
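The alternating schedule can be sketched as follows. This is only an outline of the control flow, assuming two hypothetical per-phase training callables; the repo's actual loop lives in main.py:

```python
def alternating_train(num_rounds, train_vgg_round, train_gates_round):
    """Alternate between the two training phases: first update the VGG
    weights with intermediate supervision, then freeze them and update
    the gate weights, and repeat for num_rounds rounds."""
    for _ in range(num_rounds):
        train_vgg_round()    # intermediate-supervision phase
        train_gates_round()  # gate-training phase (VGG weights fixed)
```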
python main.py train
Hard gating stops computation when the confidence score exceeds the threshold. Evaluation returns the cross-correlation map, the FLOPs computed, and the index of the block where computation stopped:
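The early-exit loop behind hard gating can be sketched like this. A minimal sketch with hypothetical `blocks` and `gates` callables; the number of blocks executed stands in for the FLOP count:

```python
def hard_gated_eval(blocks, gates, x, threshold=0.9):
    """Run the blocks in order and stop as soon as a gate's confidence
    exceeds the threshold. Returns the final cross-correlation map, the
    number of blocks executed (a stand-in for FLOPs), and the index of
    the block where computation stopped."""
    for i, (block, gate) in enumerate(zip(blocks, gates)):
        x = block(x)
        if gate(x) > threshold:
            return x, i + 1, i
    # No gate fired: all blocks were executed.
    return x, len(blocks), len(blocks) - 1
```

Because the gates are evaluated after every block, easy frames exit after the first block or two while hard frames use the full depth.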
python main.py eval
If you are training from the beginning, use the VGG pretrained model provided here: link
The VggNet and gate weights pretrained by me are available here