DenseVideoCaptioning
TensorFlow implementation of the paper "Bidirectional Attentive Fusion with Context Gating for Dense Video Captioning" by Jingwen Wang et al., CVPR 2018.
Citation
@inproceedings{wang2018bidirectional,
title={Bidirectional Attentive Fusion with Context Gating for Dense Video Captioning},
author={Wang, Jingwen and Jiang, Wenhao and Ma, Lin and Liu, Wei and Xu, Yong},
booktitle={CVPR},
year={2018}
}
Data Preparation
Please download annotation data and C3D features from the website ActivityNet Captions. The ActivityNet C3D features with stride of 64 frames (used in my paper) can be found here.
Please follow the script dataset/ActivityNet_Captions/preprocess/anchors/get_anchors.py to obtain clustered anchors and their pos/neg weights (for handling the class imbalance problem). I have already put the generated files in dataset/ActivityNet_Captions/preprocess/anchors/.
Please follow the script dataset/ActivityNet_Captions/preprocess/build_vocab.py to build the word dictionary and the encoded train/val/test sentence data.
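A minimal sketch of what such a preprocessing step typically does: build a frequency-cutoff vocabulary and encode each caption as an id sequence framed by start/end tokens. The special-token names and min_count value are illustrative assumptions, not the repo's actual choices:

```python
# Illustrative sketch (not build_vocab.py): frequency-cutoff vocabulary
# plus sentence encoding with assumed special tokens.
from collections import Counter

def build_vocab(captions, min_count=2):
    """Map each kept word to an id; rare words fall back to <UNK>."""
    counts = Counter(w for s in captions for w in s.lower().split())
    vocab = ['<PAD>', '<BOS>', '<EOS>', '<UNK>']
    vocab += sorted(w for w, c in counts.items() if c >= min_count)
    return {w: i for i, w in enumerate(vocab)}

def encode(sentence, word2id):
    """Encode a sentence as <BOS> ids... <EOS>, mapping OOV words to <UNK>."""
    unk = word2id['<UNK>']
    ids = [word2id.get(w, unk) for w in sentence.lower().split()]
    return [word2id['<BOS>']] + ids + [word2id['<EOS>']]

caps = ['a man is running', 'a man jumps', 'the dog is running']
w2i = build_vocab(caps, min_count=2)
print(encode('a man is flying', w2i))  # 'flying' is OOV -> <UNK> id
```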
Hyper Parameters
The configuration (from my experiments) is given in opt.py, including model setup, training options, and testing options. You may want to set max_proposal_num=1000 if saving validation time is not the first priority.
Training
Train dense-captioning model using the script train.py.
First, pre-train the proposal module by setting train_proposal=True and train_caption=False. Then train the whole dense-captioning model by setting train_proposal=True and train_caption=True. To understand the proposal module, I refer you to the original SST paper and to my TensorFlow implementation of SST.
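The two-stage schedule amounts to switching the caption loss term on in stage two. A toy sketch of the flag semantics (the loss values are placeholder numbers; only the train_proposal/train_caption meaning mirrors the opt.py options):

```python
# Toy combination illustrating the two training stages:
#   stage 1: train_proposal=True, train_caption=False -> proposal loss only
#   stage 2: train_proposal=True, train_caption=True  -> joint loss
def total_loss(proposal_loss, caption_loss, train_proposal, train_caption):
    loss = 0.0
    if train_proposal:
        loss += proposal_loss
    if train_caption:
        loss += caption_loss
    return loss

# Stage 1: pre-train the proposal module
print(total_loss(0.5, 2.5, train_proposal=True, train_caption=False))  # 0.5
# Stage 2: train the full dense-captioning model
print(total_loss(0.5, 2.5, train_proposal=True, train_caption=True))   # 3.0
```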
Prediction
Follow the script test.py to make proposal predictions and to evaluate them. Use max_proposal_num=1000 to generate the .json test file, then run "python2 evaluate.py -s [json_file] -ppv 100" to evaluate performance (the joint ranking requires dropping the less confident items).
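The joint-ranking step boils down to keeping only the most confident proposals per video before evaluation. A hedged sketch of that filtering (the field names, such as proposal_score, are assumptions, not the repo's actual JSON schema):

```python
# Illustrative sketch: keep the top-k most confident proposals per video,
# as the joint ranking drops low-confidence items before evaluation.
def top_k_proposals(proposals, k):
    """proposals: list of dicts with a 'proposal_score' field (name assumed)."""
    ranked = sorted(proposals, key=lambda p: p['proposal_score'], reverse=True)
    return ranked[:k]

preds = [{'timestamp': [0, 5], 'proposal_score': 0.9},
         {'timestamp': [3, 8], 'proposal_score': 0.2},
         {'timestamp': [6, 9], 'proposal_score': 0.7}]
print([p['proposal_score'] for p in top_k_proposals(preds, 2)])  # [0.9, 0.7]
```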
Evaluation
Please note that the official evaluation metric has been updated (Line 194). The paper reports the old metric, but results across methods remain comparable, since all CVPR 2018 papers report the old metric.
Pre-trained Model & Results
[Deprecated] The predicted results for val/test set can be found here.
The pre-trained model and validation/test predictions can be found here. On the validation set the model obtains a METEOR score of 9.77 with evaluate_old.py and 5.42 with evaluate.py. On the test set it obtains a METEOR score of 4.49, as returned by the ActivityNet server.
Dependencies
tensorflow==1.0.1
python==2.7.5
Other versions may also work.
Update:
- I corrected some naming errors and simplified the proposal loss using a TensorFlow built-in function.
- I uploaded C3D features with stride of 64 frames (used in my paper). You can find it here.
- I uploaded val/test results of both without joint ranking and with joint ranking.
- I uploaded video_fps.json and updated test.py.
- Due to the large-file size limit, you may need to download data/paraphrase-en.gz here and place it in densevid_eval-master/coco-caption/pycocoevalcap/meteor/data/.
- I corrected a multi-RNN mistake caused by the get_rnn_cell() function (see model.py).
- I updated the evaluation code. "evaluator_old.py" is used in my paper; "evaluator.py" has been used since the ActivityNet Captions 2018 Challenge.
- I removed anchors that were too small or too large, resulting in 120 anchors.
- I modified data_provider.py and model.py to correct the loss weighting.
- I corrected the mistake in the evaluator (evaluator_old.py & evaluate_old.py). You can compare the code against https://github.com/ranjaykrishna/densevid_eval/blob/b8d90707984bf9c99454ba82b089006f14fb62b3/evaluate.py
- I uploaded the pre-trained model. Please also download the updated code.