Object Relation Transformer

This is a PyTorch implementation of the Object Relation Transformer published in NeurIPS 2019. You can find the paper here. This repository is largely based on code from Ruotian Luo's Self-critical Sequence Training for Image Captioning GitHub repo, which can be found here.

The primary additions are as follows:

  • Relation transformer model
  • Script to create reports for runs on MSCOCO

Requirements

  • Python 2.7 (because there is no coco-caption version for Python 3)
  • PyTorch 0.4+ (along with torchvision)
  • h5py
  • scikit-image
  • typing
  • pyemd
  • gensim
  • cider (already added as a submodule). See .gitmodules and clone the referenced repo into the object_relation_transformer folder.
  • The coco-caption library, which is used for generating different evaluation metrics. To set it up, clone the repo into the object_relation_transformer folder. Make sure to keep the cloned repo folder name as coco-caption and also to run the get_stanford_models.sh script from within that repo.

Data Preparation

Download ResNet101 weights for feature extraction

Download the file resnet101.pth from here. Copy the weights to a folder imagenet_weights within the data folder:

mkdir data/imagenet_weights
cp /path/to/downloaded/weights/resnet101.pth data/imagenet_weights

Download and preprocess the COCO captions

Download the preprocessed COCO captions from Karpathy's homepage. Extract dataset_coco.json from the zip file and copy it into data/. This file provides the preprocessed captions as well as the standard train-val-test splits.

Then run:

$ python scripts/prepro_labels.py --input_json data/dataset_coco.json --output_json data/cocotalk.json --output_h5 data/cocotalk

prepro_labels.py will map all words that occur <= 5 times to a special UNK token, and create a vocabulary for all the remaining words. The image information and vocabulary are dumped into data/cocotalk.json and discretized caption data are dumped into data/cocotalk_label.h5.
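
As a rough sketch of the thresholding the script performs (the function and variable names below are illustrative, not the script's own):

from collections import Counter

def build_vocab(tokenized_captions, count_threshold=5):
    """Keep words that appear more than `count_threshold` times; everything
    else will later be mapped to the special UNK token."""
    counts = Counter(w for caption in tokenized_captions for w in caption)
    vocab = [w for w, n in counts.items() if n > count_threshold]
    vocab.append('UNK')
    return vocab

def encode_caption(caption, vocab):
    """Replace out-of-vocabulary words with UNK."""
    known = set(vocab)
    return [w if w in known else 'UNK' for w in caption]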

Next run:

$ python scripts/prepro_ngrams.py --input_json data/dataset_coco.json --dict_json data/cocotalk.json --output_pkl data/coco-train --split train

This preprocesses the training captions and builds the cached n-gram statistics used for computing the CIDEr score.
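
CIDEr weights n-grams by how rare they are across the training references, so the cache essentially stores n-gram document frequencies. A minimal sketch of that idea (not the script's actual code or output format):

from collections import defaultdict

def ngrams(words, n):
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def ngram_document_frequency(refs_per_image, max_n=4):
    """For each n-gram, count how many images have at least one reference
    caption containing it. CIDEr turns these counts into IDF-style weights."""
    doc_freq = defaultdict(int)
    for refs in refs_per_image:  # refs: list of tokenized reference captions
        seen = set()
        for caption in refs:
            for n in range(1, max_n + 1):
                seen.update(ngrams(caption, n))
        for gram in seen:
            doc_freq[gram] += 1
    return doc_freq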

Download the COCO dataset and pre-extract the image features

Download the COCO images from the MSCOCO website. We need 2014 training images and 2014 validation images. You should put the train2014/ and val2014/ folders in the same directory, denoted as $IMAGE_ROOT:

mkdir $IMAGE_ROOT
pushd $IMAGE_ROOT
wget http://images.cocodataset.org/zips/train2014.zip
unzip train2014.zip
wget http://images.cocodataset.org/zips/val2014.zip
unzip val2014.zip
popd
wget https://msvocds.blob.core.windows.net/images/262993_z.jpg
mv 262993_z.jpg $IMAGE_ROOT/train2014/COCO_train2014_000000167126.jpg

The last two commands are needed to address an issue with a corrupted image in the MSCOCO dataset (see here). The prepro script will fail otherwise.

Then run:

$ python scripts/prepro_feats.py --input_json data/dataset_coco.json --output_dir data/cocotalk --images_root $IMAGE_ROOT

prepro_feats.py extracts the ResNet101 features (both fc feature and last conv feature) of each image. The features are saved in data/cocotalk_fc and data/cocotalk_att, and resulting files are about 200GB. Running this script may take a day or more, depending on hardware.

(Check the prepro scripts for more options, like other ResNet models or other attention sizes.)
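
Roughly speaking, the fc feature is a global average of the final convolutional activations and the att feature is a spatial grid of those activations. A hedged sketch using torchvision (assuming the downloaded weights match torchvision's resnet101 layout; prepro_feats.py is the authoritative reference and supports more options):

import torch
import torch.nn.functional as F
import torchvision.models as models

resnet = models.resnet101()
resnet.load_state_dict(torch.load('data/imagenet_weights/resnet101.pth'))
resnet.eval()

def extract_features(image, att_size=14):
    """image: normalized tensor of shape (1, 3, H, W)."""
    with torch.no_grad():
        x = resnet.conv1(image)
        x = resnet.maxpool(resnet.relu(resnet.bn1(x)))
        x = resnet.layer1(x)
        x = resnet.layer2(x)
        x = resnet.layer3(x)
        x = resnet.layer4(x)                             # (1, 2048, h, w)
        fc_feat = x.mean(3).mean(2).squeeze(0)           # (2048,) global average
        att_feat = F.adaptive_avg_pool2d(x, att_size)    # (1, 2048, 14, 14)
        att_feat = att_feat.squeeze(0).permute(1, 2, 0)  # (14, 14, 2048)
    return fc_feat, att_feat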

Download the Bottom-up features

Download the pre-extracted features from here. For the paper, the adaptive features were used.

Do the following:

mkdir data/bu_data; cd data/bu_data
wget https://imagecaption.blob.core.windows.net/imagecaption/trainval.zip
unzip trainval.zip

The .zip file is around 22 GB. Then return to the base directory and run:

python scripts/make_bu_data.py --output_dir data/cocobu

This will create data/cocobu_fc, data/cocobu_att and data/cocobu_box.

Generate the relative bounding box coordinates for the Relation Transformer

Run the following:

python scripts/prepro_bbox_relative_coords.py --input_json data/dataset_coco.json --input_box_dir data/cocobu_box --output_dir data/cocobu_box_relative --image_root $IMAGE_ROOT

This should take a couple hours or so, depending on hardware.
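
The Relation Transformer's geometric attention is driven by pairwise relative box geometry of the kind sketched below (this follows the feature described in the paper; the exact arrays the script writes out may differ):

import numpy as np

def pairwise_box_geometry(boxes, eps=1e-3):
    """boxes: (N, 4) array of [x_min, y_min, x_max, y_max].
    Returns an (N, N, 4) array of log-scale relative center offsets and
    width/height ratios, as used for the geometric attention weights."""
    cx = (boxes[:, 0] + boxes[:, 2]) * 0.5
    cy = (boxes[:, 1] + boxes[:, 3]) * 0.5
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]

    # eps keeps coincident box centers from producing log(0)
    dx = np.log(np.maximum(np.abs(cx[:, None] - cx[None, :]), eps) / w[:, None])
    dy = np.log(np.maximum(np.abs(cy[:, None] - cy[None, :]), eps) / h[:, None])
    dw = np.log(w[None, :] / w[:, None])
    dh = np.log(h[None, :] / h[:, None])
    return np.stack([dx, dy, dw, dh], axis=-1)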

Model Training and Evaluation

Standard cross-entropy loss training

python train.py --id relation_transformer_bu --caption_model relation_transformer --input_json data/cocotalk.json --input_fc_dir data/cocobu_fc --input_att_dir data/cocobu_att --input_box_dir data/cocobu_box --input_rel_box_dir data/cocobu_box_relative --input_label_h5 data/cocotalk_label.h5 --checkpoint_path log_relation_transformer_bu --noamopt --noamopt_warmup 10000 --label_smoothing 0.0 --batch_size 15 --learning_rate 5e-4 --num_layers 6 --input_encoding_size 512 --rnn_size 2048 --learning_rate_decay_start 0 --scheduled_sampling_start 0 --save_checkpoint_every 6000 --language_eval 1 --val_images_use 5000 --max_epochs 30 --use_box 1
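
The --noamopt and --noamopt_warmup flags enable the warmup-then-decay learning-rate schedule from the original Transformer paper. As a minimal sketch of that schedule (the repo's own optimizer wrapper is the authoritative implementation):

def noam_lr(step, d_model=512, warmup=10000, factor=1.0):
    """Learning rate grows linearly for `warmup` steps, then decays as 1/sqrt(step)."""
    step = max(step, 1)
    return factor * (d_model ** -0.5) * min(step ** -0.5, step * warmup ** -1.5)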

The train script will dump checkpoints into the folder specified by --checkpoint_path (default = save/). We only save the best-performing checkpoint on validation and the latest checkpoint to save disk space.

To resume training, set the --start_from option to the path containing infos.pkl and model.pth (usually you can simply set --start_from and --checkpoint_path to the same value).

If you have TensorFlow installed, the loss histories are automatically written to --checkpoint_path and can be visualized with TensorBoard.

The current command uses scheduled sampling. You can also set scheduled_sampling_start to -1 to disable it.
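
Scheduled sampling gradually replaces ground-truth previous words with the model's own sampled words during training. A toy sketch of the per-token choice (the repo controls the probability via options such as --scheduled_sampling_start; this is only an illustration of the idea):

import random

def choose_prev_token(ground_truth_token, sampled_token, ss_prob):
    """With probability ss_prob, feed the model its own sampled token from the
    previous step instead of the ground-truth token; ss_prob is raised gradually."""
    return sampled_token if random.random() < ss_prob else ground_truth_token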

If you'd like to evaluate BLEU/METEOR/CIDEr scores during training in addition to the validation cross-entropy loss, use the --language_eval 1 option, but don't forget to clone the coco-caption code into the coco-caption directory.

For more options, see opts.py.

The above training script should achieve a CIDEr-D score of about 115.

Self-critical RL training

After training using cross-entropy loss, additional self-critical training produces significant gains in CIDEr-D score.
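
Self-critical sequence training samples a caption, scores it with CIDEr-D, and uses the model's own greedy caption as the reward baseline. A minimal sketch of the resulting policy-gradient loss (the real implementation lives in the repo's training code and operates on batched, padded sequences):

import torch

def self_critical_loss(sample_logprobs, sample_cider, greedy_cider):
    """sample_logprobs: (batch,) summed log-probabilities of the sampled captions.
    sample_cider / greedy_cider: (batch,) CIDEr-D of sampled vs. greedy captions.
    Samples that beat the greedy baseline are reinforced; worse ones are suppressed."""
    advantage = (sample_cider - greedy_cider).detach()
    return -(advantage * sample_logprobs).mean()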

First, copy the model pretrained with cross-entropy loss. (Copying is not mandatory; it simply keeps a backup.)

$ bash scripts/copy_model.sh relation_transformer_bu relation_transformer_bu_rl

Then:

python train.py --id relation_transformer_bu_rl --caption_model relation_transformer --input_json data/cocotalk.json --input_fc_dir data/cocobu_fc --input_att_dir data/cocobu_att --input_box_dir data/cocobu_box --input_rel_box_dir data/cocobu_box_relative --input_label_h5 data/cocotalk_label.h5 --checkpoint_path log_relation_transformer_bu_rl --label_smoothing 0.0 --batch_size 10 --learning_rate 5e-4 --num_layers 6 --input_encoding_size 512 --rnn_size 2048 --learning_rate_decay_start 0 --scheduled_sampling_start 0 --start_from log_relation_transformer_bu_rl --save_checkpoint_every 6000 --language_eval 1 --val_images_use 5000 --self_critical_after 30 --max_epochs 60 --use_box 1

The above training script should achieve a CIDEr-D score of about 128.

Evaluate on Karpathy's test split

To evaluate the cross-entropy model, run:

python eval.py --dump_images 0 --num_images 5000 --model log_relation_transformer_bu/model.pth --infos_path log_relation_transformer_bu/infos_relation_transformer_bu-best.pkl --image_root $IMAGE_ROOT --input_json data/cocotalk.json --input_label_h5 data/cocotalk_label.h5  --input_fc_dir data/cocobu_fc --input_att_dir data/cocobu_att --input_box_dir data/cocobu_box --input_rel_box_dir data/cocobu_box_relative --use_box 1 --language_eval 1

and for cross-entropy+RL run:

python eval.py --dump_images 0 --num_images 5000 --model log_relation_transformer_bu_rl/model.pth --infos_path log_relation_transformer_bu_rl/infos_relation_transformer_bu-best.pkl --image_root $IMAGE_ROOT --input_json data/cocotalk.json --input_label_h5 data/cocotalk_label.h5  --input_fc_dir data/cocobu_fc --input_att_dir data/cocobu_att --input_box_dir data/cocobu_box --input_rel_box_dir data/cocobu_box_relative --language_eval 1

Visualization

Visualize caption predictions

Place all your images of interest into a folder, e.g. images, and run the eval script:

$ python eval.py --dump_images 1 --num_images 10 --model log_relation_transformer_bu/model.pth --infos_path log_relation_transformer_bu/infos_relation_transformer_bu-best.pkl --image_root $IMAGE_ROOT --input_json data/cocotalk.json --input_label_h5 data/cocotalk_label.h5  --input_fc_dir data/cocobu_fc --input_att_dir data/cocobu_att --input_box_dir data/cocobu_box --input_rel_box_dir data/cocobu_box_relative

This tells the eval script to caption up to 10 images from the given folder. If you have a big GPU, you can speed up the evaluation by increasing --batch_size. Use --num_images -1 to process all images. The eval script will create a vis.json file inside the vis folder, which can then be visualized with the provided HTML interface:

$ cd vis
$ python -m SimpleHTTPServer

Now visit localhost:8000 in your browser and you should see your predicted captions.

Generate reports from runs on MSCOCO

The create_report.py script can be used to generate HTML reports containing results from different runs. See the script for specific usage examples.

The script takes as input one or more pickle files containing results from runs on the MSCOCO dataset. It reads in the pickle files and creates a set of HTML files with tables and graphs generated from the different captioning evaluation metrics, as well as the generated image captions and corresponding metrics for individual images.

If more than one pickle file with results is provided as input, the script will also generate a report containing a comparison between the metrics generated by each pair of methods.

Model Zoo and Results

The table below presents links to our pre-trained models, as well as results from our paper on the Karpathy test split. Similar results should be obtained by running the respective commands in neurips_training_runs.sh. As the learning-rate schedule was not fully optimized, these values should serve as a reference/expectation rather than an upper bound on what can be achieved with additional tuning.

The models are Copyright Verizon Media, licensed under the terms of the CC-BY-4.0 license. See associated license file.

Algorithm | CIDEr-D | SPICE | BLEU-1 | BLEU-4 | METEOR | ROUGE-L
--- | --- | --- | --- | --- | --- | ---
Up-Down + LSTM * | 106.6 | 19.9 | 75.6 | 32.9 | 26.5 | 55.4
Up-Down + Transformer | 111.0 | 20.9 | 75.0 | 32.8 | 27.5 | 55.6
Up-Down + Object Relation Transformer | 112.6 | 20.8 | 75.6 | 33.5 | 27.6 | 56.0
Up-Down + Object Relation Transformer + Beamsize 2 | 115.4 | 21.2 | 76.6 | 35.5 | 28.0 | 56.6
Up-Down + Object Relation Transformer + Self-Critical + Beamsize 5 | 128.3 | 22.6 | 80.5 | 38.6 | 28.7 | 58.4

* Note that the pre-trained Up-Down + LSTM model above produces slightly better results than reported, as it came from a different training run. We kept the older LSTM results in the table above for consistency with our paper.

Comparative Analysis

In addition, in the paper we also present a head-to-head comparison of the Object Relation Transformer against the "Up-Down + Transformer" model. (Results from the latter model are also included in the table above). In the paper, we refer to this latter model as "Baseline Transformer", as it does not make use of geometry in its attention definition. The idea of the head-to-head comparison is to better understand the improvement obtained by adding geometric attention to the Transformer, both quantitatively and qualitatively. The comparison consists of a set of evaluation metrics computed for each model on a per-image basis, as well as aggregated over all images. It includes the results of paired t-tests, which test for statistically significant differences between the evaluation metrics resulting from each of the models. This comparison can be generated by running the commands in neurips_report_comands.sh. The commands first run the two aforementioned models on the MSCOCO test set and then generate the corresponding report containing the complete comparative analysis.
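
As a minimal sketch of the paired test involved, assuming per-image metric values for the two models are available as arrays (this is not the report script's actual interface):

from scipy import stats

def paired_metric_test(scores_model_a, scores_model_b):
    """scores_model_a, scores_model_b: per-image metric values (e.g., CIDEr-D)
    computed on the same images by the two models being compared."""
    t_stat, p_value = stats.ttest_rel(scores_model_a, scores_model_b)
    return t_stat, p_value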

Citation

If you find this repo useful, please consider citing (no obligation at all):

@article{herdade2019image,
  title={Image Captioning: Transforming Objects into Words},
  author={Herdade, Simao and Kappeler, Armin and Boakye, Kofi and Soares, Joao},
  journal={arXiv preprint arXiv:1906.05963},
  year={2019}
}

Of course, please also cite the original papers of the models you are using (references can be found in the model files).

Contribute

Please refer to the contributing.md file for information about how to get involved. We welcome issues, questions, and pull requests.

Please be aware that we (the maintainers) are currently busy with other projects, so it may take some days before we are able to get back to you. We do not foresee big changes to this repository going forward.

Maintainers

Kofi Boakye: [email protected]

Simao Herdade: [email protected]

Joao Soares: [email protected]

License

This project is licensed under the terms of the MIT open source license. Please refer to LICENSE for the full terms.

Acknowledgments

Thanks to Ruotian Luo for the original code.

object_relation_transformer's Issues

Win10 cannot run the project

Python 2.7 (because there is no coco-caption version for Python 3)
PyTorch 0.4+ (along with torchvision)

Because there is no PyTorch 0.4+ build for Python 2.7 (on Windows 10).

Embed box before multihead attention

Thank you for your idea and repo. Since the box embedding and w_g stay the same across the multiple rounds of multi-head attention and do not depend on k, q, v, would it be proper to move the box-embedding step to the beginning of multi-head attention, to avoid re-embedding the boxes in every EncoderLayer? I tried this and found it reduces XE training time from 22h to 18h (on a GTX 1080 Ti) without obvious performance degradation (from CIDEr 1.1495 to CIDEr 1.1485).

Evaluate on COCO test split

When I try to evaluate the model on the COCO test split (about 60k images) with the command "python eval.py --dump_images 0 --num_images 5000 --model log_relation_transformer_bu/model-best.pth --infos_path log_relation_transformer_bu/infos_relation_transformer_bu-best.pkl --image_root ./data/coco2014/ --input_json data/cocotest.json --input_label_h5 data/cocotalk_label.h5 --input_fc_dir data/cocotest_bu_fc --input_att_dir data/cocotest_bu_att --input_box_dir data/cocobu_box --input_rel_box_dir data/cocobu_box_relative --use_box 1 --language_eval 1", I get the error "Bad file descriptor".
How can I evaluate the model on the COCO test split?

Dimension error for geometric and appearance features in Relation Encoding

Thanks for sharing the code. It's solidly organized and compactly written.

I have two questions about RelationTransformerModel.py, based on my running results.

  1. At line 454, the comparison produces a tensor of boolean values that cannot be added in the following line. I changed
    seq_mask = (seq.data > 0) to seq_mask = (seq.data > 0).type(torch.int8)
    and the error goes away.

  2. In the function box_attention at line 236,
    # multiplying log of geometric weights by feature weights
    w_mn = torch.log(torch.clamp(w_g, min = 1e-6)) + w_a

the dimensions of the geometric and appearance features do not match, so I get the following error:

RuntimeError: The size of tensor a (54) must match the size of tensor b (50) at non-singleton dimension 3

I've tried to figure out what's going on for quite a while but have no idea so far. I'm not sure whether it's due to my environment (I think not) or is just a typo in the code.

  • torch 0.4.1
  • torchvision 0.2.1
  • 4 x Tesla V100-SXM2 Driver Version: 410.104 CUDA Version: 10.0

Any input will be appreciated.
Jian

Train Error

When I try to run the standard cross-entropy loss training command, I get the following error:
(screenshot of the error not reproduced here)

How should I fix it?

Visualize caption predictions for my image file

Hello, thank you for your work!

I want to get a caption for the image I have.
So I created a folder 'vis_image', put the image in it, and ran the following command (I also created 'vis_image/val2014' and put the images in both directories).

python eval.py --dump_images 1 --num_images -1 --model log_relation_transformer_bu_rl_pretrain_beam5/model-best.pth --infos_path log_relation_transformer_bu_rl_pretrain_beam5/infos_relation_transformer_bu-best.pkl --image_root vis_image --input_json data/cocotalk.json --input_label_h5 data/cocotalk_label.h5 --input_fc_dir data/cocobu_fc --input_att_dir data/cocobu_att --input_box_dir data/cocobu_box --input_rel_box_dir data/cocobu_box_relative --beam_size 5 --batch_size 700

As a result, the following output was printed:

DataLoader loading json file: data/cocotalk.json
vocab size is 9487
DataLoader loading h5 file: data/cocobu_fc data/cocobu_att data/cocobu_box data/cocotalk_label.h5
max sequence length in data is 16
read 123287 image features
assigned 113287 images to split train
assigned 5000 images to split val
assigned 5000 images to split test

cp "vis_image/val2014/COCO_val2014_000000369771.jpg" vis/imgs/369771.jpg
cp: 'vis_image/val2014/COCO_val2014_000000369771.jpg' cannot be described: No such file or directory
image 369771: two plastic containers of food on a table
...

When I looked at the resulting JSON file in vis and at localhost:8000, I saw captions for the 5000 test images.
I just wanted to check the captions of the 3 sample images I have.

Could you explain in more detail how I can get a caption for an image of my own?

KeyError: 'cocobu_box\\1000'

$ python scripts/prepro_bbox_relative_coords.py --input_json data/dataset_coco.json --input_box_dir data/cocobu_box --output_dir data/cocobu_box_relative --image_root image
Reading coco dataset info
Output dir: data/cocobu_box_relative
processed 0 images (of 123287)
Traceback (most recent call last):
File "scripts/prepro_bbox_relative_coords.py", line 102, in
get_bbox_relative_coords(params)
File "scripts/prepro_bbox_relative_coords.py", line 80, in get_bbox_relative_coords
img_path=coco_ids_to_paths[filenumber]
KeyError: 'cocobu_box\1000'
How can I solve this problem?

self-critical training [duration and memory occupation]

Hi,

thank you a lot for your great work and some nice code!

I have a question regarding the extra self-critical training. I am not sure whether there is an issue with it, but could you please tell me how much memory self-critical training should consume?
I keep running into CUDA out-of-memory errors with 3 GPUs, and I can see that self-critical training is really memory-hungry. Therefore, I wanted to ask the authors how much memory this extra training required in the original experiments, and whether the code was optimized in any way to handle this issue.

Best,
Nikolai.

Error when processing my image folder

The eval script supports evaluation on a user-provided folder of images:

# For evaluation on a folder of images:
parser.add_argument('--image_folder', type=str, default='', help='If this is nonempty then will predict on the images in this folder path')
parser.add_argument('--image_root', type=str, default='', help='In case the image paths have to be preprended with a root path to an image folder')

I put some images into the images folder and ran the script:

python3 eval.py --dump_images 1 --num_images 10 --model log_relation_transformer_bu_rl/model-best.pth --infos_path log_relation_transformer_bu_rl/infos_relation_transformer_bu-best.pkl --image_folder images --language_eval 0

When doing so, I get the following error:

Traceback (most recent call last):
File "eval.py", line 175, in
loss, split_predictions, lang_stats = eval_utils.eval_split(model, crit, loader,
File "/home/docet/Projects/Pic2Text/object_relation_transformer-master/eval_utils.py", line 134, in eval_split
boxes_data= data['boxes'][np.arange(loader.batch_size) * loader.seq_per_img]
KeyError: 'boxes'

How can I solve it?

Confused about Equation 6 in the paper

Hey, I am confused about the x_m - x_n and y_m - y_n terms in Equation 6: when using the Faster R-CNN object detector, two box centers may coincide, making these differences zero and producing log(0). How is this handled?
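
One common way to sidestep the log(0) (a sketch, not necessarily what the released code does) is to clamp the absolute offset away from zero before taking the log:

import torch

def safe_log_offset(x_m, x_n, eps=1e-3):
    # Clamping keeps coincident box centers from producing log(0) = -inf.
    return torch.log(torch.clamp(torch.abs(x_m - x_n), min=eps))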
