An unofficial pytorch implementation of "TransVG: End-to-End Visual Grounding with Transformers".
The model is still training. Because some implementation details differ from the paper, I cannot guarantee that the reported performance will be reproduced. A table with my own reproduced results will be added once training finishes.
Also, if you have any questions about the code please feel free to ask.
paper: https://arxiv.org/abs/2104.08541
Create the conda environment from the `environment.yml` file:

```
conda env create -f environment.yml
```
Activate the environment with:

```
conda activate transvg
```
- Please refer to ReSC and follow its steps to prepare the submodules and the associated data:
- the RefCOCO, RefCOCO+, RefCOCOg, and ReferItGame datasets;
- the dataset annotations, which are stored in `./data`.
- Please refer to DETR and download the pretrained model weights. I used the DETR model with a ResNet-50 backbone, which reaches 42.0 AP on COCO 2017. Store the checkpoint at `./saved_models/detr-r50-e632da11.pth`.
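Since the DETR checkpoint also contains detection heads that a grounding model may not reuse, loading typically keeps only the parameters whose names and shapes match the target model. A minimal sketch of that filtering step, using plain dicts of parameter shapes as stand-ins for real `state_dict`s (the function name and layout are my assumptions, not this repo's API):

```python
# Sketch: keep only checkpoint parameters whose name and shape match the
# target model. Real code would operate on torch state_dicts loaded with
# torch.load(); here shapes are plain tuples so the idea stays self-contained.

def filter_pretrained(model_shapes, ckpt_shapes):
    """Return (kept, skipped) given {param_name: shape} dicts."""
    kept = {k: v for k, v in ckpt_shapes.items()
            if k in model_shapes and model_shapes[k] == v}
    skipped = sorted(set(ckpt_shapes) - set(kept))
    return kept, skipped

model = {"backbone.conv1.weight": (64, 3, 7, 7), "bbox_head.weight": (4, 256)}
ckpt = {"backbone.conv1.weight": (64, 3, 7, 7), "class_embed.weight": (92, 256)}
kept, skipped = filter_pretrained(model, ckpt)
print(sorted(kept))  # ['backbone.conv1.weight']
print(skipped)       # ['class_embed.weight']
```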
Train the model with the following command:

```
python train.py --data_root XXX --dataset {dataset_name} --gpu {gpu_id}
```

Evaluate the model with the following command:

```
python train.py --data_root XXX --dataset {dataset_name} --gpu {gpu_id} --test
```
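For reference, the flag interface implied by these commands can be sketched with `argparse`. The dataset names in `choices` are my guesses at ReSC-style split names and the defaults are assumptions, not taken from this repo's `train.py`:

```python
import argparse

def build_parser():
    # Mirrors the flags used in the train/eval commands above; the dataset
    # choices and defaults are assumptions, not the repo's actual values.
    p = argparse.ArgumentParser(description="TransVG training / evaluation")
    p.add_argument("--data_root", required=True,
                   help="root directory of the prepared datasets")
    p.add_argument("--dataset", required=True,
                   choices=["referit", "unc", "unc+", "gref"],  # assumed names
                   help="which visual grounding dataset to use")
    p.add_argument("--gpu", default="0", help="GPU id to run on")
    p.add_argument("--test", action="store_true",
                   help="run evaluation instead of training")
    return p

args = build_parser().parse_args(
    ["--data_root", "./data", "--dataset", "referit", "--test"])
print(args.dataset, args.test)  # referit True
```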