This repository contains the code for our paper. We investigate the properties of joint multimodal representations derived from a task-specific model and from a multi-task model, with respect to different training objectives and information streams. We compare MCAN and multi-task ViLBERT on the VQA task, evaluating their performance on the VQA 2.0 and GQA datasets. We extend the implementations of both MCAN and multi-task ViLBERT.
This project was forked from lkopf/joint-multimodal-embeddings.