
VisCOLL

Code and data for the EMNLP 2020 paper "Visually Grounded Continual Learning of Compositional Phrases".

Check out our project website (https://inklab.usc.edu/viscoll-project/) for the data explorers and the leaderboard!

Installation

conda create -n viscoll-env python==3.7.5
conda activate viscoll-env
pip install -r requirements.txt

Overview

VisCOLL proposes a problem setup and studies algorithms for continual learning and compositionality over visual-linguistic data. In VisCOLL, the model visits a stream of examples whose data distribution evolves over time and learns to perform masked phrase prediction. We construct COCO-shift and Flickr-shift (based on COCO-captions and Flickr30k-entities) for this study.

Training and evaluation

This repo includes code for running and evaluating the ER/MIR/AGEM continual learning algorithms on the VisCOLL datasets (COCO-shift and Flickr-shift), with VLBERT or LXMERT models.

For example, to train a VLBERT model with a memory of 10,000 examples on COCO using the Experience Replay (ER) algorithm, run:

python train_mlm.py --name debug --config configs/mlmcaptioning/er.yaml --seed 0 --cfg MLMCAPTION.BASE=vlbert OUTPUT_DIR=runs/

See walkthrough.ipynb in this repo for a detailed walkthrough of training, inference, and evaluation.

Data

We release the constructed data streams and scripts for visualization in the datasets folder.

COCO shift

COCO-shift is under datasets/coco/coco_buffer, with file names of the form task_buffer_real_split_{1}split{2}novel_comps{3}_task_partition_any_objects.pkl, where:

  1. {1} is the official data split from which the examples are drawn;
  2. {2} is how the split is used during training: we separate out 5,000 examples from the original train split as validation examples, and the official val split is used as the test split;
  3. {3} indicates whether the data is the novel-composition split of 24 held-out concept pairs.

The pickled file is a Python list where each item is a dict, e.g.:

{'annotation': {'image_id': 374041, 'id': 31929, 'caption': 'Someone sitting on an elephant standing by a fence.  '}, 'task': 'elephant', 'mask': (3, 5)}

The mask is the text span that should be masked for prediction. image_id can be used for locating the image in the dataset.
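
Below is a minimal sketch (not part of the repo; the file name is illustrative) of reading a buffer and recovering the masked phrase. It assumes whitespace tokenization of the caption, which may differ from the repo's own tokenization:

import pickle

# Illustrative file name; substitute one of the task_buffer_*.pkl files.
with open("datasets/coco/coco_buffer/task_buffer_example.pkl", "rb") as f:
    stream = pickle.load(f)  # a list of dicts as described above

item = stream[0]
tokens = item["annotation"]["caption"].split()  # assumes whitespace tokenization
start, end = item["mask"]
masked = tokens[:start] + ["[MASK]"] * (end - start) + tokens[end:]
target = tokens[start:end]  # the phrase the model must predict, e.g. "an elephant"
print(" ".join(masked), "->", " ".join(target))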

Image features for COCO

If you use the provided data streams above (i.e., you do not create the non-stationary data stream yourself, which requires extra steps such as phrase extraction), the only additional files required are:

  1. Extracted image features for COCO under datasets/coco-features. We use code from this repo to extract features. We will upload our extracted features soon;

  2. A JSON dictionary mapping image ids to the image features above. Included in this repo. (A usage sketch follows this list.)
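
The following is a hypothetical sketch of how these pieces fit together; the mapping file name, feature file layout, and key format below are assumptions for illustration, not the repo's actual API:

import json
import pickle

import numpy as np

# Illustrative buffer file name, as in the sketch above.
with open("datasets/coco/coco_buffer/task_buffer_example.pkl", "rb") as f:
    stream = pickle.load(f)
# Assumed name for the id-to-feature mapping included in the repo.
with open("datasets/coco/image_id_to_feature.json") as f:
    id_to_feature = json.load(f)

item = stream[0]
feature_file = id_to_feature[str(item["annotation"]["image_id"])]  # assumed key format
features = np.load(f"datasets/coco-features/{feature_file}")  # assumed .npy feature files
print(features.shape)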

Measuring compositional generalization

We use data from this repo to evaluate compositional generalization. Please put compositional-image-captioning/data/dataset_splits/ under datasets/novel_comps and compositional-image-captioning/data/occurrences/ under datasets/occurrences.

Flickr shift

Flickr-shift is under datasets/flickr/flickr_buffer. We use the official split of Flickr30k Entities. Below is an example from the Flickr-shift dataset:

{'tokens': ['a', 'motorcyclist', '"', 'pops', 'a', 'wheelie', '"', 'in', 'a', 'grassy', 'field', 'framed', 'by', 'rolling', 'hills', '.'], 'task_id': 518, 'image_id': '3353962769', 'instance_id': 8714, 'phrase_offset': (0, 2)}

The phrase offset is the text span that should be masked for prediction. image_id can be used for retrieving the image in the dataset.
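
Analogous to the COCO-shift sketch above, here is a minimal illustration (not part of the repo) of masking the phrase span of a Flickr-shift item:

item = {
    "tokens": ["a", "motorcyclist", '"', "pops", "a", "wheelie", '"', "in",
               "a", "grassy", "field", "framed", "by", "rolling", "hills", "."],
    "task_id": 518, "image_id": "3353962769", "instance_id": 8714,
    "phrase_offset": (0, 2),
}
start, end = item["phrase_offset"]
masked = item["tokens"][:start] + ["[MASK]"] * (end - start) + item["tokens"][end:]
target = item["tokens"][start:end]  # -> ["a", "motorcyclist"]
print(" ".join(masked), "->", " ".join(target))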

Image features for Flickr

As with COCO, unless you want to create the data stream yourself, the only files required are:

  1. Extracted image features for Flickr under datasets/flickr-features. Similarly, we use code from this repo to perform the extraction; the extracted features will be uploaded soon.

  2. A JSON dictionary mapping image ids to the image features above. Included in this repo.

Visualizing the stream

We provide visual_stream.ipynb to visualize task distributions over time in the Flickr-shift dataset.
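
Independently of visual_stream.ipynb, here is a minimal sketch of one way to plot how often tasks appear along the stream, binned into fixed-size windows. The file name is illustrative, and it assumes a pickled list of dicts with a 'task_id' field as in the example above:

import pickle

import matplotlib.pyplot as plt

# Illustrative file name; substitute one of the buffer files under flickr_buffer.
with open("datasets/flickr/flickr_buffer/stream_example.pkl", "rb") as f:
    stream = pickle.load(f)

window = 1000
tasks = sorted({item["task_id"] for item in stream})
for task in tasks[:5]:  # plot a few tasks to keep the figure readable
    counts = [sum(it["task_id"] == task for it in stream[i:i + window])
              for i in range(0, len(stream), window)]
    plt.plot(counts, label=f"task {task}")
plt.xlabel(f"stream position (x{window} examples)")
plt.ylabel("examples per window")
plt.legend()
plt.show()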

Citation

@inproceedings{Jin2020VisuallyGC,
    title={Visually Grounded Continual Learning of Compositional Phrases},
    author={Xisen Jin and Junyi Du and Arka Sadhu and R. Nevatia and X. Ren},
    booktitle={EMNLP},
    year={2020}
}


viscoll's Issues

label not matching

{'annotation': {'image_id': 374041, 'id': 31929, 'caption': 'Someone sitting on an elephant standing by a fence. '}, 'task': 'elephant', 'mask': (3, 5)}

The image_id and the id (annotation id) do not match.

  1. The image is of an elephant.
  2. But the id is the annotation for a computer image.
    Not sure about other items in the pickle.

setuptools == 42.0.2.post20191203

When installing each package in requirements.txt, I encountered an issue involving "setuptools == 42.0.2.post20191203". The error is "ERROR: No matching distribution found for setuptools==42.0.2.post20191203". However, I can successfully run "pip install setuptools==42.0.2", but the version number does not match 42.0.2.post20191203. Please tell me what I should do next.

Looking forward to your answer, thanks!
