Multipar-T: Multiparty-Transformer for Capturing Contingent Behaviors in Group Conversations

This is the official repository for our IJCAI 2023 paper. The repo is organized into the following files:

  • train.py -- our main experimental pipeline: the train, val, and test loops and the backpropagation step
  • dataset_vid.py -- the dataset and dataloader classes
  • layers.py -- layer dependencies for our model classes
  • losses.py -- custom loss functions not included in PyTorch
  • model.py -- model classes for the baselines and our proposed Multipar-T
  • utils.py -- helper functions and data utilities for the dataset
  • exp.sh -- example scripts to run the models for reproducibility

RoomReader Dataset:

Links to the paper, the agreement form, and the dataset.

To use the dataloader:

  • create the directory "../data/roomreader/room_reader_corpus_db"
  • download the data in the exact structure provided by the original authors at the above link (i.e. OpenFace features, video, and annotations in "../data/roomreader/room_reader_corpus_db/OpenFace_Features", "../data/roomreader/room_reader_corpus_db/video", etc.)
  • merge "../data/roomreader/room_reader_corpus_db/continuous_engagement/EngAnno_1", "../data/roomreader/room_reader_corpus_db/continuous_engagement/EngAnno_2", and "../data/roomreader/room_reader_corpus_db/continuous_engagement/EngAnno_3" into "../data/roomreader/room_reader_corpus_db/continuous_engagement/AllAnno", prefixing each file with the name of its source directory so that, for example, "continuous_engagement/EngAnno_2/S19_P110_Ivy_all.csv" becomes "continuous_engagement/AllAnno/EngAnno_2_S19_P110_Ivy_all.csv" (see the sketch after this list)
  • You should be able to use the dataset and dataloader classes now!
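
The merge step above can be done with a short shell loop. The following is a minimal sketch assuming the directory layout described in the previous steps; the loop itself is illustrative and not part of the original codebase:

# merge the three annotator directories into AllAnno, prefixing each file with its source directory name
ANNO_DIR=../data/roomreader/room_reader_corpus_db/continuous_engagement
mkdir -p "$ANNO_DIR/AllAnno"
for d in EngAnno_1 EngAnno_2 EngAnno_3; do
  for f in "$ANNO_DIR/$d"/*.csv; do
    cp "$f" "$ANNO_DIR/AllAnno/${d}_$(basename "$f")"
  done
done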

To use the full codebase with all baselines:

conda create -y --name mlp_env python=3.7
conda install --force-reinstall -y -q --name mlp_env -c conda-forge --file requirements.txt

This creates the mlp_env environment with everything needed to run our scripts and use our dataset.
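
Once the environment is built, a typical session might look like the following (a minimal sketch; exp.sh is the example script listed above and may need paths adjusted for your setup):

# activate the environment and run the example experiment scripts
conda activate mlp_env
bash exp.sh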

For a quick start, we recommend taking a look at quickstart.ipynb.

Baselines

Here are the scripts used to run our model and the baselines:

# MultipartyTransformer
python train.py --model MultipartyTransformer --train_level group --save_dir test_dir --behavior_dims 100 --data roomreader --data_split bygroup --group_num 5 --seed 0 --lr 0.0001 --batch_size 64 --epochs 20 --loss focal  --labels raw  --oversampling --video_feat resnet

# GAT
python train.py --model Multiparty_GAT --train_level group --save_dir test_dir --data roomreader --data_split bygroup --group_num 5 --seed 0 --lr 0.0001 --batch_size 64 --epochs 20 --loss focal  --labels raw  --oversampling --video_feat resnet

# TEMMA
python train.py --model TEMMA --train_level individual --save_dir test_dir --data roomreader --data_split bygroup --group_num 5 --seed 0 --lr 0.0001  --epochs 20 --loss focal --video_feat resnet --labels raw --oversampling

# ConvLSTM
python train.py --model ConvLSTM --train_level individual --save_dir test_dir --data roomreader --data_split bygroup --group_num 5 --seed 0 --lr 0.0001  --epochs 20 --loss focal --video_feat resnet --labels raw --oversampling

# OCTCNNLSTM
python train.py --model OCTCNNLSTM --train_level individual --data roomreader --save_dir test_dir --data_split bygroup --group_num 5 --seed 0 --lr 0.0001 --epochs 20 --loss focal --video_feat resnet --labels raw --oversampling

# BOOT
python train.py --model BOOT --train_level individual --save_dir test_dir --data roomreader --data_split bygroup --group_num 5 --seed 0 --lr 0.0001 --batch_size 64 --epochs 20 --loss focal  --labels raw  --oversampling --video_feat resnet

# EnsModel
python train.py --model EnsModel --train_level individual --save_dir test_dir --data roomreader --data_split bygroup --group_num 5 --seed 0 --lr 0.0001 --batch_size 64 --epochs 20 --loss focal  --labels raw  --oversampling --video_feat resnet


# HTMIL
python train.py --model HTMIL --train_level individual --save_dir test_dir --data roomreader --data_split bygroup --group_num 5 --seed 0 --lr 0.0001 --batch_size 64 --epochs 20 --loss focal  --labels raw  --oversampling --video_feat resnet
