
[IJCAI 2023] RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search

Home Page: https://arxiv.org/abs/2305.13653

License: MIT License

Languages: Python 99.09%, Shell 0.91%
Topics: person-reidentification, person-retrieval, person-search, text-based-person-search


RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search


This is the official PyTorch implementation of the paper RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search (IJCAI 2023). This repository supports training and evaluation on three text-based person search benchmarks: CUHK-PEDES, ICFG-PEDES and RSTPReid.

Usage

Requirements

  • pytorch 1.9.1
  • torchvision 0.10.1
  • transformers 4.8.1
  • timm 0.4.9
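These can be installed with pip, for example (a minimal sketch; the pinned versions mirror the list above, and you may prefer a CUDA-matched pytorch build for your setup):

# install the pinned dependencies listed above
pip install torch==1.9.1 torchvision==0.10.1 transformers==4.8.1 timm==0.4.9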

Prepare Datasets

  1. Download the CUHK-PEDES dataset from here, the ICFG-PEDES dataset from here, and the RSTPReid dataset from here.
  2. Organize them under your dataset root dir as follows:
    |-- your dataset root dir/
    |   |-- <CUHK-PEDES>/
    |       |-- imgs
    |           |-- cam_a
    |           |-- cam_b
    |           |-- ...
    |       |-- reid_raw.json
    |
    |   |-- <ICFG-PEDES>/
    |       |-- imgs
    |           |-- test
    |           |-- train
    |       |-- ICFG-PEDES.json
    |
    |   |-- <RSTPReid>/
    |       |-- imgs
    |       |-- data_captions.json
    
  3. Split the raw annotations into train.json, val.json, and test.json for training, validation, and testing, respectively.
    # 1. CUHK-PEDES
    bash shell/data_process.sh
    # or
    python data_process.py --dataset_name "CUHK-PEDES" --dataset_root_dir [CUHK-PEDES DATASET DIRECTORY]
    
    # 2. ICFG-PEDES
    bash shell/data_process.sh
    # or
    python data_process.py --dataset_name "ICFG-PEDES" --dataset_root_dir [ICFG-PEDES DATASET DIRECTORY]
    
    # 3. RSTPReid
    bash shell/data_process.sh
    # or
    python data_process.py --dataset_name "RSTPReid" --dataset_root_dir [RSTPReid DATASET DIRECTORY]
  4. After processing, the datasets are organized as follows:
     |-- your dataset root dir/
     |   |-- <CUHK-PEDES>/
     |       |-- imgs
     |           |-- cam_a
     |           |-- cam_b
     |           |-- ...
     |       |-- processed_data
     |           |-- train.json
     |           |-- val.json
     |           |-- test.json
     |       |-- reid_raw.json
     |
     |   |-- <ICFG-PEDES>/
     |       |-- imgs
     |           |-- test
     |           |-- train
     |       |-- processed_data
     |           |-- train.json
     |           |-- val.json
     |           |-- test.json
     |       |-- ICFG-PEDES.json
     |
     |   |-- <RSTPReid>/
     |       |-- imgs
     |       |-- processed_data
     |           |-- train.json
     |           |-- val.json
     |           |-- test.json
     |       |-- data_captions.json
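As a quick sanity check after the split (a sketch; substitute your actual dataset root dir), each dataset should now contain a processed_data folder with the three split files:

# list the generated splits for, e.g., CUHK-PEDES
ls [your dataset root dir]/CUHK-PEDES/processed_data
# expected: test.json  train.json  val.json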
    

Pretrained Checkpoint

Training starts from a pretrained ALBEF checkpoint (see Acknowledgments); download it and pass its path to the training commands below via --checkpoint.

Training

# Usage:
# 1. Training on CUHK-PEDES
bash shell/cuhk-train.sh
# or
python -m torch.distributed.run --nproc_per_node=4 --rdzv_endpoint=127.0.0.1:29501 \
Retrieval.py \
--config configs/PS_cuhk_pedes.yaml \
--output_dir output/cuhk-pedes/train \
--checkpoint [PRETRAINED ALBEF CHECKPOINT PATH] \
--eval_mAP

# 2. Training on ICFG-PEDES
bash shell/icfg-train.sh
# or
python -m torch.distributed.run --nproc_per_node=4 --rdzv_endpoint=127.0.0.1:29501 \
Retrieval.py \
--config configs/PS_icfg_pedes.yaml \
--output_dir output/icfg-pedes/train \
--checkpoint [PRETRAINED ALBEF CHECKPOINT PATH] \
--eval_mAP

# 3. Training on RSTPReid
bash shell/rstp-train.sh
# or
python -m torch.distributed.run --nproc_per_node=4 --rdzv_endpoint=127.0.0.1:29501 \
Retrieval.py \
--config configs/PS_rstp_reid.yaml \
--output_dir output/rstp-reid/train \
--checkpoint [PRETRAINED ALBEF CHECKPOINT PATH] \
--eval_mAP
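
The commands above assume 4 GPUs (--nproc_per_node=4). As an untested sketch, a single-GPU run on CUHK-PEDES would only change that flag:

# hypothetical single-GPU variant of the CUHK-PEDES training command
python -m torch.distributed.run --nproc_per_node=1 --rdzv_endpoint=127.0.0.1:29501 \
Retrieval.py \
--config configs/PS_cuhk_pedes.yaml \
--output_dir output/cuhk-pedes/train \
--checkpoint [PRETRAINED ALBEF CHECKPOINT PATH] \
--eval_mAP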

Testing

# Usage:
# 1. Testing on CUHK-PEDES
bash shell/cuhk-eval.sh
# or
python -m torch.distributed.run --nproc_per_node=4 --rdzv_endpoint=127.0.0.1:29501 \
Retrieval.py \
--config configs/PS_cuhk_pedes.yaml \
--output_dir output/cuhk-pedes/evaluation \
--checkpoint [CHECKPOINT FILE PATH] \
--eval_mAP \
--evaluate

# 2. Testing on ICFG-PEDES
bash shell/icfg-eval.sh
# or
python -m torch.distributed.run --nproc_per_node=4 --rdzv_endpoint=127.0.0.1:29501 \
Retrieval.py \
--config configs/PS_icfg_pedes.yaml \
--output_dir output/icfg-pedes/evaluation \
--checkpoint [CHECKPOINT FILE PATH] \
--eval_mAP \
--evaluate

# 3. Testing on RSTPReid
bash shell/rstp-eval.sh
# or
python -m torch.distributed.run --nproc_per_node=4 --rdzv_endpoint=127.0.0.1:29501 \
Retrieval.py \
--config configs/PS_rstp_reid.yaml \
--output_dir output/rstp-reid/evaluation \
--checkpoint [CHECKPOINT FILE PATH] \
--eval_mAP \
--evaluate
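
As a concrete example, evaluating a model produced by the training run above might look as follows (checkpoint_best.pth is a hypothetical file name; use whatever your run actually saved, or one of the released models linked in the results section below):

# evaluate a fine-tuned CUHK-PEDES model (hypothetical checkpoint name)
python -m torch.distributed.run --nproc_per_node=4 --rdzv_endpoint=127.0.0.1:29501 \
Retrieval.py \
--config configs/PS_cuhk_pedes.yaml \
--output_dir output/cuhk-pedes/evaluation \
--checkpoint output/cuhk-pedes/train/checkpoint_best.pth \
--eval_mAP \
--evaluate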

RaSa Performance on Three Text-based Person Search Benchmarks

CUHK-PEDES dataset

| Method | Rank-1 | Rank-5 | Rank-10 | mAP |
|--------|--------|--------|---------|-----|
| CMPM/C | 49.37 | 71.69 | 79.27 | - |
| ViTAA | 55.97 | 75.84 | 83.52 | - |
| DSSL | 59.98 | 80.41 | 87.56 | - |
| SAF | 64.13 | 82.62 | 88.40 | 58.61 |
| LGUR | 65.25 | 83.12 | 89.00 | - |
| IVT | 65.59 | 83.11 | 89.21 | - |
| CFine | 69.57 | 85.93 | 91.15 | - |
| ALBEF | 60.28 | 79.52 | 86.34 | 56.67 |
| RaSa (ours) | 76.51 | 90.29 | 94.25 | 69.38 |

Model for CUHK-PEDES

ICFG-PEDES dataset

| Method | Rank-1 | Rank-5 | Rank-10 | mAP |
|--------|--------|--------|---------|-----|
| CMPM/C | 43.51 | 65.44 | 74.26 | - |
| SSAN | 54.23 | 72.63 | 79.53 | - |
| SAF | 54.86 | 72.13 | 79.13 | 32.76 |
| IVT | 56.04 | 73.60 | 80.22 | - |
| CFine | 60.83 | 76.55 | 82.42 | - |
| ALBEF | 34.46 | 52.32 | 60.40 | 19.62 |
| RaSa (ours) | 65.28 | 80.40 | 85.12 | 41.29 |

Model for ICFG-PEDES

RSTPReid dataset

| Method | Rank-1 | Rank-5 | Rank-10 | mAP |
|--------|--------|--------|---------|-----|
| DSSL | 32.43 | 55.08 | 63.19 | - |
| SSAN | 43.50 | 67.80 | 77.15 | - |
| SAF | 44.05 | 67.30 | 76.25 | 36.81 |
| IVT | 46.70 | 70.00 | 78.80 | - |
| CFine | 50.55 | 72.50 | 81.60 | - |
| ALBEF | 50.10 | 73.70 | 82.10 | 41.73 |
| RaSa (ours) | 66.90 | 86.50 | 91.35 | 52.31 |

Model for RSTPReid

Acknowledgments

The implementation of RaSa relies on resources from ALBEF, Huggingface Transformers, and timm. We sincerely thank the original authors for open-sourcing their work.

Citation

If you find this code useful for your research, please cite our paper.

@article{bai2023rasa,
  title={RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search},
  author={Bai, Yang and Cao, Min and Gao, Daming and Cao, Ziqiang and Chen, Chen and Fan, Zhenfeng and Nie, Liqiang and Zhang, Min},
  journal={arXiv preprint arXiv:2305.13653},
  year={2023}
}


rasa's Issues

Error when running the code

[screenshot of the error]

Thank you so much for your open-source work!

As shown in the screenshot above, I ran into this error. What might be causing it? Thank you very much for your help.

Where is the code for the cross-modal encoder?

Thank you for open-sourcing the code! It's wonderful work.
But I'm confused about where the cross-modal encoder is implemented; I couldn't find it in the model_person_search.py file.
Looking forward to your reply!

Why does the dataloader in RaSa return values like this?

[screenshot of the dataloader output]
This is great work, thank you for open-sourcing it!
Why does the train loader return the image twice? The two texts make sense (one is the original text and the other is the augmented text), but both images go through the same transform and are identical. Why is this design necessary?

Where do the labels for strong and weak positive pairs come from?

Hello, I greatly admire this work; thank you for open-sourcing the code. I have read the paper, and I have a question about Section 3.2: "We propose a Positive Relation Detection (PRD) pretext task to detect the type of the positive pair (i.e., strong or weak), which is formulated as: ... where (I, Tp) denotes a positive pair, y^prd is the ground truth label (i.e., [1, 0]⊤ for the strong positive pair and [0, 1]⊤ for the weak pair)". Where does the label y^prd for the strong and weak positive pairs come from? The public datasets do not contain this label; did you annotate it yourselves?

About the number of model parameters

Hello, could you tell me the exact number of parameters of the model in this work?

Publishing source code?

Hello, your work is excellent and achieves SOTA performance. However, may I kindly ask when the source code for this work will be published? It has been quite a while since the IJCAI 2023 notifications went out.
