
MPLT for RGB-T Tracking

Implementation of the paper "RGB-T Tracking via Multi-Modal Mutual Prompt Learning".

Environment Installation

conda create -n mplt python=3.8
conda activate mplt
bash install.sh

Project Paths Setup

Run the following command to set the paths for this project:

python tracking/create_default_local_file.py --workspace_dir . --data_dir ./data --save_dir ./output

After running this command, you can also modify the paths by editing these two files:

lib/train/admin/local.py  # paths about training
lib/test/evaluation/local.py  # paths about testing
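As an illustration, the generated training-paths file typically looks like the sketch below (the class and attribute names here are assumptions based on pytracking-style projects; check the generated `lib/train/admin/local.py` for the exact names):

```python
# Hypothetical sketch of a generated lib/train/admin/local.py.
# Attribute names in the actual repo may differ slightly.
class EnvironmentSettings:
    def __init__(self):
        self.workspace_dir = './output'    # where checkpoints and logs are saved
        self.lasher_dir = './data/lasher'  # LasHeR dataset root
```

Editing these attributes has the same effect as re-running the script with different `--data_dir` / `--save_dir` arguments.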

Data Preparation

Put the tracking datasets in ./data. It should look like:

${PROJECT_ROOT}
  -- data
      -- lasher
          |-- trainingset
          |-- testingset
          |-- trainingsetList.txt
          |-- testingsetList.txt
          ...
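A quick way to confirm the layout matches the tree above is a small check like the following (the entry names mirror the tree; adjust them if your LasHeR copy is organized differently):

```python
from pathlib import Path

# Entries expected directly under ./data/lasher, per the tree above.
EXPECTED = ['trainingset', 'testingset', 'trainingsetList.txt', 'testingsetList.txt']

def missing_entries(root):
    """Return the expected LasHeR entries that are absent under `root`."""
    root = Path(root)
    return [name for name in EXPECTED if not (root / name).exists()]

# After data preparation, missing_entries('./data/lasher') should return [].
```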

Training

Download SOT pretrained weights and put them under $PROJECT_ROOT$/pretrained_models.

python tracking/train.py --script mplt_track --config vitb_256_mplt_32x1_1e4_lasher_15ep_sot --save_dir ./output/vitb_256_mplt_32x1_1e4_lasher_15ep_sot --mode multiple --nproc_per_node 4

Replace --config with the desired model config under experiments/mplt_track.

Evaluation

Put the checkpoint into $PROJECT_ROOT$/output/config_name/... or modify the checkpoint path in testing code.

python tracking/test.py mplt_track vitb_256_mplt_32x1_1e4_lasher_15ep_sot --dataset_name lasher_test --threads 6 --num_gpus 1

python tracking/analysis_results.py --tracker_name mplt_track --tracker_param vitb_256_mplt_32x1_1e4_lasher_15ep_sot --dataset_name lasher_test

Results on LasHeR testing set

| Model | Backbone | Pretraining | Precision | Success | FPS | Checkpoint | Raw Result |
| ----- | -------- | ----------- | --------- | ------- | ---- | ---------- | ---------- |
| MPLT  | ViT-Base | SOT         | 72.0      | 57.1    | 22.8 | download   | download   |

Acknowledgments

Our project is developed upon OSTrack. Thanks for their contributions, which helped us implement our ideas quickly.

Citation

If our work is useful for your research, please consider citing it.


mplt's Issues

How long does MPLT take to train?

Hello, I am currently experiencing the following issue when trying to reproduce your code with 2 GPUs: only one GPU shows normal utilization, while the other's utilization is abnormal, and only 4 epochs have completed in 14 hours. How long does a full MPLT training run normally take for you?

Paper reproduction issue

Hello, while trying to reproduce your work I found that my test metrics on the LasHeR dataset differ noticeably from the paper (PR -1.6% / SR -0.6%). Did you train with LasHeR0327 or LasHeR0428? Could this be related to the dataset version or the environment? Also, would it be possible to share your training log for comparison? Many thanks!

Environment:
torch 1.11.0 + CUDA 11.4
LasHeR 0327
RTX 3090 × 2

How to test?

Hello, I tried to reproduce your model and encountered a problem during testing.
Executing:
python tracking/test.py mplt_track vitb_256_mplt_32x1_1e4_lasher_15ep_sot --dataset_name lasher_test --threads 6 --num_gpus 1

At this point I have a question.
This test command does not point to a specific trained model weight. So, how can I make the test use a particular checkpoint?

Looking forward to your reply!

How long should the test set take to run?

Following the README and information from other issues, I ran the following on a single 3090:
python tracking/test.py mplt_track vitb_256_mplt_32x1_1e4_lasher_15ep_sot --dataset_name lasher_test --threads 1 --num_gpus 1
However, after almost 3 hours, less than half of the results were out, which is longer than training, and the reported FPS stays at 11. Is this normal?

torch.cuda.OutOfMemoryError during training

I used three NVIDIA GeForce RTX 3090 GPUs.

Running: python tracking/train.py --script mplt_track --config vitb_256_mplt_32x1_1e4_lasher_15ep_sot --save_dir ./output/vitb_256_mplt_32x1_1e4_lasher_15ep_sot --mode multiple --nproc_per_node 3

produces this error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 58.00 MiB (GPU 0; 23.69 GiB total capacity; 6.80 GiB already allocated; 8.94 MiB free; 6.98 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

The paper says two RTX 3090s were used, so I would like to ask what could be causing this, and whether there is a fix. Thanks!
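Not speaking for the maintainers, but the error message itself suggests one mitigation: setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF to reduce allocator fragmentation. A minimal sketch (the value 128 is only an example; tune it for your setup):

```python
import os

# Must be set before the first CUDA allocation, i.e. before the training
# script imports torch and touches the GPU. The value 128 is illustrative.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Equivalently, export the variable in the shell before launching tracking/train.py.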

Model speed and parameter count

Hello, while trying to reproduce your work I found that the model's runtime speed differs greatly from the speed reported in the paper. How did you measure the model's speed and parameter count?

I modified profile_model.py under tracking to run the speed test; both the measured speed and the parameter count differ greatly from the paper.

My modified test script is attached:
profile_model.zip

Test environment:
torch 1.11.0 + CUDA 11.4
NVIDIA A40 × 1 and NVIDIA GeForce RTX 3090 × 1
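For reference, FPS measurements of this kind are usually taken after warm-up iterations, since the first forward passes on a GPU include CUDA initialization, and the GPU must be synchronized before reading the clock. A framework-free sketch of the timing loop, with a stand-in workload in place of the real tracker forward pass:

```python
import time

def measure_fps(run_once, warmup=10, iters=100):
    """Average calls per second of `run_once`, after warm-up iterations."""
    for _ in range(warmup):
        run_once()  # warm-up: excluded from timing
    start = time.perf_counter()
    for _ in range(iters):
        run_once()
    elapsed = time.perf_counter() - start
    return iters / elapsed

# Stand-in workload; replace with one real tracker forward pass
# (and call torch.cuda.synchronize() before reading the clock on GPU).
def dummy_forward():
    sum(i * i for i in range(10_000))

print(f"{measure_fps(dummy_forward):.1f} calls/s")
```

Skipping the warm-up or the GPU synchronization is a common source of FPS numbers that disagree with a paper's.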
