This repo focuses on video-based person re-identification.

State-of-the-art (SOTA) results:
| No. | Dataset | Rank-1 | mAP | Link |
| --- | --- | --- | --- | --- |
| 1 | PRID | 97.4% | N/A | paper |
| 2 | iLIDS-VID | 92.0% | N/A | paper [ICCV 2021] |
| 3 | MARS | 91.4% | 86.7% | paper [CVPR 2021] |
| 4 | DukeMTMC | 98.3% | 97.4% | paper [ICCV 2021] |
| 5 | LS-VID | 84.6% | 75.1% | paper [CVPR 2021] |
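For reference, Rank-1 and mAP in the table above are the standard retrieval metrics: Rank-1 is the fraction of queries whose nearest gallery sample has the correct identity, and mAP averages the precision over all correct matches per query. The sketch below shows a minimal version of this computation; it is illustrative only and omits benchmark-specific details such as filtering out same-camera gallery samples (as done on MARS and DukeMTMC).

```python
import numpy as np

def evaluate_rank1_map(distmat, query_ids, gallery_ids):
    """Compute Rank-1 accuracy and mAP from a query-gallery distance matrix.

    distmat:     (num_query, num_gallery) pairwise distances.
    query_ids:   (num_query,) identity labels of the queries.
    gallery_ids: (num_gallery,) identity labels of the gallery.

    Simplified sketch: real re-ID protocols additionally exclude
    same-camera matches, which is omitted here.
    """
    num_q = distmat.shape[0]
    rank1_hits = 0
    aps = []
    for i in range(num_q):
        order = np.argsort(distmat[i])  # gallery indices sorted by distance
        matches = (gallery_ids[order] == query_ids[i]).astype(np.int32)
        if matches[0]:
            rank1_hits += 1  # nearest gallery sample is a correct match
        num_rel = matches.sum()
        if num_rel == 0:
            continue  # query identity absent from gallery
        # precision at each rank, accumulated only at the correct matches
        precision = np.cumsum(matches) / (np.arange(len(matches)) + 1)
        aps.append((precision * matches).sum() / num_rel)
    return rank1_hits / num_q, float(np.mean(aps))
```

With a toy 2-query, 3-gallery example, `evaluate_rank1_map` returns the usual (Rank-1, mAP) pair that the table reports in percent.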
If you find this repo useful, please cite the following papers:
```bibtex
@inproceedings{liu2021watching,
  title={Watching You: Global-guided Reciprocal Learning for Video-based Person Re-identification},
  author={Liu, Xuehu and Zhang, Pingping and Yu, Chenyang and Lu, Huchuan and Yang, Xiaoyun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={13334--13343},
  year={2021}
}

@article{liu2021video,
  title={A Video Is Worth Three Views: Trigeminal Transformers for Video-based Person Re-identification},
  author={Liu, Xuehu and Zhang, Pingping and Yu, Chenyang and Lu, Huchuan and Qian, Xuesheng and Yang, Xiaoyun},
  journal={arXiv preprint arXiv:2104.01745},
  year={2021}
}
```