wangguanan / pytorch-person-reid-baseline-pcb-beyond-part-models

A strong implementation of PCB (Beyond Part Models), outperforming all existing implementations.

Python 99.10% Shell 0.90%
person-reidentification pcb market-1501 dukemtmc-reid ide re-identification


Pytorch-Person-ReID-PCB-Beyond-Part-Models

  • A Strong Implementation of PCB (Beyond Part Models, ECCV2018) on Market-1501 and DukeMTMC-reID datasets.

  • We support:

    • A strong PCB implementation which outperforms most existing implementations.
    • A simple and clear implementation, and end-to-end training and evaluation.
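The core idea of PCB is to split the backbone's final convolutional feature map into several horizontal stripes and average-pool each stripe into its own part-level feature. A minimal NumPy sketch of that pooling step (the function name and shapes are illustrative, not this repo's API):

```python
import numpy as np

def pcb_part_pool(feat, num_parts=6):
    """Split a CNN feature map (C, H, W) into `num_parts` horizontal
    stripes and average-pool each stripe into one part feature.
    Returns an array of shape (num_parts, C)."""
    c, h, w = feat.shape
    assert h % num_parts == 0, "feature-map height must divide evenly into parts"
    # Reshape height into (num_parts, stripe_height), then pool each stripe.
    stripes = feat.reshape(c, num_parts, h // num_parts, w)
    return stripes.mean(axis=(2, 3)).T  # (num_parts, C)

# A typical ResNet-50 feature map for a 384x128 person crop is (2048, 24, 8).
feat = np.random.rand(2048, 24, 8).astype(np.float32)
parts = pcb_part_pool(feat)
print(parts.shape)  # (6, 2048)
```

In the full model, each of the six part features is fed to its own classifier during training; at test time the part features are concatenated as the descriptor.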

News

  • We rewrote a strong Re-ID baseline, Bag of Tricks (BoT), with a simpler and clearer implementation that is friendlier to researchers and newcomers. Our code can be found here. BoT outperforms PCB while using only the global feature, and our implementation of BoT matches the performance of the official one.

Dependencies

Dataset Preparation

Train and Test

python main.py --market_path market_path --duke_path duke_path
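The command above implies `main.py` accepts at least the two dataset-path flags. A minimal sketch of that interface (the actual script likely defines more options, such as batch size and learning rate):

```python
import argparse

# Hypothetical reconstruction of the CLI implied by the README command;
# only --market_path and --duke_path are confirmed by the source.
parser = argparse.ArgumentParser(description='PCB training and evaluation')
parser.add_argument('--market_path', type=str, help='root folder of Market-1501')
parser.add_argument('--duke_path', type=str, help='root folder of DukeMTMC-reID')

# Simulate the README invocation.
args = parser.parse_args(['--market_path', '/data/market', '--duke_path', '/data/duke'])
print(args.market_path, args.duke_path)
```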

Experiments

1. Settings

  • We conduct our experiments on two GTX 1080 Ti GPUs.

2. Results

Results are reported as Rank-1 accuracy, with mAP in parentheses; "a2b" denotes training on dataset a and evaluating on dataset b.

| Implementation | market2market | duke2duke | market2duke | duke2market |
|---|---|---|---|---|
| PCB w/ REA (Ours) | 0.939 (0.832) <model.pth> | 0.856 (0.753) <model.pth> | 0.384 (0.237) | 0.555 (0.285) |
| PCB (Ours) | 0.934 (0.809) | 0.867 (0.746) | 0.440 (0.265) | 0.592 (0.308) |
| PCB (layumi) | 0.926 (0.774) | 0.642 (0.439) | - | - |
| PCB (huanghoujing) | 0.928 (0.785) | 0.845 (0.700) | - | - |
| PCB (Xiaoccer) | 0.927 (0.796) | - | - | - |
| PCB (Paper) | 0.924 (0.773) | 0.819 (0.653) | - | - |
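The "w/ REA" rows refer to Random Erasing Augmentation, which occludes a random rectangle of each training image. A simplified NumPy sketch of the idea (the real augmentation also randomizes the rectangle's aspect ratio and fill value):

```python
import numpy as np

def random_erase(img, prob=0.5, area_frac=(0.02, 0.4), rng=None):
    """Randomly erase a square-ish patch of a (C, H, W) image, filling it
    with noise. Simplified illustration of Random Erasing Augmentation."""
    rng = rng or np.random.default_rng(0)
    if rng.random() > prob:
        return img  # leave this sample unchanged
    c, h, w = img.shape
    frac = rng.uniform(*area_frac)            # fraction of area to erase
    eh = max(1, int(h * np.sqrt(frac)))       # erased height
    ew = max(1, int(w * np.sqrt(frac)))       # erased width
    top = rng.integers(0, h - eh + 1)
    left = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[:, top:top + eh, left:left + ew] = rng.random((c, eh, ew))
    return out

img = np.zeros((3, 256, 128))
erased = random_erase(img, prob=1.0)
print(erased.shape)  # (3, 256, 128)
```

Note the cross-domain columns above: REA helps within-domain but hurts transfer (e.g. market2duke drops from 0.440 to 0.384), a known trade-off of aggressive occlusion augmentation.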

Contacts

If you have any questions about the project, please feel free to contact me.

E-mail: [email protected]


Issues

RPP

Have you ever tried adding an RPP branch to your latest PCB module? Does it perform better than the original paper?

visualization map

In the published PCB paper, the authors analyze the highlighted activation maps of the different branches (e.g. Fig. 1 and Fig. 7). Do you know how to compute these maps?

Is a validation dataset needed?

Hi, Guan'an Wang

My English is poor and I am using Google Translate; if we can communicate in Chinese, that would be best.

I have been stuck on a ReID problem for several days. I think your code uses only the train and test sets; no validation set is used.

I would like to ask your thoughts: for ReID tasks, do you think validation is necessary?

Among the ReID projects I have looked at over the past few months, some use a validation set and some do not, so it is hard to form a definite opinion.

I think for machine learning problems validation is always important, since you need to make sure your model is gaining generalization power rather than overfitting the test set. But some areas still have very limited open data (still including ReID, I guess?), and the datasets used by academia do not have enough data for validation. With only a few validation samples, the validation error can be fairly random and not very meaningful. So for research purposes, I think it is fine to use just the training set and run extensive experiments on several datasets for method comparison. But for engineers who aim to build a reliable model in the wild, it is necessary to collect a large amount of data and train the model with validation.
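If one does want a validation set for ReID, the split must be identity-disjoint: all images of a held-out person go to validation, never just some of them. A minimal sketch of such a split (the helper name and label format are illustrative):

```python
import random

def split_identities(pids, val_frac=0.2, seed=0):
    """Hold out a fraction of person identities for validation.
    `pids` is one person-ID label per image. ReID splits must be
    identity-disjoint: images of a held-out person never appear in
    training. Hypothetical helper, not this repo's API."""
    ids = sorted(set(pids))
    random.Random(seed).shuffle(ids)
    n_val = max(1, int(len(ids) * val_frac))
    val_ids = set(ids[:n_val])
    train = [p for p in pids if p not in val_ids]
    val = [p for p in pids if p in val_ids]
    return train, val

# Ten images of five identities, two images each.
pids = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
train, val = split_identities(pids)
print(len(set(train) & set(val)))  # 0: no identity overlap
```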

May I ask why you chose not to use a validation dataset when you wrote the program?

About the results

Time: 2020-08-05 20:51:44; Test Dataset: market,
mAP: 0.4618259404352151
Rank: [0.7131829 0.7891924 0.82214964 0.84649644 0.86252969 0.87618765
0.88539192 0.89103325 0.89786223 0.90498812 0.90855107 0.91270784
0.9165677 0.92072447 0.92309976 0.92666271 0.93022565 0.93289786
0.93467933 0.93735154 0.9388361 0.94091449 0.94239905 0.94447743
0.945962 0.94685273 0.94774347 0.94833729 0.94982185 0.9510095
0.95190024 0.95249406 0.95308789 0.95368171 0.95427553 0.95486936
0.9557601 0.95665083 0.95724466 0.9584323 0.95932304 0.95961995
0.9608076 0.96110451 0.96140143 0.96140143 0.96169834 0.96169834
0.96258907 0.96347981 0.96347981 0.96407363 0.96407363 0.96466746
0.96466746 0.96526128 0.96555819 0.96585511 0.96585511 0.96615202
0.96674584 0.96674584 0.96704276 0.96704276 0.96733967 0.96793349
0.9682304 0.96852732 0.96882423 0.96912114 0.96941805 0.96971496
0.97030879 0.9706057 0.97149644 0.97179335 0.97209026 0.97238717
0.97238717 0.97238717 0.972981 0.972981 0.972981 0.972981
0.97327791 0.97416865 0.97416865 0.97416865 0.97416865 0.97416865
0.97446556 0.97446556 0.97446556 0.97505938 0.97505938 0.97535629
0.97535629 0.97535629 0.97535629 0.97565321 0.97595012 0.97595012
0.97624703 0.97624703 0.97624703 0.97624703 0.97624703 0.97624703
0.97624703 0.97654394 0.97684086 0.97684086 0.97684086 0.97684086
0.97713777 0.97743468 0.97743468 0.97773159 0.9780285 0.9780285
0.97832542 0.97891924 0.97921615 0.97951306 0.97951306 0.97951306
0.97951306 0.98010689 0.98010689 0.98010689 0.9804038 0.98070071
0.98070071 0.98070071 0.98070071 0.98070071 0.98099762 0.98099762
0.98099762 0.98129454 0.98129454 0.98129454 0.98129454 0.98129454
0.98129454 0.98159145 0.98159145 0.98159145 0.98159145 0.98159145]

I got the above results. I don't understand why I get 150 ranks after 120 epochs. What do they represent?
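These 150 monotonically non-decreasing values appear to be a CMC (Cumulative Matching Characteristic) curve, the standard ReID metric: entry k-1 is the fraction of queries whose correct match appears among the top-k ranked gallery images. If so, the length (150) is simply the maximum rank the evaluation code reports and has nothing to do with the 120 training epochs. A short sketch of how to read it, using the first entries from the log above:

```python
# Assumed interpretation: the printed array is a CMC curve, so cmc[k-1]
# is the rank-k matching accuracy. First three entries from the log:
cmc = [0.7131829, 0.7891924, 0.82214964]

rank1 = cmc[0]  # fraction of queries matched at rank 1 (~71.3%)
rank3 = cmc[2]  # fraction matched within the top 3 (~82.2%)
print(round(rank1, 3), round(rank3, 3))
```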
