
hhl's Introduction

Generalizing A Person Retrieval Model Hetero- and Homogeneously
================================================================

Code for Generalizing A Person Retrieval Model Hetero- and Homogeneously (ECCV 2018). [paper]

Preparation

Requirements: Python 3.6 and PyTorch 0.4.0

  1. Install PyTorch

  2. Download dataset

    • reid_dataset [Google Drive]

    • reid_dataset includes Market-1501 (with CamStyle), DukeMTMC-reID (with CamStyle), and CUHK03

    • Unzip reid_dataset under 'HHL/data/' (see the layout sketch below)
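
For reference, the training scripts are run with --data-dir ./data (see the commands below), so after unzipping, the layout should look roughly like this sketch (the exact subfolder names are an assumption inferred from the -s/-t dataset flags, not confirmed):

HHL/data/
├── market/    # Market-1501, with CamStyle images
├── duke/      # DukeMTMC-reID, with CamStyle images
└── cuhk03/    # CUHK03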

CamStyle Generation

You can train the CamStyle model and generate CamStyle images with stargan4reid.

Training and test domain adaptation model for person re-ID

  1. Baseline
# For Duke to Market-1501
python baseline.py -s duke -t market --logs-dir logs/duke2market-baseline
# For Market-1501 to Duke
python baseline.py -s market -t duke --logs-dir logs/market2duke-baseline
  2. HHL
# For Duke to Market-1501
python HHL.py -s duke -t market --logs-dir logs/duke2market-HHL
# For Market-1501 to Duke
python HHL.py -s market -t duke --logs-dir logs/market2duke-HHL

Results

Methods  | Duke to Market   | Market to Duke
         | Rank-1    mAP    | Rank-1    mAP
Baseline | 44.6      20.6   | 32.9      16.9
HHL      | 62.2      31.4   | 46.9      27.2

References

  • [1] Our code is built on top of open-reid.

  • [2] StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. CVPR 2018.

  • [3] Camera Style Adaptation for Person Re-identification. CVPR 2018.

Citation

If you find this code useful in your research, please consider citing:

@inproceedings{zhong2018generalizing,
  title={Generalizing A Person Retrieval Model Hetero- and Homogeneously},
  author={Zhong, Zhun and Zheng, Liang and Li, Shaozi and Yang, Yi},
  booktitle={ECCV},
  year={2018}
}

Contact me

If you have any questions about this code, please do not hesitate to contact me.

Zhun Zhong


hhl's Issues

How to restore the model and continue training

Hi, I have run into an error caused by insufficient shared memory, so I want to restore the model and continue training. But I do not know how to use the 'CKPT' file to continue training. Can you help me? Thank you, my friend.
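
For reference, a minimal resume sketch, assuming the open-reid-style checkpoint layout this repo writes (a dict with 'state_dict' and 'epoch' keys, as in the training loop quoted in a later issue); `model` stands in for the bare (unwrapped) network built by baseline.py/HHL.py, since the loop saves model.module.state_dict():

import torch

# Load the saved checkpoint and restore the weights and the epoch counter.
checkpoint = torch.load('logs/duke2market-HHL/checkpoint.pth.tar')
model.load_state_dict(checkpoint['state_dict'])
start_epoch = checkpoint['epoch']  # continue the epoch loop from here

baseline.py also accepts a --resume <path> flag (see the evaluate-only command quoted in a later issue), which may cover this use case directly.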

evaluate only

Hello,
Thanks for sharing. I want to know how the code decides which model to load when evaluating only; I have not found any checkpoint path in the code.
Looking forward to your reply! Thank you!
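
For reference, a later issue in this thread shows an evaluate-only invocation where the checkpoint path is passed explicitly via --resume:

python baseline.py -s duke -t market --logs-dir logs/duke2market-baseline1 --features 0 --evaluate --resume logs/duke2market-baseline/checkpoint.pth.tar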

problem when running stargan4reid

Hello, I just want to generate CamStyle images on the MSMT17 dataset, so I changed the data loader to read the camera IDs. But it raises an error:

  File "solve.py", line 256, in train
    d_loss = d_loss_real + d_loss_fake + self.lambda_cls * d_loss_cls + self.lambda_gp * d_loss_gp
TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'

Can someone help me fix it? Thanks a lot!
@zhunzhong07
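
A quick diagnostic sketch (the suggested cause is an assumption, not a confirmed fix): the TypeError means one operand in that sum is None, so printing the operands just before the failing line shows which loss term or lambda was never set for the new dataset:

# Hypothetical debug lines to add right before the failing statement in train():
print(type(d_loss_real), type(d_loss_fake), type(d_loss_cls), type(d_loss_gp))
print(self.lambda_cls, self.lambda_gp)  # None here means a config flag is missing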

No loss reduction

Thanks for your shared work.
I use torch 0.4. When I run the baseline, I get:
Mean AP: 19.9%
CMC Scores
top-1 47.6%
top-5 65.8%
top-10 73.3%
top-20 79.8%
But when I run HHL, the loss stays at 6.735.
Can you help me?

the method of mean_ap

Hi,
I'm sorry to bother you again. I'm confused about your mAP code.
In open-reid, it's like

from sklearn.metrics import average_precision_score
aps.append(average_precision_score(y_true, y_score))

# example
>>> import numpy as np
>>> from sklearn.metrics import average_precision_score
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> average_precision_score(y_true, y_scores)
0.83...
# after sorting by y_scores, y_true is ranked 1, 0, 1, 0
# recall = [1.  0.5 0.5 0. ], precision = [0.66666667 0.5 1. 1.]
# AP = -np.sum(np.diff(recall) * np.array(precision)[:-1])
#    = 1/2 * 2/3 + 0 * 1/2 + 1/2 * 1 = 5/6 = 0.83...

In your code, it's like

def average_precision_score(y_true, y_score, average="macro",
                            sample_weight=None):
    def _binary_average_precision(y_true, y_score, sample_weight=None):
        precision, recall, thresholds = precision_recall_curve(
            y_true, y_score, sample_weight=sample_weight)
        return auc(recall, precision)

    return _average_binary_score(_binary_average_precision, y_true, y_score,
                                 average, sample_weight=sample_weight)

example

>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> average_precision_score(y_true, y_scores)  # the version defined above
0.7916666666666666

recall = [1. 0.5 0.5 0.], precision = [0.66666667 0.5 1. 1.]

After I calculate, it's like

0.79 = 1/2*(2/3+1/2)*1/2 + 0*(1/2+1)*1/2 + 1/2*(1+1)*1/2 = 19/24

so it takes the average of the two precision endpoints on each recall segment (the trapezoidal rule) rather than a single endpoint value.

I think even if it takes the average value, it should be like

1/2*(2/3+1)*1/2 + 0*(1/2+1)*1/2 + 1/2*(1+1)*1/2 = 11/12

Maybe there is something I got wrong; if you are free, please check. I am trying to understand your code better.
Thank you, sir.
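
A self-contained sketch reproducing the two numbers discussed above (it only contrasts the two AP definitions; it is not the repo's evaluation code):

import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve, auc

y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])

# Step-wise AP: -sum(diff(recall) * precision[:-1]) -> 5/6 ~ 0.833
print(average_precision_score(y_true, y_scores))

# Trapezoidal AP: area under the precision-recall curve -> 19/24 ~ 0.792
precision, recall, _ = precision_recall_curve(y_true, y_scores)
print(auc(recall, precision))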

Encountered a problem when testing the re-ID model

python baseline.py -s duke -t market --logs-dir logs/duke2market-baseline1 --features 0 --evaluate --resume logs/duke2market-baseline/checkpoint.pth.tar
DA dataset loaded
subset       | # ids     | # images
source train | 702       | 16522
target train | 'Unknown' | 12936
query        | 750       | 3368
gallery      | 751       | 15913
=> Loaded checkpoint 'logs/duke2market-baseline/checkpoint.pth.tar'
=> Start epoch 60
Test:
Extract Features: [1/27] Time 14.225 (14.225) Data 4.546 (4.546)
[... 25 similar per-batch timing lines omitted ...]
Extract Features: [27/27] Time 4.403 (0.851) Data 0.000 (0.169)
Extract Features: [1/125] Time 0.986 (0.986) Data 0.857 (0.857)
[... 123 similar per-batch timing lines omitted ...]
Extract Features: [125/125] Time 3.963 (0.253) Data 0.000 (0.101)
Segmentation fault (core dumped)

It's strange. I can't figure out why the "Segmentation fault (core dumped)" happens. Does anyone have an idea?

UPDATE:
Someone has proposed a solution here.

HHL about optimizer

Hi, I am sorry to bother you again.
In your HHL.py, there is some code like:

base_param_ids = set(map(id, model.module.base.parameters())) \
                 | set(map(id, model.module.triplet.parameters())) \
                 | set(map(id, model.module.feat.parameters())) \
                 | set(map(id, model.module.feat_bn.parameters()))

new_params = [p for p in model.parameters() if
              id(p) not in base_param_ids]
param_groups = [
    {'params': model.module.base.parameters(), 'lr_mult': 0.1},
    {'params': new_params, 'lr_mult': 1.0}]
optimizer = torch.optim.SGD(param_groups, lr=args.lr,
                            momentum=args.momentum,
                            weight_decay=args.weight_decay,
                            nesterov=True)

I am confused about param_groups: after testing, model.module.triplet.parameters(), model.module.feat.parameters(), and model.module.feat_bn.parameters() are not included in param_groups. Does that mean those three sets of parameters are not updated during training? Or have I overlooked something?

I have tried this instead:

        param_groups = [
            {'params': model.module.base.parameters(), 'lr_mult': 0.1},
            {'params': model.module.triplet.parameters(), 'lr_mult': 1.0},
            {'params': model.module.feat.parameters(), 'lr_mult': 1.0},
            {'params': model.module.feat_bn.parameters(), 'lr_mult': 1.0},
            {'params': new_params, 'lr_mult': 1.0}]

but the result is worse

Thank you
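
Note that new_params is built from model.parameters() while excluding every id in base_param_ids, and base_param_ids itself contains the triplet/feat/feat_bn ids, so those parameters do appear to be left out of param_groups, as the question suggests. A quick way to verify (a sketch, not repo code; assumes `model` and `optimizer` as defined in HHL.py):

# Collect the ids of every parameter the optimizer will update.
optim_ids = {id(p) for group in optimizer.param_groups for p in group['params']}
# Any named parameter missing from the optimizer is never updated by it.
missing = [name for name, p in model.named_parameters() if id(p) not in optim_ids]
print(missing)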

CUHK03

Could you provide the CamStyle images for CUHK03?
Looking forward to your reply! Thank you!

Camera information about cuhk03

Hello, I am sorry to bother you again. Your code is awesome, and I have read your paper no fewer than ten times.

This time, I want to discuss something about the CUHK03 dataset. In your paper:

Note that images in CUHK03 do not have camera labels, so we cannot perform camera invariance learning. Therefore, we only use CUHK03 as the source domain instead of the target domain.

And after consulting the CUHK03 dataset description:

1467 identities are collected from 5 different pairs of camera views
5 x 1 cells and each contains the data collected from a pair of camera views. Each pair of camera views consists of M x 10 cells, where M is the number of identities. For each identity, cell 1-5 are images from one camera and cell 6-10 are images from another camera. However, some identities may have less than 10 images.

So could I assume that CUHK03 has ten cameras in total, and that each camera can be viewed as having its own style?

And could I label each image in the first pair of camera views as c1 (cells 1-5) or c2 (cells 6-10), each image in the second pair of camera views as c3 (cells 1-5) or c4 (cells 6-10), and so on?

Then use these CUHK03 camera labels to train StarGAN, as sketched below. Is that OK, or is there some problem I overlooked?

Thank you.
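
For concreteness, the labeling scheme proposed above can be written as a small helper (purely illustrative; the pair/cell indexing follows the dataset description quoted earlier):

def cuhk03_cam_label(pair_idx, cell_idx):
    """pair_idx in 0..4 (five camera-view pairs), cell_idx in 1..10."""
    # Cells 1-5 come from the first camera of the pair, cells 6-10 from the second.
    return 2 * pair_idx + (1 if cell_idx <= 5 else 2)

assert cuhk03_cam_label(0, 3) == 1   # first pair, cells 1-5  -> c1
assert cuhk03_cam_label(0, 7) == 2   # first pair, cells 6-10 -> c2
assert cuhk03_cam_label(1, 2) == 3   # second pair, cells 1-5 -> c3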

multiprocess problem

Hi, I try to evaluate the model every 5 epochs, but I encounter this problem. Could you help me?
The code I changed:

# Start training
for epoch in range(start_epoch, args.epochs):
    adjust_lr(epoch)
    trainer.train(epoch, source_train_loader, source_triplet_train_loader, target_train_loader, optimizer)

    save_checkpoint({
        'state_dict': model.module.state_dict(),
        'epoch': epoch + 1,
    }, fpath=osp.join(args.logs_dir, 'checkpoint.pth.tar'))

    print('\n * Finished epoch {:3d} \n'.
          format(epoch))

    if epoch % 5 == 0:
        # Final test
        print('Test with best model:')
        evaluator = Evaluator(model)
        evaluator.evaluate(query_loader, gallery_loader, dataset.query,
                           dataset.gallery, args.output_feature, args.rerank)

# Final test
print('Test with best model:')
evaluator = Evaluator(model)
evaluator.evaluate(query_loader, gallery_loader, dataset.query, dataset.gallery, args.output_feature, args.rerank)

The problem is this:

Epoch: [0][20/129]	Time 0.759 (1.177)	Data 0.000 (0.055)	Loss 5.916 (6.386)	Prec_c 10.94% (5.20%)	Prec_t 4.55% (1.93%)	
Epoch: [0][40/129]	Time 0.761 (0.969)	Data 0.000 (0.031)	Loss 5.152 (5.923)	Prec_c 17.97% (10.86%)	Prec_t 6.25% (4.30%)	
Epoch: [0][60/129]	Time 0.751 (0.898)	Data 0.000 (0.022)	Loss 3.891 (5.385)	Prec_c 27.34% (16.58%)	Prec_t 12.50% (7.51%)	
Epoch: [0][80/129]	Time 0.757 (0.864)	Data 0.000 (0.018)	Loss 3.114 (4.856)	Prec_c 44.53% (22.59%)	Prec_t 23.86% (11.14%)	
Epoch: [0][100/129]	Time 0.764 (0.856)	Data 0.000 (0.016)	Loss 2.398 (4.423)	Prec_c 54.69% (27.72%)	Prec_t 26.14% (13.48%)	
Epoch: [0][120/129]	Time 0.764 (0.841)	Data 0.000 (0.014)	Loss 2.374 (4.068)	Prec_c 51.56% (32.34%)	Prec_t 41.48% (15.98%)	

 * Finished epoch   0 

Test with best model:
Exception ignored in: <function _DataLoaderIter.__del__ at 0x7f32d70c7f28>
Traceback (most recent call last):
  File "/home/lsc/.conda/envs/lsc_pytorch/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 717, in __del__
    self._shutdown_workers()
  File "/home/lsc/.conda/envs/lsc_pytorch/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 713, in _shutdown_workers
    w.join()
  File "/home/lsc/.conda/envs/lsc_pytorch/lib/python3.7/multiprocessing/process.py", line 138, in join
    assert self._parent_pid == os.getpid(), 'can only join a child process'
AssertionError: can only join a child process
[... the same traceback repeats, interleaved, for each DataLoader worker ...]
Traceback (most recent call last):
  File "/home/lsc/.conda/envs/lsc_pytorch/lib/python3.7/multiprocessing/queues.py", line 242, in _feed
    send_bytes(obj)
  File "/home/lsc/.conda/envs/lsc_pytorch/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/home/lsc/.conda/envs/lsc_pytorch/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/home/lsc/.conda/envs/lsc_pytorch/lib/python3.7/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
Traceback (most recent call last):
  File "/home/lsc/.conda/envs/lsc_pytorch/lib/python3.7/multiprocessing/queues.py", line 232, in _feed
    close()
  File "/home/lsc/.conda/envs/lsc_pytorch/lib/python3.7/multiprocessing/connection.py", line 177, in close
    self._close()
  File "/home/lsc/.conda/envs/lsc_pytorch/lib/python3.7/multiprocessing/connection.py", line 361, in _close
    _close(self._handle)
OSError: [Errno 9] Bad file descriptor
Mean AP: 15.7%
CMC Scores
  top-1          38.5%
  top-5          56.5%
  top-10         65.3%
  top-20         73.5%
Epoch: [1][20/129]	Time 0.737 (0.793)	Data 0.000 (0.053)	Loss 1.328 (1.349)	Prec_c 72.66% (72.70%)	Prec_t 25.00% (30.97%)	
Epoch: [1][40/129]	Time 0.735 (0.766)	Data 0.000 (0.030)	Loss 1.178 (1.307)	Prec_c 79.69% (73.95%)	Prec_t 33.52% (29.82%)	
Epoch: [1][60/129]	Time 0.737 (0.759)	Data 0.000 (0.022)	Loss 1.165 (1.260)	Prec_c 77.34% (75.46%)	Prec_t 32.39% (30.54%)	
Epoch: [1][80/129]	Time 0.750 (0.756)	Data 0.000 (0.018)	Loss 1.206 (1.229)	Prec_c 77.34% (76.09%)	Prec_t 30.11% (31.13%)	
Epoch: [1][100/129]	Time 0.775 (0.771)	Data 0.000 (0.016)	Loss 0.919 (1.191)	Prec_c 81.25% (76.95%)	Prec_t 36.93% (31.81%)	
Epoch: [1][120/129]	Time 0.772 (0.770)	Data 0.000 (0.014)	Loss 0.950 (1.157)	Prec_c 81.25% (77.62%)	Prec_t 39.77% (33.23%)	

 * Finished epoch   1 

I am really confused.
My PyTorch version is 1.0.0.
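
One hedged observation (an assumption, not a confirmed fix): the evaluation still completes (Mean AP is printed), and the messages are "Exception ignored" warnings raised while DataLoader worker processes are being torn down. A common way to sidestep worker shutdown while debugging is to build the evaluation loaders with num_workers=0, as in this minimal sketch:

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(8, 3))   # stand-in for the re-ID dataset
loader = DataLoader(dataset, batch_size=4, num_workers=0)  # no worker processes
for (batch,) in loader:
    print(batch.shape)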

AttributeError: 'ResNet' object has no attribute 'module'

When I run python HHL.py -s duke -t market --logs-dir logs/duke2market-HHL,
it raises an error:

Traceback (most recent call last):
  File "/media/vision/43c620be-e7c3-4af9-9cf6-c791ef2ed83e/zzq/reid/HHL-master/HHL.py", line 231, in <module>
    main(parser.parse_args())
  File "/media/vision/43c620be-e7c3-4af9-9cf6-c791ef2ed83e/zzq/reid/HHL-master/HHL.py", line 128, in main
    if hasattr(model.module, 'base'):
  File "/home/vision/miniconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __getattr__
    type(self).__name__, name))
AttributeError: 'ResNet' object has no attribute 'module'
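
A hedged note on the likely cause (an assumption based on the traceback, not confirmed by the author): `.module` exists only after a model is wrapped in nn.DataParallel, which the script typically does on GPU runs, so CPU-only runs hit this error. A minimal illustration:

import torch.nn as nn
import torchvision.models as models

model = models.resnet50()            # stands in for the repo's ResNet
wrapped = nn.DataParallel(model)
print(hasattr(wrapped, 'module'))    # True: the wrapper exposes .module
print(hasattr(model, 'module'))      # False: the bare model triggers the error
# A common workaround is to unwrap conditionally:
net = model.module if isinstance(model, nn.DataParallel) else model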

AttributeError: 'Generator' object has no attribute 'module'

Hi, when I run stargan4reid myself and try to generate CamStyle images, a bug appears. Here is the specific error:

Start training...
Elapsed [0:00:05], Iteration [1/6], D/loss_real: 0.0386, D/loss_fake: -0.0519, D/loss_cls: 1.7772, D/loss_gp: 10.2668, G/loss_fake: 0.0517, G/loss_rec: 0.4263, G/loss_cls: 1.6729
Traceback (most recent call last):
  File "/home/wss/HHL/HHL-master/stargan4reid/main.py", line 95, in <module>
    main(config)
  File "/home/wss/HHL/HHL-master/stargan4reid/main.py", line 36, in main
    solver.train()
  File "/home/wss/HHL/HHL-master/stargan4reid/solver.py", line 328, in train
    torch.save(self.G.module.state_dict(), G_path)
  File "/home/wss/anaconda3/envs/test/lib/python3.6/site-packages/torch/nn/modules/module.py", line 539, in __getattr__
    type(self).__name__, name))
AttributeError: 'Generator' object has no attribute 'module'
Saved real and fake images into ./market/samples/1-images.jpg...

My operating environment is torch-cpu-0.4.0 with torchvision 0.2.1, running under Python 3.6.
I also tried pytorch-cpu-1.1.0, but it did not help.
Hoping for your reply, and good luck.
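
This looks like the same DataParallel issue as the previous report: solver.py saves self.G.module.state_dict(), but .module only exists when G was wrapped in nn.DataParallel (i.e., on GPU runs). A hedged patch sketch (an assumption about solver.py, not a confirmed fix):

import torch
import torch.nn as nn

def save_state_dict(net, path):
    # Unwrap nn.DataParallel if present so saving behaves the same on
    # CPU-only and multi-GPU runs.
    net = net.module if isinstance(net, nn.DataParallel) else net
    torch.save(net.state_dict(), path)

# e.g. in solver.py: save_state_dict(self.G, G_path)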

about CamStyle training with StarGAN?

Take Market-1501 as an example: did you split the train folder into 6 folders according to camera ID and train StarGAN without any other changes? In my training, after 200000 iterations the clothing colors change a lot. Can you give me some advice? Thanks a lot.

HHL's Mean AP: 9.4%

Hi, when I run

python baseline.py -s duke -t market --logs-dir logs/duke2market-baseline

the result is almost the same as the one you released.
But when I run

python HHL.py -s duke -t market --logs-dir logs/duke2market-HHL --data-dir ./data -b 16

and the StarGAN model is yours.
The results are:
Mean AP: 9.4%
CMC Scores
top-1 26.5%
top-5 43.5%
top-10 52.7%
top-20 61.6%
Maybe I set something wrong; if you are free, please check.
Thank you, sir.
