(final) home@home-desktop:~/p3/PSENet.pytorch-master$ python train.py
make: Entering directory '/home/home/p3/PSENet.pytorch-master/pse'
make: 'pse.so' is up to date.
make: Leaving directory '/home/home/p3/PSENet.pytorch-master/pse'
2019-10-13 05:40:52 INFO utils.py: logger init finished
2019-10-13 05:40:52 INFO train.py: {'Lambda': 0.7,
'OHEM_ratio': 3,
'backbone': 'resnet152',
'checkpoint': './check/resnet152.pth',
'data_shape': 640,
'display_input_images': False,
'display_interval': 10,
'display_output_images': False,
'end_lr': 1e-07,
'epochs': 100,
'gpu_id': '0',
'lr': 0.0001,
'lr_decay_step': [200, 400],
'lr_gamma': 0.1,
'm': 0.5,
'n': 6,
'output_dir': './output/psenet_icd2015',
'pretrained': True,
'restart_training': True,
'scale': 1,
'seed': 2,
'show_images_interval': 50,
'start_epoch': 0,
'testroot': './data/test',
'train_batch_size': 1,
'trainroot': './data/train',
'warm_up_epoch': 6,
'warm_up_lr': 1e-05,
'weight_decay': 0.0005,
'workers': 1}
2019-10-13 05:40:52 INFO train.py: train with gpu 0 and pytorch 1.2.0
2019-10-13 05:40:53 INFO resnet.py: load pretrained models from imagenet
2019-10-13 05:40:55 INFO train.py: train dataset has 11 samples, 11 in dataloader
/home/home/anaconda3/envs/final/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:82: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
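The warning above is about call order, not about the loss values: since PyTorch 1.1, `optimizer.step()` must run before `lr_scheduler.step()`, otherwise the first value of the schedule is skipped. A minimal sketch of the expected order (the toy model, optimizer, and loop below are illustrative, not taken from train.py; only the milestone/gamma values mirror the config printed above):

```python
import torch

# Toy model and optimizer standing in for the real training objects.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
# MultiStepLR using the lr_decay_step / lr_gamma values from the config.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[200, 400], gamma=0.1)

for epoch in range(2):
    optimizer.zero_grad()
    loss = model(torch.randn(1, 4)).sum()
    loss.backward()
    optimizer.step()   # update parameters first...
    scheduler.step()   # ...then advance the LR schedule (no warning)
```

With this order the warning disappears and the learning rate stays at 1e-4 until a milestone epoch is crossed.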
2019-10-13 05:40:59 INFO train.py: [0/100], [0/11], step: 0, 2.350 samples/sec, batch_loss: 0.6994, batch_loss_c: 0.5991, batch_loss_s: 0.9336, time:4.2546, lr:0.0001
2019-10-13 05:41:02 INFO train.py: [0/100], [10/11], step: 10, 3.261 samples/sec, batch_loss: 0.4831, batch_loss_c: 0.4424, batch_loss_s: 0.5779, time:3.0670, lr:0.0001
2019-10-13 05:41:02 INFO train.py: [0/100], train_loss: 0.6902, time: 7.3495, lr: 0.0001
2019-10-13 05:41:03 INFO train.py: [1/100], [0/11], step: 11, 26.996 samples/sec, batch_loss: 0.6069, batch_loss_c: 0.5748, batch_loss_s: 0.6820, time:0.3704, lr:0.0001
2019-10-13 05:41:06 INFO train.py: [1/100], [10/11], step: 21, 3.251 samples/sec, batch_loss: 0.6404, batch_loss_c: 0.5976, batch_loss_s: 0.7404, time:3.0759, lr:0.0001
2019-10-13 05:41:06 INFO train.py: [1/100], train_loss: 0.6874, time: 3.4756, lr: 0.0001
2019-10-13 05:41:06 INFO train.py: [2/100], [0/11], step: 22, 26.784 samples/sec, batch_loss: 1.0000, batch_loss_c: 1.0000, batch_loss_s: 1.0000, time:0.3734, lr:0.0001
2019-10-13 05:41:09 INFO train.py: [2/100], [10/11], step: 32, 3.265 samples/sec, batch_loss: 0.2357, batch_loss_c: 0.2217, batch_loss_s: 0.2685, time:3.0630, lr:0.0001
2019-10-13 05:41:09 INFO train.py: [2/100], train_loss: 0.5731, time: 3.4645, lr: 0.0001
2019-10-13 05:41:10 INFO train.py: [3/100], [0/11], step: 33, 25.340 samples/sec, batch_loss: 0.7164, batch_loss_c: 0.6033, batch_loss_s: 0.9805, time:0.3946, lr:0.0001
2019-10-13 05:41:13 INFO train.py: [3/100], [10/11], step: 43, 3.206 samples/sec, batch_loss: 0.4401, batch_loss_c: 0.4349, batch_loss_s: 0.4524, time:3.1190, lr:0.0001
2019-10-13 05:41:13 INFO train.py: [3/100], train_loss: 0.5325, time: 3.5407, lr: 0.0001
2019-10-13 05:41:13 INFO train.py: [4/100], [0/11], step: 44, 28.332 samples/sec, batch_loss: 1.0000, batch_loss_c: 1.0000, batch_loss_s: 1.0000, time:0.3530, lr:0.0001
2019-10-13 05:41:16 INFO train.py: [4/100], [10/11], step: 54, 3.305 samples/sec, batch_loss: 1.0000, batch_loss_c: 1.0000, batch_loss_s: 1.0000, time:3.0259, lr:0.0001
2019-10-13 05:41:16 INFO train.py: [4/100], train_loss: 0.6342, time: 3.4067, lr: 0.0001
2019-10-13 05:41:17 INFO train.py: [5/100], [0/11], step: 55, 25.383 samples/sec, batch_loss: 0.6059, batch_loss_c: 0.5762, batch_loss_s: 0.6752, time:0.3940, lr:0.0001
2019-10-13 05:41:20 INFO train.py: [5/100], [10/11], step: 65, 3.271 samples/sec, batch_loss: 0.2993, batch_loss_c: 0.3713, batch_loss_s: 0.1312, time:3.0569, lr:0.0001
2019-10-13 05:41:20 INFO train.py: [5/100], train_loss: 0.5481, time: 3.4813, lr: 0.0001
2019-10-13 05:41:20 INFO train.py: [6/100], [0/11], step: 66, 26.413 samples/sec, batch_loss: 0.4229, batch_loss_c: 0.4252, batch_loss_s: 0.4177, time:0.3786, lr:0.0001
2019-10-13 05:41:23 INFO train.py: [6/100], [10/11], step: 76, 3.291 samples/sec, batch_loss: 1.0000, batch_loss_c: 1.0000, batch_loss_s: 1.0000, time:3.0382, lr:0.0001
2019-10-13 05:41:23 INFO train.py: [6/100], train_loss: 0.5335, time: 3.4410, lr: 0.0001
2019-10-13 05:41:24 INFO train.py: [7/100], [0/11], step: 77, 25.149 samples/sec, batch_loss: 0.4964, batch_loss_c: 0.4870, batch_loss_s: 0.5182, time:0.3976, lr:0.0001
2019-10-13 05:41:27 INFO train.py: [7/100], [10/11], step: 87, 3.243 samples/sec, batch_loss: 0.1617, batch_loss_c: 0.1442, batch_loss_s: 0.2023, time:3.0839, lr:0.0001
2019-10-13 05:41:27 INFO train.py: [7/100], train_loss: 0.5238, time: 3.5103, lr: 0.0001
2019-10-13 05:41:27 INFO train.py: [8/100], [0/11], step: 88, 25.768 samples/sec, batch_loss: 0.3041, batch_loss_c: 0.3034, batch_loss_s: 0.3055, time:0.3881, lr:0.0001
2019-10-13 05:41:30 INFO train.py: [8/100], [10/11], step: 98, 3.163 samples/sec, batch_loss: 0.4755, batch_loss_c: 0.4429, batch_loss_s: 0.5517, time:3.1618, lr:0.0001
2019-10-13 05:41:30 INFO train.py: [8/100], train_loss: 0.5797, time: 3.5803, lr: 0.0001
2019-10-13 05:41:31 INFO train.py: [9/100], [0/11], step: 99, 24.169 samples/sec, batch_loss: 0.5055, batch_loss_c: 0.4494, batch_loss_s: 0.6363, time:0.4138, lr:0.0001
2019-10-13 05:41:34 INFO train.py: [9/100], [10/11], step: 109, 3.247 samples/sec, batch_loss: 1.0000, batch_loss_c: 1.0000, batch_loss_s: 1.0000, time:3.0795, lr:0.0001
2019-10-13 05:41:34 INFO train.py: [9/100], train_loss: 0.5373, time: 3.5255, lr: 0.0001
2019-10-13 05:41:34 INFO train.py: [10/100], [0/11], step: 110, 26.647 samples/sec, batch_loss: 0.6993, batch_loss_c: 0.6000, batch_loss_s: 0.9310, time:0.3753, lr:0.0001
2019-10-13 05:41:37 INFO train.py: [10/100], [10/11], step: 120, 3.249 samples/sec, batch_loss: 0.3659, batch_loss_c: 0.3854, batch_loss_s: 0.3204, time:3.0781, lr:0.0001
2019-10-13 05:41:37 INFO train.py: [10/100], train_loss: 0.5155, time: 3.4844, lr: 0.0001
2019-10-13 05:41:38 INFO train.py: [11/100], [0/11], step: 121, 25.979 samples/sec, batch_loss: 0.4168, batch_loss_c: 0.4013, batch_loss_s: 0.4530, time:0.3849, lr:0.0001
2019-10-13 05:41:41 INFO train.py: [11/100], [10/11], step: 131, 3.232 samples/sec, batch_loss: 0.3028, batch_loss_c: 0.3422, batch_loss_s: 0.2109, time:3.0945, lr:0.0001
2019-10-13 05:41:41 INFO train.py: [11/100], train_loss: 0.5639, time: 3.5070, lr: 0.0001
2019-10-13 05:41:41 INFO train.py: [12/100], [0/11], step: 132, 26.902 samples/sec, batch_loss: 0.4069, batch_loss_c: 0.3942, batch_loss_s: 0.4366, time:0.3717, lr:0.0001
2019-10-13 05:41:44 INFO train.py: [12/100], [10/11], step: 142, 3.277 samples/sec, batch_loss: 0.2912, batch_loss_c: 0.2794, batch_loss_s: 0.3188, time:3.0514, lr:0.0001
2019-10-13 05:41:44 INFO train.py: [12/100], train_loss: 0.5980, time: 3.4518, lr: 0.0001
2019-10-13 05:41:45 INFO train.py: [13/100], [0/11], step: 143, 27.316 samples/sec, batch_loss: 1.0000, batch_loss_c: 1.0000, batch_loss_s: 1.0000, time:0.3661, lr:0.0001
2019-10-13 05:41:48 INFO train.py: [13/100], [10/11], step: 153, 3.220 samples/sec, batch_loss: 0.4055, batch_loss_c: 0.3516, batch_loss_s: 0.5311, time:3.1051, lr:0.0001
2019-10-13 05:41:48 INFO train.py: [13/100], train_loss: 0.6170, time: 3.5002, lr: 0.0001
2019-10-13 05:41:48 INFO train.py: [14/100], [0/11], step: 154, 25.028 samples/sec, batch_loss: 0.1432, batch_loss_c: 0.1314, batch_loss_s: 0.1708, time:0.3996, lr:0.0001
2019-10-13 05:41:51 INFO train.py: [14/100], [10/11], step: 164, 3.258 samples/sec, batch_loss: 1.0000, batch_loss_c: 1.0000, batch_loss_s: 1.0000, time:3.0694, lr:0.0001
2019-10-13 05:41:51 INFO train.py: [14/100], train_loss: 0.5414, time: 3.4995, lr: 0.0001
2019-10-13 05:41:52 INFO train.py: [15/100], [0/11], step: 165, 26.693 samples/sec, batch_loss: 0.3728, batch_loss_c: 0.3654, batch_loss_s: 0.3899, time:0.3746, lr:0.0001
2019-10-13 05:41:55 INFO train.py: [15/100], [10/11], step: 175, 3.256 samples/sec, batch_loss: 0.1322, batch_loss_c: 0.1449, batch_loss_s: 0.1027, time:3.0710, lr:0.0001
2019-10-13 05:41:55 INFO train.py: [15/100], train_loss: 0.5539, time: 3.4723, lr: 0.0001
2019-10-13 05:41:55 INFO train.py: [16/100], [0/11], step: 176, 26.328 samples/sec, batch_loss: 0.3516, batch_loss_c: 0.3627, batch_loss_s: 0.3256, time:0.3798, lr:0.0001
2019-10-13 05:41:58 INFO train.py: [16/100], [10/11], step: 186, 3.234 samples/sec, batch_loss: 0.7073, batch_loss_c: 0.6000, batch_loss_s: 0.9577, time:3.0917, lr:0.0001
2019-10-13 05:41:58 INFO train.py: [16/100], train_loss: 0.4249, time: 3.5084, lr: 0.0001
2019-10-13 05:41:59 INFO train.py: [17/100], [0/11], step: 187, 26.359 samples/sec, batch_loss: 0.2356, batch_loss_c: 0.2157, batch_loss_s: 0.2819, time:0.3794, lr:0.0001
2019-10-13 05:42:02 INFO train.py: [17/100], [10/11], step: 197, 3.211 samples/sec, batch_loss: 0.2067, batch_loss_c: 0.2152, batch_loss_s: 0.1871, time:3.1142, lr:0.0001
2019-10-13 05:42:02 INFO train.py: [17/100], train_loss: 0.4031, time: 3.5223, lr: 0.0001
2019-10-13 05:42:02 INFO train.py: [18/100], [0/11], step: 198, 24.509 samples/sec, batch_loss: 0.2297, batch_loss_c: 0.2136, batch_loss_s: 0.2674, time:0.4080, lr:0.0001
2019-10-13 05:42:05 INFO train.py: [18/100], [10/11], step: 208, 3.176 samples/sec, batch_loss: 1.0000, batch_loss_c: 1.0000, batch_loss_s: 1.0000, time:3.1490, lr:0.0001
2019-10-13 05:42:05 INFO train.py: [18/100], train_loss: 0.4457, time: 3.5896, lr: 0.0001
2019-10-13 05:42:06 INFO train.py: [19/100], [0/11], step: 209, 25.056 samples/sec, batch_loss: 0.2912, batch_loss_c: 0.2686, batch_loss_s: 0.3438, time:0.3991, lr:0.0001
2019-10-13 05:42:09 INFO train.py: [19/100], [10/11], step: 219, 3.235 samples/sec, batch_loss: 0.2599, batch_loss_c: 0.1941, batch_loss_s: 0.4135, time:3.0916, lr:0.0001
2019-10-13 05:42:09 INFO train.py: [19/100], train_loss: 0.4395, time: 3.5204, lr: 0.0001
2019-10-13 05:42:09 INFO train.py: [20/100], [0/11], step: 220, 25.007 samples/sec, batch_loss: 0.2725, batch_loss_c: 0.2591, batch_loss_s: 0.3039, time:0.3999, lr:0.0001
2019-10-13 05:42:12 INFO train.py: [20/100], [10/11], step: 230, 3.239 samples/sec, batch_loss: 1.0000, batch_loss_c: 1.0000, batch_loss_s: 1.0000, time:3.0872, lr:0.0001
2019-10-13 05:42:12 INFO train.py: [20/100], train_loss: 0.5304, time: 3.5196, lr: 0.0001
2019-10-13 05:42:13 INFO train.py: [21/100], [0/11], step: 231, 24.451 samples/sec, batch_loss: 0.5386, batch_loss_c: 0.5582, batch_loss_s: 0.4929, time:0.4090, lr:0.0001
2019-10-13 05:42:16 INFO train.py: [21/100], [10/11], step: 241, 3.260 samples/sec, batch_loss: 1.0000, batch_loss_c: 1.0000, batch_loss_s: 1.0000, time:3.0675, lr:0.0001
2019-10-13 05:42:16 INFO train.py: [21/100], train_loss: 0.5384, time: 3.5078, lr: 0.0001
2019-10-13 05:42:16 INFO train.py: [22/100], [0/11], step: 242, 25.894 samples/sec, batch_loss: 0.6823, batch_loss_c: 0.5979, batch_loss_s: 0.8792, time:0.3862, lr:0.0001
2019-10-13 05:42:19 INFO train.py: [22/100], [10/11], step: 252, 3.227 samples/sec, batch_loss: 1.0000, batch_loss_c: 1.0000, batch_loss_s: 1.0000, time:3.0992, lr:0.0001
2019-10-13 05:42:19 INFO train.py: [22/100], train_loss: 0.4688, time: 3.5151, lr: 0.0001
2019-10-13 05:42:20 INFO train.py: [23/100], [0/11], step: 253, 26.003 samples/sec, batch_loss: 0.1498, batch_loss_c: 0.1327, batch_loss_s: 0.1896, time:0.3846, lr:0.0001
2019-10-13 05:42:23 INFO train.py: [23/100], [10/11], step: 263, 3.167 samples/sec, batch_loss: 0.1623, batch_loss_c: 0.1491, batch_loss_s: 0.1930, time:3.1578, lr:0.0001
2019-10-13 05:42:23 INFO train.py: [23/100], train_loss: 0.2373, time: 3.5653, lr: 0.0001
test models: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11/11 [00:12<00:00, 1.11it/s]
Traceback (most recent call last):
  File "/home/home/p3/PSENet.pytorch-master/cal_recall/rrc_evaluation_funcs.py", line 326, in main_evaluation
    evalData = evaluate_method_fn(p['g'], p['s'], evalParams)
  File "/home/home/p3/PSENet.pytorch-master/cal_recall/script.py", line 180, in evaluate_method
    True, False)
  File "/home/home/p3/PSENet.pytorch-master/cal_recall/rrc_evaluation_funcs.py", line 296, in get_tl_line_values_from_file_contents
    points, confidence, transcription = get_tl_line_values(line,LTRB,withTranscription,withConfidence,imWidth,imHeight);
  File "/home/home/p3/PSENet.pytorch-master/cal_recall/rrc_evaluation_funcs.py", line 218, in get_tl_line_values
    raise Exception("Format incorrect. Should be: x1,y1,x2,y2,x3,y3,x4,y4,transcription")
Exception: Format incorrect. Should be: x1,y1,x2,y2,x3,y3,x4,y4,transcription
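This first traceback is the ICDAR evaluation code rejecting a line in one of the ground-truth or result files: each line must be eight comma-separated corner coordinates followed by a transcription. A hedged sketch of building one conforming line (the helper name and sample points below are made up for illustration):

```python
# Hypothetical helper: format one text region the way the evaluation
# script's get_tl_line_values expects it.
def make_gt_line(points, transcription):
    """points: four (x, y) corner tuples; returns 'x1,y1,...,x4,y4,transcription'."""
    coords = [str(int(v)) for pt in points for v in pt]
    assert len(coords) == 8, "each region needs exactly 4 corner points"
    return ",".join(coords) + "," + transcription

line = make_gt_line([(38, 43), (920, 43), (920, 120), (38, 120)], "Hello")
# → "38,43,920,43,920,120,38,120,Hello"
```

Stray confidence scores, a different delimiter, or a UTF-8 BOM at the start of a file are common ways to trip this format check.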
Traceback (most recent call last):
  File "train.py", line 263, in <module>
    main()
  File "train.py", line 217, in main
    recall, precision, f1 = eval(model, os.path.join(config.output_dir, 'output'), config.testroot, device)
  File "train.py", line 149, in eval
    return result_dict['recall'], result_dict['precision'], result_dict['hmean']
TypeError: string indices must be integers
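The second traceback is a consequence of the first: when the evaluation script raises, `eval()` apparently ends up holding an error *string* where train.py expects a metrics dict, so `result_dict['recall']` indexes into a string. A defensive sketch of unpacking the result (the dict keys come from the traceback; the helper name and the exact failure shape are assumptions):

```python
def unpack_metrics(result_dict):
    # If the ICDAR script failed, we may get its error message (a str)
    # instead of the metrics dict -- fail loudly, not with a TypeError.
    if not isinstance(result_dict, dict):
        raise RuntimeError("evaluation failed: %r" % (result_dict,))
    return result_dict["recall"], result_dict["precision"], result_dict["hmean"]
```

Fixing the file-format problem from the first traceback should make this second error go away on its own; the guard just turns a confusing `TypeError` into an actionable message.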