
guoqiangqi / pfld

625 stars · 27 watchers · 165 forks · 10.04 MB

Implementation of "PFLD: A Practical Facial Landmark Detector", based on https://arxiv.org/pdf/1902.10859.pdf

Python 99.30% Shell 0.70%
euler-angles deep-learning landmark-localization pfld-tensorflow

pfld's Introduction

PFLD implementation with TensorFlow

This is an open-source implementation of https://arxiv.org/pdf/1902.10859.pdf. If you find any bugs or anything incorrect, please report it in the issues or open a pull request; we are glad to receive your advice.
Thanks to @luckynote for helping fix existing bugs.

Datasets

WFLW Dataset

Wider Facial Landmarks in-the-wild (WFLW) contains 10,000 faces (7,500 for training and 2,500 for testing) with 98 fully manually annotated landmarks per face.

  1. Training and Testing images [Google Drive] [Baidu Drive]
  2. WFLW Face Annotations

Training & Testing

Training:

$ python data/SetPreparation.py
$ sh train.sh

To use TensorBoard, open a new terminal:

$ tensorboard  --logdir=./checkpoint/tensorboard/

Testing:

$ python test.py

Results:

Sample images:
(seven sample detection images, omitted in this text version)

Sample gif:

(demo GIF, omitted in this text version)

Bug fixes

  1. Code for calculating the Euler-angle prediction loss has been added.
  2. Fixed the memory leak bug:
    The original code had a flaw: the Euler-angle ground truth was calculated during the training process, which slowed training down because that work had to run on the CPU. The Euler angles should instead be calculated in the preprocessing code.
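The fix amounts to moving the Euler-angle computation into data preprocessing, so it runs once per sample rather than every training step. As an illustrative sketch (not the repository's actual code), once a head rotation matrix has been recovered (e.g. from solvePnP), the pitch/yaw/roll can be extracted offline and stored alongside the landmarks; the decomposition below assumes the common x-y-z convention:

```python
import math

def rotation_matrix_to_euler(R):
    """Decompose a 3x3 rotation matrix into (pitch, yaw, roll) in degrees.
    Illustrative preprocessing step; assumes the common x-y-z convention."""
    sy = math.sqrt(R[0][0] ** 2 + R[1][0] ** 2)
    if sy > 1e-6:
        pitch = math.atan2(R[2][1], R[2][2])
        yaw = math.atan2(-R[2][0], sy)
        roll = math.atan2(R[1][0], R[0][0])
    else:  # gimbal lock: roll is not recoverable, fix it to 0
        pitch = math.atan2(-R[1][2], R[1][1])
        yaw = math.atan2(-R[2][0], sy)
        roll = 0.0
    return tuple(math.degrees(a) for a in (pitch, yaw, roll))

# Identity rotation -> all angles zero; precompute once per sample
# and write the angles into the training list file.
print(rotation_matrix_to_euler([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # (0.0, 0.0, 0.0)
```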

CONTACT US:

If you have any questions, please contact us! You can also join our QQ group (945933636) for more information.

pfld's People

Contributors

dependabot[bot], guoqiangqi, luckynote


pfld's Issues

About training

Hello. My landmark-detection model runs on crops produced by a face-detection model. During training I make sure all landmarks lie inside the image, but when the two models are combined, the box fed to the landmark model does not always contain all the landmarks. The landmark model still assumes the points are inside the box, so the predicted points drift and the predictions go wrong. How should this be solved? If convenient, please add my QQ: 327595697 — I'd like to discuss this with someone. Thanks.

WFLW training raises "ValueError: too many values to unpack"

I first ran SetPreparation.py on the WFLW dataset, producing a test set of 2,500 images and a training set of 75,000 (each image augmented 10×). Then, when running train.sh, the gen_data function in generate_data.py raises "ValueError: too many values to unpack". Do I need to make the dataset smaller?

Good model

Hi, thanks for your open-source work. I tried to train the model, but the result is not good enough to use. Could an off-the-shelf model be provided in this project? Thanks @guoqiangqi

Is something wrong with the PFLD net in model2.py?

line 298:
#14 * 14 * 128
conv5_1 = slim.convolution2d(conv4_1, 512, [1, 1], stride=2, activation_fn=None,scope='conv5_1/expand')

line 362:
#14 * 14 * 128
conv6_1 = slim.convolution2d(block5_6, 256,

The output of conv5 is 7 * 7 * 128, not the shape written in the comment on line 362. Perhaps the stride on line 299 should be 1, not 2.
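The shapes claimed in the comments can be sanity-checked with the SAME-padding output-size formula out = ceil(in / stride). This is a small illustrative check, not code from the repository:

```python
import math

def same_padding_out(size, stride):
    """Output spatial size of a SAME-padded convolution."""
    return math.ceil(size / stride)

# A 14x14 input with stride 2 comes out 7x7, so a layer commented
# as 14x14 cannot directly follow a stride-2 conv on a 14x14 map.
print(same_padding_out(14, 2))  # 7
print(same_padding_out(14, 1))  # 14
```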

model.py vs. model2.py

Which model is stored in model.py? train_model seems to target model2, and training the model in model.py looks quite troublesome.

Two questions about the loss function

Dear author:
While reading the paper and your code, two things were unclear to me; I would appreciate your answers, thank you!
1. How is the weight w_n_c defined? c is the number of classes — do we have to classify the original dataset ourselves?
2. θ consists of three angles. How are those three angles predicted by the auxiliary network, and how is their ground truth obtained?

Confusion about get_tensor_by_name

My TensorFlow knowledge is shallow — I mostly copy code and search online when stuck. This time I cannot figure out how to find the names of the input and output tensors.
Following blog posts, I queried the tensor names with two methods; the two results are:
(dump abridged; repetitive per-layer entries elided)
pfld_inference/fc/weights/Adam_1/Initializer/zeros/shape_as_tensor
pfld_inference/fc/weights/Adam_1/Initializer/zeros/Const
pfld_inference/fc/weights/Adam_1/Initializer/zeros
pfld_inference/fc/weights/Adam_1
pfld_inference/fc/weights/Adam_1/Assign
pfld_inference/fc/weights/Adam_1/read
... (the same Adam / Adam_1 slot-variable pattern repeats for pfld_inference/fc/biases, pfld_conv1 through pfld_conv4, pfld_fc1 and pfld_fc2, covering weights and BatchNorm/beta) ...
Adam/beta1
Adam/beta2
Adam/epsilon
Adam/update_pfld_inference/conv_1/weights/ApplyAdam
Adam/update_pfld_inference/conv_1/BatchNorm/beta/ApplyAdam
... (ApplyAdam update ops repeat for the expand / dwise / linear sublayers of conv2 through conv6_1, plus conv7, conv8, fc, and pfld_conv1 through pfld_fc2) ...
Adam/mul
Adam/Assign
Adam/mul_1
Adam/Assign_1
Adam/update
Adam/value
Adam
train_op/CheckNumerics
train_op/control_dependency
ME, FR, TestLoss, TrainLoss, TrainLoss2 (each with Const, Assign and read ops)
test_mean_error, test_failure_rate, test_10_loss, train_loss, train_loss_l2 (each with a tags op)
save/Const
save/SaveV2/tensor_names
save/SaveV2/shape_and_slices
save/SaveV2
save/control_dependency
save/RestoreV2/tensor_names
save/RestoreV2/shape_and_slices
save/RestoreV2
save/Assign through save/Assign_202
save/restore_all
init
init_1
Merge/MergeSummary

(checkpoint dump abridged; one tensor_name line per saved variable)
tensor_name: pfld_inference/conv_1/weights
tensor_name: pfld_inference/conv7/BatchNorm/beta
tensor_name: pfld_inference/fc/weights
tensor_name: pfld_inference/fc/biases
... (weights, depthwise_weights and BatchNorm beta / moving_mean / moving_variance entries for every expand / dwise / linear sublayer of conv2 through conv6_1, plus conv7, conv8, pfld_conv1 through pfld_fc2, and the scalars ME, FR, TestLoss, TrainLoss, TrainLoss2) ...

I also looked at TensorBoard and saw image_batch and phase_train in the graph, but I never found the pfld_inference/fc/BiasAdd tensor mentioned in your code. How exactly should I locate the input and output nodes?
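One common way to narrow a graph dump like the one above down to candidate input/output nodes is to filter out optimizer, saver and initializer ops by name prefix. A minimal sketch over plain name strings (the filtering idea only, not repository code; the sample names stand in for a full dump):

```python
# Filter a list of graph-op names down to likely model nodes by
# dropping optimizer/saver/init bookkeeping ops.
names = [
    "image_batch",
    "pfld_inference/fc/BiasAdd",
    "Adam/update_pfld_inference/fc/weights/ApplyAdam",
    "save/Assign_42",
    "init",
]

NOISE_PREFIXES = ("Adam", "save/", "init", "Merge")

def candidate_nodes(op_names):
    return [n for n in op_names if not n.startswith(NOISE_PREFIXES)]

print(candidate_nodes(names))  # ['image_batch', 'pfld_inference/fc/BiasAdd']
```

What survives the filter is usually a short list in which the placeholders (inputs) and the last named op of the network (output) are easy to spot.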

Angle estimation problem

Using 14 landmarks with OpenCV's solvePnP, I found the estimated angles are not very good. Author, what better methods are there for estimating face pose angles at the moment?

Improving accuracy

Let's share the best accuracy everyone has achieved, and ways to improve it.
My current best on the WFLW dataset:
mean error: 0.073 (7.3%)
failure rate: 0.175 (17.5%)

How accurate is the Euler angle estimation?

How accurate is the Euler angle estimation? I.e., is there any data with ground-truth yaw, pitch and roll to compute the error introduced by the solvePnP method?

Can you elaborate on how f_x was estimated?

f_x = c_x / np.tan(60/2 * np.pi / 180)

It seems the image width is simply used here to derive fx and fy.

Also, about landmarks_3D: where do these values come from?
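The formula quoted above matches the standard pinhole relation f = (w/2) / tan(FOV/2) when c_x = w/2 and a 60° horizontal field of view is assumed. A quick check of that arithmetic (illustrative only; the 112-pixel crop width is an assumption, not something stated here):

```python
import math

def focal_from_fov(width_px, fov_deg):
    """Pinhole focal length in pixels from image width and horizontal FOV."""
    cx = width_px / 2.0
    return cx / math.tan(math.radians(fov_deg) / 2.0)

# For a 112-pixel-wide crop and an assumed 60-degree FOV:
f_x = focal_from_fov(112, 60)
print(round(f_x, 2))  # 96.99, i.e. 56 / tan(30 deg)
```

So the "focal length" is not calibrated: it is a guess derived from the crop width and an assumed field of view, which is one source of error in the solvePnP angles.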

Some problems with concatenating features

The paper concatenates the last 3 feature maps and then uses an FC layer to get the landmarks.

In your code model2.py, global average pooling is applied to the last 3 feature maps, and then they are concatenated.

I did the same thing. My last 3 feature maps are 8*8, 4*4 and 2*2. I used GAP and concat, but the performance is not as good as what I get using only the last 2*2 feature map (GAP + FC).

I use smooth L1 loss without any weights. Have you tried using only the last feature map instead of the concat?

Thanks for your reply.
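
For readers comparing the two heads, the GAP-then-concat variant discussed above can be sketched as follows (NumPy stands in for the TF ops, and the channel counts are invented for illustration; the actual model2.py may differ):

```python
import numpy as np

def multi_scale_head(f8, f4, f2, W, b):
    """Global-average-pool three feature maps, concatenate, then one FC layer."""
    pooled = [f.mean(axis=(1, 2)) for f in (f8, f4, f2)]  # each (batch, channels)
    feat = np.concatenate(pooled, axis=1)                 # (batch, C8 + C4 + C2)
    return feat @ W + b                                   # (batch, 196) = 98 landmarks x 2

# Spatial sizes follow the issue: 8x8, 4x4, 2x2 feature maps.
f8 = np.random.rand(1, 8, 8, 64)
f4 = np.random.rand(1, 4, 4, 128)
f2 = np.random.rand(1, 2, 2, 128)
W = np.zeros((64 + 128 + 128, 196))
b = np.zeros(196)
out = multi_scale_head(f8, f4, f2, W, b)
```

Whether the extra scales help likely depends on the loss weighting and training schedule; the single-scale 2x2 head also has fewer FC parameters, which may itself act as a regularizer.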

Question about wnc in the loss

Hi, in another issue you said that wnc uses the inverse of each attribute's proportion within the batch to rebalance learning and achieve data balancing. But in WFLW some samples have attribute labels that are all zero, which I understand to be normal samples. After the multiplication, their loss comes out as 0, so no loss is backpropagated for those normal samples, right?

About data generation

The data-generation script uses the bounding box of the landmarks rather than the box provided by WFLW. Doesn't that inject prior information?

300W

How to run SetPreparation.py for the 300W dataset?

About the unit of euler_angle

When the dataset is generated, euler_angle is in degrees, and during training the value is fed in directly, but shouldn't the input of tf.cos be in radians?
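
If the angles really are fed in degrees while tf.cos expects radians, a one-line conversion before the loss would fix it. A NumPy sketch of the conversion and of the paper's (1 - cos theta) weighting term (the example angles are arbitrary):

```python
import numpy as np

# tf.cos, like np.cos, expects radians. Convert degree-valued
# euler angles before computing the (1 - cos(theta)) weight:
euler_deg = np.array([30.0, -15.0, 5.0])  # yaw, pitch, roll in degrees (made up)
euler_rad = np.deg2rad(euler_deg)
angle_weight = np.sum(1.0 - np.cos(euler_rad))
```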

Landmark jitter

After training the model, I use Python to run tracking on a webcam, and the landmarks jitter noticeably. Is there any way to fix this?
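
A common mitigation is temporal smoothing of the predicted landmarks across frames. A minimal sketch using an exponential moving average (the alpha value is an arbitrary starting point; a One-Euro filter would handle fast motion with less lag):

```python
import numpy as np

class LandmarkSmoother:
    """Exponential moving average over per-frame landmark predictions."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha  # higher alpha = less smoothing, less lag
        self.prev = None

    def __call__(self, landmarks):
        landmarks = np.asarray(landmarks, dtype=np.float64)
        if self.prev is None:
            self.prev = landmarks  # first frame passes through unchanged
        else:
            self.prev = self.alpha * landmarks + (1.0 - self.alpha) * self.prev
        return self.prev

smooth = LandmarkSmoother(alpha=0.5)
a = smooth(np.array([0.0, 0.0]))
b = smooth(np.array([2.0, 2.0]))  # blended with the previous frame
```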

Question about accuracy of the model

Hi,
I found this code is for training on the 98-point dataset. Have you trained on the 68-point dataset 300-W and achieved the same accuracy as the PFLD paper reports?

Question about attributes_w_n

If a sample's attributes are all zero, the output of attributes_w_n is all zeros as well. Looking at where the loss is backpropagated, everything becomes 0. Won't that be a problem?

attribute_batch = np.random.randint(0, 1, [4, 6]) # all attributes are 0
attributes_w_n = tf.to_float(attribute_batch[:, 1:6])
# _num = attributes_w_n.shape[0]
mat_ratio = tf.reduce_mean(attributes_w_n, axis=0)
mat_ratio = tf.map_fn(lambda x: (tf.cond(x > 0, lambda: 1 / x, lambda: float(4))), mat_ratio)
attributes_w_n = tf.convert_to_tensor(attributes_w_n * mat_ratio)
attributes_w_n = tf.reduce_sum(attributes_w_n, axis=1)

loss_sum = tf.reduce_sum(tf.to_float(np.random.rand(4, 3)))   # some dummy losses
_sum_k = tf.reduce_sum(tf.to_float(np.random.rand(4, 196)))
loss_sum = tf.reduce_mean(loss_sum * _sum_k * attributes_w_n)  # 0
with tf.Session() as sess1:
    print(attributes_w_n.eval())
    print(loss_sum.eval())  # the loss for this whole batch is 0
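
One possible fix for the all-zero-attribute case (my own sketch, not the repository's code) is to clamp each sample's weight to a minimum of 1, so its loss is never zeroed out; in the TF graph the same logic would use tf.where / tf.maximum, shown here in NumPy for clarity:

```python
import numpy as np

def attribute_weights(attribute_batch, min_weight=1.0):
    """Inverse-frequency attribute weights with a floor.

    Hypothetical fix, not the repository's implementation: samples
    whose attributes are all zero get min_weight instead of 0, so
    their landmark loss is not silently dropped.
    """
    attrs = np.asarray(attribute_batch, dtype=np.float64)
    ratio = attrs.mean(axis=0)  # per-attribute frequency within the batch
    inv = np.where(ratio > 0, 1.0 / np.maximum(ratio, 1e-12), 0.0)
    weights = (attrs * inv).sum(axis=1)
    return np.maximum(weights, min_weight)

w = attribute_weights(np.zeros((4, 5)))  # all-zero attributes keep weight 1
```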

graph.get_tensor_by_name('landmark_L1:0') etc. in test_model.py do not match train_model

In test_model.py I found the following code:

landmark_L1 = graph.get_tensor_by_name('landmark_L1:0')
landmark_L2 = graph.get_tensor_by_name('landmark_L2:0')
landmark_L3 = graph.get_tensor_by_name('landmark_L3:0')
landmark_L4 = graph.get_tensor_by_name('landmark_L4:0')
landmark_L5 = graph.get_tensor_by_name('landmark_L5:0')

I could not find corresponding tensor definitions for landmark_L1:0 etc. in the training code, so I get the following error:
KeyError: "The name 'landmark_L1:0' refers to a Tensor which does not exist. The operation, 'landmark_L1', does not exist in the graph."

How can this be resolved?

Angle loss is too large

Hi, during training, the label_euler_angles I feed in are computed with solvePnP (I use radians), while predict_euler_angles are the angles predicted by the auxiliary network. It is extremely hard to converge, and the gap between the two is very large.

About dataset preprocessing

How should a facial-landmark dataset be preprocessed? Do you crop the face box from the full image, resize it to 224 or 112, and then train? And do the annotation points need to be transformed accordingly as well?
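
Yes: crop, resize, and map the annotation points into the crop's coordinate system with the same transform. A minimal sketch of the landmark part (the box layout and the 112 output size mirror the question; the exact repo behavior may differ):

```python
import numpy as np

def transform_landmarks(landmarks, box, out_size=112):
    """Map landmarks from full-image coordinates into a resized face crop.

    landmarks: (N, 2) array in original image coordinates.
    box: (x1, y1, x2, y2) crop rectangle in the original image.
    """
    x1, y1, x2, y2 = box
    lm = np.asarray(landmarks, dtype=np.float64)
    scale = np.array([out_size / (x2 - x1), out_size / (y2 - y1)])
    return (lm - np.array([x1, y1], dtype=np.float64)) * scale

lm = transform_landmarks(np.array([[50.0, 60.0], [150.0, 160.0]]),
                         box=(50, 60, 150, 160))
```

Dividing the result by out_size is also common, so the network regresses coordinates in [0, 1] regardless of input resolution.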

Learning rate

Why is lr = 0.000000001 in your train.sh, while in the paper it is:
image

About testing

Hi, when I test on my own face photos, the face-to-image ratio differs from the training data, so the results are poor. In your video demo, you first run a face detection network to crop the face and then feed it to PFLD, with the crop taken as 1.1 times the longer side of the detection box. What face-to-image ratio should my own crops follow? Or can I use any face detection network, as long as I expand its detection box by 1.1x?
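
For reference, the square crop described above (1.1 times the longer side of the detection box) can be sketched as follows; this mirrors the description in the question, not necessarily the exact code in camera.py:

```python
def expand_box(x1, y1, x2, y2, scale=1.1):
    """Square crop centered on the detector box, side = scale * max(w, h)."""
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half = max(x2 - x1, y2 - y1) * scale / 2.0
    return (cx - half, cy - half, cx + half, cy + half)

crop = expand_box(0, 0, 10, 20)  # roughly (-6, -1, 16, 21)
```

As long as the same expansion rule is used at training and inference time, the output of any face detector should be usable; mismatched crop margins between the two stages are a common cause of poor results.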

Pre-trained model

Can you please provide a pre-trained model? I do not have the resources to train a model from scratch and would like to test out this excellent technique.
Thanks in advance.

Pre-trained model

Thank you for open-sourcing the code. Could you provide a download link for a trained model?

BN layer issue

The network uses BN layers, but moving_mean and moving_variance are never updated during training. Won't that cause problems at test time? Shouldn't something like the following be added?
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss)

3D points

Hello, where can I find the definition of the standard 3D points? Any references you can provide? Thanks.

Expected float32, got <map object at 0x7f3936ddd6a0> of type 'map' instead.

 attributes_w_n= tf.to_float(attributes[:,1:6])
_num = attributes_w_n.shape[0]
mat_ratio = tf.reduce_mean(attributes_w_n,axis=0)
#TODO when use function tf.map_fn get error results [inf,nan]
# mat_ratio = tf.map_fn(lambda x:1.0/x if not x==0.0 else 0.0,mat_ratio)
mat_ratio = map(lambda x: 1.0 / x if not x == 0.0 else float(images.shape[0]),sess.run(mat_ratio))
attributes_w_n = attributes_w_n * mat_ratio

Hi, during training I get the error:
Expected float32, got <map object at 0x7f3936ddd6a0> of type 'map' instead.
The problem is in the line `attributes_w_n = attributes_w_n * mat_ratio`: attributes_w_n is a Tensor, while mat_ratio is a Python map object, which are clearly incompatible types. How did you debug past this?
Additional info: I am using TensorFlow 1.10.0. Could it be that my version does not support this operation?

About the angles

Using my trained landmarks (the landmark quality is fine), I pick 14 points and compute the three Euler angles with OpenCV's solvePnP. The left/right (yaw) angles look reasonable, but pitch seems odd: even for very large head pitch, the computed angle is only around 20 degrees. For comparison, I ran SenseTime's server on the same images and also got about 20 degrees. A very exaggerated face pitch coming out as only 20 degrees does not seem right. Does the author know what is going on with this angle?

Trained model

Do you have an already-trained model? Also, is there a script for detecting a single image directly? I only see the one for video.

Not found error

I ran into the following problem while training:

  • 2019-08-10 14:30:37.463821: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at save_restore_v2_ops.cc:109 : Not found: Failed to create a NewWriteableFile: models1/model_test\model.ckpt-0.data-00000-of-00001.tempstate13236971813382146132 : The system cannot find the path specified.

Do you know how to resolve this?

Several questions when training on the WFLW data

Hi, while training the model with train_model.py, I ran into the following issues:

  1. During training, the mean error stays around 1.02 for the first 1000 epochs. Is this normal?
  2. Testing the trained model with camera.py and timing
    pre_landmarks = sess.run(landmarks, feed_dict=feed_dict)
    shows about 30 ms per image, far from the 150 fps reported in the PFLD paper. Is there any way to speed this up? PS: I am running on an i7-7700 CPU.
  3. Update: comparing with GPU training, the mean error does drop quickly below 1 on a GPU, so a CPU is indeed unsuitable for training PFLD; I will add model inference timings later.

Purpose of the average pooling layers

The network ends with a fully connected layer over the multi-scale features, but before that you apply average pooling. Is that for speed? I searched the paper for a while and could not find it.

About data balancing

attributes_w_n = tf.to_float(attribute_batch[:, 1:6])
This line in train_model.py extracts 5 attributes. Why doesn't it use all 6 attributes in the dataset?

Some questions about the loss in the paper

First of all, thanks to the author; both the paper and the code are excellent work.

image
After reading your paper, I have some doubts about the design of the loss function:

The loss is meant to use the auxiliary network's angles to help learn the landmark positions, but according to the formula, if the angles predicted by the auxiliary network are close to the ground truth (in the extreme case, the difference is exactly 0), the loss above becomes 0.

If the 3 angle values are easier to learn, and easier to overfit, than the 136 landmark values, could the auxiliary angle network actually hinder landmark learning? Also, no ablation without this loss term is reported, hence my doubt.

Perhaps I have simply misunderstood the paper. I would appreciate it if the author could clarify when free.
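
For context, the loss under discussion, as I read it from the paper (my transcription; indices may differ slightly from the original), is:

```latex
\mathcal{L} \;=\; \frac{1}{M}\sum_{m=1}^{M}\sum_{n=1}^{N}
\Big(\sum_{c}\omega_n^{c}\sum_{k=1}^{K}\big(1-\cos\theta_n^{k}\big)\Big)
\,\bigl\|\mathbf{d}_n^{m}\bigr\|_2^2
```

so when the K = 3 predicted angles match the ground truth exactly, the (1 - cos theta) factor vanishes and the landmark error d_n^m stops contributing, which is precisely the concern raised above.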

About model size

Hi all,
after training, my model files are very large:
the .meta file is 140 MB
the .data-00000 file is over 8 MB

Even after converting to a .pb file it is still 76 MB,
much larger than the author's model.

Does anyone know how to optimize this?
