610265158 / peppa_pig_face_landmark
A simple face detect and alignment method, which is easy and stable.
License: Apache License 2.0
I tried to convert the models to TensorFlow Lite.
The keypoints model converted successfully,
but the detector fails with:
packages\tensorflow_core\lite\python\lite.py", line 428, in convert
"invalid shape '{1}'.".format(_get_tensor_name(tensor), shape_list))
ValueError: None is only supported in the 1st dimension. Tensor 'images' has invalid shape '[None, None, None, None]'.
Can you help me?
Thanks.
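For reference, TFLite allows `None` only in the batch dimension; the other three dimensions of the detector's `images` input must be pinned before conversion (with TF2 this is typically done on the concrete function, e.g. `concrete.inputs[0].set_shape([1, 512, 512, 3])`). A small sketch of that constraint; the 512x512x3 defaults are assumptions and should be replaced with the detector's real input size:

```python
def make_tflite_shape(shape, default_hw=(512, 512), channels=3):
    """Pin every None except the batch dimension, as TFLite requires.

    default_hw and channels are placeholders, not the repo's values;
    use the detector's actual input size. With TF2 you would then pin
    this shape on the concrete function before running the converter.
    """
    batch, h, w, c = shape
    return [batch,
            h if h is not None else default_hw[0],
            w if w is not None else default_hw[1],
            c if c is not None else channels]
```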
Could you consider calling OpenFace's detection code directly from Python?
Hello, I am trying to port the keypoint model to C++ with MNN, and I don't understand the preprocessing here: https://github.com/610265158/Peppa_Pig_Face_Engine/blob/master/lib/core/api/face_landmark.py#L102
add = int(max(bbox_width, bbox_height))
bimg = cv2.copyMakeBorder(image, add, add, add, add, borderType=cv2.BORDER_CONSTANT, value=cfg.DATA.pixel_means)
bbox += add
Could you help explain how to do this preprocessing in C++?
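For what it's worth, the snippet above pads the image on all four sides by `add = max(bbox_width, bbox_height)` with the dataset pixel means, then shifts the bbox by the same offset, so a later square crop around the face can never run off the image. A NumPy-only restatement of that logic (my own sketch, with illustrative `pixel_means`), which is what a C++ port would replicate with `cv::copyMakeBorder`:

```python
import numpy as np

def pad_for_crop(image, bbox, pixel_means=(127.0, 127.0, 127.0)):
    """Pad by max(bbox side) on every edge and shift the bbox accordingly.

    Mirrors cv2.copyMakeBorder(..., BORDER_CONSTANT, value=pixel_means)
    followed by `bbox += add`. bbox is [x1, y1, x2, y2].
    """
    x1, y1, x2, y2 = bbox
    add = int(max(x2 - x1, y2 - y1))
    # np.pad per channel so each channel gets its own constant fill value
    channels = [np.pad(image[..., c], add, mode='constant',
                       constant_values=pixel_means[c])
                for c in range(image.shape[-1])]
    padded = np.stack(channels, axis=-1)
    return padded, [x1 + add, y1 + add, x2 + add, y2 + add]
```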
Hi, dear Peppa man, I see you listed the model inference times,
shufflenetv2_0.75 including the tflite model,
(time cost: mac [email protected], tf2.0 5ms+, tflite 3.7ms+- model size 2.5M)
but when I run the demo on my 1080 GPU and on the CPU, they take almost the same time.
In CPU(Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz):
one image cost 0.016803 s
facebox detect cost 0.0021691322326660156
on the 1080:
one image cost 0.011924 s
facebox detect cost 0.001332998275756836
It seems your listed face landmark model inference time is shorter?
Will the training code and data be released?
The launch time is very long (over a minute), or am I doing something wrong?
Thank you very much for your open source work. I want to ask how to use the GPU to detect faces. Looking forward to your reply!
Hello, author.
Have you tried running the face landmark model in TensorRT?
Hi, is the face detection algorithm used in the project FaceBoxes?
Do we need a QQ group for communication?
https://github.com/610265158/Peppa_Pig_Face_Engine/blob/b2a39f9f5db13d833c263a6d2aafd4c2e4b41502/lib/core/api/facer.py#L65
The comment on this line of code says 'calculate the head pose for the whole image', but that does not seem right to me. This function updates the landmarks, do I understand correctly?
face_detector.py gave me the above error on this line:
self.model = tf.saved_model.load(self.model_path). In config.py I set the model path to model/detector.
TensorFlow 1.13
Hello, author.
Is there a C++ implementation of this demo?
Nice job. I downloaded the project and tested it with a video, but the landmarks are unstable, unlike the demo,
except for cropping the eye region and then predicting.
Thanks for your good work.
When I test the TF1 version on Ubuntu 18.04, the Peppa face engine sometimes detects non-face objects as a face, like in the image below.
Please advise.
In your other face-landmark project, I saw that the training data (the landmark input image) is cropped from the bounding box of the landmarks themselves, without using a box from a face detector. Is the landmark model in the Peppa Pig demo trained with data processed in this way?
I noticed that when I swap in a different face detection method, the landmark accuracy is clearly lower than with your face detector. Did you take the face detection box into account when preprocessing the landmark training data?
Hello,
I watched your demo and the keypoint jitter stability is quite good. In the code this is achieved by smoothing the face box and the keypoints with the previous frame's results. I tested the same approach on other videos; the jitter is somewhat better but still present. Is there a better way to handle keypoint jitter in video?
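One common baseline beyond blending with the previous frame's results is an exponential moving average over the landmarks, gated by how far each point moved, so near-stationary points are frozen entirely. The `alpha` and `freeze_thresh` values below are illustrative, not tuned:

```python
import numpy as np

def ema_smooth(prev, curr, alpha=0.35, freeze_thresh=1.0):
    """Blend current landmarks with the previous smoothed ones.

    alpha and freeze_thresh are illustrative. If a point barely moved
    (likely jitter), keep the previous position; otherwise blend.
    prev and curr are (N, 2) landmark arrays in pixels.
    """
    prev = np.asarray(prev, dtype=np.float64)
    curr = np.asarray(curr, dtype=np.float64)
    dist = np.linalg.norm(curr - prev, axis=-1, keepdims=True)
    blended = alpha * curr + (1.0 - alpha) * prev
    return np.where(dist < freeze_thresh, prev, blended)
```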
Traceback (most recent call last):
File "/home/changshuai/code/Peppa_test/demo.py", line 165, in
video("/home/lianping/sunxiaohu/face-detection-tensorrt-yolov3-tiny/test_video/20191010155113.avi")
File "/home/changshuai/code/Peppa_test/demo.py", line 30, in video
boxes, landmarks, states = facer.run(image)
File "/home/changshuai/code/Peppa_test/lib/core/api/facer.py", line 61, in run
landmarks,states=self.face_landmark.batch_call(image,boxes)
File "/home/changshuai/code/Peppa_test/lib/core/api/face_landmark.py", line 140, in batch_call
self.model.set_tensor(self.input_details[0]['index'], images_batched)
File "/home/lianping/develop_environment/anaconda2/envs/tensorflow_1.14_cuda9.0/lib/python3.5/site-packages/tensorflow/lite/python/interpreter.py", line 197, in set_tensor
self._interpreter.SetTensor(tensor_index, value)
File "/home/lianping/develop_environment/anaconda2/envs/tensorflow_1.14_cuda9.0/lib/python3.5/site-packages/tensorflow/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 136, in SetTensor
return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_SetTensor(self, i, value)
ValueError: Cannot set tensor: Dimension mismatch. Got 4 but expected 1 for dimension 0 of input 411.
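The traceback says the TFLite model's input has a fixed batch of 1 while `images_batched` has batch 4. Two hedged workarounds: call `interpreter.resize_tensor_input(input_index, images_batched.shape)` followed by `allocate_tensors()` before `set_tensor`, or run the faces one at a time, as in this sketch. Here `interpreter_invoke` is a placeholder standing in for the set_tensor/invoke/get_tensor round trip:

```python
import numpy as np

def run_one_by_one(interpreter_invoke, images_batched):
    """Work around a batch-1 TFLite input by invoking once per face.

    interpreter_invoke is an assumed callable mapping a (1, H, W, C)
    array to an output array; wrap your interpreter's
    set_tensor/invoke/get_tensor calls inside it.
    """
    outputs = [interpreter_invoke(img[None, ...]) for img in images_batched]
    return np.concatenate(outputs, axis=0)
```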
Hi, thanks for your awesome repository.
I compared your keypoints.tflite model to the face_landmark.tflite provided by mediapipe.
mediapipe's output_details has 2 rows, containing the landmark points and (I think) a confidence.
But the keypoints.tflite model's output_details has 3 rows. I don't know what the first 2 rows are, but the third row is the landmark points.
Are the first 2 rows related to confidence? What are they?
Hi, I don't understand the meaning of the "states" output of this keypoint model. Why is this variable not used?
Thanks for the great work.
Could you please advise how to get the left and right eye landmarks?
I want to calculate the EAR of the left and right eyes.
I followed the dlib facial points, but I am getting the error below:
cannot reshape array of size 600 into shape (1,10,20,1)
code:

```python
def predict_eye_state(self, image):
    global graph
    with graph.as_default():
        image = cv2.resize(image, (20, 10))
        image = image.astype(dtype=np.float32)
        image_batch = np.reshape(image, (1, 10, 20, 1))
        image_batch = keras.applications.mobilenet.preprocess_input(image_batch)
        return np.argmax(self.eye_model.predict(image_batch)[0])
```
Help highly appreciated.
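Two hedged notes on the error above: 600 = 20 × 10 × 3, so the resized crop is still 3-channel while the model expects one grayscale channel; convert (with `cv2.COLOR_BGR2GRAY`, approximated here by a channel mean) before reshaping. And in the dlib 68-point scheme the question mentions, the left eye is points 36-41 and the right eye 42-47, which gives the EAR directly:

```python
import numpy as np

LEFT_EYE = slice(36, 42)    # dlib 68-point indices 36..41
RIGHT_EYE = slice(42, 48)   # dlib 68-point indices 42..47

def prepare_eye_batch(eye_crop):
    """Sketch of a fix for the reshape error: one channel, not three.

    eye_crop is the (10, 20, 3) resized crop; a channel mean stands in
    for cv2.cvtColor(eye_crop, cv2.COLOR_BGR2GRAY).
    """
    gray = eye_crop.astype(np.float32).mean(axis=-1)   # (10, 20)
    return gray.reshape(1, 10, 20, 1)                  # 200 values, fits

def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|) for a (6, 2) eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)
```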
Hi, I checked your keypoints model, it's great! But I found that the 37th point (index 36) has a bigger deviation; what's more, the eye cannot close well when the yaw angle is bigger than about 30 degrees. What can I do to reduce these problems?
My plan is to use the Euler angles to balance the half-profile face, calculate the eye area with OpenCV to balance eye closing, and put a bigger weight on the 37th point. I would also remove the additional outputs and keep just the 136 landmark values. Training is time-consuming and I don't know whether my ideas are effective, so can you give me some advice? Thanks!
Hello, while debugging your code I found that a bug appears when the face is suddenly occluded. It seems to be in this code:
for i in range(landmarks.shape[0]):
track.append([np.min(landmarks[i][:, 0]), np.min(landmarks[i][:, 1]), np.max(landmarks[i][:, 0]),
np.max(landmarks[i][:, 1])])
I am still debugging it and have not found a solution yet.
When I wear eyeglasses, the results are not good. How can I improve them?
1. I see that in your optimizer implementation you apply the gradients yourself with apply_gradients instead of calling minimize directly. Why did you write it yourself? What was the consideration?
2. I ran your training and found that the lr does not change at the boundaries: after the global step passes the first boundary at 150000, the lr is still 0.001. It had not changed even at 90k. Is there some bug in minimize?
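For reference, a piecewise-constant schedule (the documented behaviour of `tf.train.piecewise_constant`) should act like this sketch; the boundaries and values are illustrative, not necessarily the repo's config. Note that below the first boundary the lr is expected to stay at its initial value:

```python
def piecewise_lr(global_step, boundaries, values):
    """Return values[i] while global_step < boundaries[i], else values[-1].

    Mirrors the documented behaviour of tf.train.piecewise_constant;
    len(values) must be len(boundaries) + 1.
    """
    for boundary, value in zip(boundaries, values):
        if global_step < boundary:
            return value
    return values[-1]
```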
The output landmark coordinates differ greatly from dlib's output.
In lib/core/LK/lk.py,
line 86: result.append(self.filter(now_landmarks[i], previous_landmarks[i]))
uses a OneEuroFilter to smooth;
line 144: self.dx_prev = dx_hat
uses the dx_prev left over from the previous point among the 68 keypoints, not this point's dx_prev from the previous frame.
Is that really correct? Although the smoothing does not affect the result much.
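The concern seems right in principle: a one-euro filter needs separate x_prev/dx_prev state per landmark across frames, not state carried from one point to the next within a frame. A minimal vectorized sketch (my own, not the repo's lk.py) that keeps per-point state by filtering all points as one array:

```python
import numpy as np

class OneEuroFilter2D:
    """Minimal one-euro filter with independent state per landmark point."""

    def __init__(self, min_cutoff=1.0, beta=0.0, d_cutoff=1.0, freq=30.0):
        self.min_cutoff, self.beta = min_cutoff, beta
        self.d_cutoff, self.freq = d_cutoff, freq
        self.x_prev = None    # (N, 2) smoothed positions from last frame
        self.dx_prev = None   # (N, 2) smoothed velocities from last frame

    @staticmethod
    def _alpha(cutoff, freq):
        tau = 1.0 / (2.0 * np.pi * cutoff)
        return 1.0 / (1.0 + tau * freq)

    def __call__(self, x):
        x = np.asarray(x, dtype=np.float64)
        if self.x_prev is None:          # first frame: pass through
            self.x_prev = x
            self.dx_prev = np.zeros_like(x)
            return x
        dx = (x - self.x_prev) * self.freq
        a_d = self._alpha(self.d_cutoff, self.freq)
        dx_hat = a_d * dx + (1 - a_d) * self.dx_prev
        cutoff = self.min_cutoff + self.beta * np.abs(dx_hat)
        a = self._alpha(cutoff, self.freq)
        x_hat = a * x + (1 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat   # per-point, per-frame state
        return x_hat
```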
Hi, I'm checking the head pose calculation. With a webcam, when I change the yaw of my face, the pitch and roll change even though my head is not changing pitch or roll (the pitch increases, for example). I've calibrated my webcam to minimize lens distortion and changed the D value.
I've seen that the "object_pts" variable is an array with the reference points. How did you find these values?
Is it possible to change which points "object_pts" uses to reduce the error (for example, add the 4 points of the nose)?
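One place the yaw/pitch/roll coupling can come from is the decomposition of solvePnP's rotation matrix into Euler angles, where noise in the landmarks bleeds into all three angles at once. A minimal ZYX decomposition (my own sketch, not the repo's code) for sanity-checking the angles independently:

```python
import numpy as np

def rotation_to_euler(R):
    """Rotation matrix -> (pitch, yaw, roll) in degrees, ZYX convention.

    Useful for checking whether the angle coupling comes from the
    Euler decomposition or from solvePnP itself.
    """
    sy = np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2)
    if sy > 1e-6:
        pitch = np.arctan2(R[2, 1], R[2, 2])
        yaw = np.arctan2(-R[2, 0], sy)
        roll = np.arctan2(R[1, 0], R[0, 0])
    else:  # gimbal lock: roll is unobservable, fold it into pitch
        pitch = np.arctan2(-R[1, 2], R[1, 1])
        yaw = np.arctan2(-R[2, 0], sy)
        roll = 0.0
    return np.degrees([pitch, yaw, roll])
```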
Thanks for your great work, and I hope to see a model with pupil detection.