jingyang2017 / au-net
Towards robust facial action units detection
License: MIT License
Hello, thank you for this repository; it's truly inspirational.
I've been attempting to utilize the model for action unit detection on a single image, inspired by the demo provided in the repository. However, I've encountered some issues related to face alignment that I believe might be affecting the output. Each execution of the code yields different results, which has led me to suspect that my implementation might not be fully aligned with the model's requirements or expected input format.
I am not an expert in this field but understand that nuances in face alignment and preprocessing could significantly impact the model's performance and output consistency. Below is the code snippet I've been working with, adapted from your demo. I aimed to replicate the demo's preprocessing and model invocation steps for a simple single-image context. Am I missing something?
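As a sanity check on the run-to-run variation, I used a small helper to confirm whether the model's output is reproducible for a fixed input tensor (the helper `check_determinism` is my own, not from this repo):

```python
import torch

def check_determinism(model, x, n_runs=3):
    """Feed the same tensor several times; return True if all outputs match.

    Differing outputs for identical input typically mean stochastic layers
    (e.g. Dropout) are still active, i.e. model.eval() was not called
    before inference.
    """
    with torch.no_grad():
        outs = [model(x) for _ in range(n_runs)]
    return all(torch.equal(outs[0], o) for o in outs[1:])
```

For AU_NET's three-argument forward I call it with a small lambda, e.g. `check_determinism(lambda t: model(t, t, t), img_tensor)`.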
All the best and regards,
Enes.
Note: I only adapted the `AU_detection_single_image` function; the other parts (`get_scale_center`, `get_transform`, `crop`, and `caculate_pts5`) remain the same.
```python
import cv2
import face_alignment
import numpy as np
from PIL import Image
from skimage import io
from skimage import transform as trans
from torchvision import transforms
import torch

from models.aunet import AU_NET
from face_alignment import LandmarksType


def get_scale_center(bb, scale_=220.0):
    center = np.array([bb[2] - (bb[2] - bb[0]) / 2,
                       bb[3] - (bb[3] - bb[1]) / 2])
    scale = (bb[2] - bb[0] + bb[3] - bb[1]) / scale_
    return scale, center


def get_transform(center, scale, res, rot=0):
    h = 200 * scale
    t = np.zeros((3, 3))
    t[0, 0] = float(res[1]) / h
    t[1, 1] = float(res[0]) / h
    t[0, 2] = res[1] * (-float(center[0]) / h + .5)
    t[1, 2] = res[0] * (-float(center[1]) / h + .5)
    t[2, 2] = 1
    if not rot == 0:
        rot = -rot  # To match direction of rotation from cropping
        rot_mat = np.zeros((3, 3))
        rot_rad = rot * np.pi / 180
        sn, cs = np.sin(rot_rad), np.cos(rot_rad)
        rot_mat[0, :2] = [cs, -sn]
        rot_mat[1, :2] = [sn, cs]
        rot_mat[2, 2] = 1
        # Need to rotate around center
        t_mat = np.eye(3)
        t_mat[0, 2] = -res[1] / 2
        t_mat[1, 2] = -res[0] / 2
        t_inv = t_mat.copy()
        t_inv[:2, 2] *= -1
        t = np.dot(t_inv, np.dot(rot_mat, np.dot(t_mat, t)))
    return t


def crop(img, rects):
    im_w = 256
    bb = rects[:4]
    scale, center = get_scale_center(bb, scale_=260)
    aug_rot = 0
    dx, dy = 0, 0
    center[0] += dx * center[0]
    center[1] += dy * center[1]
    mat = get_transform(center, scale, (im_w, im_w), aug_rot)[:2]
    img = cv2.warpAffine(img.copy(), mat, (im_w, im_w))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return bb, img


def caculate_pts5(pred):
    # Five points: eye centres (mean of eye contour landmarks), nose tip,
    # and the two mouth corners, from the 68-point landmark convention.
    eye_left = np.mean(pred[36:42, :], axis=0)
    eye_right = np.mean(pred[42:48, :], axis=0)
    return np.array([eye_left, eye_right, pred[33, :], pred[48, :], pred[54, :]])


def AU_detection_single_image(model, image_path):
    tform = trans.SimilarityTransform()
    au_indices = (1, 2, 4, 6, 7, 9, 10, 12, 14, 15, 17, 23, 24, 25, 26)
    fa = face_alignment.FaceAlignment(face_alignment.LandmarksType.TWO_D,
                                      flip_input=False, device='cuda')
    transform_val = transforms.Compose([transforms.ToTensor()])

    img = cv2.imread(image_path)
    if img is None:
        print(f"Image at {image_path} could not be read.")
        return
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    detected_faces = fa.face_detector.detect_from_image(img.copy())
    if len(detected_faces) == 0:
        print("No faces detected.")
        return

    # Processing the detected face...
    bbox = detected_faces[0][:4]
    scale, center = get_scale_center(bbox, scale_=260)
    mat = get_transform(center, scale, (256, 256))
    img_transformed = cv2.warpAffine(img, mat[:2], (256, 256))
    img_transformed = Image.fromarray(img_transformed)
    img_transformed_tensor = transform_val(img_transformed).unsqueeze(0).cuda()

    with torch.no_grad():
        pred = model(img_transformed_tensor, img_transformed_tensor, img_transformed_tensor)
    probs = np.array(torch.sigmoid(pred[0]).cpu().data)

    print("Detected Action Units and their probabilities:")
    for au_idx, au_prob in zip(au_indices, probs):
        print(f"AU {au_idx}: {au_prob:.4f}")


model = AU_NET(alpha=0.9, beta=0.1, n_classes=15)
pre_trained = torch.load('./checkpoints/cross_model.pth', map_location="cuda")
pretrained_dict = {k: v for k, v in pre_trained.items() if k in model.predictor.state_dict()}
model.predictor.load_state_dict(pretrained_dict, strict=False)
model = model.cuda()
model.eval()

image_path = 'lena.png'
AU_detection_single_image(model, image_path)
```
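Incidentally, I notice that `caculate_pts5` and the `SimilarityTransform` instance are set up in my adapted function but never actually used, so the crop is driven by the detector's bounding box alone; I suspect this is where my preprocessing diverges from what the model expects. For reference, here is how I imagine the 5-point alignment would plug in. The reference template below is the standard ArcFace 112x112 layout, which is my assumption, not something taken from this repo:

```python
import numpy as np
from skimage import transform as trans

# Assumed reference layout: the widely used ArcFace 112x112 five-point
# template. The template au-net's demo actually aligns to may differ.
REF_PTS5 = np.array([
    [38.2946, 51.6963],  # left eye centre
    [73.5318, 51.5014],  # right eye centre
    [56.0252, 71.7366],  # nose tip
    [41.5493, 92.3655],  # left mouth corner
    [70.7299, 92.2041],  # right mouth corner
], dtype=np.float32)

def align_by_pts5(img, pts5, out_size=112):
    """Warp img so its five landmarks land on REF_PTS5 (similarity transform).

    pts5 would come from caculate_pts5() applied to the 68 predicted landmarks.
    """
    tform = trans.SimilarityTransform()
    tform.estimate(np.asarray(pts5, dtype=np.float32), REF_PTS5)
    # skimage's warp() expects the output->input mapping, hence the inverse
    return trans.warp(img, tform.inverse, output_shape=(out_size, out_size))
```

If the demo's pipeline really does align on landmarks rather than the raw bounding box, something along these lines (with the repo's own template and output size) is presumably what I should be feeding the network.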