
deepmo24 / cpem

95 stars · 18 forks · 24.05 MB

PyTorch implementation of "Towards Accurate Facial Motion Retargeting with Identity-Consistent and Expression-Exclusive Constraints" (AAAI2022)

Python 79.90% Shell 0.04% Cython 5.81% C++ 14.04% CMake 0.21%

cpem's People

Contributors: deepmo24


cpem's Issues

Expression parameters

Hi, thanks for your great work. I have one question:

  1. Do these expression parameters have an actual physical meaning, like ARKit face blendshapes?

Looking forward to your reply.
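For context, a delta-blendshape model ties each expression coefficient to one semantic vertex offset, which is what gives such parameters ARKit-like meaning. A minimal numpy sketch of that convention (the array shapes, e.g. 80 identity bases and 46 blendshapes, are assumptions for illustration, not the authors' exact configuration):

import numpy as np

V = 35709                                 # BFM vertex count
mean_shape = np.zeros((V, 3))             # neutral mean face
id_basis = np.zeros((V, 3, 80))           # identity basis (80 is an assumption)
delta_bs = np.zeros((46, V, 3))           # 46 semantic expression offsets

alpha, beta = np.zeros(80), np.zeros(46)  # identity / expression coefficients

# Each beta[i] scales exactly one delta blendshape, analogous to an
# ARKit blendshape weight.
verts = mean_shape + id_basis @ alpha + np.tensordot(beta, delta_bs, axes=1)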

VoxCeleb2 is not available from the official website now

Hi, CPEM is fantastic!
But when I wanted to train on the VoxCeleb2 dataset, I was disappointed to find that it is no longer available. Could you share a Google Drive link for this dataset? Or could you share the MobileNetV2 pre-trained model?

Switch to FLAME? Real-time performance?

Hi, thanks for sharing this work! A few questions:

1. How feasible is it to switch the model to FLAME?
2. How is the real-time performance of model inference, e.g., when feeding a video through it?
3. How does it compare with DECA?

Thanks!

Facial Retargeting

Hi,
I just finished reading your paper and I'm curious about how you implement cross-domain retargeting for virtual avatars. Specifically, after obtaining the pose and expression parameters, how do you apply them to the avatars? Do you still use the BFM model for the avatars, so that you can transform the neutral avatar into the desired one?
Thank you in advance for your kind reply!
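For what it's worth, my reading is that the estimated expression coefficients are applied directly to whatever blendshape rig the target avatar ships with. A minimal sketch, assuming the avatar provides a semantically matching set of 46 delta blendshapes (the function name and shapes are mine, not from the released code):

import numpy as np

def retarget_to_avatar(avatar_neutral, avatar_delta_bs, beta):
    # avatar_neutral: (V, 3); avatar_delta_bs: (46, V, 3); beta: (46,)
    # The coefficients are rig-agnostic weights, so retargeting is just a
    # weighted sum over the avatar's own blendshape offsets.
    return avatar_neutral + np.tensordot(beta, avatar_delta_bs, axes=1)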

What is the test split of the FEAFA dataset?

Hi,
In the released code, it seems that the entire FEAFA dataset is used for training. I wonder which test split was used to generate Tab. 1 in the paper? Thank you!

Expression Model

Hi, I'm curious about how 'mean_delta_blendshape.npy' is generated!

Could you release more details or the source code for it?

Thanks!

How to drive a MetaHuman's face?

It looks like the alpha and beta parameters are used to control the retargeted face, but what about a MetaHuman's face? How can these parameters be used to retarget to it?

About the dataset

Could you share the FaceWarehouse dataset? I cannot find it on the official website.

Facial motion retargeting to 3D avatar

Hi, thanks for your great work and for sharing it :) I am impressed by the retargeting effect on the 3D cartoon avatar shown in Figure 3 of the paper, especially compared with other approaches. Would it be possible to share the assets of this 3D cartoon avatar model so that I can run the retargeting demo on it?

How to process the videos in the VoxCeleb2 dataset?

Hi, thanks for the great work!

I wonder how to process the training data in the VoxCeleb2 dataset. The videos are 25 fps; should I use all the frames for training or sample a subset of them? Do the frames need to be extracted as JPEGs? (If I extract every frame to JPEG, I expect the result to be very large.) Thank you!
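In case it helps others, a minimal OpenCV sketch for extracting every Nth frame to JPEG; the sampling rate and quality here are arbitrary choices, not the authors' settings:

import os
import cv2

def extract_frames(video_path, out_dir, sample_rate=5, jpeg_quality=90):
    """Save every `sample_rate`-th frame of a video as a JPEG."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_rate == 0:
            cv2.imwrite(os.path.join(out_dir, f'{saved:06d}.jpg'), frame,
                        [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
            saved += 1
        idx += 1
    cap.release()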

About real-time performance with the MobileNetV2 backbone

Hi, I see in your paper that you also tried MobileNetV2 as the backbone. How is the real-time performance? In other words, how much time does it take to infer blendshapes from one image with the MobileNetV2 backbone?
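For anyone who wants to measure this themselves, a rough latency sketch; torchvision's stock MobileNetV2 and the 224x224 input size stand in for the paper's actual backbone, which is an assumption:

import time
import torch
import torchvision

# Stock MobileNetV2 as a stand-in for the paper's backbone (assumption).
net = torchvision.models.mobilenet_v2().cuda().eval()
x = torch.randn(1, 3, 224, 224).cuda()

with torch.no_grad():
    for _ in range(10):            # warm-up iterations
        net(x)
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(100):
        net(x)
    torch.cuda.synchronize()       # wait for all GPU work before timing
print(f'{(time.time() - t0) / 100 * 1000:.2f} ms per image')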

How to drive a 3D avatar

Hi author, is there a correspondence between the 47 3DMM blendshapes and the 52 ARKit blendshapes? In other words, how can one drive 3D facial expressions like those in the paper?
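A minimal sketch of how such a correspondence could be used once it exists; the mapping entries below are hypothetical placeholders, not a table the authors provide, and a real one would have to be built by visually matching the two blendshape sets:

import numpy as np

# Hypothetical partial correspondence from CPEM expression indices to
# ARKit blendshape names (placeholders, not the authors' mapping).
CPEM_TO_ARKIT = {0: 'jawOpen', 1: 'eyeBlinkLeft', 2: 'eyeBlinkRight'}

def to_arkit_weights(beta, arkit_names):
    weights = {name: 0.0 for name in arkit_names}
    for idx, name in CPEM_TO_ARKIT.items():
        weights[name] = float(np.clip(beta[idx], 0.0, 1.0))  # ARKit weights are in [0, 1]
    return weights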

Question about the 2D landmarks detection on voxceleb2

Dear author,
Thanks for the brilliant work, but we ran into a problem when detecting 2D landmarks on the VoxCeleb2 demo dataset. It seems that we get different results when using preprocess/detect_landmarks.py with the FaceAlignment parameter changed to LandmarksType.2D.

These are our generated results on id00025/eb8vIK6NrmE/00045_0218: [image]

These are the provided 2D landmarks in the demo dataset: [image]

Could you please share your code for the 2D landmark detection on VoxCeleb2? Thank you very much. Looking forward to your reply!
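For reference, a minimal sketch of 2D landmark detection with the face-alignment library; whether detect_landmarks.py wraps exactly this call is an assumption, and the enum spelling depends on the installed version:

import cv2
import face_alignment

# LandmarksType.TWO_D in recent face-alignment releases; older versions
# spell it LandmarksType._2D instead.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType.TWO_D, device='cuda')

img = cv2.cvtColor(cv2.imread('frame.jpg'), cv2.COLOR_BGR2RGB)
landmarks = fa.get_landmarks(img)  # list of (68, 2) arrays, one per detected face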

Face mask

Hi, thanks for your great work.
I use face-parsing.PyTorch to generate masks. The results are as follows.
AFW_156474078_1_0: [image]
AFW_156474078_1_10: [image]


My results are different from yours:

data/demo_dataset/300w_lp/face_mask/AFW_156474078_1/001/AFW_156474078_1_0: [image]
data/demo_dataset/300w_lp/face_mask/AFW_156474078_1/001/AFW_156474078_1_10: [image]

My code is as follows.

import os.path as osp
import numpy as np
import torch
import torchvision.transforms as transforms
from PIL import Image
from model import BiSeNet  # from the face-parsing.PyTorch repo

n_classes = 19
net = BiSeNet(n_classes=n_classes)
net.load_state_dict(torch.load('79999_iter.pth'))  # pre-trained checkpoint
net.cuda().eval()

# Same preprocessing as the face-parsing.PyTorch test script
to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

image = Image.open(osp.join(dspth, image_path)).resize((512, 512), Image.BILINEAR)
img = torch.unsqueeze(to_tensor(image), 0).cuda()
with torch.no_grad():
    out = net(img)[0]
parsing = out.squeeze(0).cpu().numpy().argmax(0)  # (512, 512) label map

valid_indices = [1, 2, 3, 10, 12, 13]  # 1: skin, 2: l_brow, 3: r_brow, 10: nose, 12: u_lip, 13: l_lip
face_mask = np.zeros(parsing.shape, dtype=np.uint8)
for valid_idx in valid_indices:
    face_mask[parsing == valid_idx] = 1

mask = (face_mask * 255).astype(np.uint8)


Can you share your code or ideas?

How to compute mean_delta_blendshape.npy from FaceWarehouse

Hi author, thank you for sharing this valuable research work. I have a question about the computation of mean_delta_blendshape.npy. I read your reply in #3, but one detail is still unclear to me: you used deformation transfer when computing mean_delta_blendshape.npy. Could you clarify the following questions?
1. FaceWarehouse contains 150 identities, each with 46 blendshape OBJ files of about 11k vertices, while the BFM model has 35709 vertices. When applying deformation transfer, did you transfer the 46 blendshapes of a single FaceWarehouse identity onto the BFM09 mean blendshape, or did you first average over all identities in the dataset, i.e., sum the 46 blendshapes of all 150 FaceWarehouse identities, average them to obtain 46 mean blendshapes, and then transfer those onto the BFM09 mean blendshape?

2. Is the BFM09 mean blendshape S + T, where S is the mean shape and T is the mean texture?
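For concreteness, here is the averaging step common to both readings, sketched in numpy; the input file name and array layout (neutral first, then 46 blendshapes, already in a shared topology) are hypothetical:

import numpy as np

# Hypothetical input: FaceWarehouse shapes in a shared topology,
# shaped (150 identities, 1 neutral + 46 blendshapes, V vertices, 3).
bs = np.load('fw_blendshapes_shared_topology.npy')

deltas = bs[:, 1:] - bs[:, :1]          # per-identity offsets from neutral
mean_delta_blendshape = deltas.mean(0)  # (46, V, 3), averaged over identities
np.save('mean_delta_blendshape.npy', mean_delta_blendshape)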

Eyes cannot close when retargeting to a 3D avatar

Hi, I followed your method and added the identity-consistent constraint to Deep3DRecon, but I still cannot retarget facial expressions to a 3D avatar well. For example, in my experiments the eyes of the 3D reconstructed mesh can close, but after retargeting to the 3D avatar the eyes barely move. Have you ever encountered a similar problem?
If so, how did you solve it?
