timobolkart / tf_flame

TensorFlow framework for the FLAME 3D head model. The code demonstrates how to sample 3D heads from the model, fit the model to 2D or 3D keypoints, and generate textured head meshes from images.

Home Page: http://flame.is.tue.mpg.de/

face-models morphable-model computer-graphics computer-vision face-model 3d-models 3d-mesh tensorflow face-reconstruction flame-model

tf_flame's Introduction

FLAME: Articulated Expressive 3D Head Model (TF)

This is an official TensorFlow-based FLAME repository.

We also provide PyTorch FLAME, a Chumpy-based FLAME-fitting repository, and code to convert from Basel Face Model to FLAME.

FLAME is a lightweight and expressive generic head model learned from over 33,000 accurately aligned 3D scans. FLAME combines a linear identity shape space (trained from head scans of 3800 subjects) with an articulated neck, jaw, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. For details, please see the scientific publication

Learning a model of facial shape and expression from 4D scans
Tianye Li*, Timo Bolkart*, Michael J. Black, Hao Li, and Javier Romero
ACM Transactions on Graphics (Proc. SIGGRAPH Asia) 2017

and the supplementary video.
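
For reference, the model formulation from the paper, with shape parameters $\beta$, pose parameters $\theta$, and expression parameters $\psi$: a template mesh is deformed by shape, pose-corrective, and expression blendshapes and then posed with standard linear blend skinning $W$,

$$M(\beta, \theta, \psi) = W\big(T_P(\beta, \theta, \psi),\ J(\beta),\ \theta,\ \mathcal{W}\big), \qquad T_P(\beta, \theta, \psi) = \bar{T} + B_S(\beta; \mathcal{S}) + B_P(\theta; \mathcal{P}) + B_E(\psi; \mathcal{E})$$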

Content

This repository demonstrates how to

  1. sample 3D face meshes
  2. fit the 3D model to 2D landmarks
  3. fit the 3D model to 3D landmarks
  4. fit the 3D model to registered 3D meshes
  5. sample the texture space
  6. generate templates for speech-driven facial animation (VOCA)

Set-up

The code has been tested with Python 3.6 and TensorFlow 1.15.2.

Install pip and virtualenv

sudo apt-get install python3-pip python3-venv

Clone the git project:

git clone https://github.com/TimoBolkart/TF_FLAME.git

Set up virtual environment:

mkdir <your_home_dir>/.virtualenvs
python3 -m venv <your_home_dir>/.virtualenvs/TF_FLAME

Activate virtual environment:

cd TF_FLAME
source <your_home_dir>/.virtualenvs/TF_FLAME/bin/activate

Install mesh processing libraries from MPI-IS/mesh within the virtual environment.

Make sure your pip version is up-to-date:

pip install -U pip

The other requirements (including TensorFlow) can be installed using:

pip install -r requirements.txt

The visualization uses OpenGL, which can be installed using:

sudo apt-get install python-opengl
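
To sanity-check the environment, you can print the installed TensorFlow version (it should report 1.15.2):

python -c "import tensorflow as tf; print(tf.__version__)"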

Data

Download the FLAME model and the MPI texture space from MPI-IS/FLAME. You need to sign up and agree to the model license for access to the model and the data. Further, download the FLAME_texture_data and unpack this into the data folder. If you want to use a statistical appearance texture space for FLAME, download either AlbedoMM (CVPR 2020) or the FLAME texture space.

Demo

We provide demos to i) draw random samples from FLAME to demonstrate how to edit the different FLAME parameters, ii) fit FLAME to 2D or 3D landmarks, iii) fit FLAME to a registered 3D mesh (i.e. in FLAME topology), iv) sample the texture space, and v) generate VOCA templates.

Sample FLAME

This demo introduces the different parameters of the FLAME model (i.e. pose, shape, expression, and global transformation) by generating random sample meshes. Please note that this does not demonstrate how to get realistic 3D face samples from the model.

python sample_FLAME.py --option sample_FLAME --model_fname './models/generic_model.pkl' --num_samples 5 --out_path './FLAME_samples'

By default, running this demo uses an OpenGL-based mesh viewer to visualize the samples. If this causes any problems, try running the demo with the additional flag --visualize False to disable the visualization.
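
For orientation, here is a minimal numpy sketch of the kinds of parameter vectors the demo perturbs, assuming the dimensions of the published FLAME model (300 shape and 100 expression coefficients plus per-joint axis-angle rotations); the joint ordering and variable names inside sample_FLAME.py may differ:

import numpy as np

shape = 0.5 * np.random.randn(300)       # identity shape coefficients
expression = 0.5 * np.random.randn(100)  # expression coefficients
pose = np.zeros(15)                      # axis-angle: global, neck, jaw, two eyeballs
pose[6:9] = [0.0, 0.0, 0.2]              # e.g. rotate the jaw slightly (indices assumed)
translation = np.zeros(3)                # global translation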

Fit 2D landmarks

This demo demonstrates how to fit FLAME to 2D landmarks. Corresponding 2D landmarks can, for instance, be predicted automatically using 2D-FAN Torch or 2D-FAN Pytorch. (The test images are taken from CelebA-HQ.)

python fit_2D_landmarks.py --model_fname './models/female_model.pkl' --flame_lmk_path './data/flame_static_embedding.pkl' --texture_mapping './data/texture_data_512.npy' --target_img_path './data/imgHQ00088.jpeg' --target_lmk_path './data/imgHQ00088_lmks.npy' --out_path './results'
python fit_2D_landmarks.py --model_fname './models/female_model.pkl' --flame_lmk_path './data/flame_static_embedding.pkl' --texture_mapping './data/texture_data_512.npy' --target_img_path './data/imgHQ00095.jpeg' --target_lmk_path './data/imgHQ00095_lmks.npy' --out_path './results'
python fit_2D_landmarks.py --model_fname './models/male_model.pkl' --flame_lmk_path './data/flame_static_embedding.pkl' --texture_mapping './data/texture_data_512.npy' --target_img_path './data/imgHQ00039.jpeg' --target_lmk_path './data/imgHQ00039_lmks.npy' --out_path './results'
python fit_2D_landmarks.py --model_fname './models/female_model.pkl' --flame_lmk_path './data/flame_static_embedding.pkl' --texture_mapping './data/texture_data_512.npy' --target_img_path './data/imgHQ01148.jpeg' --target_lmk_path './data/imgHQ01148_lmks.npy' --out_path './results'

By default, running the demo opens a window to visualize the fitting progress. This will fail when running the code remotely. In this case, try running the demo with the additional flag --visualize False to disable the visualization. If you want to get FLAME textures of resolutions other than 512x512, use texture_data_256.npy, texture_data_1024.npy, or texture_data_2048.npy instead.
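
Schematically, the fitting minimizes a scale-normalized 2D landmark distance plus L2 regularizers on the shape, expression, and pose parameters (the terms the demo prints as lmk_dist, shape_reg, exp_reg, and the pose regularizers). A numpy sketch of such an objective, with hypothetical weight keys; the actual implementation is in fit_2D_landmarks.py:

import numpy as np

def fitting_loss(lmks_proj_2d, target_2d_lmks, shape, exp, pose, weights):
    # Normalize the landmark error by the extent of the target landmarks,
    # making the loss independent of the image resolution.
    factor = max(np.ptp(target_2d_lmks[:, 0]), np.ptp(target_2d_lmks[:, 1]))
    lmk_dist = weights['lmk'] * np.sum((lmks_proj_2d - target_2d_lmks) ** 2) / factor ** 2
    shape_reg = weights['shape'] * np.sum(shape ** 2)
    exp_reg = weights['expr'] * np.sum(exp ** 2)
    pose_reg = weights['pose'] * np.sum(pose ** 2)
    return lmk_dist + shape_reg + exp_reg + pose_reg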

Create textured mesh

This demo demonstrates how to create a textured mesh in FLAME topology by projecting an image onto the fitted FLAME mesh (i.e. the mesh obtained by fitting FLAME to 2D landmarks). (The test images are taken from CelebA-HQ.)

python build_texture_from_image.py --source_img './data/imgHQ00088.jpeg' --target_mesh './results/imgHQ00088.obj' --target_scale './results/imgHQ00088_scale.npy' --texture_mapping './data/texture_data_512.npy' --out_path './results'
python build_texture_from_image.py --source_img './data/imgHQ00095.jpeg' --target_mesh './results/imgHQ00095.obj' --target_scale './results/imgHQ00095_scale.npy' --texture_mapping './data/texture_data_512.npy' --out_path './results'
python build_texture_from_image.py --source_img './data/imgHQ00039.jpeg' --target_mesh './results/imgHQ00039.obj' --target_scale './results/imgHQ00039_scale.npy' --texture_mapping './data/texture_data_512.npy' --out_path './results'
python build_texture_from_image.py --source_img './data/imgHQ01148.jpeg' --target_mesh './results/imgHQ01148.obj' --target_scale './results/imgHQ01148_scale.npy' --texture_mapping './data/texture_data_512.npy' --out_path './results'

If you want to get FLAME textures of resolutions other than 512x512, use texture_data_256.npy, texture_data_1024.npy, or texture_data_2048.npy instead.

Fit 3D landmarks

This demo demonstrates how to fit FLAME to 3D landmarks. Corresponding 3D landmarks can, for instance, be selected manually from 3D scans using MeshLab.

python fit_3D_landmarks.py

Fit registered 3D meshes

This demo shows how to fit FLAME to a 3D mesh in FLAME topology (i.e. in dense correspondence to the model template). Datasets with meshes available in FLAME topology include the registered D3DFACS, the CoMA dataset, and VOCASET.

python fit_3D_mesh.py

Note that, to date, this demo does not support fitting to arbitrary 3D face scans. That would require replacing the vertex loss function with a differentiable scan-to-mesh or mesh-to-scan distance.
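
For a rough idea of what such a distance looks like, here is a crude nearest-vertex proxy in numpy/scipy; a usable replacement would use point-to-triangle distances and be expressed in TensorFlow so that gradients flow into the model parameters:

import numpy as np
from scipy.spatial import cKDTree

def scan_to_mesh_sq_dist(scan_points, mesh_verts):
    # Mean squared distance from each scan point to its nearest mesh vertex
    # (a coarse, non-differentiable stand-in for a true scan-to-mesh distance).
    tree = cKDTree(mesh_verts)
    dists, _ = tree.query(scan_points)
    return np.mean(dists ** 2)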

Sample texture space

Three texture spaces are available for FLAME: the MPI texture space, AlbedoMM, and the BFM color space. This demo generates FLAME meshes with textures randomly sampled from the MPI texture space (download here):

python sample_texture.py --model_fname './models/generic_model.pkl' --texture_fname './models/FLAME_texture.npz' --num_samples 5 --out_path './texture_samples_MPI'

Randomly sample textures from the AlbedoMM texture space (download albedoModel2020_FLAME_albedoPart.npz here):

python sample_texture.py --model_fname './models/generic_model.pkl' --texture_fname './models/albedoModel2020_FLAME_albedoPart.npz' --num_samples 5 --out_path './texture_samples_AlbedoMM'
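
Both of these texture spaces are linear (PCA) models, so sampling amounts to drawing random coefficients and combining them with a mean and a basis. A self-contained sketch with synthetic stand-ins for the mean and principal components (the real arrays come from the downloaded texture-space files; see sample_texture.py for the actual array names):

import numpy as np

n_components = 50
tex_mean = np.zeros((256, 256, 3), dtype=np.float32)  # stand-in for the mean texture
tex_basis = np.random.randn(256, 256, 3, n_components).astype(np.float32)  # stand-in basis

coeffs = np.random.randn(n_components).astype(np.float32)  # random texture coefficients
texture = tex_mean + tex_basis @ coeffs                    # sampled texture, shape (256, 256, 3)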

Generate VOCA template

VOCA is a framework to animate a static face mesh in FLAME topology from speech. This demo samples the FLAME identity space to generate new templates that can then be animated with VOCA.

python sample_FLAME.py --option sample_VOCA_template --model_fname './models/generic_model.pkl' --num_samples 5 --out_path './FLAME_samples'

By default, running this demo uses an OpenGL-based mesh viewer to visualize the samples. If this causes any problems, try running the demo with the additional flag --visualize False to disable the visualization.

Landmarks

The provided demos fit FLAME to 3D landmarks or to a scan, using 3D landmarks for initialization and during fitting. Both demos use the same set of 51 landmarks, and providing the landmarks in exactly this order is essential. The landmarks can, for instance, be obtained with MeshLab using the PickPoints module. PickPoints outputs a .pp file containing the selected points, which can be loaded with the provided load_picked_points(fname) function in utils/landmarks.py.
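
A .pp file is a small XML document whose <point> elements carry x/y/z attributes. For illustration, a minimal reader assuming that layout (prefer the repository's own implementation in utils/landmarks.py):

import numpy as np
import xml.etree.ElementTree as ET

def load_picked_points(fname):
    # Parse a MeshLab PickPoints file and return an (N, 3) array of points.
    root = ET.parse(fname).getroot()
    points = [[float(p.get(axis)) for axis in ('x', 'y', 'z')]
              for p in root.iter('point')]
    return np.array(points)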

Citing

When using this code in a scientific publication, please cite FLAME

@article{FLAME:SiggraphAsia2017,
  title = {Learning a model of facial shape and expression from {4D} scans},
  author = {Li, Tianye and Bolkart, Timo and Black, Michael J. and Li, Hao and Romero, Javier},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH Asia)},
  volume = {36},
  number = {6},
  year = {2017},
  url = {https://doi.org/10.1145/3130800.3130813}
}

License

The FLAME model is under a Creative Commons Attribution license. By using this code, you acknowledge that you have read the terms and conditions (https://flame.is.tue.mpg.de/modellicense.html), understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not use the code. You further agree to cite the FLAME paper when reporting results with this model.

Supported projects

Visit the FLAME-Universe for an overview of FLAME-based projects.

FLAME supports several projects; for example, FLAME is part of SMPL-X: a new joint 3D model of the human body, face and hands together.

Acknowledgement

The TensorFlow implementation used in this project is adapted from HMR. We thank Angjoo Kanazawa for making this code available. We thank Ahmed Osman for support with TensorFlow.

tf_flame's People

Contributors

timobolkart

tf_flame's Issues

Landmarks used in fit_2D_landmarks.py

[example image]

I wonder why only a subset of the landmarks is used. For instance, why are the jaw landmarks not used? It seems the person's identity could be preserved better with more landmarks.

Open Jaw by modifying flame parameters

Hi Timo,

Congrats on the great work. I was trying to modify a FLAME mesh by editing its parameters. What changes do I need to make to the FLAME parameters to open and close the jaw of a face, like you do in the web demo?

Thanks a lot.

build_texture_from_image.py gets an error

@TimoBolkart, thanks for your great work. I get an error when building the texture:
ImportError: ../python3.6/site-packages/psbody_mesh-0.4-py3.6-linux-x86_64.egg/psbody/mesh/serialization/loadobj.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZTVNSt7__cxx1115basic_stringbufIcS...
Hope you can help~ thanks~

Registration of template data on 3D point cloud raw scanned data

Dear Author,

Hi. Thank you for your great work first of all. It is a very useful model indeed.

I want to do some experiments with your data and wish to know how I can align a template mesh to raw scanned point-cloud data.

The paper mentions that the registration was done following "D.A. Hirshberg, M. Loper, E. Rachlin, and M.J. Black. 2012. Coregistration: Simultaneous alignment and modeling of articulated 3D shape. In European Conference on Computer Vision. 242-255.", but I am struggling to find a way to align those two point clouds.

I have tested some of the existing non-rigid ICP source codes that are publicly available, but they didn't help me.

Would you clarify in more detail?

Thanks

How to set parameters to find some concrete expressions?

Thanks for sharing this great work! These days I am working with FLAME and VOCA.

I met a problem: I want to set some concrete expressions (e.g. smile, cry, laugh, ...) on a specific model, but I don't know what expression parameter values to set (e.g. 0.1, 0.2, or 0.3).

Can you give me some advice on parameter values for concrete shapes/poses/expressions? Thank you very much.

ImportError: cannot import name 'plyutils'

Hi, thank you for your great work. But I ran into a problem when running your code. Could you help me with it? Thank you very much.

I ran the code using the following command:
python sample_FLAME.py --option sample_FLAME --model_fname './models/generic_model.pkl' --num_samples 5 --out_path './FLAME_samples' --visualize False

I got the following error:
2022-01-06 21:05:24.843794: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
Traceback (most recent call last):
  File "sample_FLAME.py", line 114, in <module>
    main(args)
  File "sample_FLAME.py", line 101, in main
    sample_FLAME(args.model_fname, int(args.num_samples), args.out_path, str2bool(args.visualize), sample_VOCA_template=False)
  File "sample_FLAME.py", line 92, in sample_FLAME
    sample_mesh.write_ply(out_fname)
  File "D:\CUIXIN\01_HumanModelTemplate\TF_FLAME-master\psbody\mesh\mesh.py", line 476, in write_ply
    serialization.write_ply(self, filename, flip_faces, ascii, little_endian, comments)
  File "D:\CUIXIN\01_HumanModelTemplate\TF_FLAME-master\psbody\mesh\serialization\serialization.py", line 214, in write_ply
    from psbody.mesh.serialization import plyutils
ImportError: cannot import name 'plyutils'

Fit 3D landmarks example

I tried to run the 3D fitting demo and got a strange result.
I expected to get a mesh similar to ./data/registered_mesh.ply, but I got something very different, although the landmarks look like a good fit.
Probably I am doing something wrong, but I didn't change the original script.

[Screenshot from 2020-05-04: registered_mesh.ply on the left, my result on the right.]

AlbedoMM Clipping Textures

Hi (cc @waps101),

When I use the AlbedoMM texture model I get texture clipping like this:

[texture sample: tex_sample_05]

Notice the eyebrows in the sample above.

Another example with a rendered face:

[rendered face: out_000_000_000_000_001]

A third example with less clipping:

[rendered face: out_000_000_000_000_001]

Directly fitting the 3D mesh

Can I train a neural network model that calculates the shape/expression/pose of an arbitrary input FLAME obj file (instead of a 3D scan mesh) without using an iterative optimizer such as dogleg or BFGS?
In other words, could I sample as many FLAME obj files and their corresponding shape/expression/pose coefficients as possible to train a model on this dataset, so that I can directly output the coefficients for an input obj file?
Thanks in advance.

Adding More Landmarks

I've noticed the neck tends to be a little off because the FLAME landmark embedding is missing any sort of neck landmark. If I wanted to extend the embedding using some OpenPose landmarks, how would I find the barycentric coordinates for the new landmarks?

fit_2d_landmarks.py Pyglet error

Thanks for sharing this great work! I am trying the demo by running the following command on my Mac.

img_name="imgHQ00088.jpeg"
python fit_2D_landmarks.py \
    --tf_model_fname './models/generic_model' \
    --template_fname './data/template.ply' \
    --flame_lmk_path './data/flame_static_embedding.pkl'\
    --texture_mapping './data/texture_data.npy' \
    --target_img_path "./data/${img_name}" \
    --target_lmk_path './data/imgHQ00088_lmks.npy' \
    --out_path './results'

However, it always hits a problem after a few minutes, and I have tried other images and other versions of pyrender and pyglet. Sadly, the error is always there. I hope you can give me some advice. Thank you very much!

lmk_dist: 0.015134, shape_reg: 0.004308, exp_reg: 0.002860, neck_pose_reg: 0.000000, jaw_pose_reg: 0.000016, eyeballs_pose_reg: 0.000000
lmk_dist: 0.015124, shape_reg: 0.004314, exp_reg: 0.002863, neck_pose_reg: 0.000000, jaw_pose_reg: 0.000016, eyeballs_pose_reg: 0.000000
lmk_dist: 0.015116, shape_reg: 0.004319, exp_reg: 0.002866, neck_pose_reg: 0.000000, jaw_pose_reg: 0.000016, eyeballs_pose_reg: 0.000000
lmk_dist: 0.015107, shape_reg: 0.004325, exp_reg: 0.002869, neck_pose_reg: 0.000000, jaw_pose_reg: 0.000016, eyeballs_pose_reg: 0.000000
Traceback (most recent call last):
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/pyrender/platforms/pyglet_platform.py", line 32, in init_context
    width=1, height=1)
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/pyglet/window/__init__.py", line 632, in __init__
    self._create()
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/pyglet/window/cocoa/__init__.py", line 194, in _create
    self.context.attach(self.canvas)
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/pyglet/gl/cocoa.py", line 291, in attach
    self._nscontext.setView_(canvas.nsview)
AttributeError: 'NoneType' object has no attribute 'setView_'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "fit_2D_landmarks.py", line 215, in <module>
    run_2d_lmk_fitting(args.tf_model_fname, args.template_fname, args.flame_lmk_path, args.texture_mapping, args.target_img_path, args.target_lmk_path, args.out_path)
File "fit_2D_landmarks.py", line 171, in run_2d_lmk_fitting
    result_mesh, result_scale = fit_lmk2d(target_img, lmk_2d, template_fname, tf_model_fname, lmk_face_idx, lmk_b_coords, weights)
  File "fit_2D_landmarks.py", line 126, in fit_lmk2d
    lmk_dist, shape_reg, exp_reg, neck_pose_reg, jaw_pose_reg, eyeballs_pose_reg], loss_callback=on_step)
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/tensorflow/contrib/opt/python/training/external_optimizer.py", line 207, in minimize
    optimizer_kwargs=self.optimizer_kwargs)
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/tensorflow/contrib/opt/python/training/external_optimizer.py", line 402, in _minimize
    result = scipy.optimize.minimize(*minimize_args, **minimize_kwargs)
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/scipy/optimize/_minimize.py", line 610, in minimize
    callback=callback, **options)
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/scipy/optimize/lbfgsb.py", line 345, in _minimize_lbfgsb
    f, g = func_and_grad(x)
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/scipy/optimize/lbfgsb.py", line 295, in func_and_grad
    f = fun(x, *args)
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/scipy/optimize/optimize.py", line 327, in function_wrapper
    return function(*(wrapper_args + args))
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/scipy/optimize/optimize.py", line 65, in __call__
    fg = self.fun(x, *args)
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/tensorflow/contrib/opt/python/training/external_optimizer.py", line 367, in loss_grad_func_wrapper
    loss, gradient = loss_grad_func(x)
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/tensorflow/contrib/opt/python/training/external_optimizer.py", line 281, in eval_func
    callback(*augmented_fetch_vals[num_tensors:])
  File "fit_2D_landmarks.py", line 106, in on_step
    rendered_img = render_mesh(Mesh(scale*verts, faces), height=target_img.shape[0], width=target_img.shape[1])
  File "/Users/plm/pro/TF_FLAME/utils/render_mesh.py", line 30, in render_mesh
    r = pyrender.OffscreenRenderer(viewport_width=width, viewport_height=height)
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/pyrender/offscreen.py", line 31, in __init__
    self._create()
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/pyrender/offscreen.py", line 134, in _create
    self._platform.init_context()
  File "/Users/plm/app/anaconda3/envs/flame/lib/python3.6/site-packages/pyrender/platforms/pyglet_platform.py", line 38, in init_context
    'internal error message was "{}"'.format(e)
ValueError: Failed to initialize Pyglet window with an OpenGL >= 3+ context. If you're logged in via SSH, ensure that you're running your script with vglrun (i.e. VirtualGL). The internal error message was "'NoneType' object has no attribute 'setView_'"

filename_lmks.npy file issue

How are the filename_lmks.npy files created? I was trying to create a textured obj file for VOCA. When I used the files present in TF_FLAME I was able to create the textured model, but I failed when trying with npy files created by RingNet. When I checked the content of the lmks.npy file against the npy files created by RingNet, they were totally different. How can I create these npy files?
Also, what do I need to do to make the textured model animate speaking my own sentences?

Fit FLAME to a 3D mesh

Hi, this is truly impressive work!

I want to know whether it is possible to fit FLAME to a 3D mesh that does not have the FLAME topology and only covers the frontal part of the face (it is a face point cloud after triangulation).

Thanks.

The "tf_pose" of the updated code seems to be invalid

When removing the expression parameters and using the old version of the code to fit a mouth-opening model, the resulting model is correct. But when I use the new version of the model to fit, the fitted result does not open the mouth (the jaw parameter should lead to mouth opening). The tf_pose parameter seems to be not quite right. Does anyone know the reason?

Missing texture data

Thank you for the amazing work.
Just one question: it seems there is no texture_data_<number>.npy file, and the download link seems to be invalid now?

Fit to animation

Awesome project! I'm playing around with it together with the Blender add-on.

I was curious about the possibility of fitting animated FLAME models, be it from 2D landmarks or even from videos.
In the video you showed performance capture from scans, but I believe you didn't provide the code for that.
Also, processing each frame (or its landmarks) separately to generate a fitted FLAME model doesn't sound optimal.

I tried, for example, 3DDFA_V2 to get an animated 3D mesh from video, but I don't have any idea how to transfer such information to a FLAME model.

Any input on this side is more than welcome. I will then also try the path of animation via VOCA.

about dynamic_lmk_faces_idx

Sorry for bothering you, and thanks for your great work! I am trying the texture model of TF_FLAME now, and I notice that there is a dynamic_lmk_faces_idx data field. Could you give me some hints to understand it?

Understanding the loss function for the optimization of the rigid transformation

Hello,
first of all, thank you very much for your awesome work!

I am currently looking through the code that fits the FLAME model to the 2D landmarks of a given image, and I am trying to understand it. However, I am not sure about the logic behind the formula used to create the loss for finding the appropriate scale, translation, and rotation to align the projected FLAME landmarks with the target landmarks.
More specifically, I am talking about these two lines:

factor = max(max(target_2d_lmks[:,0]) - min(target_2d_lmks[:,0]),max(target_2d_lmks[:,1]) - min(target_2d_lmks[:,1]))
lmk_dist = weights['lmk']*tf.reduce_sum(tf.square(tf.subtract(lmks_proj_2d, target_2d_lmks))) / (factor ** 2)

I do understand that the factor is the maximum range of either the x- or y-coordinates of the target landmarks. But why is the difference between the projected landmarks and the target landmarks squared, summed, and then divided by the square of the factor? Why not compute the average distance between the projected and target landmarks and minimize that?

a question about valid_pixel_ids

Dear author,
thanks for the great work! I am sorry to bother you, but I have a question about the selection of valid_pixel_ids. I see that valid_pixel_ids contains the valid pairs of x_coords and y_coords. How do you generate the valid_pixel_ids? Do you have any reference?

Question

Thank you for your good work. I have a question about texture mapping. How do you get the texture mapping? How do you establish the links between the source image and the UV image? @TimoBolkart

strange shape question

Hi, I use accurate 2D landmarks to fit and add weight to make the model close its eyes. It can close the eyes but produces a strange shape. I tried adding shape and expression regularizers, but that does not work. Any suggestions? @TimoBolkart

[two screenshots of the strange shape]

Any pre-scanned full textures available?

Hello, thank you so so much for this amazing work! I'm really enjoying playing around with it.
I have tried the fit_2D_landmark demo, and since the texture cannot be generated for regions occluded in the 2D image (e.g. the ears), I was wondering if there is a pre-scanned complete texture for any subject, so that I can use it as a base texture and just update the valid parts with my own image.
Or, if you can recommend some other way, that would be great.

Also, is it possible to use a single pre-scanned texture template for all the fitted meshes? (This might be a very trivial question; I'm very new to computer graphics.)
Thank you so much again!

Help with projection Matrix

I am trying to calculate the pixel coordinates of the landmarks (the camera parameters are copied from the render helper).

import numpy as np
import pyrender
from matplotlib import pyplot as plt

def viewport(x, y, w, h):
    x, y, w, h = map(float, (x, y, w, h))
    return np.matrix([[w/2, 0  , 0,x+w/2],
                      [0  , h/2, 0,y+h/2],
                      [0  , 0  , 0.5,  0.5],
                      [0  , 0  , 0,    1]])

size = 800

frustum_ = {'near': 0.01, 'far': 3.0, 'height': 800, 'width': 800}
camera_params = {'c': np.array([400, 400]),
                 'k': np.array([-0.19816071, 0.92822711, 0, 0, 0]),
                 'f': np.array([4754.97941935 / 2, 4754.97941935 / 2])}

landmarks = mesh_points_by_barycentric_coordinates(m.v,m.f,flame_embedding['lmk_face_idx'],flame_embedding['lmk_b_coords'])
 


pose_mat = [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 1],
            [0, 0, 0, 1]]

camera = pyrender.IntrinsicsCamera(fx=camera_params['f'][0],
                                  fy=camera_params['f'][1],
                                  cx=camera_params['c'][0],
                                  cy=camera_params['c'][1],
                                  znear=frustum_['near'],
                                  zfar=frustum_['far'])

P = camera.get_projection_matrix(size,size)
V = np.linalg.inv(pose_mat)
M = np.eye(4)
Vw= viewport(0,0,size,size)

landmarks_expanded = np.ones((68,4)) 
landmarks_expanded[:68,:3] = landmarks

expanded = (Vw@P@V@landmarks_expanded.T).T
plt.scatter([expanded[:,0]],[expanded[:,1]])
plt.show()
plt.scatter([landmarks[:,0]],[landmarks[:,1]])
plt.show()

But the resulting image is a bit skewed.
I'm probably doing something wrong... but not sure what.
Any help would be greatly appreciated!


License

Hello, thanks for sharing this awesome research! Just a question: is the source code under the same license as the FLAME model file (as I understood from the README)? I ask this because, without an explicit license, source code on GitHub is proprietary. Thanks for your attention!

How to get texture map

I saw your demo on YouTube.
In the demo, I saw the face with a texture map.
So, I want to know how to render a texture map from a scan file.

The shape parameters to change the width of neck independently

Hello Dr. Bolkart, I trained a deep network to predict the parameters of face reconstruction with FLAME. The shape of the face looks good, but the neck looks wider than in the input image. I expanded the mask to cover the neck when computing the photometric loss, but the result looks almost the same. Is there any shape parameter that could change the width of the neck without changing the shape of the face?

tf_models vs texture_data.npy

I was exploring fit_2D_landmarks.py and noticed that the result_mesh returned by fit_lmk2d on line 171 has 5023 vertices (len(result_mesh.v) == 5023), as does the tf_model loaded by fit_lmk2d from the models/generic_model.meta file.

However, the texture coordinates loaded from the data/texture_data.npy file, which you use to set result_mesh.vt on line 184, have length 5118 (len(texture_data['vt']) == 5118).

Shouldn't these two arrays have the same length?

About tf2.3.1

Hi! Thank you for the amazing work.
Following #41, is there any plan to upgrade to TF 2.3.1?

Issue with pyopengl library

Dear developers,

I have installed all the libraries and Python dependencies for using the TF_FLAME program.
However, when I tried to run the example by typing "python sample_FLAME.py", I got the following error instead:

AttributeError: 'NoneType' object has no attribute 'glGetError'

Something must have gone wrong with the PyOpenGL library for TF_FLAME.
Therefore, I would like to know which version of the PyOpenGL library you are using for the TF_FLAME project, so I can deal with this AttributeError.

how to load obj file, mtl file and texture png?

I generated imgHQ00088.obj, imgHQ00088.mtl, and imgHQ00088.png with the commands below, then used these three files to add texture information to the obj files output by VOCA. Under the voca/animation_output/meshes_textured directory, new obj, mtl, and png files have been generated, for example 00000.obj, 00000.mtl, 00000.png, ...
I have a problem now: how do I generate a video with texture from these files?

python fit_2D_landmarks.py --model_fname './models/female_model.pkl' --flame_lmk_path './data/flame_static_embedding.pkl' --texture_mapping './data/texture_data_512.npy' --target_img_path './data/imgHQ00088.jpeg' --target_lmk_path './data/imgHQ00088_lmks.npy' --out_path './results'
python build_texture_from_image.py --source_img './data/imgHQ00088.jpeg' --target_mesh './results/imgHQ00088.obj' --target_scale './results/imgHQ00088_scale.npy' --texture_mapping './data/texture_data_512.npy' --out_path './results'
python fit_3D_landmarks.py
python fit_3D_mesh.py

this is my code

import os
import glob
import argparse
from subprocess import call
from psbody.mesh import Mesh
from psbody.mesh.meshviewer import MeshViewer

parser = argparse.ArgumentParser(description='Sequence visualization')
parser.add_argument('--sequence_path', default='./animation_output', help='Path to motion sequence')
parser.add_argument('--audio_fname', default='./audio/test_sentence.wav', help='Path of speech sequence')
parser.add_argument('--out_path', default='./animation_visualization', help='Output path')

args = parser.parse_args()
sequence_path = args.sequence_path
audio_fname = args.audio_fname
out_path = args.out_path

if not os.path.exists(args.out_path): os.makedirs(args.out_path)
img_path = os.path.join(out_path, 'img')
if not os.path.exists(img_path): os.makedirs(img_path)

mv = MeshViewer()
sequence_fnames = sorted(glob.glob(os.path.join(sequence_path, '*.obj')))
if len(sequence_fnames) == 0:
    print('No meshes found')

# Render images
for frame_idx, mesh_fname in enumerate(sequence_fnames):
    frame_mesh = Mesh(filename=mesh_fname)
    temp = mesh_fname.split('/')
    frame_mesh.set_texture_image(temp[-1][:-4] + '.png')    
    mv.set_dynamic_meshes([frame_mesh], blocking=True)
    img_fname = os.path.join(img_path, '%05d.png' % frame_idx)
    mv.save_snapshot(img_fname)

# Encode images to video
cmd_audio = []
if os.path.exists(audio_fname):
    cmd_audio += ['-i', audio_fname]
    print(cmd_audio)

if os.path.exists(args.out_path):
    print(args.out_path)

out_video_fname = os.path.join(out_path, 'video2.mp4')
print(out_video_fname)
cmd = ['ffmpeg', '-framerate', '60', '-pattern_type', 'glob', '-i', os.path.join(img_path, '*.png')] + cmd_audio + [out_video_fname]
call(cmd)

Arbitrary 3D face fitting

Hello, I found the results are great, and I wonder whether you have any suggestions for fitting arbitrary 3D faces. Thanks in advance.

Right values for 3D landmarks

In fit_3D_landmarks.py, the 3D landmark data /data/landmark_3d.npy (51, 3) holds the detected 3D landmark positions that are fitted.
If I generate 3D landmarks (using the face_alignment package or some other tool), the landmark values depend on the size of the image (greater than zero). But the landmark values in landmark_3d.npy are between -0.108844 and 0.106573.
How should I normalize the values?

Unlabelled mask region in Flame.

[images 1 and 2: visualized mask regions]

Does FLAME not provide a mask for the region shown in image 2?
I have extracted and visualized all the masks, and the region below the ears (on both sides) seems to have no mask label. Is that so, or am I messing something up?

Please clarify.

Can FLAME model close eyes?

We'd like to use the FLAME model to fit RGB-D photos, but unlike BFM, the FLAME model never seems to close its eyes. Is this observation valid?

Question about Face Contour and Head shape from 2D Landmarks

Thank you for your fantastic work! I have a question about how FLAME generates the proper head shape and face contour from 2D landmarks.

From what I have seen, you use the 49 landmarks on the facial features to generate the head model.

How are the face contour and head shape determined if you do not use the contour landmarks when generating the head shape?

Thank you and I hope to hear from you soon!

how to normalize lmks3d?

Great job!
I want to know how to normalize the input lmks3d. The provided lmk3d works well, but when I use lmk3d * 100, the result is bad.

How can I fit a 3D mesh that was not generated with FLAME topology?

I have seen multiple videos of your team using FLAME with existing 3D models, for example Churchill's head in VOCA. I have downloaded such models, but the script fit_3D_mesh.py cannot use them as input, as they have a different number of vertices and, I suspect, not exactly the same mapping between vertices and parts of the head. Thanks for the help!

About build_texture_from_image.py

Hi, Thanks for your excellent work!

I've been trying to figure out how to use an image to build a texture for the FLAME mesh (using build_texture_from_image.py), and I have a couple of questions:

  1. I saw the file needs to take in a pre-computed parameter "target_scale". I'm wondering how to calculate this parameter if I want to align my own image with its FLAME mesh.

  2. I noticed that the example meshes given (such as imgHQ00039.obj) are not zero-centered. They have offsets from the origin and it seems that this offset is useful for calculating the texture map. How is this offset chosen?

Thanks for answering!
