
1adrianb / face-alignment

6.8K stars · 175 watchers · 1.3K forks · 5.29 MB

:fire: 2D and 3D face alignment library built using PyTorch

Home Page: https://www.adrianbulat.com

License: BSD 3-Clause "New" or "Revised" License

Languages: Python 98.75%, Dockerfile 1.25%
Topics: python, deep-learning, face-alignment, face-detector, pytorch, face-detection


face-alignment's Issues

Test error

Python 3.5.4 |Anaconda custom (64-bit)| (default, Aug 14 2017, 13:26:58)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.

import face_alignment
from skimage import io

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, enable_cuda=True, flip_input=False)
Downloading the face detection CNN. Please wait...
Traceback (most recent call last):
File "", line 1, in
File "/home/quantumliu/pyprojects/face-alignment/face_alignment/api.py", line 84, in init
path_to_detector)
Boost.Python.ArgumentError: Python argument types in
cnn_face_detection_model_v1.init(cnn_face_detection_model_v1, str)
did not match C++ signature:
init(_object*, std::__cxx11::basic_string<char, std::char_traits, std::allocator >)

AttributeError: 'torch.cuda.FloatTensor' object has no attribute 'ndim'

In order to improve the accuracy, I set flip_input=True, but I got the error above. Is this a bug?
Here is my code:

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, enable_cuda=True, enable_cudnn=True,flip_input=True,use_cnn_face_detector=True)

Here is the error message:
Traceback (most recent call last):
File "/home/cp/PycharmProjects/EmotiW/preprocess/test.py", line 11, in
preds = fa.get_landmarks(input)[-1]
File "/usr/local/lib/python3.5/dist-packages/face_alignment-0.1.0-py3.5.egg/face_alignment/api.py", line 189, in get_landmarks
File "/usr/local/lib/python3.5/dist-packages/face_alignment-0.1.0-py3.5.egg/face_alignment/utils.py", line 215, in flip

AttributeError: 'torch.cuda.FloatTensor' object has no attribute 'ndim'

Test with new image

Hi, Adrian,
I replaced the default image under test/assets with a new one and ran detect_landmarks_in_image.py, but the generated 3D landmarks are still those of the default image. How can I detect the 3D landmarks of a new image with FAN? Do I have to train it myself? Thanks ^_^

Accuracy on frontal faces much worse than on side faces?

I've downloaded the new models and code, and tested on several video frames.
The alignment accuracy on side faces is excellent.
Curiously, I got even worse results on frontal ("positive") faces.
What might be the reason?
Frontal faces:
(GIF: peek 2018-01-13 21-55)
Side faces:
(figure_1)

How to install it without PyTorch?

Hi,
I am having a lot of difficulty installing PyTorch. I was wondering if I can run the example code (detect_landmarks_in_image.py) you provided without installing PyTorch and just use the CPU?

Thank you so much,
Mohsen

Index error occurred.

hm_[int(pY) + 1, int(pX)] - hm_[int(pY) - 1, int(pX)]])

Hi Adrian,

Thank you for sharing your project.
I tested your Python project and found an index error in utils.py (line 123).
In my test environment, pY is sometimes assigned the value 63,
so int(pY) + 1 causes an index error.

Could you tell me how to solve this?
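
A hedged sketch of one possible guard (not the author's fix), assuming the 64x64 heatmaps implied above: only take the sub-pixel offset when both neighbouring cells exist.

import torch

def subpixel_offset(hm_, pX, pY):
    # hm_ is a single 2D heatmap; rows are indexed by pY, columns by pX
    if 0 < pX < hm_.size(1) - 1 and 0 < pY < hm_.size(0) - 1:
        dx = float(hm_[int(pY), int(pX) + 1] - hm_[int(pY), int(pX) - 1])
        dy = float(hm_[int(pY) + 1, int(pX)] - hm_[int(pY) - 1, int(pX)])
        return torch.FloatTensor([dx, dy]).sign() * 0.25
    return torch.zeros(2)  # on the border: skip the sub-pixel correction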

Aaron's vrn vs Adrian's face-alignment

Hi Adrian,
I am particularly interested in accurate 3D landmark detection. My question is: in terms of accuracy, which one is better,
your face-alignment code or Aaron's vrn, or do they use the same model for landmark detection?
Thank you so much.

Dockerfile is missing boost libs, CMake fails in docker build.

Only errors shown below.

$docker build -t face-alignment .
...
-- Could NOT find Boost
...
Failed building wheel for dlib
...
Command "/opt/conda/envs/pytorch-py35/bin/python -u -c "import setuptools, tokenize;file='/tmp/pip-build-7xmab85s/dlib/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record /tmp/pip-lxb8brpv-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-7xmab85s/dlib/
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1

Accuracy of Eyes compared to Dlib

Hi,

Great piece of software. It definitely gets better poses than other solutions.

One thing I noticed over 10 images is that the eye point accuracy isn't as good as dlib's feature extraction. Is this a by-product of the training? And is this also seen with the Lua version?

Here's an example of one of the worst alignments.

Thanks,

Face-Alignment
face_alignment

Dlib
dlib

Got much worse results than dlib.

Both fa.get_landmarks and dlib's pose_predictor_68_point receive the raw image array and a dlib.rectangle as input, but the 2D/3D FAN gives noticeably worse results.
As the GIF shows (dlib's landmarks are plotted first for the same frame), FAN's result is much worse than dlib's.
(GIF: peek 2018-01-02 10-32)
I have written an Alignment class as follows:

# Imports added for completeness (module locations assumed from face_alignment 0.1.0):
import os

import numpy as np
import torch
from torch.autograd import Variable
import urllib.request as request_file

from face_alignment.models import FAN, ResNetDepth
from face_alignment.utils import (appdata_dir, crop, flip,
                                  get_preds_fromhm, draw_gaussian)


class Alignment:
    """Initialize the face alignment pipeline

    Args:
        landmarks_type (``LandmarksType`` object): an enum defining the type of predicted points.
        network_size (``NetworkSize`` object): an enum defining the size of the network (for the 2D and 2.5D points).
        enable_cuda (bool, optional): If True, all the computations will be done on a CUDA-enabled GPU (recommended).
        enable_cudnn (bool, optional): If True, cudnn library will be used in the benchmark mode
        flip_input (bool, optional): Increase the network accuracy by doing a second forward passed with
                                    the flipped version of the image
        use_cnn_face_detector (bool, optional): If True, dlib's CNN based face detector is used even if CUDA
                                                is disabled.

    Example:
        >>> FaceAlignment(NetworkSize.2D, flip_input=False)
    """

    def __init__(self, landmarks_type='3d', 
                  base_path='.',fan_path='./3DFAN-4.pth.tar',depth_model_path='./depth.pth.tar',
                  flip_input=False,
                 use_cnn_face_detector=False):
        network_size=4
        self.enable_cuda = True
        self.use_cnn_face_detector = use_cnn_face_detector
        self.flip_input = flip_input
        self.landmarks_type = landmarks_type

        self.base_path = (os.path.join(appdata_dir('face_alignment'), "data") if not os.path.exists(base_path) else base_path)

        if landmarks_type == '2d':
            network_name = '2DFAN-' + str(int(network_size)) + '.pth.tar'
        else:
            network_name = '3DFAN-' + str(int(network_size)) + '.pth.tar'
        if not os.path.exists(fan_path):
            if not os.path.exists(self.base_path):
                os.makedirs(self.base_path)
            fan_path = os.path.join(base_path, network_name)
        self.fan_path=fan_path

        torch.backends.cudnn.benchmark = True


        # Initialise the face alignment network
        self.face_alignemnt_net = FAN(int(network_size))
        
        if not os.path.isfile(fan_path):
            print("Downloading the Face Alignment Network(FAN) to {}. Please wait...".format(fan_path))

            request_file.urlretrieve(
                "https://www.adrianbulat.com/downloads/python-fan/" +
                network_name, os.path.join(fan_path))
        
        fan_weights = torch.load(
            fan_path,
            map_location=lambda storage,
            loc: storage)
        fan_dict = {k.replace('module.', ''): v for k,
                    v in fan_weights['state_dict'].items()}

        self.face_alignemnt_net.load_state_dict(fan_dict)

        self.face_alignemnt_net.cuda()
        self.face_alignemnt_net.eval()

        # Initialise the depth prediction network
        if landmarks_type == '3d':
            self.depth_prediciton_net = ResNetDepth()
            if not os.path.exists(depth_model_path):
                depth_model_path = os.path.join(base_path, 'depth.pth.tar')
                if not os.path.exists(self.base_path):
                    os.makedirs(self.base_path)
            self.depth_model_path=depth_model_path
            
            if not os.path.isfile(self.depth_model_path):
                print(
                    "Downloading the Face Alignment depth Network (FAN-D) to {}. Please wait...".format(depth_model_path))

                request_file.urlretrieve(
                    "https://www.adrianbulat.com/downloads/python-fan/depth.pth.tar",
                    os.path.join(self.depth_model_path))

            depth_weights = torch.load(
                depth_model_path,
                map_location=lambda storage,
                loc: storage)
            depth_dict = {
                k.replace('module.', ''): v for k,
                v in depth_weights['state_dict'].items()}
            self.depth_prediciton_net.load_state_dict(depth_dict)

            self.depth_prediciton_net.cuda()
            self.depth_prediciton_net.eval()


    def get_landmarks(self,image,rect):
        '''Predict the 68 face landmarks for a single face.

        :param image: RGB image (H x W x 3 numpy array)
        :param rect: dlib rectangle around the face
        '''
        center = torch.FloatTensor(
            [rect.right() - (rect.right() - rect.left()) / 2.0, rect.bottom() -
             (rect.bottom() - rect.top()) / 2.0])
        center[1] = center[1] - (rect.bottom() - rect.top()) * 0.1
        scale = (rect.right() - rect.left() + rect.bottom() - rect.top()) / 200.0

        inp = crop(image, center, scale)
        inp = torch.from_numpy(inp.transpose(
            (2, 0, 1))).float().div(255.0).unsqueeze_(0)

        inp = inp.cuda()

        out = self.face_alignemnt_net(
            Variable(inp, volatile=True))[-1].data.cpu()
        if self.flip_input:
            out += flip(self.face_alignemnt_net(Variable(flip(inp),
                                                         volatile=True))[-1].data.cpu(), is_label=True)

        pts, pts_img = get_preds_fromhm(out, center, scale)
        pts, pts_img = pts.view(68, 2) * 4, pts_img.view(68, 2)

        if self.landmarks_type == '3d':
            heatmaps = np.zeros((68, 256, 256))
            for i in range(68):
                if pts[i, 0] > 0:
                    heatmaps[i] = draw_gaussian(heatmaps[i], pts[i], 2)
            heatmaps = torch.from_numpy(heatmaps).view(1, 68, 256, 256).float()
            heatmaps = heatmaps.cuda()
            depth_pred = self.depth_prediciton_net(
                Variable(
                    torch.cat(
                        (inp, heatmaps), 1), volatile=True)).data.cpu().view(
                68, 1)
            pts_img = torch.cat(
                (pts_img, depth_pred * (1.0 / (256.0 / (200.0 * scale)))), 1)

        return pts_img.numpy()

And a plot function:

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection on older matplotlib


def draw_3d(landmarks2d,landmarks3d,img,fig=None,ax1=None,ax2=None):
    fig = (plt.figure(figsize=plt.figaspect(.5)) if fig==None else fig)
    preds=landmarks2d
    ax =ax1 = (fig.add_subplot(1, 2, 1) if ax1==None else ax1)
    ax.clear()
    
    ax.imshow(img)
    ax.plot(preds[0:17,0],preds[0:17,1],marker='o',markersize=6,linestyle='-',color='w',lw=2)
    ax.plot(preds[17:22,0],preds[17:22,1],marker='o',markersize=6,linestyle='-',color='w',lw=2)
    ax.plot(preds[22:27,0],preds[22:27,1],marker='o',markersize=6,linestyle='-',color='w',lw=2)
    ax.plot(preds[27:31,0],preds[27:31,1],marker='o',markersize=6,linestyle='-',color='w',lw=2)
    ax.plot(preds[31:36,0],preds[31:36,1],marker='o',markersize=6,linestyle='-',color='w',lw=2)
    ax.plot(preds[36:42,0],preds[36:42,1],marker='o',markersize=6,linestyle='-',color='w',lw=2)
    ax.plot(preds[42:48,0],preds[42:48,1],marker='o',markersize=6,linestyle='-',color='w',lw=2)
    ax.plot(preds[48:60,0],preds[48:60,1],marker='o',markersize=6,linestyle='-',color='w',lw=2)
    ax.plot(preds[60:68,0],preds[60:68,1],marker='o',markersize=6,linestyle='-',color='w',lw=2) 
    ax.axis('off')
    
    preds=landmarks3d
    ax=ax2 = (fig.add_subplot(1,2,2,projection='3d') if ax2==None else ax2)
    ax.clear()
    ax.scatter(preds[:,0],preds[:,1],preds[:,2],c="cyan", alpha=1.0, edgecolor='b')
    ax.plot3D(preds[:17,0],preds[:17,1], preds[:17,2], color='blue' )#chin
    ax.plot3D(preds[17:22,0],preds[17:22,1],preds[17:22,2], color='red')#brows
    ax.plot3D(preds[22:27,0],preds[22:27,1],preds[22:27,2], color='red')
    ax.plot3D(preds[27:31,0],preds[27:31,1],preds[27:31,2], color='green')#nose
    ax.plot3D(preds[31:36,0],preds[31:36,1],preds[31:36,2], color='green')
    ax.plot3D(preds[36:42,0],preds[36:42,1],preds[36:42,2], color='yellow')#eyes
    ax.plot3D(preds[42:48,0],preds[42:48,1],preds[42:48,2], color='yellow')
    ax.plot3D(preds[48:,0],preds[48:,1],preds[48:,2], color='black' )#lips
    
    ax.view_init(elev=90., azim=90.)
    ax.set_xlim(ax.get_xlim()[::-1])

    ax.set_xlabel('X Label')
    ax.set_ylabel('Y Label')
    ax.set_zlabel('Z Label')
    return fig
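
For reference, a hypothetical end-to-end usage of the two helpers above, assuming dlib is available for detection and the pretrained weight files are already in place:

import dlib
import matplotlib.pyplot as plt
from skimage import io

img = io.imread('test/assets/aflw-test.jpg')
detector = dlib.get_frontal_face_detector()
rect = detector(img, 1)[0]                  # first detected face

aligner = Alignment(landmarks_type='3d')
pts3d = aligner.get_landmarks(img, rect)    # (68, 3) numpy array
fig = draw_3d(pts3d[:, :2], pts3d, img)
plt.show()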

the speed of get_landmarks() for one image

The code is as below:

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, enable_cuda=True, flip_input=False)
input = cv2.imread('036.jpg')
s0 = time.time()
preds = fa.get_landmarks(input[:,:,(2,1,0)])
s1 = time.time()
print s1-s0

The picture resolution is 112x96 and it takes about 1.75 s to process one image. Is that right?
I feel the speed is a little slow.
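
For anyone profiling this, a minimal timing sketch, assuming the 0.1.x API used above and a CUDA build of PyTorch: the first call pays for initialisation, and CUDA kernels run asynchronously, so warm up once and synchronize before reading the clock.

import time
import cv2
import torch
import face_alignment

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D,
                                  enable_cuda=True, flip_input=False)
img = cv2.imread('036.jpg')[:, :, (2, 1, 0)]   # BGR -> RGB

fa.get_landmarks(img)                          # warm-up run (loads weights, compiles kernels)
torch.cuda.synchronize()
t0 = time.time()
for _ in range(10):
    fa.get_landmarks(img)
torch.cuda.synchronize()
print((time.time() - t0) / 10.0)               # average seconds per image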

Failed to download network file when running the example

Hi, I'm trying to run the example code (detect_landmarks_in_image.py) but it failed to download the network file, so I tried again. This time it doesn't restart the download and shows 'RuntimeError: unexpected EOF. The file might be corrupted' instead. How can I fix this?

(d) E:\DeepLearning\Repositories\face-alignment\examples>python detect_landmarks_in_image.py
Downloading the Face Alignment Network(FAN). Please wait...
Downloading the Face Alignment depth Network (FAN-D). Please wait...
Traceback (most recent call last):
File "detect_landmarks_in_image.py", line 8, in <module>
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, enable_cuda=False, flip_input=False)
File "E:\Anaconda3\envs\d\lib\site-packages\face_alignment-0.1.0-py3.6.egg\face_alignment\api.py", line 125, in __init__
File "E:\Anaconda3\envs\d\lib\urllib\request.py", line 289, in urlretrieve
% (read, size), result)
urllib.error.ContentTooShortError: <urlopen error retrieval incomplete: got only 9977475 out of 234708217 bytes>

(d) E:\DeepLearning\Repositories\face-alignment\examples>python detect_landmarks_in_image.py
Traceback (most recent call last):
File "detect_landmarks_in_image.py", line 8, in <module>
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, enable_cuda=False, flip_input=False)
File "E:\Anaconda3\envs\d\lib\site-packages\face_alignment-0.1.0-py3.6.egg\face_alignment\api.py", line 129, in __init__
File "E:\Anaconda3\envs\d\lib\site-packages\torch\serialization.py", line 261, in load
return _load(f, map_location, pickle_module)
File "E:\Anaconda3\envs\d\lib\site-packages\torch\serialization.py", line 416, in _load
deserialized_objects[key]._set_from_file(f, offset)
RuntimeError: unexpected EOF. The file might be corrupted.

I'm using Anaconda, env python 3.6
Windows 7, pytorch_legacy 0.3.0 (by peterjc123)
I can "import face_alignment" without error.

KeyError: 'state_dict'

When I run the test code, the line fan_dict = {k.replace('module.', ''): v for k, v in fan_weights['state_dict'].items()} raises a KeyError: 'state_dict'.
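
A hedged diagnostic sketch (not the library's code): a truncated download, or a checkpoint saved as a bare state dict, will not contain the 'state_dict' key, so it is worth inspecting the loaded object before indexing into it.

import torch

fan_weights = torch.load('2DFAN-4.pth.tar',
                         map_location=lambda storage, loc: storage)
print(type(fan_weights))
if isinstance(fan_weights, dict):
    print(list(fan_weights.keys())[:5])                  # what the checkpoint actually contains
    state = fan_weights.get('state_dict', fan_weights)   # fall back to the dict itself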

OpenCV python depency issue

Hi,

I need to use a custom-built OpenCV, as I need a different feature set than the opencv-python binaries provide, so I built OpenCV with Python bindings from source. But I get:


Processing dependencies for face-alignment==0.1.0
Searching for opencv-python
Reading https://pypi.python.org/simple/opencv-python/
No local packages or download links found for opencv-python
error: Could not find suitable distribution for Requirement.parse('opencv-python')

How can I get rid of the dependency check? Is this error due to a hard dependency on the opencv-python package - i.e. my source build of opencv incl. python bindings does not count?

Peter

rigid motion of the head

Hi Adrian

Can your code give the rigid motion of the head? I mean the 6 rigid values: 3 translations and 3 rotation angles of the whole head.

Problem installing dependencies

Hello, I have a problem when I try to install the dependencies for the face-alignment lib. The problem appears to be with dlib, but I don't know how to fix it. Here are my logs:

Running setup.py bdist_wheel for dlib ... error
Complete output from command /home/tasos/anaconda2/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-KtTWRq/dlib/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmpSfH_cypip-wheel- --python-tag cp27:
running bdist_wheel
running build
Detected Python architecture: 64bit
Detected platform: linux2
Configuring cmake ...
/home/tasos/anaconda2/bin/cmake: /home/tasos/anaconda2/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /home/tasos/anaconda2/bin/cmake)
error: cmake configuration failed!


Failed building wheel for dlib
Running setup.py clean for dlib
Failed to build dlib
Installing collected packages: dlib
Running setup.py install for dlib ... error
Complete output from command /home/tasos/anaconda2/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-KtTWRq/dlib/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-p5mzmg-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
Detected Python architecture: 64bit
Detected platform: linux2
Removing build directory /tmp/pip-build-KtTWRq/dlib/./tools/python/build
Configuring cmake ...
/home/tasos/anaconda2/bin/cmake: /home/tasos/anaconda2/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /home/tasos/anaconda2/bin/cmake)
/home/tasos/anaconda2/bin/cmake: /home/tasos/anaconda2/bin/../lib/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /home/tasos/anaconda2/bin/cmake)
error: cmake configuration failed!

----------------------------------------

Command "/home/tasos/anaconda2/bin/python -u -c "import setuptools, tokenize;file='/tmp/pip-build-KtTWRq/dlib/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record /tmp/pip-p5mzmg-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-KtTWRq/dlib/
`

Could you suggest a solution?
Thank you.

coordinate system of 3D landmarks?

Hi,

I'm interested in getting the 3D landmark positions with respect to the camera, given a fairly accurate estimate of the intrinsic camera parameters. I thought I could do that by applying solvePnP to the 2D and 3D landmarks to obtain the face position and rotation, provided the reference frame of the 3D landmarks is the center of the face/head, but after some tests this is not really clear to me.

Could you please tell me the coordinate system of the 3D landmarks, so that I can compute their 3D location with respect to the camera? Or is there already a straightforward way to obtain the camera-frame 3D landmarks with your method?

Thanks!
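
A minimal sketch of the solvePnP idea described above (not the library's API); the helper below and its pinhole camera matrix are illustrative assumptions:

import cv2
import numpy as np

def estimate_pose(pts2d, pts3d, img_w, img_h):
    # Rough pinhole intrinsics: focal length ~ image width, principal point at the center.
    focal = float(img_w)
    camera_matrix = np.array([[focal, 0, img_w / 2.0],
                              [0, focal, img_h / 2.0],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(pts3d.astype(np.float64),
                                  pts2d.astype(np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    # rvec/tvec map the landmarks' own frame into the camera frame
    return rvec, tvec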

ImportError: dlopen: cannot load any more object with static TLS

I get the following error. I'm using python 2.7, CUDA 7.5

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "build/bdist.linux-x86_64/egg/face_alignment/__init__.py", line 7, in <module>
File "build/bdist.linux-x86_64/egg/face_alignment/api.py", line 5, in <module>
File "/usr/local/lib/python2.7/dist-packages/torch/__init__.py", line 53, in <module>
from torch._C import *
ImportError: dlopen: cannot load any more object with static TLS

test_error

When I run the examples, the model needs to be downloaded. I quit during the download, and when I ran it again the error below came up. What can I do?

ERROR: test_predict_points (main.Tester)

Traceback (most recent call last):
File "test/facealignment_test.py", line 7, in test_predict_points
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, enable_cuda=False)
File "build/bdist.linux-x86_64/egg/face_alignment/api.py", line 131, in init
map_location=lambda storage,
File "/home/gaozhihua/anaconda2/lib/python2.7/site-packages/torch/serialization.py", line 261, in load
return _load(f, map_location, pickle_module)
File "/home/gaozhihua/anaconda2/lib/python2.7/site-packages/torch/serialization.py", line 418, in _load
deserialized_objects[key]._set_from_file(f, offset)
RuntimeError: unexpected EOF. The file might be corrupted.


Ran 1 test in 4.614s

FAILED (errors=1)

Boost.Python.ArgumentError: Python argument types in

Python 2.7.14 |Anaconda, Inc.| (default, Oct 16 2017, 17:29:19)
[GCC 7.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.

import face_alignment
from skimage import io

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, enable_cuda=True, flip_input=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "build/bdist.linux-x86_64/egg/face_alignment/api.py", line 84, in __init__
Boost.Python.ArgumentError: Python argument types in
cnn_face_detection_model_v1.__init__(cnn_face_detection_model_v1, str)
did not match C++ signature:
__init__(_object*, std::string)

I installed pytorch, cudatoolkit and cudnn via conda, but if I set enable_cuda=True the error above occurs, while with enable_cuda=False it works fine. Can anyone help?

Error in get_landmarks

Hi!
I'm trying to reproduce an example:


import face_alignment
from skimage import io

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, enable_cuda=True, flip_input=False)

input = io.imread('../test/assets/aflw-test.jpg')
preds = fa.get_landmarks(input)

and get an error:

RuntimeErrorTraceback (most recent call last)
<ipython-input-3-4ce31f2d4338> in <module>()
      1 image = io.imread('/home/moon/work/FacePalmProject/datasets/angle/stas_angle/WIN_20170802_11_11_39_Pro.jpg')
----> 2 preds = fa.get_landmarks(image)[0]

/usr/local/lib/python2.7/dist-packages/face_alignment-0.1.0-py2.7.egg/face_alignment/api.pyc in get_landmarks(self, input_image, all_faces)
    181 
    182                 out = self.face_alignemnt_net(
--> 183                     Variable(inp, volatile=True))[-1].data.cpu()
    184                 if self.flip_input:
    185                     out += flip(self.face_alignemnt_net(Variable(flip(inp),

/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
    257         for hook in self._forward_pre_hooks.values():
    258             hook(self, input)
--> 259         result = self.forward(*input, **kwargs)
    260         for hook in self._forward_hooks.values():
    261             hook_result = hook(self, input, result)

/usr/local/lib/python2.7/dist-packages/face_alignment-0.1.0-py2.7.egg/face_alignment/models.pyc in forward(self, x)
    173 
    174     def forward(self, x):
--> 175         x = F.relu(self.bn1(self.conv1(x)), True)
    176         x = F.max_pool2d(self.conv2(x), 2)
    177         x = self.conv3(x)

/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
    257         for hook in self._forward_pre_hooks.values():
    258             hook(self, input)
--> 259         result = self.forward(*input, **kwargs)
    260         for hook in self._forward_hooks.values():
    261             hook_result = hook(self, input, result)

/usr/local/lib/python2.7/dist-packages/torch/nn/modules/conv.pyc in forward(self, input)
    253     def forward(self, input):
    254         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 255                         self.padding, self.dilation, self.groups)
    256 
    257 

/usr/local/lib/python2.7/dist-packages/torch/nn/functional.pyc in conv2d(input, weight, bias, stride, padding, dilation, groups)
     51     f = ConvNd(_pair(stride), _pair(padding), _pair(dilation), False,
     52                _pair(0), groups, torch.backends.cudnn.benchmark, torch.backends.cudnn.enabled)
---> 53     return f(input, weight, bias)
     54 
     55 

RuntimeError: Expected object of type CPUFloatType but found type CUDAFloatType for argument #3 'weight'

maybe someone can help solve this problem.
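
A hedged sketch of the usual cause of this error (not a confirmed fix for this report): the input tensor and the network weights live on different devices, so both must be on the CPU or both on the GPU before the forward pass.

import torch
from torch.autograd import Variable

net = torch.nn.Conv2d(3, 64, kernel_size=7).cuda()  # weights on the GPU
inp = torch.rand(1, 3, 256, 256)                    # input still on the CPU -> mismatch
out = net(Variable(inp.cuda()))                     # moving the input resolves it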

Scaling after update

Hi,
thanks for sharing your code.

After your last update the new scale value is 195:
scale = (d.right() - d.left() + d.bottom() - d.top()) / 195.0
but 3D predictions are still scaled with 200:
pts_img = torch.cat((pts_img, depth_pred * (1.0 / (256.0 / (200.0 * scale)))), 1)

Is this intended?

Visualizing FAN and FAN3D

Hi Adrian, I would like to visualize this graph so that I can port it to dlib. I have tried some PyTorch visualization scripts, but I failed to get them working.

Can you help me with this? Thanks a lot for your effort!

sudo python setup.py install doesn't work

Hi, when I run 'sudo python setup.py install', it fails.
The error is:
error in face_alignment setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers

Could you help me? Thanks a lot.

Any tips to do real-time face alignment?

Hello Adrian,
Congratulations on achieving such excellent results! I tested the 2D FAN and displayed the feature points in GPU mode. It takes about one minute to do face alignment, and I find the processing time is independent of the image size (timed from the Python script). Are there any tips for doing real-time face alignment? (10 fps would be enough for me.)

fix process_folder

The current code does not use the 'path' argument.
Corrected version:

    def process_folder(self, path, all_faces=False):
        types = ('*.jpg', '*.png')
        images_list = []
        for files in types:
            images_list.extend(glob.glob(os.path.join(path, files)))

        predictions = []
        for image_name in images_list:
            # append a (filename, landmarks) tuple -- list.append takes a single argument
            predictions.append(
                (image_name, self.get_landmarks(image_name, all_faces)))

        return predictions
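
A hypothetical usage sketch of the corrected method, assuming fa is a FaceAlignment instance and the folder contains .jpg/.png images:

preds = fa.process_folder('/path/to/images', all_faces=True)
for image_name, landmarks in preds:
    print(image_name, 'no face found' if landmarks is None else len(landmarks))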

error: Installed distribution six 1.9.0 conflicts with requirement six>=1.10

Environment:

  1. CentOS 7
  2. Python 2.7.5

I have installed pyTorch using the following commands from pytorch.org:

pip install http://download.pytorch.org/whl/cu80/torch-0.2.0.post3-cp27-cp27mu-manylinux1_x86_64.whl
pip install torchvision 

Then I clone this project and start to install as follows:

pip install -r requirements.txt
python setup.py install

When running pip install -r requirements.txt, the installation stopped because of missing boost and python-devel dependencies, so I installed them:

yum install boost-devel.x86_64
yum install python-devel.x86_64 

Then I ran pip install -r requirements.txt again and this time it installed successfully.
But when I tried to run python setup.py install, I got the following error:

[root@localhost face-alignment]# python setup.py install
running install
running bdist_egg
running egg_info
writing requirements to face_alignment.egg-info/requires.txt
writing face_alignment.egg-info/PKG-INFO
writing top-level names to face_alignment.egg-info/top_level.txt
writing dependency_links to face_alignment.egg-info/dependency_links.txt
reading manifest file 'face_alignment.egg-info/SOURCES.txt'
writing manifest file 'face_alignment.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/face_alignment
copying build/lib/face_alignment/__init__.py -> build/bdist.linux-x86_64/egg/face_alignment
copying build/lib/face_alignment/api.py -> build/bdist.linux-x86_64/egg/face_alignment
copying build/lib/face_alignment/models.py -> build/bdist.linux-x86_64/egg/face_alignment
copying build/lib/face_alignment/utils.py -> build/bdist.linux-x86_64/egg/face_alignment
byte-compiling build/bdist.linux-x86_64/egg/face_alignment/__init__.py to __init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/face_alignment/api.py to api.pyc
byte-compiling build/bdist.linux-x86_64/egg/face_alignment/models.py to models.pyc
byte-compiling build/bdist.linux-x86_64/egg/face_alignment/utils.py to utils.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying face_alignment.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying face_alignment.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying face_alignment.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying face_alignment.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying face_alignment.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying face_alignment.egg-info/zip-safe -> build/bdist.linux-x86_64/egg/EGG-INFO
creating 'dist/face_alignment-0.1.0-py2.7.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing face_alignment-0.1.0-py2.7.egg
Removing /usr/lib/python2.7/site-packages/face_alignment-0.1.0-py2.7.egg
Copying face_alignment-0.1.0-py2.7.egg to /usr/lib/python2.7/site-packages
face-alignment 0.1.0 is already the active version in easy-install.pth

Installed /usr/lib/python2.7/site-packages/face_alignment-0.1.0-py2.7.egg
Processing dependencies for face-alignment==0.1.0
error: Installed distribution six 1.9.0 conflicts with requirement six>=1.10

I googled this but found nothing helpful.
I also tried running the example from examples/, but it failed:

[root@localhost examples]# python detect_landmarks_in_image.py 
Traceback (most recent call last):
  File "detect_landmarks_in_image.py", line 4, in <module>
    import matplotlib.pyplot as plt
  File "/usr/lib64/python2.7/site-packages/matplotlib/pyplot.py", line 115, in <module>
    _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
  File "/usr/lib64/python2.7/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
    globals(),locals(),[backend_name],0)
  File "/usr/lib64/python2.7/site-packages/matplotlib/backends/backend_tkagg.py", line 6, in <module>
    from six.moves import tkinter as Tk
  File "/usr/lib/python2.7/site-packages/six.py", line 199, in load_module
    mod = mod._resolve()
  File "/usr/lib/python2.7/site-packages/six.py", line 113, in _resolve
    return _import_module(self.mod)
  File "/usr/lib/python2.7/site-packages/six.py", line 80, in _import_module
    __import__(name)
ImportError: No module named Tkinter

RuntimeError when get_landmarks

I got a problem like this:

(error screenshot: face_align_error)

(Other programs using CUDA work fine, but this program using the face-alignment lib fails.)

How do I solve it? Thanks.

Where to put the pre-trained models so that the detection code works

I have downloaded the 4 pretrained models (2D FAN, 3D FAN, 2D-to-3D, 3D depth), but I don't know where I should put them so that the detection code works.
I ran the detection code under the examples folder without the pretrained models; it turned out that it automatically downloads the models as needed, but I couldn't find where the downloaded models are stored.
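
A minimal sketch of where they end up, assuming the 0.1.0 layout shown in the Alignment class earlier on this page (the helper's import location is an assumption): by default the weights are cached under appdata_dir('face_alignment')/data.

import os
from face_alignment.utils import appdata_dir  # assumed location of the helper

data_dir = os.path.join(appdata_dir('face_alignment'), 'data')
print(data_dir)  # place 2DFAN-4.pth.tar, 3DFAN-4.pth.tar and depth.pth.tar here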

process_folder does not work correctly

The process_folder function is not utilizing the 'path' parameter.

The line images_list.extend(glob.glob(files)) should be images_list.extend(glob.glob(os.path.join(path,files)))

def process_folder(self, path, all_faces=False):
        types = ('*.jpg', '*.png')
        images_list = []
        for files in types:
            images_list.extend(glob.glob(files))

        predictions = []
        for image_name in images_list:
            predictions.append(
                image_name, self.get_landmarks(image_name, all_faces))

        return predictions

Other models available other than 4-stacked?

Hello Adrian,

Thanks for sharing your code in PyTorch. I noticed that you have probably tested several models with fewer stacked FAN modules here.

However, only the 4-stack pretrained model is available to download from your website. I would like to know whether it is possible to share the smaller models, even if the performance is a little worse.

Thank you very much in advance.

Should a PyTorch function be followed by '_' or not in get_preds_fromhm?

Please look at the function get_preds_fromhm in face_alignment/utils.py.

line 112: preds[..., 1].add_(-1).div_(hm.size(2)).floor().add_(1)
I think it should be preds[..., 1].add_(-1).div_(hm.size(2)).floor_().add_(1); otherwise the result will be equal to preds[..., 1].add_(-1).div_(hm.size(2)).

line 124: preds[i, j].add(diff.sign().mul(.25))
I think it should be preds[i, j].add_(diff.sign().mul(.25)); otherwise these two loops never change the result.

In PyTorch, a function followed by '_' modifies the variable in place, while a function without '_' does not change the variable's value; for example, floor() is different from floor_().
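
A quick demonstration of the distinction, runnable on both old and recent PyTorch:

import torch

t = torch.Tensor([1.5, 2.5])
t.floor()      # out-of-place: returns a new tensor, t is unchanged
print(t)       # still 1.5, 2.5
t.floor_()     # in-place: modifies t itself
print(t)       # now 1.0, 2.0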

@1adrianb

Another issue with Dockerfile: pytorch install fail

Traceback (most recent call last):
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 158, in save_modules
yield saved
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 199, in setup_context
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 67, in save_path
yield saved
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 199, in setup_context
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 58, in save_argv
yield saved
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 199, in setup_context
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 84, in override_temp
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 199, in setup_context
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 94, in pushd
yield saved
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 199, in setup_context
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 254, in run_setup
_execfile(setup_script, ns)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 49, in _execfile
exec(code, globals, locals)
File "/tmp/easy_install-uh5npp3w/torch-0.1.2.post1/setup.py", line 11, in

RuntimeError: PyTorch does not currently provide packages for PyPI (see status at pytorch/pytorch#566).

Please follow the instructions at http://pytorch.org/ to install with miniconda instead.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "setup.py", line 54, in
'Programming Language :: Python :: 3.6',
File "/opt/conda/envs/pytorch-py35/lib/python3.5/distutils/core.py", line 148, in setup
dist.run_commands()
File "/opt/conda/envs/pytorch-py35/lib/python3.5/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/install.py", line 67, in run
self.do_egg_install()
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/install.py", line 117, in do_egg_install
cmd.run()
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 411, in run
self.easy_install(spec, not self.no_deps)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 672, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 634, in _tmpdir
yield str(tmpdir)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 653, in easy_install
return self.install_item(None, spec, tmpdir, deps, True)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 700, in install_item
self.process_distribution(spec, dist, deps)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 745, in process_distribution
[requirement], self.local_index, self.easy_install
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/pkg_resources/init.py", line 863, in resolve
replace_conflicting=replace_conflicting
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/pkg_resources/init.py", line 1141, in best_match
return self.obtain(req, installer)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/pkg_resources/init.py", line 1153, in obtain
return installer(requirement)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 672, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 634, in _tmpdir
yield str(tmpdir)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 672, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 698, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 879, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 1118, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 1104, in run_setup
run_setup(setup_script, args)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 257, in run_setup
raise
File "/opt/conda/envs/pytorch-py35/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 199, in setup_context
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 182, in save_pkg_resources_state
yield saved
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 199, in setup_context
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 170, in save_modules
saved_exc.resume()
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 145, in resume
six.reraise(type, exc, self._tb)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/pkg_resources/_vendor/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 158, in save_modules
yield saved
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 199, in setup_context
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 67, in save_path
yield saved
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 199, in setup_context
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 58, in save_argv
yield saved
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 199, in setup_context
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 84, in override_temp
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 199, in setup_context
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 94, in pushd
yield saved
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 199, in setup_context
yield
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 254, in run_setup
_execfile(setup_script, ns)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/setuptools/sandbox.py", line 49, in _execfile
exec(code, globals, locals)
File "/tmp/easy_install-uh5npp3w/torch-0.1.2.post1/setup.py", line 11, in

RuntimeError: PyTorch does not currently provide packages for PyPI (see status at pytorch/pytorch#566).

Please follow the instructions at http://pytorch.org/ to install with miniconda instead.

The command '/bin/sh -c python setup.py install' returned a non-zero code: 1
