
keras-vggface's Introduction

Hi there 👋

  • 🔭 I'm currently working on Computer Vision and Reinforcement Learning.
  • 🎯 I'm experienced in Deep Learning using PyTorch and TensorFlow.


keras-vggface's People

Contributors

callmek, cvaugh, iamgroot42, nakarinh14, rcmalli, tiangolo


keras-vggface's Issues

weight decay

Is there any way to add weight decay to the implemented VGGFace model? A weight decay of 0.0005 is reported in the original paper.
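
A sketch of one commonly suggested workaround (not a feature of this package): attach an L2 kernel regularizer to every layer of the loaded model and rebuild the model from its config so the penalty is actually registered in the training loss. The 5e-4 value is the one reported in the paper; everything else here is illustrative.

from keras import regularizers
from keras.models import model_from_json
from keras_vggface.vggface import VGGFace

weight_decay = 5e-4  # value reported in the original VGGFace paper

model = VGGFace(model='vgg16')
for layer in model.layers:
    if hasattr(layer, 'kernel_regularizer'):
        layer.kernel_regularizer = regularizers.l2(weight_decay)

# Rebuild from the (now regularized) config and copy the pretrained weights back;
# otherwise the regularizer set above is not picked up by the loss.
weights = model.get_weights()
model = model_from_json(model.to_json())
model.set_weights(weights)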

Want to load the pretrained model from a local file

Please run this code and share your library versions

import tensorflow as tf
import keras
import keras_vggface

print(tf.__version__)
print(keras.__version__)
print(keras_vggface.__version__)

1.8.0
2.1.2
0.5

Bug reports:
I want to load the pretrained model from a local file, not download it on demand.
The file is the same one pointed to by the URL in util.py (RESNET50_WEIGHTS_PATH_NO_TOP = 'https://github.com/rcmalli/keras-vggface/releases/download/v2.0/rcmalli_vggface_tf_notop_resnet50.h5').

But the code below raises a ValueError

My code:

VGGFace(model='resnet50',
        include_top=False,
        input_shape=(224, 224, 3),
        weights='path/to/pretrained/rcmalli_vggface_tf_notop_resnet50.h5',
        pooling='avg')

error report
File "/keras-vggface/keras_vggface/vggface.py", line 64, in VGGFace
raise ValueError('The weights argument should be either '
ValueError: The weights argument should be either None (random initialization) or vggface(pre-training on VGGFace Datasets).
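
One workaround that should sidestep the weights check (a sketch, not the library's documented API): build the architecture with weights=None and then load the local file explicitly, since the no-top file matches the include_top=False graph layer by layer.

from keras_vggface.vggface import VGGFace

model = VGGFace(model='resnet50',
                include_top=False,
                input_shape=(224, 224, 3),
                pooling='avg',
                weights=None)  # random init, no download attempted
model.load_weights('path/to/pretrained/rcmalli_vggface_tf_notop_resnet50.h5')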


How to extract feature in VGG16 for a test image?

There is a way to extract the features of a pretrained model:
import numpy as np
from keras.engine import Model
from keras.preprocessing import image
from keras_vggface.vggface import VGGFace

vgg_model = VGGFace()  # pooling: None, avg or max
out = vgg_model.get_layer('fc7').output
vgg_model_fc7 = Model(vgg_model.input, out)
img = image.load_img(imagepath, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = x[:, :, :, ::-1]
vgg_model_fc7_preds = vgg_model_fc7.predict(x)
print(vgg_model_fc7_preds[0])
Is predict (in the second-to-last line) a tool for feature extraction (getting the output of a layer) or for predicting the person's identity?
If it is for feature extraction, why is it named "predict"?

How did you train the VGGFace model, and what parameters did you use?

I would like to know how you trained the models whose weights are loaded into the chosen architecture (ResNet, SENet).

Did you train the model in Keras? How did you implement the triplet loss, and what training parameters did you use (iterations, learning rate, etc.) to get such good results?

The information I am asking about is for the ResNet-50 architecture.

How to train from scratch for new data.

Hi,
I am interested in this project. I ran it and got good results.
This project uses a pretrained weight file.
Can you share the data, code, or anything else needed to train from scratch?
Thanks

Own Weight File

Hello ,
How can I load my own weight file instead of the vggface weights?
Is there a way to do this through the VGGFace() arguments?

Which version of ResNet-50 train was converted?

In the VGGFace2 paper, the authors have two versions of ResNet-50: one trained from scratch on the VGGFace2 dataset, and another trained on MS-Celeb-1M and fine-tuned on VGGFace2.
Which one was converted to Keras and included in this repository?

Thank you!

senet50

Hello,
Can SENet-50 extract face features now?

I'm trying to test predictions on VGG Face test data and it is not producing the expected results

I have a simple test based on the latest VGG test data:
model = VGGFace(model='resnet50')
img = image.load_img('test/n000001/0485_02.jpg', target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = utils.preprocess_input(x, version=1)
preds = model.predict(x)
print(utils.decode_predictions(preds)[0][0][0])
It produces:
b' Stevie_Ray'
The picture is of the Dalai Lama.
What datasets were used here? Thanks!

What 2017 paper?

'Based on RESNET50 architecture -> new paper(2017) '

I was wondering what paper this refers to, as I cannot see a new VGGFace paper from Oxford in their publication list.

Thanks.

About Senet50

tensorflow v1.4
keras : v2.1.1
keras_vggface : v0.5

Bug reports:

Senet50 architecture is not working as intended. The weights or architecture implementation should be checked again.

README.md example fails

I tried to run the example in the README of this repo and got this:

Traceback (most recent call last):
  File "feature_extractors.py", line 128, in <module>
    main()
  File "feature_extractors.py", line 120, in main
    vgg_model_fc7 = Model(image_input, out)
  File "/home/bh/anaconda3/envs/keras2/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/home/bh/anaconda3/envs/keras2/lib/python3.6/site-packages/keras/engine/topology.py", line 1704, in __init__
    str(layers_with_complete_input))
RuntimeError: Graph disconnected: cannot obtain value for tensor Tensor("input_3:0", shape=(?, 224, 224, 3), dtype=float32) at layer "input_3". The following previous layers were accessed without issue: []

Conda environment (irrelevant packages excluded for clarity):

keras                     2.0.2                    py36_0  
keras-vggface             0.3                       <pip>
numpy                     1.12.1                   py36_0  
opencv                    3.1.0               np112py36_1   
python                    3.6.1                         0  
scipy                     0.19.0              np112py36_0  
tensorflow                1.1.0               np112py36_0  
theano                    0.9.0                    py36_0  
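
For reference, other issues in this list work around the same "Graph disconnected" error by building the feature extractor from the loaded model's own input tensor instead of a fresh Input layer; a minimal sketch:

from keras.engine import Model
from keras_vggface.vggface import VGGFace

vgg_model = VGGFace()                      # builds its own input tensor
out = vgg_model.get_layer('fc7').output
# A new Input(...) tensor is not connected to the VGGFace graph, which is what
# triggers the error above; reuse vgg_model.input instead.
vgg_model_fc7 = Model(vgg_model.input, out)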

Trainable param for predefined layers

TF: 1.13.0-dev20190204
Keras: 2.2.4
Keras-VGGFace 0.5

Keras has a trainable param for layers. I would like to set this to False when I load the predefined weights for fine tuning, so that only the top layers I define are trained. I probably can do this myself with a PR but is there another mechanism to achieve this?
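
A minimal sketch of the standard Keras mechanism for this (nothing keras-vggface-specific is assumed):

from keras_vggface.vggface import VGGFace

base = VGGFace(model='resnet50', include_top=False,
               input_shape=(224, 224, 3), pooling='avg')
for layer in base.layers:
    layer.trainable = False   # freeze the pretrained backbone

# ...then add and compile your own top layers; only those will be updated during training.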

I am trying to use VGGFace with resnet50 but with smaller image dimensions; I was wondering how I can do this without re-training the model from scratch...

Please run this code and share your library versions

import tensorflow as tf
import keras
import keras_vggface

print(tf.__version__)
print(keras.__version__)
print(keras_vggface.__version__)

Using TensorFlow backend.
1.13.1
2.2.4
0.5


**Code Sample:**

I am trying to use VGGFace with resnet50 but with smaller image dimensions; I was wondering how I can do this without re-training the model from scratch...

base_model = VGGFace(model='resnet50', include_top=False, input_shape=(96, 96, 3))

**Error**
Negative dimension size caused by subtracting 7 from 3 for 'vggface_resnet50/avg_pool/AvgPool' (op: 'AvgPool') with input shapes: [?,3,3,2048].
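
One workaround (a sketch, assuming the fixed 7x7 average pool in the ResNet-50 graph is what breaks for small inputs): keep the model at its native 224x224 input and resize the smaller images up at load time. The file name is a placeholder.

import numpy as np
from keras.preprocessing import image
from keras_vggface.vggface import VGGFace
from keras_vggface import utils

base_model = VGGFace(model='resnet50', include_top=False,
                     input_shape=(224, 224, 3), pooling='avg')

img = image.load_img('face_96x96.jpg', target_size=(224, 224))  # upsampled to 224x224
x = utils.preprocess_input(np.expand_dims(image.img_to_array(img), axis=0), version=2)
features = base_model.predict(x)  # shape (1, 2048)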

Training And Testing on own data

Can I know how I can test this on my own data?
I also want to know how the numbers produced after prediction on an image are generated.
I got the number 269 on the same image. I want to know what that number means and also how the images in the pretrained model's dataset are arranged.

TF result seems not to be correct

The results of 'Prediction: TensorFlow backend with tf dimension ordering' and 'Prediction: Theano backend with th dimension ordering' are very different.

Theano backend + th ordering is correct.
Tensorflow backend + th ordering is correct.
But,
Tensorflow backend + tf ordering is wrong.

Thank you.

Extracting fc7 features: why are most of the values 0?

from keras_vggface.vggface import VGGFace
vgg_features = VGGFace(include_top=True, input_shape=(224, 224, 3))
vgg_features.layers.pop()
vgg_features.layers.pop()
vgg_features.outputs = [vgg_features.layers[-1].output]
vgg_features.layers[-1].outbound_nodes = []

I was extracting 4096 features with the code above.
Why does the result array have so many zeros in it?

Finetuning with VGG16, val_loss and val_acc remain constant

Library versions
Tensorflow 1.5.1
Keras 2.2.2
keras_vggface 0.5

Bug reports:
Finetuning vggface (VGG16) using UTK dataset subset
Softmax Multiclass Logistic Regression

val_loss never improves
val_acc remains constant

Code Sample:

nb_class = 7
hidden_dim = 512

logging.debug("Loading data...")
image, gender, age, _, image_size, _ = load_data(input_path)
X_data = image
y_data_a = np_utils.to_categorical(age, 7)

vgg_model = VGGFace(include_top=False, input_shape=(224, 224, 3))

for layer in vgg_model.layers:
    layer.trainable = False

last_layer = vgg_model.get_layer('pool5').output
x = Flatten(name='flatten')(last_layer)
x = Dense(hidden_dim, activation='relu', name='fc6')(x)
x = Dense(hidden_dim, activation='relu', name='fc7')(x)
out = Dense(nb_class, activation='softmax', name='fc8')(x)
model = Model(vgg_model.input, out)

sgd = SGD(lr=0.00001, momentum=0.9)
model.compile(optimizer=sgd, loss=["categorical_crossentropy"],
              metrics=['accuracy'])
...
hist = model.fit(X_train, y_train_a, batch_size=batch_size, epochs=nb_epochs, callbacks=callbacks,
                     validation_data=(X_test, y_test_a))

Predictions Incorrect

I've seen a number of issues raised about the predictions not being correct. When predicting on a face that is included in the VGGFace2 dataset, I get all incorrect predictions. I've included a screenshot of the predictions. I am using the version 2 preprocessing. The only change I made to the code was to replace keras with tensorflow.keras. However, I tested using base keras as well, and I am not seeing correct predictions there either.

tf.keras.backend.image_data_format()
model = VGGFace(model='resnet50', weights='vggface')

def predictFacePic(imagePath):
    img = image.load_img(imagePath, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = utils.preprocess_input(x, version=2)
    preds = model.predict(x)
    return utils.decode_predictions(preds)

[Screenshot of the incorrect predictions, 2019-10-15]

load_model from keras.models

Versions

print(tf.__version__)
print(keras.__version__)
print(keras_vggface.__version__)


1.14.0
2.2.4
0.6

Bug reports:

I am trying to load the rcmalli_vggface_tf_resnet50.h5 model via load_model. I would like to replace VGGFace(model='resnet50') because it needs an internet connection to fetch the URL, which is why I get this error:
URL fetch failure on https://github.com/rcmalli/keras-vggface/releases/download/v2.0/rcmalli_vggface_tf_notop_resnet50.h5

from keras.models import load_model
load_model('.\\models\\rcmalli_vggface_tf_resnet50.h5')

After calling load_model I get this error:

ValueError: Cannot create group in read only mode.
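
A sketch of a possible workaround, under the assumption that the released .h5 file contains only weights (which is what "Cannot create group in read only mode" usually indicates): build the architecture first, then load the weights from disk.

from keras_vggface.vggface import VGGFace

model = VGGFace(model='resnet50', weights=None)   # no download attempted
model.load_weights('.\\models\\rcmalli_vggface_tf_resnet50.h5')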

About Prediction Result

Hi @rcmalli, I recently tried your model's prediction example, and it returned a number, like "Predicted: 30". What does the number mean? And why did the prediction give two totally different numbers for two different pictures of the same person?
Thanks.

512 Features instead of 4096

Hi there, thank you for posting this implementation. I have a question. I am confused about why I seem to get 512-dimensional representations when I set include_top=False, instead of 4096 dimensions. I can get 4096-dimensional representations by setting include_top=True and using the output of layer fc7, but I am curious about what is going on in the former case. The 512-dimensional representations also seem to perform better for comparing face similarity.
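
For reference, a sketch of the two configurations being compared (the 512 comes from VGG16's last convolutional block after global pooling, while fc7 is a 4096-unit layer; layer names follow the examples elsewhere in these issues):

from keras.engine import Model
from keras_vggface.vggface import VGGFace

# Pooled convolutional features: the last VGG16 conv block has 512 channels.
conv_extractor = VGGFace(model='vgg16', include_top=False, pooling='avg')
print(conv_extractor.output_shape)   # (None, 512)

# Fully connected features: fc7 outputs 4096 values.
full_model = VGGFace(model='vgg16', include_top=True)
fc7_extractor = Model(full_model.input, full_model.get_layer('fc7').output)
print(fc7_extractor.output_shape)    # (None, 4096)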

How to extract feature in fc7 for a test image?

Sorry to bother you. I am new to face recognition and Keras. I want to use your feature extraction code to get the face features from fc7, but I find that most elements of the fc7 feature are zero. Is this right?
Below is my code:

from keras.engine import Model
from keras.layers import Input
from keras_vggface import VGGFace
import numpy as np
from keras.preprocessing import image

image_input = Input(shape=(224, 224, 3))
# for theano uncomment
# image_input = Input(shape=(3,224, 224))

# Convolution Features
#vgg_model_conv = VGGFace(include_top=False, pooling='avg') # pooling: None, avg or max
#vgg_model_conv = VGGFace(include_top=False) # pooling: None, avg or max

# FC7 Features
vgg_model = VGGFace() # pooling: None, avg or max
out = vgg_model.get_layer('fc7').output
#vgg_model_fc7 = Model(image_input, out)
vgg_model_fc7 = Model(vgg_model.input, out)

# Change the image path with yours.
img = image.load_img('images/chip_2.png', target_size=(224, 224))
print(type(img))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
# TF order aka 'channel-last'
x = x[:, :, :, ::-1]
# TH order aka 'channel-first'
# x = x[:, ::-1, :, :]
# Zero-center by mean pixel
x[:, 0, :, :] -= 93.5940
x[:, 1, :, :] -= 104.7624
x[:, 2, :, :] -= 129.1863

vgg_model_fc7_preds = vgg_model_fc7.predict(x)
print(vgg_model_fc7_preds[0])

Setup fails on Theano

When TensorFlow is not installed, setup fails. setup.py checks only for TensorFlow.

LFW benchmarking

Hi --

Have you ever used this package to reproduce the LFW benchmark numbers as reported in the original paper?

Thanks
Ben

VGG-Face (2015 model) - average face image?

Hi @rcmalli

First of all, thank you for sharing this amazing work/repository.

I'm using transfer learning to fine-tune VGG-Face (2015 model).

I know I have to apply the same image pre-processing to my training images as in the original paper (i.e. "The input to all networks is a face image of size 224×224 with the average face image (computed from the training set) subtracted"), but I have a doubt: should I use the average face image of the original training dataset, or the average face image of my own training dataset?

I've tried to find the answer but without success. Any clues?
Thanks

ImportError: cannot import name _ni_support

Using TensorFlow backend.
Traceback (most recent call last):
  File "test.py", line 3, in <module>
    from keras.preprocessing import image
  File "/home/user/anaconda2/envs/vggface/lib/python2.7/site-packages/keras/preprocessing/image.py", line 11, in <module>
    import scipy.ndimage as ndi
  File "/home/user/anaconda2/envs/vggface/lib/python2.7/site-packages/scipy/ndimage/__init__.py", line 161, in <module>
    from .filters import *
  File "/home/user/anaconda2/envs/vggface/lib/python2.7/site-packages/scipy/ndimage/filters.py", line 35, in <module>
    from . import _ni_support
ImportError: cannot import name _ni_support

I have reinstalled the scipy package, but the same thing is still happening. Any ideas? Thanks

TypeError: _obtain_input_shape() got an unexpected keyword argument 'require_flatten'

I downloaded keras_vggface version 4.0, but this error still exists.
I call the function like this:
vgg_model_conv = VGGFace(include_top=False, input_shape=(224, 224, 3), pooling='avg')

Then I changed require_flatten to include_top, and it works:

input_shape = _obtain_input_shape(input_shape,
                                  default_size=224,
                                  min_size=48,
                                  data_format=K.image_data_format(),
                                  include_top=include_top)

utils.preprocess_input() does not work for a batch of images (like shape = (60, 24, 24, 3))

Please run this code and share your library versions

import tensorflow as tf
import keras
import keras_vggface

print(tf.__version__)
print(keras.__version__)
print(keras_vggface.__version__)


Wrong image preprocessing

Hi,

Thank you very much for providing this keras version code.

I just want to mention that I think the image preprocessing in your code is wrong.

Directly resizing to 224×224 may cause a problem and degrade performance, because the aspect ratio of the face is changed.

Best
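
For illustration, a sketch of aspect-ratio-preserving preprocessing (scale the shorter side to 224, then center-crop); the exact crop policy here is an assumption, not the repository's own pipeline:

import numpy as np
from PIL import Image

def load_face(path, size=224):
    img = Image.open(path).convert('RGB')
    w, h = img.size
    scale = float(size) / min(w, h)
    img = img.resize((int(round(w * scale)), int(round(h * scale))), Image.BILINEAR)
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2          # center crop
    img = img.crop((left, top, left + size, top + size))
    return np.asarray(img, dtype='float32')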

about Zero-center by mean pixel

# TF order aka 'channel-last'
x = x[:, :, :, ::-1]
# Zero-center by mean pixel
x[:, 0, :, :] -= 93.5940
x[:, 1, :, :] -= 104.7624
x[:, 2, :, :] -= 129.1863

Shouldn't the last three lines be the following?

x[:, :, :, 0] -= 93.5940
x[:, :, :, 1] -= 104.7624
x[:, :, :, 2] -= 129.1863

It makes no sense to zero-center the first three rows of an image.

cannot import name '_obtain_input_shape'

Bug reports:

File "test.py", line 3, in
import keras_vggface
File "/home/tung/.local/lib/python3.6/site-packages/keras_vggface/init.py", line 1, in
from keras_vggface.vggface import VGGFace
File "/home/tung/.local/lib/python3.6/site-packages/keras_vggface/vggface.py", line 9, in
from keras_vggface.models import RESNET50, VGG16, SENET50
File "/home/tung/.local/lib/python3.6/site-packages/keras_vggface/models.py", line 15, in
from keras.applications.imagenet_utils import _obtain_input_shape
ImportError: cannot import name '_obtain_input_shape'

Code Sample:

import keras_vggface
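
A commonly reported workaround (assuming Keras >= 2.2, where this helper moved to the separate keras_applications package) is to edit the import in keras_vggface/models.py, or alternatively to pin an older Keras release:

# in keras_vggface/models.py, replace the failing import
# from keras.applications.imagenet_utils import _obtain_input_shape
from keras_applications.imagenet_utils import _obtain_input_shape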

Triplet loss for training

Can I know whether you used a triplet-based loss to train the VGG16 and ResNet-50 models?

Thank you very much

Port caffemodel to hdf5

@rcmalli The original VGGFace1 .caffemodel is around 553 MB; can you share the links/method you used to convert these models to ~90 MB HDF5 files without loss of accuracy? I am using your ported model for a face verification task; now I want to train my own model in Caffe and port it to Keras.

How I finetune the last FC layer?

First of all thank you for your efforts.

I am trying to use the VGGFace model to do a Facial Expression Recognition System.
To do it I am trying to train only the last Fully Connected (FC) layer (the layer before the Softmax) with a dataset of 1576 images (8 classes * 197 pics per class).

I tried different approaches such as:

  1. Separating the model into two submodels, the convolutional part and the FC part. But then I can't merge them back together.
  2. Freezing the conv layers (trainable=False) and training the FC layers. But I get a dimension error.

This is what I have : http://stackoverflow.com/questions/40692495/keras-error-with-training-dimension-is-not-what-is-expected

I'd really like to use your model (I already have it implemented, but the training stage doesn't seem to work). If you can explain how to do the fine-tuning and then merge the models, I would be very grateful.

Thank you very much.
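
A minimal sketch of the second approach (freeze the pretrained layers and train only a new classifier on top of fc7); the 8-class head and the new layer name are assumptions for illustration:

from keras.engine import Model
from keras.layers import Dense
from keras_vggface.vggface import VGGFace

nb_class = 8   # 8 expression classes as described above

vgg_model = VGGFace(model='vgg16', include_top=True)
for layer in vgg_model.layers:
    layer.trainable = False                     # keep all pretrained weights fixed

fc7 = vgg_model.get_layer('fc7').output         # pretrained fc7 features
out = Dense(nb_class, activation='softmax', name='fc8_expressions')(fc7)
model = Model(vgg_model.input, out)
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])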

Example code errors out

keras.__version__, tf.__version__
('2.0.5', '1.3.0')
import numpy as np
from keras.preprocessing import image
from keras_vggface.vggface import VGGFace
from keras_vggface import utils
from keras.layers import Flatten, Dense, Input

# tensorflow
model = VGGFace()

error dump

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-58-e1aca8cf44e3> in <module>()
      6 
      7 # tensorflow
----> 8 model = VGGFace()

/opt/conda/lib/python3.6/site-packages/keras_vggface/vggface.py in VGGFace(include_top, weights, input_tensor, input_shape, pooling, classes)
     79                                       min_size=48,
     80                                       data_format=K.image_data_format(),
---> 81                                       require_flatten=include_top)
     82 
     83     if input_tensor is None:

TypeError: _obtain_input_shape() got an unexpected keyword argument 'require_flatten'

keras_vggface on tensorflow-gpu

The versions are:
tensorflow-gpu: 1.4
keras: 2.1.6

Problem:
When I install keras_vggface with pip, it uninstalls my tensorflow-gpu and installs tensorflow, which uses the CPU. I want it to use the GPU.

Please tell me the solution for it.

Thanks

Error when running: tensorflow ImportError: cannot import name 'utils'

When running the example I get this error:

ImportError                               Traceback (most recent call last)
<ipython-input-2-6bbdce0116de> in <module>()
      2 from keras.preprocessing import image
      3 from keras_vggface.vggface import VGGFace
----> 4 from keras_vggface import utils
      5 
      6 # tensorflow

ImportError: cannot import name 'utils'

Pre-processing of images

Hey,
I'd be grateful if someone could tell me whether it is necessary to preprocess our own images when fine-tuning these models.

In that case, do I need to perform my own mean subtraction on those particular images?

Thank you.

How can I add layers by myself

Thank you for the code.
I want to define my own fully connected layers and use your include_top=False option.
But since the return value is a Model() object, I think I cannot add layers. What should I do?

Basically, I want to change nb_class.
Thank you.
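
A sketch of one way to attach custom layers when VGGFace() returns a Model object: wrap its output in new layers and build a new Model rather than mutating the returned one (nb_class and the dense size below are placeholders):

from keras.engine import Model
from keras.layers import Dense
from keras_vggface.vggface import VGGFace

nb_class = 10
base = VGGFace(include_top=False, input_shape=(224, 224, 3), pooling='avg')

x = Dense(512, activation='relu', name='my_fc')(base.output)
out = Dense(nb_class, activation='softmax', name='my_classifier')(x)
model = Model(base.input, out)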

Align faces before feature generation and comparison?

All,
I'm experiencing very poor results when comparing images which have the face in different parts of the image (and perhaps different sizes), e.g. portrait photo vs a full-body photo.

I suspect that a face detection and alignment stage needs to be run. Does this need to match the specific alignment undertaken in the training stage? Are there pixel coordinates for the eyes to be placed? Presumably everyone is performing processing of this type before matching?

Any help is very much appreciated.
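
For illustration, a sketch of a detect-and-crop step before feature extraction, using the third-party mtcnn package as an assumption (any face detector would do); the margin and crop policy are illustrative:

import numpy as np
from PIL import Image
from mtcnn.mtcnn import MTCNN

detector = MTCNN()

def crop_face(path, size=224, margin=0.2):
    # Detect a face and return a size x size crop around the first detection.
    pixels = np.asarray(Image.open(path).convert('RGB'))
    faces = detector.detect_faces(pixels)
    if not faces:
        return None
    x, y, w, h = faces[0]['box']
    pad = int(margin * max(w, h))                 # loose margin around the detector box
    y0, x0 = max(y - pad, 0), max(x - pad, 0)
    crop = pixels[y0:y + h + pad, x0:x + w + pad]
    return Image.fromarray(crop).resize((size, size))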

The example on Feature extraction

For feature extraction, wouldn't it simply be the code for "Prediction" except I would do:

model = VGGFace(include_top=False, pooling='avg')

(this is with reference to Keras' documentation on using their VGG16 model for feature extraction)

Moreover, I see that your code for pre-processing an image is similar to Keras' preprocessing function in imagenet_utils, except I noticed that the values for zero-centering by mean pixel are different; are they supposed to be?

Thanks!!
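
A short sketch of that suggestion, reusing the preprocessing shown elsewhere in these issues (version=1 corresponds to the VGG16 mean-pixel values; the image path is a placeholder):

import numpy as np
from keras.preprocessing import image
from keras_vggface.vggface import VGGFace
from keras_vggface import utils

model = VGGFace(include_top=False, pooling='avg')   # VGG16 backbone, 512-d output

img = image.load_img('face.jpg', target_size=(224, 224))
x = utils.preprocess_input(np.expand_dims(image.img_to_array(img), axis=0), version=1)
features = model.predict(x)                         # shape (1, 512)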
