keras-facenet's Introduction

keras-facenet

A FaceNet implementation in Keras 2.

Pretrained model

You can quickly get started with FaceNet using the pretrained Keras model (trained on the MS-Celeb-1M dataset).

  • Download the model from here and save it in model/keras/

You can also create the Keras model from the pretrained TensorFlow model.
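
Loading the downloaded model is then a one-liner. A minimal sketch (the path assumes you saved the file as suggested above; adjust it to your layout):

from keras.models import load_model

model_path = 'model/keras/model/facenet_keras.h5'
model = load_model(model_path)
model.summary()  # input: (None, 160, 160, 3) face crops, output: (None, 128) embeddings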

Demo

Environments

Ubuntu 16.04 or Windows 10
Python 3.6.2
TensorFlow 1.3.0
Keras 2.1.2

keras-facenet's People

Contributors

nyoki-mtl


keras-facenet's Issues

Different probability predictions every time

Hi folks,
I use the predict_proba method of the clf SVC object to estimate the prediction probability, and it goes something like this:

pred = le.inverse_transform(clf.predict(embs))
pred1 = clf.predict_proba(embs)
pred1 = max(max(pred1.tolist()))

if pred1 > 0.71:
    print(pred[0], 'detected with probability', pred1)

The problem I'm facing is that it gives me different probabilities for the same trained face, under the same lighting conditions, every time on my built-in Dell laptop webcam feed. Also, when an untrained face stands in front of the camera, the probability of some trained face is always higher than that of the untrained face. Do you think the problem arises because I'm using an inferior 0.3 MP 640x480 feed, compared to training on a higher-resolution, higher-quality image dataset? Do the dataset and the webcam feed have to be the same resolution? I really need help, as this is a major cause of headache.

Error in demo-images notebook

Getting the following error while loading the model (on tensorflow-gpu 1.13.1):

Any guess?

model_path = '../model/keras/model/facenet_keras.h5'
model = load_model(model_path)


WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.

IndexError Traceback (most recent call last)
<ipython-input> in <module>
1 model_path = '../model/keras/model/facenet_keras.h5'
----> 2 model = load_model(model_path)

/usr/local/lib/python3.5/dist-packages/keras/engine/saving.py in load_model(filepath, custom_objects, compile)
417 f = h5dict(filepath, 'r')
418 try:
--> 419 model = _deserialize_model(f, custom_objects, compile)
420 finally:
421 if opened_new_file:

/usr/local/lib/python3.5/dist-packages/keras/engine/saving.py in _deserialize_model(f, custom_objects, compile)
223 raise ValueError('No model found in config.')
224 model_config = json.loads(model_config.decode('utf-8'))
--> 225 model = model_from_config(model_config, custom_objects=custom_objects)
226 model_weights_group = f['model_weights']
227

/usr/local/lib/python3.5/dist-packages/keras/engine/saving.py in model_from_config(config, custom_objects)
456 'Sequential.from_config(config)?')
457 from ..layers import deserialize
--> 458 return deserialize(config, custom_objects=custom_objects)
459
460

/usr/local/lib/python3.5/dist-packages/keras/layers/__init__.py in deserialize(config, custom_objects)
53 module_objects=globs,
54 custom_objects=custom_objects,
---> 55 printable_module_name='layer')

/usr/local/lib/python3.5/dist-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
143 config['config'],
144 custom_objects=dict(list(_GLOBAL_CUSTOM_OBJECTS.items()) +
--> 145 list(custom_objects.items())))
146 with CustomObjectScope(custom_objects):
147 return cls.from_config(config['config'])

/usr/local/lib/python3.5/dist-packages/keras/engine/network.py in from_config(cls, config, custom_objects)
1030 if layer in unprocessed_nodes:
1031 for node_data in unprocessed_nodes.pop(layer):
-> 1032 process_node(layer, node_data)
1033
1034 name = config.get('name')

/usr/local/lib/python3.5/dist-packages/keras/engine/network.py in process_node(layer, node_data)
989 # and building the layer if needed.
990 if input_tensors:
--> 991 layer(unpack_singleton(input_tensors), **kwargs)
992
993 def process_layer(layer_data):

/usr/local/lib/python3.5/dist-packages/keras/engine/base_layer.py in __call__(self, inputs, **kwargs)
455 # Actually call the layer,
456 # collecting output(s), mask(s), and shape(s).
--> 457 output = self.call(inputs, **kwargs)
458 output_mask = self.compute_mask(inputs, previous_mask)
459

/usr/local/lib/python3.5/dist-packages/keras/layers/core.py in call(self, inputs, mask)
685 if has_arg(self.function, 'mask'):
686 arguments['mask'] = mask
--> 687 return self.function(inputs, **arguments)
688
689 def compute_mask(self, inputs, mask=None):

/usr/local/lib/python3.5/dist-packages/keras/layers/core.py in scaling(inputs, scale)
88 rate: float between 0 and 1. Fraction of the input units to drop.
89 noise_shape: 1D integer tensor representing the shape of the
---> 90 binary dropout mask that will be multiplied with the input.
91 For instance, if your inputs have shape
92 (batch_size, timesteps, features) and

IndexError: tuple index out of range

EOFError

Unable to load the pre-trained model shared using Google Drive.
Got the following error.

EOFError: EOF read where object expected

Error in demo-images notebook

Hi, I'm getting the following error when I run the 9th cell. I tried changing image_dirpath and rechecking the for loops, but neither worked:

FileNotFoundError Traceback (most recent call last)
<ipython-input> in <module>
2 for name in names:
3 image_dirpath = image_dir_basepath + name
----> 4 image_filepaths = [os.path.join(image_dirpath, f) for f in os.listdir(image_dirpath)]
5 embs = calc_embs(image_filepaths)
6 for i in range(len(image_filepaths)):

FileNotFoundError: [WinError 3] The system cannot find the path specified: '../data/images/LarryPage'

The 9th cell:

data = {}
for name in names:
    image_dirpath = image_dir_basepath + name
    image_filepaths = [os.path.join(image_dirpath, f) for f in os.listdir(image_dirpath)]
    embs = calc_embs(image_filepaths)
    for i in range(len(image_filepaths)):
        data['{}{}'.format(name, i)] = {'image_filepath': image_filepaths[i],
                                        'emb': embs[i]}
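
For what it's worth, the error just means the directory '../data/images/LarryPage' does not exist on disk. A hedged variant of the cell (assuming the notebook's names, image_dir_basepath, and calc_embs) that skips missing directories instead of crashing:

import os

data = {}
for name in names:
    image_dirpath = os.path.join(image_dir_basepath, name)
    if not os.path.isdir(image_dirpath):
        # Skip identities whose image folder was never created.
        print('Skipping missing directory:', image_dirpath)
        continue
    image_filepaths = [os.path.join(image_dirpath, f) for f in os.listdir(image_dirpath)]
    embs = calc_embs(image_filepaths)
    for i in range(len(image_filepaths)):
        data['{}{}'.format(name, i)] = {'image_filepath': image_filepaths[i],
                                        'emb': embs[i]}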

h5 model and weight file

I followed the notebook files, but there is a problem creating the Keras model file from the TF checkpoint:
the SVM and image results are weird, although the h5 file was created successfully.
I think the conversion process has a problem in my case.
Can you upload your Keras model file?

My environment: TF 1.3, Keras 2.1.2, Python 2.7 or 3.6

Model Architecture

Can you please post the code for the model architecture, or provide the model as a JSON file? That would help with loading the weights under different Python versions.
Thank you
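
A minimal sketch using the standard Keras API (not repo-specific) for exporting the architecture and weights separately; note that the Lambda layers in this model embed serialized Python code, so this may still hit the cross-version issues reported below ('bad marshal data'):

# Export architecture (JSON) and weights (HDF5) separately.
json_config = model.to_json()
with open('facenet_architecture.json', 'w') as f:
    f.write(json_config)
model.save_weights('facenet_weights.h5')

# In another environment:
from keras.models import model_from_json
with open('facenet_architecture.json') as f:
    model = model_from_json(f.read())
model.load_weights('facenet_weights.h5')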

Bug in inception_resnet_v1.py

As loading the provided file does not work with a different Python version, I tried to convert the weights from David Sandberg's pre-trained model via the provided code. The resulting embeddings were arbitrary, as already mentioned in one of the other comments. I tracked the issue down to a bug in the inception_resnet_v1.py code, line 102:

x = Lambda(scaling,
           output_shape=K.int_shape(up)[1:],
           arguments={'scale': scale})(up)

should actually say

up = Lambda(scaling,
            output_shape=K.int_shape(up)[1:],
            arguments={'scale': scale})(up)

as x is the input to the block that needs to go into the final addition with the scaled result computed in the block. However, with the current state of the code, x will be overwritten and the output of the block will be a scaled version of the result computed in the block.

Adding this change enabled me to export a model that gives me the same embeddings as David Sandberg's model.
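
With that fix, the end of the block presumably reads as follows (a sketch reconstructed from the description above; add is keras.layers.add):

up = Lambda(scaling,
            output_shape=K.int_shape(up)[1:],
            arguments={'scale': scale})(up)
x = add([x, up])  # residual addition: block input plus scaled branch output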

Best,
Kira

Training in demo-webcam

After executing the line

f.train()

The following error is returned

ValueError                                Traceback (most recent call last)
<ipython-input-10-770cb541f283> in <module>()
----> 1 f.train()

<ipython-input-7-720c27c39f20> in train(self)
     73             embs.append(embs_)
     74 
---> 75         embs = np.concatenate(embs)
     76         le = LabelEncoder().fit(labels)
     77         y = le.transform(labels)

<__array_function__ internals> in concatenate(*args, **kwargs)

ValueError: need at least one array to concatenate

What is the problem, and how do I solve it?
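
For context (an inference from the traceback, not an official fix): embs is empty because self.data has no entries, which happens when capture_images() never collects a full set of n_img_per_person frames. A quick sanity check before training:

# Hedged check: train() concatenates one embedding array per captured
# person, so it fails if nothing was captured.
if not f.data:
    print('No captured images yet; run f.capture_images(name) first')
else:
    f.train()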

License

Could you please let me know the license type of this repository?

image normalization

I was able to create the embedding using the converted Core ML Keras model, but the input image was not normalized, so the output was incorrect.

I used the TensorFlow method tf.image.per_image_standardization(image) to train the data, so I would like to use a similar method. Do I need to implement it myself in Swift, or should I add the layer to the Keras model?
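
For reference, the same normalization in NumPy; it matches the prewhiten helper used in this repo's demo code (shown in a later issue). Note that Core ML's fixed scale/bias image preprocessing cannot express a per-image statistic, so porting this to Swift (or feeding pre-normalized arrays) is the safer route:

import numpy as np

def per_image_standardization(image):
    # Mirrors tf.image.per_image_standardization: zero mean, unit
    # variance, with std clamped to 1/sqrt(N) for near-constant images.
    mean = image.mean()
    adjusted_std = max(image.std(), 1.0 / np.sqrt(image.size))
    return (image - mean) / adjusted_std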

Error message running tf_to_keras notebook

When running the last cell of the tf_to_keras notebook (changing the model folder and checkpoint to the latest model available), I get the following error:

Loading numpy weights from ../model/keras/npy_weights/
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-14-903bce1a4b25> in <module>
      8             weight_arr = np.load(os.path.join(npy_weights_dir, weight_file))
      9             weights.append(weight_arr)
---> 10         layer.set_weights(weights)
     11 
     12 print('Saving weights...')

~/anaconda3/envs/venv/lib/python3.6/site-packages/keras/engine/base_layer.py in set_weights(self, weights)
   1055                                  str(pv.shape) +
   1056                                  ' not compatible with '
-> 1057                                  'provided weight shape ' + str(w.shape))
   1058             weight_value_tuples.append((p, w))
   1059         K.batch_set_value(weight_value_tuples)

ValueError: Layer weight shape (1792, 128) not compatible with provided weight shape (1792, 512)

Any insight would be really appreciated...
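
A plausible cause (an assumption, not confirmed in this thread): the newer pretrained checkpoints produce 512-dimensional embeddings, while the notebook builds the model with a 128-dimensional bottleneck, hence the (1792, 128) vs (1792, 512) mismatch. If the repo's model builder exposes the embedding size, matching it would look roughly like this ('classes' is the assumed parameter name):

# Hypothetical: build a 512-d model to match the newer checkpoint.
from inception_resnet_v1 import InceptionResNetV1

model = InceptionResNetV1(input_shape=(160, 160, 3), classes=512)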

Unable to load pre-trained model

Unable to load the pre-trained model shared using Google Drive.
Got the following error.

ValueError: bad marshal data (unknown type code)

Core ML model output

The Core ML model output seems not to be scaled; how much scale should I apply while converting?
Keras output:
array([[-2.6317966e-01, -3.3678617e-02, 2.2661042e+00, -6.7546511e-01,
2.6711123e+00, 2.1380582e+00, 3.1932313e+00, -1.0266680e+00,
-1.2064759e+00, -3.7772853e+00, 1.4777870e+00, -4.5663638e+00,
5.3783691e-01, 6.9581866e-01, -3.2163723e+00, -2.2745862e+00,
1.9304838e+00, -4.4520035e+00, -1.1737602e+00, -4.1146436e+00,
2.8376224e+00, -6.1216235e+00, 8.1349170e-01, 4.7025423e+00,
2.4575691e+00, 4.7649293e+00, 3.1503294e+00, 1.7728921e-02,
-3.4083718e-01, -5.7512856e+00, -1.0213958e+00, 5.9897904e+00,
-2.0491607e+00, 2.1902418e+00, -2.5166994e-01, 1.1277127e+00,
4.1162009e+00, -2.1709535e+00, 4.1017589e-01, 7.4210954e-01,
2.5226521e+00, -2.1406457e-01, 1.8106477e+00, 7.8078407e-01,
-5.4581828e+00, 3.3939710e+00, 3.6321692e+00, 5.1154441e-01,
-5.9473997e-01, -6.4269938e+00, 4.8341846e+00, 1.7983266e+00,
9.3833292e-01, 1.2895137e+00, 1.9546670e+00, 6.5272875e+00,
9.4690269e-01, -5.3456831e+00, 2.5641103e+00, 4.0326028e+00,
-2.4558032e+00, -6.3662320e-01, 1.0621898e+00, 1.5926079e+00,
5.3224262e-02, 1.2270166e+00, 2.2601867e+00, 7.6212996e-01,
4.0221577e+00, 3.7582583e+00, 1.6832346e+00, 3.1141477e+00,
-2.5827744e+00, 3.7479327e+00, -2.5769188e+00, 3.5341086e+00,
-1.0932776e+00, 2.3388093e+00, -3.6909742e+00, -3.2998741e+00,
-1.4890214e-02, 2.6787040e+00, 1.6687028e+00, -1.0944015e+00,
6.9286877e-01, 1.8361024e+00, 4.1195717e+00, -6.4409890e+00,
9.2169154e-01, 2.6727970e+00, 2.2134098e-01, -3.7779233e+00,
8.3170211e-01, -2.0311317e+00, 6.9612050e-01, 7.6587849e+00,
3.3811717e+00, -3.0122232e+00, 2.9068935e+00, 2.9820104e+00,
-3.9757938e+00, -3.4195602e+00, -1.9991946e-01, 2.5219235e-01,
-3.4377544e+00, -7.1006083e-01, -5.6879859e+00, 3.5768299e+00,
1.8099457e+00, -1.5660530e+00, -5.2569504e+00, -1.1321362e+00,
2.7814114e+00, -3.8275607e+00, 3.7700075e-01, 2.9798200e+00,
-1.3275119e+00, 8.1083018e-01, -8.8596046e-01, -6.0629940e+00,
1.9946741e+00, -4.0994754e+00, 3.5792630e+00, 1.3484008e+00,
-1.8742681e-04, 1.3709370e+00, 3.6508691e+00, -3.8959811e+00]],
dtype=float32)
Converted model output:
[855.0784912109375,-4063.724365234375,737.659912109375,-7393.7275390625,-2245.569580078125,-1595.0166015625,-2120.451904296875,3890.713623046875,509.8944702148438,-3713.39208984375,-166.2485809326172,2475.9208984375,12017.4716796875,1453.450561523438,-4773.72509765625,-3715.33837890625,1487.673828125,-2976.453369140625,2839.177734375,987.212646484375,-139.4358978271484,-1500.54736328125,-1800.843627929688,2315.100830078125,2692.251220703125,1220.254272460938,8099.76123046875,-2944.90283203125,-1743.339721679688,-7470.42041015625,-2737.231689453125,4109.24609375,8797.529296875,-2345.7978515625,-7959.0234375,332.7884826660156,2513.694091796875,550.3179321289062,-1115.5908203125,1945.220092773438,3336.918212890625,1632.697387695312,435.2661743164062,1182.497314453125,-3632.15478515625,-2488.587890625,-566.3248901367188,-66.20777130126953,6390.865234375,-10544.40625,3047.216552734375,2446.52197265625,-3668.0166015625,-3492.718017578125,2651.781494140625,-1474.755859375,-407.1371154785156,-1065.56591796875,83.17457580566406,890.1207885742188,871.6825561523438,-945.396728515625,-3681.1923828125,2485.04736328125,3726.015380859375,-1097.35205078125,1911.19677734375,-2232.069580078125,1297.970458984375,-1134.264770507812,-4204.21142578125,3786.914794921875,-2623.253173828125,-1265.18505859375,1580.590454101562,-5637.24072265625,-2561.71240234375,314.7735595703125,2296.7470703125,-5458.6435546875,-388.1426696777344,3363.203857421875,2543.2294921875,2422.521240234375,-1095.759887695312,7738.43359375,-63.55666732788086,-5111.3408203125,-4203.83837890625,3556.224609375,2779.212890625,118.7055282592773,924.0149536132812,-5947.62548828125,1709.426879882812,6934.91650390625,4084.002197265625,-1921.875854492188,1726.5087890625,453.2798767089844,-857.8190307617188,-566.110595703125,5079.982421875,351.6022644042969,-6870.47802734375,2958.294921875,-4496.14794921875,1221.89111328125,-1598.419189453125,-2361.905517578125,-6226.74462890625,-1889.393798828125,-171.6868286132812,-2751.922607421875,2748.287841796875,3023.27392578125,-5400.4267578125,1489.955200195312,6292.98828125,-3763.5556640625,-2693.62060546875,-763.9926147460938,-3135.6083984375,4158.576171875,2564.194580078125,-1977.328979492188,352.8983154296875,-95.43745422363281]
Converted model output with 1/255 scale:
[22.72692,-111.8219,21.57746,-205.979,-62.09193,-43.63466,-59.90541,105.5358,13.35077,-102.3288,-4.489635,66.16525,330.7473,41.02428,-131.7471,-101.4517,41.12999,-79.95216,79.33298,25.1811,-3.271009,-39.42453,-51.64153,63.79717,73.8793,35.71499,223.8051,-79.38886,-47.4924,-205.7483,-75.49256,114.1817,242.6053,-62.9214,-216.9569,9.832993,68.27296,13.29188,-28.98055,55.34957,91.76531,43.88745,13.52798,32.19314,-102.5722,-70.43962,-14.59233,-4.072866,176.7958,-292.1834,82.75787,68.06502,-100.6124,-96.46475,73.84764,-40.1497,-11.05663,-29.77695,2.676188,21.43342,22.9453,-27.11697,-101.4999,71.70054,102.3595,-28.63747,53.31374,-59.51252,37.61403,-32.10843,-116.7825,103.3929,-68.22962,-37.54702,42.9712,-152.4216,-69.04482,8.215605,63.2452,-151.812,-11.31077,91.50224,68.38104,70.26328,-26.96819,211.9508,-1.72123,-140.1563,-116.7873,101.6846,77.99618,4.845797,24.48302,-165.1091,46.5722,191.0615,111.0762,-53.38795,47.65784,10.37083,-23.10145,-12.50677,140.9233,8.339808,-187.8266,82.70412,-125.4531,34.69426,-45.9505,-68.5683,-168.2019,-50.66414,-6.257753,-73.38413,78.46208,84.6222,-147.3123,42.28506,176.3857,-100.7463,-75.18451,-20.3528,-85.68301,115.6824,72.76437,-57.86979,12.71073,-1.227953]

Got errors when run webcam demo

My code:

#coding:utf-8
import numpy as np
import cv2
import matplotlib.pyplot as plt
import signal
from IPython import display
from sklearn.svm import SVC
from sklearn.preprocessing import LabelEncoder
from skimage.transform import resize
from keras.models import load_model
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'

cascade_path = './model/cv2/haarcascade_frontalface_alt2.xml'

model_path = './model/keras/model/facenet_keras.h5'
model = load_model(model_path)

def prewhiten(x):
    if x.ndim == 4:
        axis = (1, 2, 3)
        size = x[0].size
    elif x.ndim == 3:
        axis = (0, 1, 2)
        size = x.size
    else:
        raise ValueError('Dimension should be 3 or 4')

    mean = np.mean(x, axis=axis, keepdims=True)
    std = np.std(x, axis=axis, keepdims=True)
    std_adj = np.maximum(std, 1.0/np.sqrt(size))
    y = (x - mean) / std_adj
    return y

def l2_normalize(x, axis=-1, epsilon=1e-10):
    output = x / np.sqrt(np.maximum(np.sum(np.square(x), axis=axis, keepdims=True), epsilon))
    return output

def calc_embs(imgs, margin, batch_size):
    aligned_images = prewhiten(imgs)
    pd = []
    for start in range(0, len(aligned_images), batch_size):
        pd.append(model.predict_on_batch(aligned_images[start:start+batch_size]))
    embs = l2_normalize(np.concatenate(pd))

    return embs

class FaceDemo(object):
    def __init__(self, cascade_path):
        self.vc = None
        self.cascade = cv2.CascadeClassifier(cascade_path)
        self.margin = 10
        self.batch_size = 1
        self.n_img_per_person = 10
        self.is_interrupted = False
        self.data = {}
        self.le = None
        self.clf = None
        
    def _signal_handler(self, signal, frame):
        self.is_interrupted = True
        
    def capture_images(self, name='Unknown'):
        vc = cv2.VideoCapture(0)
        self.vc = vc
        if vc.isOpened():
            is_capturing, _ = vc.read()
        else:
            is_capturing = False

        imgs = []
        signal.signal(signal.SIGINT, self._signal_handler)
        self.is_interrupted = False
        while is_capturing:
            is_capturing, frame = vc.read()
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            faces = self.cascade.detectMultiScale(frame,
                                         scaleFactor=1.1,
                                         minNeighbors=3,
                                         minSize=(100, 100))
            if len(faces) != 0:
                face = faces[0]
                (x, y, w, h) = face
                left = x - self.margin // 2
                right = x + w + self.margin // 2
                bottom = y - self.margin // 2
                top = y + h + self.margin // 2
                img = resize(frame[bottom:top, left:right, :],
                             (160, 160), mode='reflect')
                imgs.append(img)
                cv2.rectangle(frame,
                              (left-1, bottom-1),
                              (right+1, top+1),
                              (255, 0, 0), thickness=2)

            plt.imshow(frame)
            plt.title('{}/{}'.format(len(imgs), self.n_img_per_person))
            plt.xticks([])
            plt.yticks([])
            display.clear_output(wait=True)
            if len(imgs) == self.n_img_per_person:
                vc.release()
                self.data[name] = np.array(imgs)
                break
            try:
                plt.pause(0.1)
            except Exception:
                pass
            if self.is_interrupted:
                vc.release()
                break
                
    def train(self):
        labels = []
        embs = []
        names = self.data.keys()
        for name, imgs in self.data.items():
            embs_ = calc_embs(imgs, self.margin, self.batch_size)    
            labels.extend([name] * len(embs_))
            embs.append(embs_)

        embs = np.concatenate(embs)
        le = LabelEncoder().fit(labels)
        y = le.transform(labels)
        print(embs.shape)
        print(y.shape)
        clf = SVC(kernel='linear', probability=True).fit(embs, y)
        
        self.le = le
        self.clf = clf
        
    def infer(self):
        vc = cv2.VideoCapture(0)
        self.vc = vc
        if vc.isOpened():
            is_capturing, _ = vc.read()
        else:
            is_capturing = False

        signal.signal(signal.SIGINT, self._signal_handler)
        self.is_interrupted = False
        while is_capturing:
            is_capturing, frame = vc.read()
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            faces = self.cascade.detectMultiScale(frame,
                                         scaleFactor=1.1,
                                         minNeighbors=3,
                                         minSize=(100, 100))
            pred = None
            if len(faces) != 0:
                face = faces[0]
                (x, y, w, h) = face
                left = x - self.margin // 2
                right = x + w + self.margin // 2
                bottom = y - self.margin // 2
                top = y + h + self.margin // 2
                img = resize(frame[bottom:top, left:right, :],
                             (160, 160), mode='reflect')
                embs = calc_embs(img[np.newaxis], self.margin, 1)
                pred = self.le.inverse_transform(self.clf.predict(embs))
                cv2.rectangle(frame,
                              (left-1, bottom-1),
                              (right+1, top+1),
                              (255, 0, 0), thickness=2)
            plt.imshow(frame)
            plt.title(pred)
            plt.xticks([])
            plt.yticks([])
            display.clear_output(wait=True)
            try:
                plt.pause(0.1)
            except Exception:
                pass
            if self.is_interrupted:
                vc.release()
                break

f = FaceDemo(cascade_path)
f.capture_images('XinyuDu')
f.train()
f.infer()

Error info as follows:

File "F:\Baiduyun\TensorFlowProjects\keras-facenet-master\webcam.py", line 128, in train
    clf = SVC(kernel='linear', probability=True).fit(embs, y)
  File "C:\Users\Administrator\Anaconda3\envs\ktf\lib\site-packages\sklearn\svm\base.py", line 150, in fit
    y = self._validate_targets(y)
  File "C:\Users\Administrator\Anaconda3\envs\ktf\lib\site-packages\sklearn\svm\base.py", line 506, in _validate_targets
    % len(cls))
ValueError: The number of classes has to be greater than one; got 1
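
The traceback points at the cause: the SVM is fit on a single class, because only one identity ('XinyuDu') was captured before train(). scikit-learn's SVC needs at least two classes, so capture a second person first, e.g.:

f = FaceDemo(cascade_path)
f.capture_images('XinyuDu')
f.capture_images('SomeoneElse')  # hypothetical second identity
f.train()
f.infer()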

Get error with webcam demo

Sir,

when I run f.train(), it shows the error below.
How can I solve it?


ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 f.train()

<ipython-input> in train(self)
     73             embs.append(embs_)
     74
---> 75         embs = np.concatenate(embs)
     76         le = LabelEncoder().fit(labels)
     77         y = le.transform(labels)

ValueError: need at least one array to concatenate

using coreml model

I got this error while trying to test the Core ML model:

"[coreml] A Core ML custom neural network layer requires an implementation named 'scaling' which was not found in the global namespace."
and
"[coreml] Error creating Core ML custom layer implementation from factory for layer "scaling"."

How to use Gray scale image?

I am using histogram equalization to sharpen the image, which works well for me.

I would like to change the input layer size from 160x160x3 to 160x160x1. Is there any way to achieve this by tweaking the model itself?
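
Retraining aside, a common workaround (an assumption, not from this repo) is to keep the 160x160x3 input and replicate the grayscale channel three times; gray_image below stands for a hypothetical (160, 160) array:

import numpy as np

# Stack the single gray channel into a 3-channel image the model accepts.
rgb_like = np.repeat(gray_image[..., np.newaxis], 3, axis=-1)  # (160, 160, 3)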
