
asingh33 / CNNGestureRecognizer

958 stars · 44 watchers · 354 forks · 17.16 MB

Gesture recognition via CNN. Implemented in Keras + Tensorflow/Theano + OpenCV

License: MIT License

Python 100.00%
gesture-recognition machine-learning theano python tensorflow keras


CNNGestureRecognizer's Issues

Gesture Probability Error

Hey, when I run trackgesture.py, three windows open up. The problem is that the Gesture Probability window is always black and never shows any probability.

Pretrained model seems broken

Hi @asingh33,

This is very impressive work! I'd like to try the demo, but it seems that the hdf5 file is broken in some way: I only got a 35.06 KB file after downloading it from GitHub.

Is it possible for you to provide a new version? Thanks a lot.
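
A file that small is typically a Git LFS pointer stub rather than the actual weights. A minimal sketch (not part of the repository) that tells the two apart by checking the HDF5 magic signature:

# Hypothetical helper: check whether a downloaded .hdf5 file is real HDF5 data
# or just a Git LFS pointer stub.
def inspect_weight_file(path):
    with open(path, "rb") as f:
        head = f.read(64)
    if head.startswith(b"\x89HDF\r\n\x1a\n"):
        print(path, "looks like a valid HDF5 file")
    elif head.startswith(b"version https://git-lfs.github.com/spec/v1"):
        print(path, "is a Git LFS pointer; the real file must be fetched via git-lfs")
    else:
        print(path, "is neither HDF5 nor an LFS pointer; re-download it")

inspect_weight_file("ori_4015imgs_weights.hdf5")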

Always detecting PUNCH

Hi,

First of all, congratulations on your wonderful work. I tried to predict gestures with the retrained weights, but I always get PUNCH as the predicted output. Along with it, I get a few errors too; I am attaching a screenshot for your reference. It would be great to get a response at your earliest convenience.
(screenshot attached: gesture)

About the ori_4015imgs_weights.hdf5 file

Environment: Python 3

File "E:\Other softwares\network\Anaconda\lib\site-packages\h5py_hl\files.py", line 272, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "E:\Other softwares\network\Anaconda\lib\site-packages\h5py_hl\files.py", line 102, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (D:\Build\h5py\h5py-2.7.0\h5py_objects.c:2853)
File "h5py_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (D:\Build\h5py\h5py-2.7.0\h5py_objects.c:2811)
File "h5py\h5f.pyx", line 78, in h5py.h5f.open (D:\Build\h5py\h5py-2.7.0\h5py\h5f.c:2130)
OSError: Unable to open file (File signature not found)

PS:
I don't have the "(D:\Build\h5py\h5py-2.7.0\h5py\_objects.c:2853)" file or folder; even the "D:\Build" folder does not exist on my machine.

adding new gestures

Hi!
I am new to this kind of coding and am unable to understand how to add a new gesture to the given setup.
Could you walk me through it?
Thanks

K.set_image_dim_ordering('th') assumes (channels, rows, cols)

First of all, thanks for the amazing work.

Regarding the issue title: in gestureCNN.py, you have the following at line 28:
K.set_image_dim_ordering('th')

If you want to use:
K.set_image_dim_ordering('tf')

Then, at line 322, you should use:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, img_channels)

Instead of:
X_train = X_train.reshape(X_train.shape[0], img_channels, img_rows, img_cols)

That's why you get the error "ValueError: Negative dimension size caused by subtracting 3 from 1..."

I'll try to fix it and submit a pull request.

Check the following article for further reference: Keras backends
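
For illustration, here is a minimal sketch (assuming flattened 200x200 grayscale samples, as in this project) of how the reshape differs between the two orderings:

import numpy as np

img_rows, img_cols, img_channels = 200, 200, 1
X_train = np.zeros((10, img_rows * img_cols), dtype="float32")  # dummy flattened samples

# Theano-style ordering ('th' / channels_first): (samples, channels, rows, cols)
X_th = X_train.reshape(X_train.shape[0], img_channels, img_rows, img_cols)

# TensorFlow-style ordering ('tf' / channels_last): (samples, rows, cols, channels)
X_tf = X_train.reshape(X_train.shape[0], img_rows, img_cols, img_channels)

print(X_th.shape, X_tf.shape)  # (10, 1, 200, 200) (10, 200, 200, 1)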

Error about h5py

@asingh33 Thank you very much for your information. I downloaded the model from the Google link, but I got errors when using it and I don't know how to solve them:
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
TypeError: Indexing elements must be in increasing order

My versions:
numpy 1.13.1
h5py 2.9.0
Windows 10
tensorflow 1.2.1
keras 2.0.2
python 3.6.1

What versions of h5py and numpy do you use? Would you please tell me how to solve the errors above?

IOError: Unable to open file (File signature not found)

Excuse me sir, when I choose option 1 to use the pretrained model, I run into this problem:

loading ori_4015imgs_weights.hdf5
Traceback (most recent call last):
File "trackgesture.py", line 299, in
Main()
File "trackgesture.py", line 175, in Main
mod = myNN.loadCNN(0)
File "/Users/Arduino/Downloads/CNNGestureRecognizer-master/gestureCNN.py", line 174, in loadCNN
model.load_weights(fname)
File "/Users/Arduino/tensosflow/lib/python2.7/site-packages/keras/models.py", line 721, in load_weights
f = h5py.File(filepath, mode='r')
File "/Users/Arduino/anaconda2/lib/python2.7/site-packages/h5py/_hl/files.py", line 271, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/Users/Arduino/anaconda2/lib/python2.7/site-packages/h5py/_hl/files.py", line 101, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (File signature not found)

I'm sorry, I don't really know what the problem is; my Mac is CPU-only, so I didn't train the model myself. Do you know how to solve it? Please tell me. Thanks!

git clone error

Downloading ori_4015imgs_weights.hdf5 (157 MB) Error downloading object: ori_4015imgs_weights.hdf5 (013a4a7): Smudge error: Error downloading ori_4015imgs_weights.hdf5 (013a4a7be0029315b8f77bac155e960b1b9996066c2d9a253623aae2adb29600): batch response: This repository is over its data quota. Purchase more data packs to restore access.

When I git clone this repository, the error above occurs.

label in gestureCNN uses integer inputs

Replace lines 297 to 303 in gestureCNN.py with the following; otherwise, training won't work on Windows with Python 3.6.

s = 0
r = samples_per_class

for classIndex in range(nb_classes):
    # Cast the slice bounds to int: under Python 3, samples_per_class can be a
    # float, and float slice indices raise a TypeError.
    label[int(s):int(r)] = classIndex
    s = r
    r = s + samples_per_class

Two kinds of images, and how to speed the app up?

You use two kinds of images, binary and skinMask. Would mixing them reduce accuracy? If I train the model on skinMask images only and then predict on skinMask images only, would it perform better?
I run the app on my laptop with an i5-5200U CPU; however, it doesn't run smoothly when predicting. Without a GPU, is there any other way to speed the app up?
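
One common CPU-only mitigation is to run the network on only every Nth captured frame and reuse the last prediction in between. A minimal, hypothetical sketch (the predict callable and roi image below are assumptions, not names from this repository):

# Hypothetical sketch: throttle CNN inference to every Nth camera frame to
# reduce CPU load; reuse the previous prediction for the skipped frames.
FRAME_SKIP = 5
frame_count = 0
last_label = None

def maybe_predict(roi, predict):
    """roi: preprocessed gesture image; predict: wrapper around model inference."""
    global frame_count, last_label
    frame_count += 1
    if frame_count % FRAME_SKIP == 0:
        last_label = predict(roi)   # expensive forward pass, run only occasionally
    return last_label               # stale but cheap result on the other frames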

training new arm gestures

Hey,
Sorry to bother you again, but I am a complete beginner in this field.
I was able to train new finger gestures, but the problem statement of my project has changed a little, so I was wondering whether the same program can be retrained to work properly on full arm gestures, e.g. recognizing a raised right arm, both arms raised, etc.
Also, when I added gestures I just captured images, named them after the gesture, and trained; I was wondering how, during training, the program knows which gesture a given picture is a sample of.

Thank you again, and sorry for bothering you,
Aadarsh Pratik

Validation accuracy doesn't increase while retraining the model

Hello sir,
This is a great reference for my own research. However, I ran into a problem while retraining the model: after retraining your original model without any new data, the validation accuracy did not increase at all (log below). Do you know what might be causing this? Thank you.

(By the way, I didn't modify your code except to print a numpy array.)


rex@Zeus:~/Desktop/python_hr/retrained_5CNN$ python trackgesture.py
Using TensorFlow backend.


What would you like to do ?
1- Use pretrained model for gesture recognition & layer visualization
2- Train the model (you will require image samples for training under .\imgfolder)
3- Visualize feature maps of different layers of trained model
2


conv2d_1 (Conv2D) (None, 32, 198, 198) 320
activation_1 (Activation) (None, 32, 198, 198) 0
conv2d_2 (Conv2D) (None, 32, 196, 196) 9248
activation_2 (Activation) (None, 32, 196, 196) 0
max_pooling2d_1 (MaxPooling2 (None, 32, 98, 98) 0
dropout_1 (Dropout) (None, 32, 98, 98) 0
flatten_1 (Flatten) (None, 307328) 0
dense_1 (Dense) (None, 128) 39338112
activation_3 (Activation) (None, 128) 0
dropout_2 (Dropout) (None, 128) 0
dense_2 (Dense) (None, 5) 645
activation_4 (Activation) (None, 5) 0
Total params: 39,348,325
Trainable params: 39,348,325
Non-trainable params: 0


(4015, 40000)
Press any key
samples_per_class - 803 total_image - 4015
Train on 3212 samples, validate on 803 samples
Epoch 1/15
2017-08-18 11:50:13.901818: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-18 11:50:13.901850: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-18 11:50:13.901855: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-18 11:50:13.901859: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-18 11:50:13.901862: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-08-18 11:50:14.108379: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.835
pciBusID 0000:02:00.0
Total memory: 7.92GiB
Free memory: 7.30GiB
2017-08-18 11:50:14.108407: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2017-08-18 11:50:14.108413: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y
2017-08-18 11:50:14.108419: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0)
3212/3212 [==============================] - 33s - loss: 1.7759 - acc: 0.2008 - val_loss: 1.6096 - val_acc: 0.2067
Epoch 2/15
3212/3212 [==============================] - 31s - loss: 1.6020 - acc: 0.2481 - val_loss: 1.6106 - val_acc: 0.2080
Epoch 3/15
3212/3212 [==============================] - 31s - loss: 1.5640 - acc: 0.2905 - val_loss: 1.6233 - val_acc: 0.2105
Epoch 4/15
3212/3212 [==============================] - 31s - loss: 1.4404 - acc: 0.4019 - val_loss: 1.6827 - val_acc: 0.1768
Epoch 5/15
3212/3212 [==============================] - 31s - loss: 1.2607 - acc: 0.4978 - val_loss: 1.7530 - val_acc: 0.1843
Epoch 6/15
3212/3212 [==============================] - 31s - loss: 1.0425 - acc: 0.6077 - val_loss: 1.9313 - val_acc: 0.1768
Epoch 7/15
3212/3212 [==============================] - 31s - loss: 0.8596 - acc: 0.6868 - val_loss: 2.0786 - val_acc: 0.1781
Epoch 8/15
3212/3212 [==============================] - 31s - loss: 0.7027 - acc: 0.7335 - val_loss: 2.2749 - val_acc: 0.1781
Epoch 9/15
3212/3212 [==============================] - 31s - loss: 0.6062 - acc: 0.7746 - val_loss: 2.6289 - val_acc: 0.1893
Epoch 10/15
3212/3212 [==============================] - 32s - loss: 0.5311 - acc: 0.7948 - val_loss: 2.7380 - val_acc: 0.1868
Epoch 11/15
3212/3212 [==============================] - 31s - loss: 0.4637 - acc: 0.8207 - val_loss: 2.8698 - val_acc: 0.1880
Epoch 12/15
3212/3212 [==============================] - 32s - loss: 0.4362 - acc: 0.8216 - val_loss: 3.0280 - val_acc: 0.1880
Epoch 13/15
3212/3212 [==============================] - 32s - loss: 0.3905 - acc: 0.8450 - val_loss: 3.2075 - val_acc: 0.1893
Epoch 14/15
3212/3212 [==============================] - 31s - loss: 0.3513 - acc: 0.8621 - val_loss: 3.3562 - val_acc: 0.1930
Epoch 15/15
3212/3212 [==============================] - 31s - loss: 0.3271 - acc: 0.8702 - val_loss: 3.4325 - val_acc: 0.1893

Line 295 of gestureCNN.py breaks with newer numpy versions

I got an error at line 295 of gestureCNN.py. It may be caused by my numpy version, which is 1.14.2. I changed the code from "total_images / nb_classes" to "total_images // nb_classes", and then it works fine.
File "gestureCNN.py", line 299, in initializers
label[s:r] = classIndex
TypeError: slice indices must be integers or None or have an __index__ method
My English is not very good and I hope you can understand. BTW, this demo is very good and has helped me a lot.
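
For reference, a minimal sketch of the behaviour behind the fix (the variable names mirror the snippet above; the concrete counts are taken from the training log quoted elsewhere on this page):

total_images, nb_classes = 4015, 5

samples_per_class = total_images / nb_classes    # true division: 803.0, a float
# label[s:r] = classIndex then fails, because slice bounds must be integers

samples_per_class = total_images // nb_classes   # floor division: 803, an int
# with integer bounds, label[s:r] = classIndex works as intended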

load weights error

Using TensorFlow backend.

What would you like to do ?
1- Use pretrained model for gesture recognition & layer visualization
2- Train the model (you will require image samples for training under .\imgfolder)
3- Visualize feature maps of different layers of trained model
4- Exit
1
Will load default weight file


Layer (type) Output Shape Param #
conv2d_1 (Conv2D) (None, 32, 198, 198) 320
activation_1 (Activation) (None, 32, 198, 198) 0
conv2d_2 (Conv2D) (None, 32, 196, 196) 9248
activation_2 (Activation) (None, 32, 196, 196) 0
max_pooling2d_1 (MaxPooling2 (None, 32, 98, 98) 0
dropout_1 (Dropout) (None, 32, 98, 98) 0
flatten_1 (Flatten) (None, 307328) 0
dense_1 (Dense) (None, 128) 39338112
activation_3 (Activation) (None, 128) 0
dropout_2 (Dropout) (None, 128) 0
dense_2 (Dense) (None, 5) 645
activation_4 (Activation) (None, 5) 0
Total params: 39,348,325.0
Trainable params: 39,348,325.0
Non-trainable params: 0.0


loading ori_4015imgs_weights.hdf5
Traceback (most recent call last):

File "", line 1, in
runfile('E:/CNNGestureRecognizer-master/trackgesture.py', wdir='E:/CNNGestureRecognizer-master')

File "D:\ProgramData\Anaconda3\envs\gesture\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 704, in runfile
execfile(filename, namespace)

File "D:\ProgramData\Anaconda3\envs\gesture\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

File "E:/CNNGestureRecognizer-master/trackgesture.py", line 375, in
Main()

File "E:/CNNGestureRecognizer-master/trackgesture.py", line 214, in Main
mod = myNN.loadCNN(0)

File "E:\CNNGestureRecognizer-master\gestureCNN.py", line 208, in loadCNN
model.load_weights(fname)

File "D:\ProgramData\Anaconda3\envs\gesture\lib\site-packages\keras\models.py", line 706, in load_weights
topology.load_weights_from_hdf5_group(f, layers)

File "D:\ProgramData\Anaconda3\envs\gesture\lib\site-packages\keras\engine\topology.py", line 2895, in load_weights_from_hdf5_group
original_backend)

File "D:\ProgramData\Anaconda3\envs\gesture\lib\site-packages\keras\engine\topology.py", line 2836, in preprocess_weights_for_loading
weights[0] = conv_utils.convert_kernel(weights[0])

File "D:\ProgramData\Anaconda3\envs\gesture\lib\site-packages\keras\utils\conv_utils.py", line 86, in convert_kernel
return np.copy(kernel[slices])

File "h5py_objects.pyx", line 54, in h5py._objects.with_phil.wrapper

File "h5py_objects.pyx", line 55, in h5py._objects.with_phil.wrapper

File "D:\ProgramData\Anaconda3\envs\gesture\lib\site-packages\h5py_hl\dataset.py", line 553, in getitem
selection = sel.select(self.shape, args, dsid=self.id)

File "D:\ProgramData\Anaconda3\envs\gesture\lib\site-packages\h5py_hl\selections.py", line 90, in select
sel[args]

File "D:\ProgramData\Anaconda3\envs\gesture\lib\site-packages\h5py_hl\selections.py", line 367, in getitem
raise TypeError("Indexing elements must be in increasing order")

TypeError: Indexing elements must be in increasing order


How can I solve this? Thanks.

Errors when running gestureCNN.py

When I run gestureCNN.py on Windows, I get the following error:
ImportError: Failed to import pydot. You must install pydot and graphviz for 'pydotprint' to work.
The error persists even after pip install pydot and pip install graphviz.
How can I fix this?
Thank you.

Overfitting problem

Hi, you did good work. I just wanted to point out that the model overfits because the dataset comes from a single person, so it would be worth mentioning that in the README.

Dataset

Hello sir,
Thank you for the work you have done. I was planning to use your model for a project, but I am having trouble downloading the dataset, as it is not available in the repository. If you could provide a link to the dataset, I would be very grateful and could use it to train my model.
Thanks in advance

Predicting only the NOTHING class

When I try to predict using the pretrained model, it only ever predicts the NOTHING class, every time with 99.9% confidence. I am running Theano on CPU.

OSError: Unable to open file (file signature not found)

model.load_weights(fname)
File "C:\Users\Raghu\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\network.py", line 1157, in load_weights
with h5py.File(filepath, mode='r') as f:
File "C:\Users\Raghu\AppData\Local\Programs\Python\Python36\lib\site-packages\h5py_hl\files.py", line 394, in init
swmr=swmr)
File "C:\Users\Raghu\AppData\Local\Programs\Python\Python36\lib\site-packages\h5py_hl\files.py", line 170, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5f.pyx", line 85, in h5py.h5f.open
OSError: Unable to open file (file signature not found)

Solution to the error : Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution'

I found the solution to this problem.
When you create a convolution layer under the TensorFlow backend's default ordering, the input shape should be (rows, cols, channels), but in your code (gestureCNN.py, loadCNN, line 151) it is:
model.add(Conv2D(nb_filters, (nb_conv, nb_conv),
padding='valid',
input_shape=(img_channels, img_rows, img_cols)))

So you should add the "data_format" parameter and set it to 'channels_first', like this:

model.add(Conv2D(nb_filters, (nb_conv, nb_conv),
padding='valid',data_format='channels_first',
input_shape=(img_channels, img_rows, img_cols)))
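
An alternative, shown here only as a hedged sketch, is to set the ordering globally on the Keras backend instead of per layer; the filter, kernel, and input sizes below mirror the model summary quoted in other issues on this page:

from keras import backend as K
from keras.models import Sequential
from keras.layers import Conv2D

nb_filters, nb_conv = 32, 3                   # 32 filters, 3x3 kernels (matches the 320-param first layer)
img_channels, img_rows, img_cols = 1, 200, 200

K.set_image_data_format('channels_first')     # global equivalent of the legacy 'th' ordering

model = Sequential()
model.add(Conv2D(nb_filters, (nb_conv, nb_conv),
                 padding='valid',
                 input_shape=(img_channels, img_rows, img_cols)))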

About the dataset used to obtain the weights provided

Hi,
Thanks for sharing your code and the trained model. I've run your code. May I ask some questions about the dataset you used?

  • How many samples do you have for each gesture?
  • Did you create the dataset yourself, or download it from the internet?

Many thanks!

ImportError

I encountered the following ImportError when opening trackgesture.py on Windows, even though I have installed the pillow package.
Please help me, thank you!


C:\CNNGestureRecognizer-master>set "KERAS_BACKEND=tensorflow"
C:\CNNGestureRecognizer-master>python trackgesture.py
Using TensorFlow backend.
Traceback (most recent call last):
File "trackgesture.py", line 16, in
import gestureCNN as myNN
File "C:\CNNGestureRecognizer-master\gestureCNN.py", line 35, in
from PIL import Image
File "C:\Users\s9021\Anaconda3\lib\site-packages\PIL\Image.py", line 58, in
from . import _imaging as core
ImportError: DLL load failed: The specified module could not be found

Error while running trackgesture.py file

I got this error when running the trackgesture.py file. The error occurred while loading the ori_4015imgs_weights.hdf5 file.

File "trackgesture.py", line 299, in
Main()
File "trackgesture.py", line 180, in Main
mod = myNN.loadCNN(w)
File "/home/raj/CNNGestureRecognizer-master/gestureCNN.py", line 174, in loadCNN
model.load_weights(fname)
File "/home/raj/anaconda3/envs/gest2/lib/python2.7/site-packages/keras/engine/network.py", line 1171, in load_weights
with h5py.File(filepath, mode='r') as f:
File "/home/raj/anaconda3/envs/gest2/lib/python2.7/site-packages/h5py/_hl/files.py", line 312, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/raj/anaconda3/envs/gest2/lib/python2.7/site-packages/h5py/_hl/files.py", line 142, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (file signature not found)

IOError: Unable to open file (File signature not found)

Traceback (most recent call last):
File "trackgesture.py", line 299, in
Main()
File "trackgesture.py", line 175, in Main
mod = myNN.loadCNN(0)
File "/Users/aadarshpratik/anaconda2/CNNGestureRecognizer/gestureCNN.py", line 174, in loadCNN
model.load_weights(fname)
File "/Users/aadarshpratik/anaconda2/lib/python2.7/site-packages/keras/models.py", line 708, in load_weights
with h5py.File(filepath, mode='r') as f:
File "/Users/aadarshpratik/anaconda2/lib/python2.7/site-packages/h5py/_hl/files.py", line 271, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/Users/aadarshpratik/anaconda2/lib/python2.7/site-packages/h5py/_hl/files.py", line 101, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (File signature not found)

ValueError: The truth value of an array with more than one element is ambiguous.

Hi, I have run your program and successfully recognized gestures; it is amazing, thanks for your work. However, I can't use the visualizeLayers() function: it raises
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
I don't understand the following code:

if img <= len(imlist):

image = np.array(Image.open('./imgs/' + 0-[img.all() - 1]).convert('L')).flatten()

Could you explain it? I look forward to your reply!
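
For comparison, a minimal sketch of what that lookup presumably intends, assuming imlist is a list of image filenames and img is a 1-based index (these assumptions are mine, not confirmed by the repository):

import numpy as np
from PIL import Image

def load_flattened(imlist, img):
    # Hypothetical reconstruction: guard the 1-based index, then load the
    # corresponding image as a flattened grayscale array.
    if 1 <= img <= len(imlist):
        return np.array(Image.open('./imgs/' + imlist[img - 1]).convert('L')).flatten()
    return None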

Don't you detect the hand?

I haven't run your code, but based on your demo I have a question:
do you detect the hand? The hand bounding box appears fixed, so I guess your work is gesture recognition only and not hand detection; is that right?

research paper

I am interested in your work. Could you please provide me with the related research paper? I need it to prepare a synopsis and other documents.

Can't open file name="bw_4015imgs_weights.hdf5"

I loaded your ori_4015xxx.hdf5 file successfully, but I failed to open the other file (bw_4015imgs_weights.hdf5).
When I run your code, press 3 to visualize, and then press 1, an error occurs: Unable to open file: name='bw_4015imgs_weights.hdf5', errno=2, error message='no such file or directory'.

I also can't find this file in your repository. How can I solve this problem?

Issues during running

Thank you very much, sir, for helping me understand CNNs. Here is the error I got:
(screenshot attached: screen shot 2018-11-22 at 17 59 40)

Would you please tell me what I should do after this?
