keras-fcn's Issues

Requesting License + input on upstream Keras Semantic Segmentation design

I came across your repository and it looks like good work, which is why I'm submitting this request.

François Chollet, Keras' author, said he is interested in directly incorporating dense prediction/FCN into the Keras API, so I'm seeking suggestions/feedback/contributions at keras-team/keras#6538.

Also, could you add a license so it is clear how this code can be used? I suggest the MIT license, which is the same license Keras uses; it is pretty simple and lets people use the code as they like:

The MIT License (MIT)

Copyright (c) <year> <copyright holders>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Thanks for your consideration!

New CropLike Layer

A CropLike layer should take two tensors as inputs, [origin, target], and crop the origin tensor so that it matches the shape of target.
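
A minimal sketch of what such a layer could look like, assuming channels-last tensors and spatial shapes known at graph-construction time (the class name and imports are illustrative, not the repository's actual implementation):

import keras.backend as K
from keras.engine.topology import Layer

class CropLike(Layer):
    """Crop inputs[0] so its height/width match inputs[1] (channels-last)."""

    def compute_output_shape(self, input_shapes):
        origin_shape, target_shape = input_shapes
        # batch and channels come from origin, spatial dims from target
        return (origin_shape[0], target_shape[1], target_shape[2], origin_shape[3])

    def call(self, inputs):
        origin, target = inputs
        origin_shape = K.int_shape(origin)
        target_shape = K.int_shape(target)
        # center the crop window inside the origin tensor
        dh = (origin_shape[1] - target_shape[1]) // 2
        dw = (origin_shape[2] - target_shape[2]) // 2
        return origin[:, dh:dh + target_shape[1], dw:dw + target_shape[2], :]

Usage would then be something like cropped = CropLike()([origin, target]).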

Training on VOC2011

The README suggests that we should be able to train on VOC2011 just by downloading the dataset and running train.py. Is that really the case? My training seems to converge for the first few epochs, but then val_loss stops improving early and diverges from the training loss. In fact, the best val_loss I can get is around 1.06. Do you have an idea why?

Thank you!

Epoch 14/100
1112/1112 [==============================] - 878s - loss: 1.0603 - acc: 0.7734 - val_loss: 1.1059 - val_acc: 0.7623
Epoch 15/100
1112/1112 [==============================] - 880s - loss: 1.0474 - acc: 0.7751 - val_loss: 1.0961 - val_acc: 0.7652
Epoch 16/100
1112/1112 [==============================] - 869s - loss: 1.0273 - acc: 0.7784 - val_loss: 1.1116 - val_acc: 0.7609
Epoch 17/100
1112/1112 [==============================] - 869s - loss: 1.0228 - acc: 0.7781 - val_loss: 1.1651 - val_acc: 0.7596
Epoch 18/100
1112/1112 [==============================] - 869s - loss: 1.0054 - acc: 0.7812 - val_loss: 1.1100 - val_acc: 0.7643
Epoch 19/100
1112/1112 [==============================] - 869s - loss: 0.9971 - acc: 0.7834 - val_loss: 1.1266 - val_acc: 0.7609
Epoch 20/100
1112/1112 [==============================] - 869s - loss: 0.9881 - acc: 0.7833 - val_loss: 1.1472 - val_acc: 0.7581
[...]
Epoch 44/100
1112/1112 [==============================] - 869s - loss: 0.6450 - acc: 0.8553 - val_loss: 1.2859 - val_acc: 0.7561
Epoch 45/100
1112/1112 [==============================] - 868s - loss: 0.6358 - acc: 0.8582 - val_loss: 1.2139 - val_acc: 0.7645
Epoch 46/100
1112/1112 [==============================] - 869s - loss: 0.6012 - acc: 0.8688 - val_loss: 1.3206 - val_acc: 0.7573
Epoch 47/100
1112/1112 [==============================] - 868s - loss: 0.5956 - acc: 0.8704 - val_loss: 1.2663 - val_acc: 0.7626

FCN in train.py

Hi,
where is the FCN used in train.py at line 63 defined? Is it the one imported via `from keras_fcn import FCN`?

filter parameter not used for blocks.vgg_fc

Currently blocks.vgg_fc has a filters argument, but the calls that set up the layers hard-code a constant 4096:

def vgg_fc(filters, weight_decay=0., block_name='block5'):
    ...
    def f(x):
        fc6 = Conv2D(filters=4096, kernel_size=(7, 7),
                     activation='relu', padding='same',
                     dilation_rate=(2, 2),
                     kernel_initializer='he_normal',
                     kernel_regularizer=l2(weight_decay),
                     name='{}_fc6'.format(block_name))(x)
...

Is that intentional? For my use case I needed to decrease the number of filters.

I'd be happy to submit a PR if that's just a minor bug fix.
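
For reference, the presumed one-line fix (assuming nothing else depends on the hard-coded width) would simply pass the argument through:

fc6 = Conv2D(filters=filters, kernel_size=(7, 7),
             activation='relu', padding='same',
             dilation_rate=(2, 2),
             kernel_initializer='he_normal',
             kernel_regularizer=l2(weight_decay),
             name='{}_fc6'.format(block_name))(x)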

U-Net: Error when checking target: expected activation_1 to have 3 dimensions, but got array with shape (1, 224, 224, 21)

Hi, I'm trying this out with the U-Net architecture, but I keep running into this error and I'm not sure what I might be doing wrong. This is what my model definition looks like:

def get_unet(self):

	inputs = Input((self.img_rows, self.img_cols,3))
	#print(inputs.shape)
	conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
	conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
	pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
	conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
	conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
	pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
	conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
	conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3)
	pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
	conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
	conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)
	drop4 = Dropout(0.5)(conv4)
	pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
	conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
	conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
	drop5 = Dropout(0.5)(conv5)

	up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5))
	merge6 = concatenate([drop4,up6], axis = 3)
	conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
	conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6)

	up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
	merge7 = concatenate([conv3,up7], axis = 3)
	conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
	conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7)

	up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
	merge8 = concatenate([conv2,up8], axis = 3)
	conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
	conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8)

	up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))
	merge9 = concatenate([conv1,up9], axis = 3)
	conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
	conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
	conv9 = Conv2D(21, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
	conv9 = Conv2D(21, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
	print(conv9.shape)

	
	reshape = Reshape((21,self.img_rows * self.img_cols))(conv9)
	print(reshape.shape)

	permute = Permute((2,1))(reshape)
	print(permute.shape)

	activation = Activation('softmax')(permute)
	
	print(activation.shape)
	model = Model(input = inputs, output = activation)

	return model
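
A likely cause, for reference: Reshape((21, H*W)) followed by Permute((2, 1)) makes the model output 3-D, of shape (batch, H*W, 21), while the target arrays are 4-D with shape (batch, 224, 224, 21), hence the mismatch. A minimal sketch of one possible fix (an assumption, not an official answer from this repo) is to keep the output 4-D by replacing the Reshape/Permute/Activation tail with a per-pixel softmax:

# per-pixel softmax keeps the output 4-D: (batch, 224, 224, 21)
activation = Conv2D(21, 1, activation='softmax', padding='same',
                    kernel_initializer='he_normal')(conv9)
model = Model(inputs=inputs, outputs=activation)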

How to train other models

Hi,
To train my model, I used your code (voc_generator.py, train.py, init_args.yml), but my model has output shape (1, 121, 121, 21). What should be changed in these files? I've tested all possible settings in 'init_args.yml'.

The error is:
Traceback (most recent call last):
File "C:\cnn\inceptionV4\keras-fcn\keras-fcn-master\voc2011\train.py", line 121, in
callbacks=[lr_reducer, early_stopper, csv_logger])
File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\legacy\interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\engine\training.py", line 1902, in fit_generator
class_weight=class_weight)
File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\engine\training.py", line 1636, in train_on_batch
check_batch_axis=True)
File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\engine\training.py", line 1315, in _standardize_user_data
exception_prefix='target')
File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\engine\training.py", line 139, in _standardize_input_data
str(array.shape))
ValueError: Error when checking target: expected decoder to have shape (1, 121, 121, 21) but got array with shape (1, 500, 500, 21)

Error: ValueError: output of generator should be a tuple `(x, y, sample_weight)` or `(x, y)`. Found: None

Hi,
thanks for keras-fcn code.

After running train.py, this error occurs.
Keras version is 2, the code runs on Windows with Python 3.5, and Pascal VOC 2011 is in the data folder.

Using TensorFlow backend.
Epoch 1/100
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\engine\training.py", line 612, in data_generator_task
generator_output = next(self._generator)
File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\preprocessing\image.py", line 732, in next
return self.next(*args, **kwargs)
File "C:\cnn\inceptionV4\keras-fcn\keras-fcn-master\voc2011\voc_generator.py", line 113, in next
x = self.image_set_loader.load_img(fn)
File "C:\cnn\inceptionV4\keras-fcn\keras-fcn-master\voc2011\voc_generator.py", line 203, in load_img
raise IOError('Image {} does not exist.'.format(img_path))
OSError: Image ../data/VOC2011/JPEGImages/b'2009_002423'.jpg does not exist.

Traceback (most recent call last):
File "C:\cnn\inceptionV4\keras-fcn\keras-fcn-master\voc2011\train.py", line 88, in
callbacks=[lr_reducer, early_stopper, csv_logger])
File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\legacy\interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\models.py", line 1124, in fit_generator
initial_epoch=initial_epoch)
File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\legacy\interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\engine\training.py", line 1877, in fit_generator
str(generator_output))
ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None
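
For reference, the b'2009_002423' in the OSError path above suggests the image-set file is being read as byte strings under Python 3. A likely workaround (the same one mentioned in a later issue here) is to decode the filenames to str in voc_generator.py:

# np.loadtxt returns byte strings under Python 3, producing paths like
# "b'2009_002423'.jpg"; .astype(str) decodes them to plain strings
self.filenames = np.loadtxt(image_set, dtype=bytes,
                            delimiter="\n").astype(str)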

VOC2011 training ends up with unchanged acc

hi @JihongJu

Thanks for sharing fcn program.

I followed your instructions to train on VOC2011; the intermediate results are shown below. After three epochs, the loss did not change much, and after ten epochs the accuracy even became constant.

I trained with a batch_size of 1 or 4, which made no difference. I also tried training with and without pre-trained weights, still with no change.

Epoch 1/100
278/278 [==============================] - 82s 295ms/step - loss: 14.3584 - acc: 0.7594 - val_loss: 3.5362 - val_acc: 0.7539

Epoch 00001: val_loss improved from inf to 3.53623, saving model to ./model/pascal.hdf5
Epoch 2/100
278/278 [==============================] - 80s 287ms/step - loss: 2.5873 - acc: 0.7622 - val_loss: 2.0838 - val_acc: 0.7539

Epoch 00002: val_loss improved from 3.53623 to 2.08382, saving model to ./model/pascal.hdf5
Epoch 3/100
278/278 [==============================] - 80s 288ms/step - loss: 1.8258 - acc: 0.7622 - val_loss: 1.6789 - val_acc: 0.7539

Epoch 00003: val_loss improved from 2.08382 to 1.67890, saving model to ./model/pascal.hdf5
Epoch 4/100
278/278 [==============================] - 80s 288ms/step - loss: 1.5551 - acc: 0.7622 - val_loss: 1.4830 - val_acc: 0.7539

Epoch 00004: val_loss improved from 1.67890 to 1.48300, saving model to ./model/pascal.hdf5
Epoch 5/100
278/278 [==============================] - 80s 287ms/step - loss: 1.4069 - acc: 0.7622 - val_loss: 1.4185 - val_acc: 0.7539
Epoch 00005: val_loss improved from 1.48300 to 1.41854, saving model to ./model/pascal.hdf5
Epoch 6/100
278/278 [==============================] - 80s 287ms/step - loss: 1.3249 - acc: 0.7622 - val_loss: 1.3197 - val_acc: 0.7539

Epoch 00006: val_loss improved from 1.41854 to 1.31969, saving model to ./model/pascal.hdf5
Epoch 7/100
278/278 [==============================] - 80s 287ms/step - loss: 1.2747 - acc: 0.7622 - val_loss: 1.2816 - val_acc: 0.7539

Epoch 00007: val_loss improved from 1.31969 to 1.28161, saving model to ./model/pascal.hdf5
Epoch 8/100
278/278 [==============================] - 79s 285ms/step - loss: 1.2439 - acc: 0.7622 - val_loss: 1.2859 - val_acc: 0.7539

Epoch 00008: val_loss did not improve from 1.28161
Epoch 9/100
278/278 [==============================] - 80s 286ms/step - loss: 1.2223 - acc: 0.7622 - val_loss: 1.2398 - val_acc: 0.7539

Epoch 00009: val_loss improved from 1.28161 to 1.23977, saving model to ./model/pascal.hdf5
Epoch 10/100
278/278 [==============================] - 79s 286ms/step - loss: 1.2084 - acc: 0.7622 - val_loss: 1.2319 - val_acc: 0.7539

Epoch 00010: val_loss improved from 1.23977 to 1.23188, saving model to ./model/pascal.hdf5
Epoch 11/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1997 - acc: 0.7622 - val_loss: 1.2195 - val_acc: 0.7539

Epoch 00011: val_loss improved from 1.23188 to 1.21950, saving model to ./model/pascal.hdf5
Epoch 12/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1930 - acc: 0.7622 - val_loss: 1.2187 - val_acc: 0.7539

Epoch 00012: val_loss improved from 1.21950 to 1.21867, saving model to ./model/pascal.hdf5
Epoch 13/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1903 - acc: 0.7622 - val_loss: 1.2112 - val_acc: 0.7539

Epoch 00013: val_loss improved from 1.21867 to 1.21119, saving model to ./model/pascal.hdf5
Epoch 14/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1894 - acc: 0.7622 - val_loss: 1.2606 - val_acc: 0.7539

Epoch 00014: val_loss did not improve from 1.21119
Epoch 15/100
278/278 [==============================] - 79s 286ms/step - loss: 1.1894 - acc: 0.7622 - val_loss: 1.2147 - val_acc: 0.7539

Epoch 00015: val_loss did not improve from 1.21119
Epoch 16/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1869 - acc: 0.7622 - val_loss: 1.2135 - val_acc: 0.7539

Epoch 00016: val_loss did not improve from 1.21119
Epoch 17/100
278/278 [==============================] - 79s 286ms/step - loss: 1.1831 - acc: 0.7622 - val_loss: 1.2101 - val_acc: 0.7539

Epoch 00017: val_loss improved from 1.21119 to 1.21015, saving model to ./model/pascal.hdf5
Epoch 18/100
278/278 [==============================] - 79s 286ms/step - loss: 1.1850 - acc: 0.7622 - val_loss: 1.2129 - val_acc: 0.7539

Epoch 00018: val_loss did not improve from 1.21015
Epoch 19/100
278/278 [==============================] - 79s 286ms/step - loss: 1.1838 - acc: 0.7622 - val_loss: 1.2188 - val_acc: 0.7539

Epoch 00019: val_loss did not improve from 1.21015
Epoch 20/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1822 - acc: 0.7622 - val_loss: 1.2079 - val_acc: 0.7539

Epoch 00020: val_loss improved from 1.21015 to 1.20793, saving model to ./model/pascal.hdf5
Epoch 21/100
278/278 [==============================] - 79s 286ms/step - loss: 1.1835 - acc: 0.7622 - val_loss: 1.2083 - val_acc: 0.7539

Epoch 00021: val_loss did not improve from 1.20793
Epoch 22/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1817 - acc: 0.7622 - val_loss: 1.2146 - val_acc: 0.7539

Epoch 00022: val_loss did not improve from 1.20793
Epoch 23/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1836 - acc: 0.7622 - val_loss: 1.2028 - val_acc: 0.7539

Epoch 00023: val_loss improved from 1.20793 to 1.20280, saving model to ./model/pascal.hdf5
Epoch 24/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1805 - acc: 0.7622 - val_loss: 1.2089 - val_acc: 0.7539

Epoch 00024: val_loss did not improve from 1.20280
Epoch 25/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1816 - acc: 0.7622 - val_loss: 1.2148 - val_acc: 0.7539

Epoch 00025: val_loss did not improve from 1.20280
Epoch 26/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1830 - acc: 0.7622 - val_loss: 1.2038 - val_acc: 0.7539

Epoch 00026: val_loss did not improve from 1.20280
Epoch 27/100
278/278 [==============================] - 79s 286ms/step - loss: 1.1792 - acc: 0.7622 - val_loss: 1.2053 - val_acc: 0.7539

Epoch 00027: val_loss did not improve from 1.20280
Epoch 28/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1846 - acc: 0.7622 - val_loss: 1.2032 - val_acc: 0.7539

Epoch 00028: val_loss did not improve from 1.20280
Epoch 29/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1808 - acc: 0.7622 - val_loss: 1.2062 - val_acc: 0.7539

Epoch 00029: val_loss did not improve from 1.20280
Epoch 30/100
278/278 [==============================] - 79s 285ms/step - loss: 1.1801 - acc: 0.7622 - val_loss: 1.2030 - val_acc: 0.7539

Epoch 00030: val_loss did not improve from 1.20280

Would you please tell me what's wrong and what I should do next?

Looking forward to your reply.

Thanks a lot

Train my own dataset?

I have some images, but how do I create labels (with labelme?), and how do I then train on this dataset?

missing backend when not installed from source

Hi,
I tried installing using:
$ pip install git+https://github.com/JihongJu/keras-fcn.git
When trying to import, it gives me this error:
... import keras_fcn.backend as K1
ImportError: No module named backend

When installed from source, though, this error goes away.

import error: No module named 'keras_fcn.backend'

The import failed: I ran the code cell right after writing the import statements on Colab and got a ModuleNotFoundError, even though I had just installed the package using pip3.

      1 import keras.backend as K
----> 2 import keras_fcn.backend as K1
      3 from keras.utils import conv_utils
      4 from keras.engine.topology import Layer
      5 from keras.engine import InputSpec

ModuleNotFoundError: No module named 'keras_fcn.backend'

---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.
-----------------------------------------

Trying to train a VGG16 model for localizing text in natural images, using the MSRA-TD500 dataset

Hi,
First of all, I want to say this library is awesome.

I am trying to localize text in natural images. I am training on a single image from the MSRA-TD500 dataset using the VGG16 network you provide, but unfortunately the model is not converging as expected.

As a sanity check, I just want to train the network on a single image and test on that same image, but even that is not working.

I am using the Adam optimizer with categorical cross-entropy loss, and 2 classes to separate text and non-text areas.

This is how the training progresses. For pre-processing, I subtract the mean pixel values from the original image and then divide by the standard deviation.

1/1 [==============================] - 64s - loss: 0.7233 - acc: 0.4443
Epoch 2/10
1/1 [==============================] - 51s - loss: 3.2022 - acc: 0.8014
Epoch 3/10
1/1 [==============================] - 52s - loss: 3.2022 - acc: 0.8014
Epoch 4/10
1/1 [==============================] - 52s - loss: 3.2022 - acc: 0.8014
Epoch 5/10
1/1 [==============================] - 52s - loss: 3.2022 - acc: 0.8014
Epoch 6/10
1/1 [==============================] - 51s - loss: 3.2022 - acc: 0.8014
Epoch 7/10
1/1 [==============================] - 52s - loss: 3.2022 - acc: 0.8014
Epoch 8/10
1/1 [==============================] - 51s - loss: 3.2022 - acc: 0.8014
Epoch 9/10
1/1 [==============================] - 51s - loss: 3.2022 - acc: 0.8014
Epoch 10/10
1/1 [==============================] - 51s - loss: 3.2021 - acc: 0.8014

Can you suggest something on this issue...
Thanks ...

Can you give me an example of how to test it

Hi, I am new to this.
I have run train.py and saved the model, but I do not know how to use this model on my own image.
I would appreciate an example that I can run directly.
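
Not an official answer, but a minimal inference sketch might look like the following, assuming 'model' is the trained FCN (e.g. restored via load_model with the appropriate custom_objects; see the "Unknown layer" issue below) and 'test.jpg' is a hypothetical input image:

import numpy as np
from keras.preprocessing.image import load_img, img_to_array

x = img_to_array(load_img('test.jpg', target_size=(500, 500)))
x = np.expand_dims(x, axis=0)           # add the batch axis: (1, 500, 500, 3)
probs = model.predict(x)                # (1, 500, 500, 21) class probabilities
labels = np.argmax(probs, axis=-1)[0]   # (500, 500) per-pixel class indices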

How to fit other models with voc_generator?

Hi,
I want to use your files (train.py & voc_generator.py) with my model, but the FCN model has num_output=21 (the number of classes), while my model's output is a Conv2D layer with filters=1. What should be changed to train my model?

Also, to run your code with Python 3 on Windows, these lines should be changed.

In train.py:
csv_logger = CSVLogger(
    'output{}_fcn_vgg16.csv'.format(datetime.datetime.now().strftime("%Y%m%d-%H%M%S")))

In voc_generator.py:
self.filenames = np.loadtxt(image_set, dtype=bytes, delimiter="\n").astype(str)

How do I test the picture

I have completed training the model and want to test it on a picture, so I use model.predict(x). I get output with shape (1, 500, 500, 21), but every element is NaN. How do I test it?

Model always predicts the dominant class

I did not configure the model at all; I simply ran

from keras_fcn import FCN
fcn_vgg19 = FCN_VGG19(input_shape=(500, 500, 3), classes=21,
                      weights='imagenet', trainable_encoder=True)
fcn_vgg19.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
fcn_vgg19.fit(X_train, y_train, batch_size=8, epochs=20)

on the BDD dataset of 20 classes.

input size: (batch_size, width, height, channels)
output size: (batch_size, width, height, n_classes)

Assuming the data is correct, is the model known to be bug-free?

ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (250, 250, 3)

Curious what I might be doing wrong in this initialization.
I am copying this from the README and get the following error when it runs:

ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (250, 250, 3)

It must be in how I am loading my images?

fcn_vgg16 = FCN(
    input_shape=(250, 250, 3),
    classes=3,
    weights=None,
    trainable_encoder=True
)

fcn_vgg16.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy']
)

fcn_vgg16.fit_generator(training_dataset(), verbose=2, steps_per_epoch=1500, max_queue_size=10, epochs=1)

training_dataset loads the input images as follows:

img_input = img_to_array(load_img(path_input))
img_target = img_to_array(load_img(path_target))
yield (img_input, img_target)

# (Pdb++) img_input.shape
# (250, 250, 3)
# (Pdb++) img_target.shape
# (250, 250, 3)

where img_to_array and load_img are imported from

from keras.preprocessing.image import (
    load_img,
    img_to_array,
    array_to_img
)

I think I'm not passing the batch_size properly, which means I'm misunderstanding Keras's fit_generator(...) requirements.
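
One plausible fix, under the assumption that fit_generator expects batched arrays: yield samples with a leading batch axis (here a batch of one):

import numpy as np

def training_dataset():
    while True:  # Keras generators are expected to loop forever
        img_input = img_to_array(load_img(path_input))
        img_target = img_to_array(load_img(path_target))
        # expand_dims adds the missing batch axis: (250, 250, 3) -> (1, 250, 250, 3)
        yield (np.expand_dims(img_input, axis=0),
               np.expand_dims(img_target, axis=0))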

Classification or Segmentation

Hello,

Are the FCN models you have offered classification models or segmentation models?
If they are segmentation models, would you please tell me how I can fine-tune them from a pre-trained model?

the predicted result is nan

Hi,

I ran your program and found that the parameters in some layers are NaN. When I ran the program with MSE instead, it seemed fine. Is there something wrong with the cross-entropy?

weight converter

Hello,
I found a converter that converts Caffe weights to TensorFlow. Can these weights be used in Keras?

New ResizeLike Layer

Similar to #10, a ResizeLike layer should take two tensors as inputs, [origin, target], and resize (using, for example, bilinear interpolation) the origin tensor so that it matches the shape of target.
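
In the same spirit as the CropLike sketch above, a minimal ResizeLike could be assembled around TensorFlow's bilinear resize (assuming a TF 1.x backend and channels-last tensors; illustrative only, not the repository's implementation):

import tensorflow as tf
import keras.backend as K
from keras.engine.topology import Layer

class ResizeLike(Layer):
    """Bilinearly resize inputs[0] to the spatial shape of inputs[1]."""

    def compute_output_shape(self, input_shapes):
        origin_shape, target_shape = input_shapes
        return (origin_shape[0], target_shape[1], target_shape[2], origin_shape[3])

    def call(self, inputs):
        origin, target = inputs
        target_shape = K.int_shape(target)
        # tf.image.resize_images defaults to bilinear interpolation in TF 1.x
        return tf.image.resize_images(origin, (target_shape[1], target_shape[2]))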

Some typos worth mentioning

Hi JihongJu,

Thanks for developing a wrapper for FCN models under Keras. My teammates and I find this repo really helpful to play with.

Nonetheless, below are some issues that we've encountered. We have developed manual workarounds, but to save others' time in debugging (and modifying) the source code, I would like to raise them here.

  1. The FCN with VGG19 example in README.md is not working. That is because the FCN object refers only to FCN_VGG16, and FCN_VGG19 is not exported in the __init__.py file. One workaround is to modify __init__.py so that it looks like the following:
"""fcn init."""

from .models import (
    FCN,
    FCN_VGG16,
    FCN_VGG19
)
  • Plus, in models.py the docstring for FCN_VGG19 is wrong. Currently it reads as follows:
def FCN_VGG19(input_shape, classes, weight_decay=0,
              trainable_encoder=True, weights='imagenet'):
    """Fully Convolutional Networks for semantic segmentation with VGG16.

But it is indeed for VGG19.

  2. In order to load the pre-trained weights, the package automatically downloads them if they're not found under the .keras/models folder. This is implemented in encoders.py, but the following line is wrong; it should look for '{}_weights_tf_dim_ordering_tf_kernels_notop.h5' instead:
# load pre-trained weights
if weights is not None:
    weights_path = get_file(
        '{}_weights_tf_dim_ordering_tf_kernels.h5'.format(name),
        weights,
        cache_subdir='models')

Please review. Thanks!

ResourceExhaustedError while running the program on VGG16

Caused by op u'block5_fc6/truncated_normal/TruncatedNormal', defined at:
File "/home/robotics/keras-fcn/FCN-16.py", line 2, in
fcn_vgg16 = FCN(input_shape=(500, 500, 3), classes=21, weights='imagenet', trainable_encoder=True)
File "/home/robotics/keras-fcn/keras_fcn/models.py", line 29, in FCN
return FCN_VGG16(*args, **kwargs)
File "/home/robotics/keras-fcn/keras_fcn/models.py", line 54, in FCN_VGG16
weights=weights, trainable=trainable_encoder)
File "/home/robotics/keras-fcn/keras_fcn/encoders.py", line 150, in init
trainable=trainable)
File "/home/robotics/keras-fcn/keras_fcn/encoders.py", line 124, in init
weights=weights, trainable=trainable)
File "/home/robotics/keras-fcn/keras_fcn/encoders.py", line 61, in init
y = fc_block(x)
File "/home/robotics/keras-fcn/keras_fcn/blocks.py", line 64, in f
name='{}_fc6'.format(block_name))(x)
File "/home/robotics/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 575, in call
self.build(input_shapes[0])
File "/home/robotics/anaconda2/lib/python2.7/site-packages/keras/layers/convolutional.py", line 134, in build
constraint=self.kernel_constraint)
File "/home/robotics/anaconda2/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "/home/robotics/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 396, in add_weight
weight = K.variable(initializer(shape),
File "/home/robotics/anaconda2/lib/python2.7/site-packages/keras/initializers.py", line 208, in call
dtype=dtype, seed=self.seed)
File "/home/robotics/anaconda2/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 3590, in truncated_normal
return tf.truncated_normal(shape, mean, stddev, dtype=dtype, seed=seed)
File "/home/robotics/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/random_ops.py", line 172, in truncated_normal
shape_tensor, dtype, seed=seed1, seed2=seed2)
File "/home/robotics/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/gen_random_ops.py", line 316, in _truncated_normal
seed=seed, seed2=seed2, name=name)
File "/home/robotics/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/home/robotics/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/robotics/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1204, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[7,7,512,4096]
[[Node: block5_fc6/truncated_normal/TruncatedNormal = TruncatedNormal[T=DT_INT32, dtype=DT_FLOAT, seed=87654321, seed2=8705206, _device="/job:localhost/replica:0/task:0/gpu:0"]]
Here are the errors I get when I run the program on VGG16.
My GPU is:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.645
pciBusID 0000:01:00.0
Total memory: 10.90GiB
Free memory: 341.00MiB
I think the problem is with the GPU memory, but I don't know how to resolve it. BTW, I am a beginner with this method.
I would appreciate it if you could give me some advice. Thanks~
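
For what it's worth, "Free memory: 341.00MiB" out of 10.90GiB suggests another process is already holding most of the GPU, so freeing that may be the real fix. If the memory is genuinely available, one common mitigation under a TF 1.x backend (an assumption about this setup) is to let TensorFlow grow its allocation on demand instead of grabbing it all up front:

import tensorflow as tf
import keras.backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory lazily
K.set_session(tf.Session(config=config))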

number of output classes

num_output=21 is the number of classes, i.e. "20 object classes + 1 background".
But VOC 2011 does not contain any background-only sample images.
Should num_output then be 20?

Thanks.

StopIteration

Hi, thanks for your code!
I have a problem. Can you help me?

Loading weights...
Epoch 1/100
Traceback (most recent call last):
File "/home/ilab/biyoner/sementic_seg/keras-fcn/voc2011/train.py", line 88, in
callbacks=[early_stopper, csv_logger, checkpointer, nan_terminator])
File "/home/ilab/biyoner/keras/local/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "/home/ilab/biyoner/keras/local/lib/python2.7/site-packages/keras/engine/training.py", line 2115, in fit_generator
generator_output = next(output_generator)
File "/home/ilab/biyoner/keras/local/lib/python2.7/site-packages/keras/utils/data_utils.py", line 557, in get
six.raise_from(StopIteration(e), e)
File "/home/ilab/biyoner/keras/local/lib/python2.7/site-packages/six.py", line 737, in raise_from
raise value
StopIteration

Process finished with exit code 1

Thanks.

ValueError: Unknown layer: BilinearUpSampling2D

Dear @JihongJu, thank you very much for sharing your work. train.py runs smoothly. After that, I want to run infer.py, but the following lines raise the error "ValueError: Unknown layer: BilinearUpSampling2D":

model = load_model('output/fcn_vgg16_weights.h5',
                   custom_objects={'CroppingLike2D': CroppingLike2D,
                                   # 'mean_categorical_crossentropy': mean_categorical_crossentropy})
                                   'flatten_categorical_crossentropy': flatten_categorical_crossentropy(classes=21)})

I have tried to find a solution but can't make it work. Could you please suggest a way to solve this error?
One more thing: do I need to write a new Python file to call the methods in score.py, or something else?

Thanks in advance.
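
A plausible fix, keeping the loss entry from the snippet above and assuming BilinearUpSampling2D is importable from keras_fcn.layers (adjust the module path to wherever the layer actually lives): register the missing layer in custom_objects alongside the others.

from keras.models import load_model
from keras_fcn.layers import BilinearUpSampling2D, CroppingLike2D

# every custom layer/loss used by the saved model must appear in custom_objects
model = load_model('output/fcn_vgg16_weights.h5',
                   custom_objects={'CroppingLike2D': CroppingLike2D,
                                   'BilinearUpSampling2D': BilinearUpSampling2D,
                                   'flatten_categorical_crossentropy':
                                       flatten_categorical_crossentropy(classes=21)})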
