karolzak / keras-unet

Helper package with multiple U-Net implementations in Keras, plus utility tools useful for image semantic segmentation tasks. This library and the underlying tools come from multiple projects I worked on involving semantic segmentation.

License: MIT License

Languages: Python 100.00%

Topics: unet unet-image-segmentation unet-keras keras keras-tensorflow segmentation semantic-segmentation u-net deep-learning deeplearning

keras-unet's People

Contributors: coreynoone, emkarr, gkaissis, karolzak, moritzknolle, muminoff

keras-unet's Issues

Is development still occurring?

I think this package is great! Is it still under active development? If yes, my team and I would like to contribute, as we have recently started using it for our work.

Floating input sizes Implementation

Hello,

Is it possible to somehow implement floating input sizes in the custom Keras U-Net (custom_vnet.py)?
For instance, I have images of different shapes: 1024x1024x3, 3072x3072x3, 512x512x3. If I use a fixed input, I have to severely downsample the bigger images, losing quality and important features.

As far as I know, for floating image sizes it's recommended to use (None, None, channels) as an input shape, but I doubt it's enough...
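A common approach, sketched below under the assumption that the model is fully convolutional with "same" padding: build it with (None, None, channels) and make sure every image fed in has height and width divisible by 2**num_layers so the skip connections line up, padding to the nearest multiple instead of downsampling.

from keras_unet.models import custom_unet

# sketch, not from the package docs: variable-size inputs
model = custom_unet(
    input_shape=(None, None, 3),  # floating height/width, fixed channel count
    num_layers=4,                 # each input must be divisible by 2**4 = 16
)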

Trying to use the satellite U-Net for 3-channel RGB images

I am using 3-channel RGB images and 1-channel mask images for training. How can I use them to train the satellite U-Net? In the notebooks the image is grayscale and the mask is also single-channel. Should I make the mask 3-channel as well? Please suggest a solution.

IoU smooth parameters incompatible with input being np.uint8

Thanks again for this nice package.

JFYI, I've noticed an issue when using metrics.iou with np.uint8 input and a float smooth parameter (the default):

import numpy as np
from keras_unet.metrics import iou

tshape = (30, 50)

input = np.zeros(tshape, dtype=np.uint8)

iou(input, input, smooth=1.).numpy()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
   6841   message = e.message + (" name: " + name if name is not None else "")
   6842   # pylint: disable=protected-access
-> 6843   six.raise_from(core._status_to_exception(e.code, message), None)
   6844   # pylint: enable=protected-access
   6845 

/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a uint8 tensor but is a float tensor [Op:AddV2]

To sum up:

  • Input np.uint8 with a float smooth raises an error
  • Input np.uint8 with an np.uint8 smooth works
  • Input np.float32 works with either a float or np.uint8 smooth
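Until the metric casts internally, a simple workaround (a sketch, not an official fix) is to cast the inputs to a float dtype before calling iou:

import numpy as np
from keras_unet.metrics import iou

masks = np.zeros((30, 50), dtype=np.uint8)
# float32 inputs are compatible with the float smooth term
score = iou(masks.astype(np.float32), masks.astype(np.float32), smooth=1.).numpy()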

utils.plot_imgs returns an empty table

Hello,

I'm trying to plot templates and masks using utils.plot_imgs function. Here's my code:

import cv2
import numpy as np
from keras_unet.utils import plot_imgs

template = cv2.imread(r"E:\Templates\741499_1.jpg")
true_mask = cv2.imread(r"E:\Masks\741499_1.jpg")
pred_mask = cv2.imread(r"E:\Predictions\741499_1.jpg")
assert isinstance(template, np.ndarray) == isinstance(true_mask, np.ndarray) == isinstance(pred_mask, np.ndarray)
plot_imgs(org_imgs=template, mask_imgs=true_mask, pred_imgs=pred_mask, nm_img_to_plot=1)

Unfortunately, I get an empty table as a result (screenshot attached).

Am I missing something? My images are 512x512 RGB.
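One likely cause, hedged: plot_imgs expects batches of images, i.e. arrays shaped (n, height, width, channels), so a single (512, 512, 3) image gets indexed along the wrong axis. A minimal sketch of the fix, assuming that batch convention:

import numpy as np

plot_imgs(
    org_imgs=np.expand_dims(template, 0),   # (1, 512, 512, 3)
    mask_imgs=np.expand_dims(true_mask, 0),
    pred_imgs=np.expand_dims(pred_mask, 0),
    nm_img_to_plot=1,
)
# cv2.imread also returns BGR; cv2.cvtColor(img, cv2.COLOR_BGR2RGB) may be
# needed for the colors to display correctly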

satellite_unet custom_objects: '<' not supported between instances of 'function' and 'str'

I'm trying to continue training the "satellite_unet" model by loading weights, but I get this issue when using custom_objects.

Here is the code:

model = satellite_unet(
    input_shape,
)
callback_checkpoint = ModelCheckpoint(
    os.path.join(checkpoint_path, 'weights.{epoch:02d}.{val_loss:.4f}.hdf5'),
    verbose=1,
    monitor='val_loss',
    save_best_only=True
)
model.compile(
    optimizer=SGD(lr=0.01, momentum=0.99),
    loss='binary_crossentropy',
    metrics=[iou, iou_thresholded]
)

####### if checkpoints exist get last_epoch ....

model = load_model(os.path.join(checkpoint_path, checkpoint_path_file), custom_objects={
    "iou": iou,
    "iou_thresholded": iou_thresholded,
})
history = model.fit(
    train_gen,
    steps_per_epoch=len(x_train),
    epochs=EPOCHS,
    validation_data=(x_val, y_val),
    callbacks=[callback_checkpoint],
    initial_epoch=last_epoch
)
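A workaround worth trying (a sketch, not a confirmed fix for this exact error): load the checkpoint with compile=False so Keras skips deserializing the saved training configuration, then compile again with the metric functions passed directly:

from tensorflow.keras.models import load_model

model = load_model(
    os.path.join(checkpoint_path, checkpoint_path_file),
    compile=False,  # skip restoring the saved optimizer/metric configuration
)
model.compile(
    optimizer=SGD(lr=0.01, momentum=0.99),
    loss='binary_crossentropy',
    metrics=[iou, iou_thresholded],
)
# training can then resume with model.fit(..., initial_epoch=last_epoch)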

Index out of bounds for axis 0?

@karolzak
Thank you very much for your awesome repo. I am using your code for getting patches and reconstructing from patches, but it produces the following error.
When I use plot_patches:

runfile('E:/PythonSampleCode/SliceMerge/slicemerge1.py', wdir='E:/PythonSampleCode/SliceMerge')
Reloaded modules: cv2, cv2.cv2, cv2.data
Image Shape: (3648, 5472, 3)  ### original image
Image_crop shape: (169, 512, 512, 3)
Traceback (most recent call last):

  File "<ipython-input-31-b244771ea495>", line 1, in <module>
    runfile('E:/PythonSampleCode/SliceMerge/slicemerge1.py', wdir='E:/PythonSampleCode/SliceMerge')

  File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 786, in runfile
    execfile(filename, namespace)

  File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "E:/PythonSampleCode/SliceMerge/slicemerge1.py", line 195, in <module>
    org_img_size=(x.shape[0], x.shape[1]), stride=256, size=256) # required - original size of the image

  File "E:/PythonSampleCode/SliceMerge/slicemerge1.py", line 110, in plot_patches
    axes[i, j].imshow(img_arr[jj])

IndexError: index 169 is out of bounds for axis 0 with size 169

when I use reconstruct_from_patches:

    stride=256) # use only if stride is different from patch size

  File "E:/PythonSampleCode/SliceMerge/slicemerge1.py", line 166, in reconstruct_from_patches
    ] = img_arr[kk, :, :, layer]

IndexError: index 169 is out of bounds for axis 0 with size 169

The crop size is 512 and stride is 256

Is there any suggestion?
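A quick sanity check, assuming the patches come from a plain sliding window: the number of patches handed to plot_patches/reconstruct_from_patches has to match the grid implied by org_img_size, size and stride. The traceback shows size=256 passed to plot_patches while the crops are 512x512, so the two grids may disagree; a hypothetical check:

# expected patch grid for a sliding window (hypothetical helper, not library code)
size, stride = 512, 256
h, w = 3648, 5472
n_h = (h - size) // stride + 1  # patches along the height
n_w = (w - size) // stride + 1  # patches along the width
print(n_h * n_w)                # must equal img_arr.shape[0] (here 169)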

Error during loading a model

There is a model that was created in a Jupyter notebook and saved as 20200528-model-3.h5. A new, separate notebook is used to load and test the model, but model loading fails.

Here is the code:

import tensorflow as tf
model_filename = '20200528-model-3.h5'
model = tf.keras.models.load_model(model_filename)

And error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-20-11211a1508b6> in <module>
      1 model_filename = '20200528-model-3.h5'
----> 2 model = tf.keras.models.load_model(model_filename)

~/unetnew/env/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py in load_model(filepath, custom_objects, compile)
    144   if (h5py is not None and (
    145       isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))):
--> 146     return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
    147 
    148   if isinstance(filepath, six.string_types):

~/unetnew/env/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py in load_model_from_hdf5(filepath, custom_objects, compile)
    182       # Compile model.
    183       model.compile(**saving_utils.compile_args_from_training_config(
--> 184           training_config, custom_objects))
    185 
    186       # Set optimizer weights.

~/unetnew/env/lib/python3.7/site-packages/tensorflow_core/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

~/unetnew/env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, distribute, **kwargs)
    427     with K.get_graph().as_default():
    428       # Save all metric attributes per output of the model.
--> 429       self._cache_output_metric_attributes(metrics, weighted_metrics)
    430 
    431       # Set metric attributes on model.

~/unetnew/env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _cache_output_metric_attributes(self, metrics, weighted_metrics)
   1840         output_shapes.append(output.shape.as_list())
   1841     self._per_output_metrics = training_utils.collect_per_output_metric_info(
-> 1842         metrics, self.output_names, output_shapes, self.loss_functions)
   1843     self._per_output_weighted_metrics = (
   1844         training_utils.collect_per_output_metric_info(

~/unetnew/env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in collect_per_output_metric_info(metrics, output_names, output_shapes, loss_fns, is_weighted)
    880       metric_name = get_metric_name(metric, is_weighted)
    881       metric_fn = get_metric_function(
--> 882           metric, output_shape=output_shapes[i], loss_fn=loss_fns[i])
    883 
    884       # If the metric function is not stateful, we create a stateful version.

~/unetnew/env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in get_metric_function(metric, output_shape, loss_fn)
   1105   """
   1106   if metric not in ['accuracy', 'acc', 'crossentropy', 'ce']:
-> 1107     return metrics_module.get(metric)
   1108 
   1109   is_sparse_categorical_crossentropy = (

~/unetnew/env/lib/python3.7/site-packages/tensorflow_core/python/keras/metrics.py in get(identifier)
   3062     return deserialize(identifier)
   3063   elif isinstance(identifier, six.string_types):
-> 3064     return deserialize(str(identifier))
   3065   elif callable(identifier):
   3066     return identifier

~/unetnew/env/lib/python3.7/site-packages/tensorflow_core/python/keras/metrics.py in deserialize(config, custom_objects)
   3054       module_objects=globals(),
   3055       custom_objects=custom_objects,
-> 3056       printable_module_name='metric function')
   3057 
   3058 

~/unetnew/env/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    320       obj = module_objects.get(object_name)
    321       if obj is None:
--> 322         raise ValueError('Unknown ' + printable_module_name + ':' + object_name)
    323     # Classes passed by name are instantiated with no args, functions are
    324     # returned as-is.

ValueError: Unknown metric function:iou

This version loads the model, but the prediction output consists of zero values:

from keras_unet.models import custom_unet
model_filename = '20200528-model-3.h5'
model = custom_unet(
    input_shape,
    filters=64,
    use_batch_norm=True,
    dropout=0.01,
    dropout_change_per_layer=0.0,
    num_layers=4
)
model.load_weights(model_filename)
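The "Unknown metric function:iou" error comes from the model having been compiled with the package's custom iou metric; load_model has to be told where that function lives. A sketch of the usual fix:

import tensorflow as tf
from keras_unet.metrics import iou, iou_thresholded

model = tf.keras.models.load_model(
    '20200528-model-3.h5',
    custom_objects={'iou': iou, 'iou_thresholded': iou_thresholded},
)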

Inputs don't match Outputs

I've been working on this and just can't get it to work: it can't match inputs to outputs.

Input data: two folders of images (400 x 400 pixels) in a master folder called TrainingData

import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

from keras.preprocessing.image import ImageDataGenerator
from keras import models
from keras_unet.models import custom_unet

model = custom_unet(
    input_shape=(400, 400, 3),
    use_batch_norm=False,
    num_classes=2,
    filters=64,
    dropout=0.2,
    output_activation='sigmoid')


from keras.utils import multi_gpu_model
#parallel_model = multi_gpu_model(model, gpus=4, cpu_merge=True, cpu_relocation=True)
parallel_model = multi_gpu_model(model, gpus=4)


train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   validation_split=0.2)

train_generator = train_datagen.flow_from_directory(
    '<path>/TrainingData',
    target_size=(400, 400),
    batch_size=32,
    class_mode='binary',
    shuffle=True,
    subset='training')

validation_generator = train_datagen.flow_from_directory(
    '<path>/TrainingData',
    target_size=(400, 400),
    batch_size=32,
    class_mode='binary',
    shuffle=True,
    subset='validation')
    
from keras.callbacks import ModelCheckpoint
filepath="<newpath>/Unetweights.best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]

from collections import Counter
counter = Counter(train_generator.classes)
max_val = float(max(counter.values()))
class_weights = {class_id : max_val/num_images for class_id, num_images in counter.items()}

parallel_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])

history = parallel_model.fit_generator(train_generator,
                                       steps_per_epoch=train_generator.samples // 32,
                                       validation_data=validation_generator,
                                       validation_steps=validation_generator.samples // 32,
                                       epochs=75,
                                       class_weight=class_weights,
                                       callbacks=callbacks_list)

I get the following error:

Epoch 1/75
Traceback (most recent call last):
File "", line 6, in
File "/home/bly/anaconda3/envs/tf_gpu/lib/python3.7/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/bly/anaconda3/envs/tf_gpu/lib/python3.7/site-packages/keras/engine/training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "/home/bly/anaconda3/envs/tf_gpu/lib/python3.7/site-packages/keras/engine/training_generator.py", line 217, in fit_generator
class_weight=class_weight)
File "/home/bly/anaconda3/envs/tf_gpu/lib/python3.7/site-packages/keras/engine/training.py", line 1211, in train_on_batch
class_weight=class_weight)
File "/home/bly/anaconda3/envs/tf_gpu/lib/python3.7/site-packages/keras/engine/training.py", line 789, in _standardize_user_data
exception_prefix='target')
File "/home/bly/anaconda3/envs/tf_gpu/lib/python3.7/site-packages/keras/engine/training_utils.py", line 128, in standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking target: expected conv2d_38 to have 4 dimensions, but got array with shape (32, 1)

I honestly don't know what I am doing wrong.
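The shape in the error is the giveaway: flow_from_directory with class_mode='binary' yields one label per image, shaped (32, 1), while a U-Net expects a full mask per image. A minimal sketch of the usual segmentation setup, assuming images and masks sit in parallel folders (the folder names below are hypothetical):

image_datagen = ImageDataGenerator(rescale=1./255)
mask_datagen = ImageDataGenerator(rescale=1./255)

seed = 1  # the same seed keeps image and mask batches in sync
image_gen = image_datagen.flow_from_directory(
    '<path>/TrainingData/images', target_size=(400, 400),
    class_mode=None, batch_size=32, seed=seed)
mask_gen = mask_datagen.flow_from_directory(
    '<path>/TrainingData/masks', target_size=(400, 400),
    color_mode='grayscale', class_mode=None, batch_size=32, seed=seed)

train_generator = zip(image_gen, mask_gen)  # yields (image_batch, mask_batch) pairs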

Training on Jetson NX

Hi,

I'm relatively new to training NNs, so I'm wondering if I'm just underestimating the size of U-Nets, or overestimating the abilities of a Jetson NX, for training my dataset. It has 8 GB of RAM; the jtop program from jetson_stats reports 3 GB for the CPU and 3.2 GB for the GPU, though, and I don't see it use up more memory.

I've modified the kz-izbi-challenge notebook to train on images of size 256x256 and decreased the batch size to batch_size=1 to try to help it out.

But it seems to run out of memory after training for one epoch, regardless of steps_per_epoch:

ResourceExhaustedError:  OOM when allocating tensor with shape[32,128,256,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[node functional_1/concatenate_3/concat (defined at <ipython-input-24-2b12f42da9e0>:7) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 [Op:__inference_test_function_3313]

Any idea if it's feasible to train a unet on a Jetson NX?
Thanks

ValueError: logits and labels must have the same shape

I receive the above error when attempting to train the vanilla_unet. The pipeline works for a U-Net I wrote that uses "same" padding, but I was hoping to try the vanilla_unet for its "valid" padding. I'm not sure what else in my code could have caused this error, and I would have expected these conditions to be handled inside the vanilla_unet. Any recommendations?

initial stack trace below:

Traceback (most recent call last):
File "C:\Users\dpoiesz\Repos\bespin\venv\bespin\lib\site-packages\tensorflow_core\python\framework\tensor_shape.py", line 926, in merge_with
new_dims.append(dim.merge_with(other[i]))
File "C:\Users\dpoiesz\Repos\bespin\venv\bespin\lib\site-packages\tensorflow_core\python\framework\tensor_shape.py", line 309, in merge_with
self.assert_is_compatible_with(other)
File "C:\Users\dpoiesz\Repos\bespin\venv\bespin\lib\site-packages\tensorflow_core\python\framework\tensor_shape.py", line 276, in assert_is_compatible_with
(self, other))
ValueError: Dimensions 256 and 68 are not compatible

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\dpoiesz\Repos\bespin\venv\bespin\lib\site-packages\tensorflow_core\python\ops\nn_impl.py", line 167, in sigmoid_cross_entropy_with_logits
labels.get_shape().merge_with(logits.get_shape())
File "C:\Users\dpoiesz\Repos\bespin\venv\bespin\lib\site-packages\tensorflow_core\python\framework\tensor_shape.py", line 929, in merge_with
raise ValueError("Shapes %s and %s are not compatible" % (self, other))
ValueError: Shapes (None, 256, 256, 1) and (None, 68, 68, 1) are not compatible
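This looks like expected behavior of the original architecture rather than a bug: with "valid" padding every 3x3 convolution trims 2 pixels, so a 256x256 input leaves the network as a 68x68 map, and the targets must be centre-cropped to match. A sketch, assuming NumPy mask arrays:

# hypothetical helper: centre-crop masks to the model's output size
def center_crop(masks, out_h, out_w):
    h, w = masks.shape[1:3]
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return masks[:, top:top + out_h, left:left + out_w, :]

out_h, out_w = model.output_shape[1:3]  # e.g. (68, 68) for 256x256 inputs
y_train_cropped = center_crop(y_train, out_h, out_w)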

Loading the model weights

I have trained my model and saved the weights in a .h5 file, model_v1.h5. Now I am trying to load them and generate feature maps, so I just want to know whether my approach is OK.
I used:

from keras_unet.models import satellite_unet

input_shape = x_train[0].shape

model = satellite_unet(
    input_shape,
    #use_batch_norm=False,
    num_classes=1,
    #filters=64,
    #dropout=0.2,
    output_activation='sigmoid',
    num_layers=4
)


from keras.callbacks import ModelCheckpoint


model_filename = 'model_v1.h5'
callback_checkpoint = ModelCheckpoint(
    model_filename, 
    verbose=1, 
    monitor='val_loss', 
    save_best_only=True,
)


from keras.optimizers import Adam, SGD
from keras_unet.metrics import iou, iou_thresholded
from keras_unet.losses import jaccard_distance
from keras import metrics

model.compile(
    #optimizer=Adam(), 
    optimizer=SGD(lr=0.01, momentum=0.99),
    loss='binary_crossentropy',
    #loss=jaccard_distance,
    metrics=[iou, iou_thresholded,metrics.binary_accuracy]
)

model.load_weights('model_v1.h5')

OR

should I load the weights directly after declaring the model, without creating checkpoints or compiling it:

from keras_unet.models import satellite_unet

input_shape = x_train[0].shape

model = satellite_unet(
    input_shape,
    #use_batch_norm=False,
    num_classes=1,
    #filters=64,
    #dropout=0.2,
    output_activation='sigmoid',
    num_layers=4
)

model.load_weights('model_v1.h5')



Please advise.

Multiclass segmentation with different labels and satellite unet

I have a problem where some of my images contain 2 classes and some contain 3: the same 2 classes appear in all images, but some have one additional class. For multiclass segmentation I have to shape the masks as (width, height, classes), which in this case causes an inconsistency, since all samples should have the same dimensions. Kindly guide.

Further, I have to apply multiclass segmentation with the satellite U-Net specifically, not the custom U-Net, even though the docs state that it is the custom U-Net that supports multiclass segmentation. Kindly guide.

Image Patching -> Image Reconstructing Workflow

Hello,

I'm trying to implement the image patching -> image reconstruction technique. There are two helpful functions in the repository: get_patches and reconstruct_from_patches. On the whole the algorithm is quite clear:

  1. get a bigger image,
  2. crop it to desired patches
  3. feed patches to Unet as inputs
  4. reconstruct predictions to a bigger mask

There are some aspects I'd like to clarify, though. How am I supposed to create batches from source images of various sizes?

For instance, my batch size is 8 and my input size is 512x512. I take the first two images from the set: one image is 1024x1024 and the other is 2048x2048. After cropping I'll get 4 patches from the first image and 16 patches from the second. Thus, the first batch will be 4 patches from image one and 4 patches from image two. Clearly, Keras won't reconstruct the second image from just 4 pieces.

Does it mean that I should have masks for patches, not for images?

Multiclass segmentation

Hello @karolzak ,

Thanks for this great repo. In the Customizable U-Net it seems multiclass segmentation can be done. But what is the proper dataset format? With one-hot encoding, one ground-truth mask image per class is needed for each example. Another way is to assign each pixel its class id (1, 2, 3, ...). But you use normalization to force label values between 0 and 1.

Could you provide some insight about this, please?
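The usual channels-last convention, shown as a sketch: store each mask as an integer class map and one-hot encode it, so the targets end up shaped (n, height, width, num_classes), and skip the 0-1 normalization for the labels (the one-hot values are already 0/1):

import numpy as np
from tensorflow.keras.utils import to_categorical

num_classes = 4
# masks_int: integer class maps shaped (n, h, w), values in {0..3} (hypothetical array)
masks_onehot = to_categorical(masks_int, num_classes=num_classes)  # (n, h, w, 4)

# then build the model with num_classes=num_classes, output_activation='softmax'
# and train with loss='categorical_crossentropy'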

Input 0 of layer "sequential_1" is incompatible with the layer: expected shape=(None, 1, 3), found shape=(None, 3)

Hello @karolzak @Anne-Andresen , Can you please help me with the below problem.

I am currently trying to train a basic neural network with 3 different values from a dataset and predict another value. (Regression).

But I receive an input 0 incompatibility error. Kindly help.

Input NumPy array:
features.shape  # (1700, 3)
labels.shape    # (1700,)

Neural Network

nn_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu',input_shape = (1,3)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1)
])

nn_model.summary()

Summary:

 dense_39 (Dense)            (None, 1, 64)             256       
                                                                 
 dense_40 (Dense)            (None, 1, 64)             4160      
                                                                 
 dense_41 (Dense)            (None, 1, 1)              65        

=================================================================
Total params: 4,481
Trainable params: 4,481
Non-trainable params: 0


Error:

history = nn_model.fit(
    features,
    labels,
    epochs=100,
    verbose=1,
)

ValueError: Input 0 of layer "sequential_1" is incompatible with the layer: expected shape=(None, 1, 3), found shape=(None, 3)
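The features are rank-2, shaped (1700, 3), but input_shape=(1, 3) tells Keras to expect each sample as a 1x3 matrix. For plain tabular regression the input shape should be (3,), as in this sketch:

import tensorflow as tf

nn_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(3,)),  # one 3-feature vector per sample
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1)  # single regression output
])
nn_model.compile(optimizer='adam', loss='mse')
# nn_model.fit(features, labels, epochs=100)  # features (1700, 3), labels (1700,)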

Issue with running on multiple GPUs

Copy-pasting comments from #12


@muminoff:
I cannot run the custom U-Net with multi-GPU. I followed the distributed training part of the TensorFlow documentation, but no luck. It seems I need to refactor the code and use custom distributed training (namely strategy.experimental_distribute_dataset).


@karolzak:
Can you share the code you used, TF/Keras version and error msg? That way I might be able to help you out or at least investigate it.


@muminoff:
I haven't tried tf.keras.utils.multi_gpu_model since it is deprecated. But, I tried with tf.distribute.MirroredStrategy().

And, here is my code:

from keras_unet.models import custom_unet
from keras.callbacks import ModelCheckpoint
from keras.optimizers import Adam, SGD
from keras_unet.metrics import iou, iou_thresholded
from keras_unet.losses import jaccard_distance

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():

    input_shape = x_train[0].shape

    model = custom_unet(
        input_shape,
        filters=32,
        use_batch_norm=True,
        dropout=0.3,
        dropout_change_per_layer=0.0,
        num_layers=6
    )

    model.summary()

    model_filename = 'model-v2.h5'

    callback_checkpoint = ModelCheckpoint(
        model_filename, 
        verbose=1, 
        monitor='val_loss', 
        save_best_only=True,
    )

    model.compile(
        optimizer=Adam(), 
        #optimizer=SGD(lr=0.01, momentum=0.99),
        loss='binary_crossentropy',
        #loss=jaccard_distance,
        metrics=[iou, iou_thresholded]
    )

    history = model.fit_generator(
        train_gen,
        steps_per_epoch=200,
        epochs=50,
        validation_data=(x_val, y_val),
        callbacks=[callback_checkpoint]
    )

Error:

ValueError: `handle` is not available outside the replica context or a `tf.distribute.Strategy.update()` call.

fyi, using multi_gpu_model raises following exception:

ValueError: ('Expected `model` argument to be a `Model` instance, got ', <keras.engine.training.Model object at 0x7f1b347372d0>)

@karolzak:
Can you specify the TF/Keras versions you're using? This seems to be related to that problem.


@muminoff:

tf.__version__
'2.1.0'

keras.__version__
'2.3.1'
(the same MirroredStrategy code as above, re-posted, ending with the same error)

ValueError: `handle` is not available outside the replica context or a `tf.distribute.Strategy.update()` call.
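One plausible cause, offered as a guess: the code mixes standalone keras objects (ModelCheckpoint, Adam) with tf.distribute, while MirroredStrategy only cooperates with tf.keras objects created inside its scope. A sketch of a tf.keras-only variant, assuming the installed keras-unet builds its models on tf.keras:

import tensorflow as tf
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.optimizers import Adam
from keras_unet.models import custom_unet
from keras_unet.metrics import iou, iou_thresholded

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = custom_unet(input_shape, filters=32, use_batch_norm=True, num_layers=6)
    model.compile(optimizer=Adam(), loss='binary_crossentropy',
                  metrics=[iou, iou_thresholded])

history = model.fit(train_gen, steps_per_epoch=200, epochs=50,
                    validation_data=(x_val, y_val),
                    callbacks=[ModelCheckpoint('model-v2.h5', verbose=1,
                                               monitor='val_loss', save_best_only=True)])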

V-Net not working

Hi,
I think there might be something wrong with the v-net implementation.
When calling custom_unet (on its own) I get something like:

<function keras_unet.models.custom_unet.custom_unet(input_shape,num_class ... ='sigmoid')>

When calling custom_vnet:

<module 'keras_unet.models.custom_vnet' ... custom_vnet.py'>

So the module rather than the function.

Best, Kirsten
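That symptom usually means the import picked up the module instead of the function inside it. A sketch of the explicit import, assuming the function is also named custom_vnet inside keras_unet/models/custom_vnet.py:

from keras_unet.models.custom_vnet import custom_vnet  # the function, not the module

model = custom_vnet(input_shape=(256, 256, 256, 1))  # hypothetical 3D input shape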

Modifying the network to accept 8-band satellite images

Does anyone know how to modify the satellite U-Net to accept 8-band image layers? I currently get dimension errors when using the network.

My sample patch size is:
model = satellite_unet(input_shape=(128, 128, 8))
The training images are (456, 128, 128, 8)
and the masks are (456, 128, 128, 3).

Where may I modify the convolutional layer to accept multiple bands?
Error: Input 0 is incompatible with layer model_3: expected shape=(None, 128, 128, 8), found shape=(32, 128, 128, 3)
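satellite_unet takes the channel count from input_shape, so no layer edits should be needed; the error says the data actually reaching the model has 3 channels, and a 3-channel mask is also suspicious for a single-class task. A sketch of the shape checks, with hypothetical variable names:

from keras_unet.models import satellite_unet

model = satellite_unet(input_shape=(128, 128, 8))
print(train_imgs.shape)   # expect (456, 128, 128, 8)
print(train_masks.shape)  # expect (456, 128, 128, 1) for a single-class mask
# if the masks are RGB renderings of classes, convert them to a single channel
# (or one binary channel per class) before training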

logits and labels must have the same first dimension

Using the following code:

import nltk
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
import string
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Download NLTK data
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')

# Load data
with open('data.txt', 'r', encoding='utf-8') as f:
    raw_data = f.read()

# Preprocess data
def preprocess(data):
    # Tokenize data
    tokens = nltk.word_tokenize(data)
    
    # Lowercase all words
    tokens = [word.lower() for word in tokens]
    
    # Remove stopwords and punctuation
    stop_words = set(stopwords.words('english'))
    tokens = [word for word in tokens if word not in stop_words and word not in string.punctuation]
    
    # Lemmatize words
    lemmatizer = WordNetLemmatizer()
    tokens = [lemmatizer.lemmatize(word) for word in tokens]
    
    return tokens

# Preprocess data
processed_data = [preprocess(qa) for qa in raw_data.split('\n')]

# Set parameters
vocab_size = len(processed_data)
embedding_dim = 64
max_length = 5
trunc_type='pre'
padding_type='pre'
oov_tok = "<OOV>"
training_size = len(processed_data)

# Create tokenizer
tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(processed_data)
word_index = tokenizer.word_index

# Create sequences
sequences = tokenizer.texts_to_sequences(processed_data)
padded_sequences = pad_sequences(sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)

# Create training data
training_data = padded_sequences[:training_size]
training_labels = padded_sequences[:training_size]

# Build model
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Conv1D(64, 5, activation='relu'),
    tf.keras.layers.MaxPooling1D(pool_size=2, strides=2, padding='same'),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(vocab_size, activation='softmax')
])

# Compile model
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Train model
num_epochs = 50
history = model.fit(training_data, training_labels, epochs=num_epochs, verbose=2)

# Define function to predict answer
def predict_answer(model, tokenizer, question):
    # Preprocess question
    question = preprocess(question)
    # Convert question to sequence
    sequence = tokenizer.texts_to_sequences([question])
    # Pad sequence
    padded_sequence = pad_sequences(sequence, maxlen=max_length, padding=padding_type, truncating=trunc_type)
    # Predict answer
    pred = model.predict(padded_sequence)[0]
    # Get index of highest probability
    idx = np.argmax(pred)
    # Get answer
    answer = tokenizer.index_word[idx]
    return answer


# Start chatbot
while True:
    question = input('You: ')
    answer = predict_answer(model, tokenizer, question)
    print('Chatbot:', answer)

I get the error

logits and labels must have the same first dimension, got logits shape [32,50] and labels shape [160]

The logits' first dimension equals the training-data size, but the labels' first dimension is the product of the training-data size and max_length. When I change max_length to 1, I get this error:

ValueError: One of the dimensions in the output is <= 0 due to downsampling in conv1d. Consider increasing the input size. Received input shape [None, 1, 64] which would produce output shape with a zero or negative value in a dimension.

When I change it to 0, I get this:

Negative dimension size caused by subtracting 5 from 0 for '{{node sequential/conv1d/Conv1D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](sequential/conv1d/Conv1D/ExpandDims, sequential/conv1d/Conv1D/ExpandDims_1)' with input shapes: [?,1,0,64], [1,5,64,64].

Is this an easy fix?

I'm using tensorflow 2.11.0 and keras 2.11.0
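One way to reconcile the shapes, sketched under the assumption that the goal is to predict the next token of each sequence: feed the first max_length - 1 tokens and use the final token as the single label per sample, so the labels are shaped (N,) to match the (N, vocab_size) logits; padding='same' in Conv1D avoids the negative-dimension error on short sequences:

# hypothetical reshaping of the training pairs
inputs = padded_sequences[:, :-1]  # (N, max_length - 1) context tokens
labels = padded_sequences[:, -1]   # (N,) one target token id per sample

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length - 1),
    tf.keras.layers.Conv1D(64, 5, padding='same', activation='relu'),  # 'same' keeps short sequences alive
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(vocab_size, activation='softmax')
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(inputs, labels, epochs=50, verbose=2)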

loss and iou are nan in satellite_unet

I'm trying to train the satellite_unet model but I get NaN for the loss. I tried your example with custom_unet and it worked well on satellite images, but with satellite_unet I get this.
Here is the important part of the code:
model = satellite_unet(
    input_shape,
)

callback_checkpoint = ModelCheckpoint(
    SAVED_MODEL,
    verbose=1,
    monitor='val_loss',
    save_best_only=True,
)

model.compile(
    optimizer=SGD(lr=0.01, momentum=0.99),
    loss='binary_crossentropy',
    # loss=jaccard_distance,
    metrics=[iou, iou_thresholded]
)

history = model.fit(
    train_gen,
    steps_per_epoch=200,
    epochs=5,
    validation_data=(x_val, y_val),
    callbacks=[callback_checkpoint]
)

Here is the result for 5 epochs

50/50 [==============================] - ETA: 0s - loss: nan - iou: nan - iou_thresholded: 3.8429
Epoch 00001: val_loss did not improve from inf
50/50 [==============================] - 251s 5s/step - loss: nan - iou: nan - iou_thresholded: 3.8429 - val_loss: nan - val_iou: nan - val_iou_thresholded: 1.6688e-07
Epoch 2/5
50/50 [==============================] - ETA: 0s - loss: nan - iou: nan - iou_thresholded: 1.3761e-07
Epoch 00002: val_loss did not improve from inf
50/50 [==============================] - 250s 5s/step - loss: nan - iou: nan - iou_thresholded: 1.3761e-07 - val_loss: nan - val_iou: nan - val_iou_thresholded: 1.6688e-07
Epoch 3/5
50/50 [==============================] - ETA: 0s - loss: nan - iou: nan - iou_thresholded: 1.4146e-07
Epoch 00003: val_loss did not improve from inf
50/50 [==============================] - 252s 5s/step - loss: nan - iou: nan - iou_thresholded: 1.4146e-07 - val_loss: nan - val_iou: nan - val_iou_thresholded: 1.6688e-07
Epoch 4/5
50/50 [==============================] - ETA: 0s - loss: nan - iou: nan - iou_thresholded: 1.3630e-07
Epoch 00004: val_loss did not improve from inf
50/50 [==============================] - 245s 5s/step - loss: nan - iou: nan - iou_thresholded: 1.3630e-07 - val_loss: nan - val_iou: nan - val_iou_thresholded: 1.6688e-07
Epoch 5/5
50/50 [==============================] - ETA: 0s - loss: nan - iou: nan - iou_thresholded: 1.3698e-07
Epoch 00005: val_loss did not improve from inf
50/50 [==============================] - 245s 5s/step - loss: nan - iou: nan - iou_thresholded: 1.3698e-07 - val_loss: nan - val_iou: nan - val_iou_thresholded: 1.6688e-07
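Two common causes worth ruling out, offered as guesses rather than a confirmed diagnosis: mask values outside [0, 1] (binary_crossentropy produces NaN for 0/255 masks) and the fairly aggressive SGD(lr=0.01, momentum=0.99) setting:

print(y_val.min(), y_val.max())  # masks should lie in [0, 1]
# if masks are stored as 0/255, rescale them (and the generator output) first:
# y_val = y_val / 255.0

model.compile(
    optimizer=SGD(lr=0.001, momentum=0.9),  # gentler settings to test stability
    loss='binary_crossentropy',
    metrics=[iou, iou_thresholded]
)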

Input 0 is incompatible with layer model

I'm trying the default training (satellite U-Net) and training went well, but when I test the model I get this error:
ValueError: Input 0 is incompatible with layer model: expected shape=(None, 384, 384, 3), found shape=(None, 348, 3)

Here is my test code

model_file = 'model_satellite.h5'
input_shape = (384, 384, 3)

model = satellite_unet(
    input_shape
)

model.load_weights(model_file)
image = np.array(Image.open("test.jpg").resize((348, 348)))
print("shape: ", image.shape)
# shape:  (348, 348, 3)
pr_mask = model.predict(image).round()
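Two fixes are needed, sketched below: resize to the size the model was built for (384, not 348) and add a batch dimension, since predict expects input shaped (n, height, width, channels):

import numpy as np
from PIL import Image

image = np.array(Image.open("test.jpg").resize((384, 384))) / 255.0  # scale assumed to match training
batch = np.expand_dims(image, axis=0)  # (1, 384, 384, 3)
pr_mask = model.predict(batch).round()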

Loss function return shape

There is a jaccard_distance loss function implemented in this package and I'm suspicious about it.

Supposing you feed a Keras model labels shaped (batch_size, dim_1, dim_2, class), the loss function should return either a (batch_size) array of per-sample losses or a scalar, the loss over the whole batch.

This jaccard_distance:

intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
sum_ = K.sum(K.abs(y_true) + K.abs(y_pred), axis=-1)
jac = (intersection + smooth) / (sum_ - intersection + smooth)

returns an array of matrices if (batch_size, dim_1, dim_2, class) labeling is used.

Proper mask shape for a multiclass task

Hello,

I'm trying to use custom_unet for a multiclass task. In my case all mask inputs have the following shape:
(BATCH_SIZE, NUMBER_OF_CLASSES, IMAGE_WIDTH, IMAGE_HEIGHT, 1).
Since I have 4 different colours on a black background, my images are 256x256 pixels, and I use a batch size of 4, I end up with the following shape:
(4, 4, 256, 256, 1).
Unfortunately, custom_unet doesn't like this shape giving me this error:

<...>
  File "C:\Users\E-soft\Anaconda3\envs\Explorium\lib\site-packages\keras_unet\losses.py", line 44, in jaccard_distance
    intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
  File "C:\Users\E-soft\Anaconda3\envs\Explorium\lib\site-packages\tensorflow_core\python\ops\math_ops.py", line 899, in binary_op_wrapper
    return func(x, y, name=name)
  File "C:\Users\E-soft\Anaconda3\envs\Explorium\lib\site-packages\tensorflow_core\python\ops\math_ops.py", line 1206, in _mul_dispatch
    return gen_math_ops.mul(x, y, name=name)
  File "C:\Users\E-soft\Anaconda3\envs\Explorium\lib\site-packages\tensorflow_core\python\ops\gen_math_ops.py", line 6698, in mul
    _six.raise_from(_core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [4,4,256,256] vs. [4,256,256,4] [Op:Mul] name: loss/conv2d_18_loss/mul/

It seems that I should reshape my mask input. Should it be
(BATCH_SIZE, IMAGE_WIDTH, IMAGE_HEIGHT, NUMBER_OF_CLASSES, 1)?
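Almost: Keras expects channels last and no trailing singleton axis, i.e. (BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, NUMBER_OF_CLASSES), which is exactly the right-hand shape [4, 256, 256, 4] in the error. A sketch of the conversion:

import numpy as np

# masks: (4, 4, 256, 256, 1) = (batch, classes, h, w, 1)
masks = np.squeeze(masks, axis=-1)  # (4, 4, 256, 256)
masks = np.moveaxis(masks, 1, -1)   # (4, 256, 256, 4), channels last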

Error with the utils file while using RGB images

I am using the utils.py functions to work with RGB images. My model works with RGB images and RGB masks and generates RGB masks (3-channel images). I am working with (256, 256, 3) images and masks. While running the code, I get the following error with the plot_imgs function. How should I modify this plot function to work with RGB images? Thanks in advance.

ValueError Traceback (most recent call last)
in
----> 1 plot_imgs(org_imgs=X_train, mask_imgs=y_train, pred_imgs=preds_train, nm_img_to_plot=1)

in plot_imgs(org_imgs, mask_imgs, pred_imgs, nm_img_to_plot, figsize, alpha, color)
145 mask_to_rgba(
146 zero_pad_mask(pred_imgs[im_id], desired_size=org_imgs_size),
--> 147 color=color,
148 ),
149 cmap=get_cmap(pred_imgs),

in mask_to_rgba(mask, color)
59 w = mask.shape[1]
60 zeros = np.zeros((h, w))
---> 61 ones = mask.reshape(h, w)
62 if color == "red":
63 return np.stack((ones, zeros, zeros, ones), axis=-1)

ValueError: cannot reshape array of size 196608 into shape (256,256)
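The reshape in mask_to_rgba shows it assumes single-channel masks (it reshapes h*w values into (h, w)), so a hedged workaround is to collapse the masks to one channel before plotting instead of patching the util:

# sketch, assuming the three mask channels are identical (binary masks saved as RGB)
y_train_1ch = y_train[..., :1]   # (n, 256, 256, 1)
preds_1ch = preds_train[..., :1]
plot_imgs(org_imgs=X_train, mask_imgs=y_train_1ch, pred_imgs=preds_1ch, nm_img_to_plot=1)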

Multiple Input

Hi, thanks for sharing the repo. It's very useful.
Is it possible to have a multiple-input architecture with U-Net? I have two similar images and would like to give these two images as input. The change in architecture is not the problem, that's easy. But which components of the code must I change? The loss function? The optimizer? And how do I set my labels if I have two inputs?

thanks :)
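The simplest variant needs no loss or optimizer changes, only a model with two inputs; the labels stay one mask per sample. A sketch with the Keras functional API (the layer sizes are placeholders, not the package's architecture):

from tensorflow.keras import layers, Model

in_a = layers.Input(shape=(256, 256, 3))
in_b = layers.Input(shape=(256, 256, 3))
x = layers.Concatenate(axis=-1)([in_a, in_b])  # stack the two images as 6 channels
x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)  # ... U-Net body omitted
out = layers.Conv2D(1, 1, activation='sigmoid')(x)
model = Model(inputs=[in_a, in_b], outputs=out)
# model.fit([imgs_a, imgs_b], masks, ...)  # one mask per image pair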

Is this multi-class segmentation possible?

Hi karolzak,

I've had some good progress training unet segmentation in the cloud with your library.
I can train binary segmentation fine now, at least.

Now I have RGB images, and png masks where np.unique(pixel_options) = [0 1 2 3 255]
I am interested in tracking class ids 1 and 3.

I've looked through your answers to others regarding multi-class segmentation, and it came down to using multiple binary segmentation networks, by running multiple Keras Sessions.

Otherwise, there was an option to +1 to num_classes and change the output shape to (n, w, h, num_classes). Then each class gets a full (w x h) binary mask of its own. (You are still using sigmoid and binary_crossentropy for this?)

From other blogs I've read, there's one-hot encoding techniques, where each class gets its own binary mask as output above, and then they use a softmax function and then argmax each pixel to get the winning class of each pixel.

Then there's what I think I'm interested in,
where images are (n, 256, 256, 1) (i.e. gray-scale)
and masks are also (n, 256, 256, 1) because the pixel options are just integers from 0 to 255.
That's what I want, as output, too. I'll read the prediction mask pixel values to get the class numbers.
I see Keras recommends 'sparse_categorical_crossentropy' as the loss function for this use case, and then it apparently doesn't matter if you use sigmoid or softmax.

But I'm just a bit stuck on whether this will work.

I'm thinking then, to take the resulting PNG with [0 1 2 3 255] values, and map each class to a colour.

Any advice on the 'integer' classes use case, as opposed to the 'one hot vector' style answers I've read in the other issues?

Thanks
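The integer-label route does work in Keras, sketched below: keep the masks as (n, h, w, 1) integer maps, give the model one output channel per class with softmax, and let sparse_categorical_crossentropy match the integer labels against the per-class scores; argmax then recovers a flat class map. Note the 255 value would need remapping to a small consecutive id first:

from keras_unet.models import custom_unet
import numpy as np

num_classes = 5  # ids 0,1,2,3 plus 255 remapped to 4 (assumption about these labels)
model = custom_unet(input_shape=(256, 256, 1),
                    num_classes=num_classes,
                    output_activation='softmax')
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# masks: integer maps (n, 256, 256, 1); predictions: (n, 256, 256, num_classes)
# pred_classes = np.argmax(model.predict(x), axis=-1)  # back to an integer class map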

Augmenting validation set shouldn't be mandatory

In utils.py, the validation set is augmented. AFAIK the validation set should be fixed and as close as possible to real-world data. In this case it is manually changed and won't be the same across two epochs.

Please let me know if I'm missing something. I'd go with shuffle=False, because if your dataset isn't a multiple of the batch size the validation set won't always be the same, and augment=False by default, for the aforementioned reasons.

Satellite Unet in multi-gpu

Hello
I wasn't able to run the satellite U-Net on multiple GPUs. I didn't have this problem with the custom U-Net.

Training on my dataset

When I train on my own dataset, the loss and dice_coef increase, and the IoU also increases. I tried a range of learning rates, but it didn't help. Please help me out. What format should the images and labels be in?

Including multiple classes in satellite unet

I am finding this library very useful; it's really a great effort. I just wanted to know: if I wish to get multiple classes in the predicted output, will I have to provide masks with pixel values such as 0, 1, 2, each value signifying a separate class, or is there another way? I am just a beginner with U-Net for multi-class segmentation, so please help.

input size

I seem to be having issues with the input size for the vanilla_unet. I want to test it with (64, 64, 1) or (64, 64, 3) images, but I receive this error:

ValueError: Negative dimension size caused by subtracting 3 from 2 for '{{node conv2d_7/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](conv2d_6/Relu, conv2d_7/Conv2D/ReadVariableOp)' with input shapes: [?,2,2,64], [3,3,64,64].
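This is a consequence of the architecture rather than a bug, as far as I can tell: vanilla_unet follows the original paper's "valid" padding, so every 3x3 convolution shrinks the feature map by 2 pixels, and after a few pooling stages a 64x64 input shrinks away entirely. A sketch of the two options:

from keras_unet.models import vanilla_unet, custom_unet

model = vanilla_unet(input_shape=(572, 572, 3))  # the input size from the original paper
# or, for small images, use the "same"-padded variant that never shrinks:
model = custom_unet(input_shape=(64, 64, 3))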
