bnsreenu / python_for_microscopists
3.7K 3.7K 2.3K 161.35 MB

https://www.youtube.com/channel/UC34rW-HtPJulxr5wp2Xa04w?sub_confirmation=1

License: MIT License

Python 1.63% Jupyter Notebook 98.36% HTML 0.01% Procfile 0.01% Dockerfile 0.01%

python_for_microscopists's People

Contributors: bnsreenu

python_for_microscopists's Issues

czi file reading & splitting

@bnsreenu Thank you for the great tutorials!
I'm having trouble pre-processing data for 3D object tracking with trackpy.
I have multi-channel, multi-tile z-stack images in CZI format, straight out of a Zeiss microscope, and I'm having trouble converting these files into a format compatible with the tutorial.
Any guidance would be highly appreciated!

ValueError LabelEncoder

The code below was used when training a U-Net model through Spyder. It follows the YouTube tutorial by DigitalSreeni, episode 208, on multi-class semantic segmentation with a U-Net neural network. I recreated everything except the images, which are my own but have the same dimensions, etc. Can anyone let me know what the issue is?

from simple_multi_unet_model import multi_unet_model #Uses softmax

from keras.utils import normalize
import os
import glob
import cv2
import numpy as np
from matplotlib import pyplot as plt

#Resizing images, if needed
SIZE_X = 128
SIZE_Y = 128
n_classes=5 #Number of classes for segmentation

#Capture training image info as a list
train_images = []

for directory_path in glob.glob("/Hydro/128bit/images/"):
    for img_path in glob.glob(os.path.join(directory_path, "*.tif")):
        img = cv2.imread(img_path, 0)
        #img = cv2.resize(img, (SIZE_Y, SIZE_X))
        train_images.append(img)

#Convert list to array for machine learning processing
train_images = np.array(train_images)

#Capture mask/label info as a list
train_masks = []
for directory_path in glob.glob("/Hydro/128bit/masks/"):
    for mask_path in glob.glob(os.path.join(directory_path, "*.tif")):
        mask = cv2.imread(mask_path, 0)
        #mask = cv2.resize(mask, (SIZE_Y, SIZE_X), interpolation = cv2.INTER_NEAREST)  #Otherwise ground truth changes due to interpolation
        train_masks.append(mask)

#Convert list to array for machine learning processing
train_masks = np.array(train_masks)

###############################################
#Encode labels... but multi dim array so need to flatten, encode and reshape
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
n, h, w = train_masks.shape
train_masks_reshaped = train_masks.reshape(-1,1)
train_masks_reshaped_encoded = labelencoder.fit_transform(train_masks_reshaped)
train_masks_encoded_original_shape = train_masks_reshaped_encoded.reshape(n, h, w)

File "C:\Users\anish\208_multiclass_Unet_sandstone.py", line 63, in
n, h, w = train_masks.shape
ValueError: not enough values to unpack (expected 3, got 1)
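"not enough values to unpack" here usually means np.array() produced a 1-D object array (or an empty array) instead of an (n, h, w) stack, either because the masks have differing shapes or because the glob pattern matched no files. A quick diagnostic sketch:

print(train_masks.shape, train_masks.dtype, len(train_masks))
for m in train_masks[:5]:
    print(getattr(m, 'shape', None))   #per-mask shapes; None means not an array

If the shapes differ, re-enabling the commented-out cv2.resize(..., interpolation=cv2.INTER_NEAREST) line so every mask has the same size before the np.array() call should fix it.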

Using class weights with the code from tutorial 208 raises an error

Please help. When using the following:
history = model.fit(train_x, train_y,
                    batch_size=16,
                    verbose=1,
                    epochs=10,
                    validation_data=(test_x, test_y),
                    class_weight=class_weights,
                    shuffle=True)

-> 1185 if class_weight:
1186 dataset = dataset.map(_make_class_weight_map_fn(class_weight))
1187 self._inferred_steps = self._infer_steps(steps_per_epoch, dataset)

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
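A likely fix, as a sketch: Keras evaluates `if class_weight:`, which is ambiguous when class_weight is a numpy array; it expects a dict mapping class index to weight. flat_labels below is a hypothetical 1-D array of integer class labels.

import numpy as np
from sklearn.utils import class_weight

weights = class_weight.compute_class_weight('balanced',
                                            classes=np.unique(flat_labels),
                                            y=flat_labels)
class_weights = dict(enumerate(weights))   #e.g. {0: 0.8, 1: 1.4, ...}

Note that even as a dict, Keras rejects class_weight for 3+ dimensional (per-pixel) targets; see the class_weight issue further below.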

Need advice

Hello Dr,
I tried to contact you on LinkedIn, but it seems you aren't there.
First of all, thanks for such a great repo; it surely helps a lot.
I am doing semantic segmentation on a small dataset of endoscopic images and was able to train a U-Net using segmentation_models. I have an image like this:
(image)
and a respective mask for 5 classes, excluding background:
(image)

I have increased the number of training images and masks using augmentation but am still not getting good results.
My training IoU is 0.53 while my testing IoU is just 0.45. I am using a sigmoid activation function since I need multiple outputs (there is overlap in the masks).

  1. Please tell me how to improve my IoU in this case.
  2. Can I use the same augmentation (that I used to increase the dataset) during training to overcome overfitting?
  3. How can I treat this problem as one-vs-rest?

Waiting for your kind response.
Thanks :)

index 4 is out of bounds for axis 1 with size 3

Sir, I am trying to train the model for segmentation, but I get an error on the index:

"index 4 is out of bounds for axis 1 with size 3"

A similar issue has already been raised, but you didn't reply. Please, sir, it would be a great help if you could address this issue.

Hoping for your positive response at the earliest.

Sandstone images used in learning courses

Hello Sreeni,

Thanks for your excellent learning courses; they are really amazing! Could you please explain a little about the attached images, which are taken from your GitHub page? Is there any documentation I can read about these images?

Thanks for your favor.
(image: Sandstone_Versa0000)

Issue with Y_pred

Hello, I am trying to figure out what the y_pred tensor passed as an argument to the jaccard_coef function consists of. I would expect it to consist of per-pixel probabilities. If so, how can we multiply that float tensor with y_true, which consists of unsigned integers (0s and 1s)? Please clarify my doubt regarding this.
Also, when I fit my data to the model with the defined jaccard_coef and loss, I get the error:
Input 'y' of 'Mul' Op has type float32 that does not match type uint8 of argument 'x'.
Please clarify. Thank you.
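A likely fix (a sketch of one common formulation, not necessarily identical to the repo's version of the metric): cast the ground truth to float before multiplying, at the top of the metric function.

from tensorflow.keras import backend as K

def jaccard_coef(y_true, y_pred):
    y_true = K.cast(y_true, 'float32')   #uint8 -> float32, avoids the Mul type error
    intersection = K.sum(y_true * y_pred)
    union = K.sum(y_true) + K.sum(y_pred) - intersection
    return (intersection + 1.0) / (union + 1.0)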

from keras.utils import to_categorical

ImportError Traceback (most recent call last)
\R\WIN-LI1\3.6/ipykernel_8504/614165243.py in
5 from sklearn.preprocessing import MinMaxScaler
6 scaler = MinMaxScaler()
----> 7 from keras.utils import to_categorical
8
9 #Use this to preprocess input for transfer learning

ImportError: cannot import name 'to_categorical' from 'keras.utils' (C:\Users\USER\anaconda3\lib\site-packages\keras\utils\__init__.py)
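In recent TF/Keras versions the function moved; the following import typically resolves this error:

from tensorflow.keras.utils import to_categorical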

150_151_custom_data_augmentation.py has seeding problem

Since your code changes the random seed to apply the same transformation to both the image and the mask, number = random.randint(0, len(images)) ends up producing only two different outputs, which results in only two images being used for data augmentation.

To solve this, I changed the transformation functions to take the image and mask as input arguments and removed the seed, so that when, for example, an angle is generated through random.randint(), that angle is applied to both the mask and the image, instead of calling the function twice with a seed.

The code below works just fine without using a seed.
Thank you for all your amazing tutorials.


#https://youtu.be/k4TqxHteJ7s
#https://youtu.be/mwN2GGA4mqo
"""
@author: Sreenivas Bhattiprolu
"""

import numpy as np
from matplotlib import pyplot as plt
from skimage.transform import AffineTransform, warp
from skimage import io, img_as_ubyte
import random
import os
from scipy.ndimage import rotate

images_to_generate=1000

#Define functions for each operation.
#Each function takes both the image and the mask, so the same random
#parameters are applied to the pair without relying on a shared seed.

# Make sure the order of the spline interpolation is 0; the default is 3.
#With interpolation, the pixel values get messed up.
def rotation(image, mask):
    
    angle= random.randint(-180,180)
    r_img = rotate(image, angle, mode='reflect', reshape=False, order=0)
    r_msk = rotate(mask, angle, mode='reflect', reshape=False, order=0)
    return r_img,r_msk

def h_flip(image, mask):
    hflipped_img= np.fliplr(image)
    hflipped_msk= np.fliplr(mask)
    return  hflipped_img,hflipped_msk

def v_flip(image, mask):
    vflipped_img= np.flipud(image)
    vflipped_msk= np.flipud(mask)
    return vflipped_img,vflipped_msk

def v_transl(image, mask):
    n_pixels = random.randint(-64,64)
    vtranslated_img = np.roll(image, n_pixels, axis=0)
    vtranslated_msk = np.roll(mask, n_pixels, axis=0)
    return vtranslated_img,vtranslated_msk

def h_transl(image, mask):
    n_pixels = random.randint(-64,64)
    htranslated_img = np.roll(image, n_pixels, axis=1)
    htranslated_msk = np.roll(mask, n_pixels, axis=1)
    return htranslated_img,htranslated_msk



transformations = {'rotate': rotation,
                      'horizontal flip': h_flip, 
                      'vertical flip': v_flip,
                   'vertical shift': v_transl,
                   'horizontal shift': h_transl
                 }                #use dictionary to store names of functions 

images_path="E:\_palm\dataset_v2\patched_has_tree\w_palm\images\\" #path to original images
masks_path = "E:\_palm\dataset_v2\patched_has_tree\w_palm\masks\\"
img_augmented_path="E:\_palm\dataset_v2\patched_has_tree\w_palm\\augmented\images\\" # path to store aumented images
msk_augmented_path="E:\_palm\dataset_v2\patched_has_tree\w_palm\\augmented\masks\\" # path to store aumented images
images=[] # to store paths of images from folder
masks=[]

for im in os.listdir(images_path):   #read image names from the folder and append their paths to the "images" list
    images.append(os.path.join(images_path, im))

for msk in os.listdir(masks_path):   #read mask names from the folder and append their paths to the "masks" list
    masks.append(os.path.join(masks_path, msk))


i=1   # variable to iterate till images_to_generate

print(len(images))

while i<=images_to_generate:
    number = random.randint(0, len(images)-1)  #pick a random index to select an image & mask (randint is inclusive at both ends)
    print(number)
    image = images[number]
    mask = masks[number]
    original_image = io.imread(image)
    original_mask = io.imread(mask)
    transformed_image = original_image
    transformed_mask = original_mask
    transformation_count = random.randint(1, len(transformations))  #choose a random number of transformations to apply
    for _ in range(transformation_count):
        key = random.choice(list(transformations))  #randomly choose a transformation
        transformed_image, transformed_mask = transformations[key](transformed_image, transformed_mask)  #compose transformations

    new_image_path = "%s/augmented_image_%s.png" %(img_augmented_path, i)
    new_mask_path = "%s/augmented_mask_%s.png" %(msk_augmented_path, i)   #Do not save as JPG
    io.imsave(new_image_path, transformed_image)
    io.imsave(new_mask_path, transformed_mask)
    i = i + 1
    


How to download the Electron Microscopy dataset to run "227_mito_segm_using_models_from_Keras_Unet_collection.py"?

I followed the video and "227_mito_segm_using_models_from_Keras_Unet_collection.py".
We must have the data in an images folder and a masks folder:
(image)

I downloaded the data from the link (https://www.epfl.ch/labs/cvlab/data/data-em/), but I can only open it with the Fiji software; the download does not contain masks and images folders.
(image)

Please help me get the data into the layout the file expects. Thank you @bnsreenu.
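A sketch for splitting the downloaded multi-page TIFF stacks into the images/ and masks/ folders the script expects. The file names below are assumptions; adjust them to whatever the EPFL download actually contains.

import os
import tifffile

vol = tifffile.imread('training.tif')               #(slices, H, W)
gt = tifffile.imread('training_groundtruth.tif')    #same shape

os.makedirs('images', exist_ok=True)
os.makedirs('masks', exist_ok=True)
for i in range(vol.shape[0]):
    tifffile.imwrite('images/slice_%03d.tif' % i, vol[i])
    tifffile.imwrite('masks/slice_%03d.tif' % i, gt[i])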

COVID SEIR fit

Hi, first of all, thanks a lot for your very useful and productive videos and code.
May I ask you to please teach me how to predict COVID with an SEIR model and fit the data to it? I saw your code and video with exponential growth, but not SEIR.

Thanks
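A minimal SEIR sketch (not from this repo): integrate the SEIR ODEs with scipy and fit beta/sigma/gamma to an observed case curve with curve_fit. The population size N and the observed_cases array are placeholders to replace with real data.

import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

N = 1_000_000   #assumed population size (placeholder)

def seir(y, t, beta, sigma, gamma):
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return dS, dE, dI, dR

def infected(t, beta, sigma, gamma):
    y0 = (N - 1, 0, 1, 0)   #one initial infectious case
    return odeint(seir, y0, t, args=(beta, sigma, gamma))[:, 2]

t = np.arange(len(observed_cases))   #observed_cases: hypothetical 1-D data array
params, _ = curve_fit(infected, t, observed_cases, p0=(0.5, 0.2, 0.1))
print(dict(zip(['beta', 'sigma', 'gamma'], params)))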

Reading czi file

Hi @bnsreenu

Firstly, thank you very much for the amazing videos.
I am sorry, this is not really an issue.

I have a few doubts about reading a czi file, and I would like to ask them here. Following the video tutorial, the czi file is imported in Python like this:

import czifile
from skimage import io

img = czifile.imread('file.czi')
print(img.shape)

The output is
(1, 1, 3, 1, 48, 1024, 1024, 1)
I couldn't completely understand what each value in the output refers to.

Does 3 denote the number of channels?
Does 48 denote the number of slices in the z-stack for each channel?

I would like to save the z-stack corresponding to channel 1 in tiff format. Could you please offer some advice on how this can be done?
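A sketch for saving one channel's z-stack as TIFF. The axis meanings vary per file; czifile reports them via CziFile('file.czi').axes, so it is worth verifying that the 3 really is channels (C) and the 48 the z-slices (Z) before indexing.

import czifile
import numpy as np
import tifffile

img = czifile.imread('file.czi')    #(1, 1, 3, 1, 48, 1024, 1024, 1)
stack = np.squeeze(img)             #drop singleton axes -> (3, 48, 1024, 1024)
tifffile.imwrite('channel1_stack.tif', stack[0])   #channel 1 = index 0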

264 - Image outlier detection using alibi-detect

Hey there!

For: 264_alibinet_outlier_detection.py

Where it says:
from alibi_detect.utils import save_detector, load_detector

It should say:
from alibi_detect.utils.saving import save_detector, load_detector

230 - IndexError: index 4 is out of bounds for axis 1 with size 4

Sir, when I try to verify the generator in training_landcover_keras_augmentation.py, I get this error: "IndexError: index 4 is out of bounds for axis 1 with size 4". Please help; I have been stuck on this for a few days.
The error disappears if I put num_class=5 instead of 4, for example:
train_img_gen = trainGenerator(train_img_path, train_mask_path, num_class=5)

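This IndexError typically comes from one-hot encoding (to_categorical or equivalent) when a mask contains a label value greater than or equal to num_class, which matches the observed fix: labels running 0..4 need num_class=5, not 4. A quick check, as a sketch (mask is a hypothetical loaded mask array):

import numpy as np
print(np.unique(mask))   #highest value + 1 is the minimum num_class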

index problem

class2_IoU = values[2,2]/(values[2,2] + values[2,1] + values[2,1] + values[2,3] + values[1,2]+ values[3,2]+ values[4,2])

Hello, thanks for your impressive video.
It's the first time in my life I understood how to calculate IoU, despite having watched lots of tutorials.
However, when I read the code, I found an index problem at line 177:
    class2_IoU = values[2,2]/(values[2,2] + values[2,1] + values[2,1] + values[2,3] + values[1,2]+ values[3,2]+ values[4,2])
Should the code be changed to the following?
    class2_IoU = values[2,2]/(values[2,2] + values[2,1] + values[2,3] + values[2,4] + values[1,2]+ values[3,2]+ values[4,2])
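A general sketch that sidesteps hand-written indices (and typos like the duplicated values[2,1]): compute per-class IoU directly from the confusion matrix, assuming values is the n_classes x n_classes matrix used above.

import numpy as np

values = np.array(values)    #confusion matrix, n_classes x n_classes
tp = np.diag(values)
iou_per_class = tp / (values.sum(axis=1) + values.sum(axis=0) - tp)
print(iou_per_class)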

Pre-Processing BRATS 2020 (QUESTION)

Hi everyone,
I have a question about this phase of image preprocessing. Does this huge dataset need a filter to remove noise? If so, which filter is best (a recent one)? And is there a Python source for the BraTS 2020 preprocessing step to learn more from?
Which course or library is best for the BraTS 2020 dataset?

validation loss looks like overfitting

Hi,
I have applied this model to other datasets.
I want to forecast energy consumption using a history of energy consumption and 23 weather features over 4 years.

The validation loss is increasing, which I assume is a sign of overfitting, and the training loss does not go below 0.2.
I have tried decreasing the learning rate, adding a decay rate, and reducing the LSTM layers, but I still have overfitting.

How can I modify my model to prevent overfitting (the increasing trend in the validation loss)?
(image)

Thanks for any advice.

Segmentation of Aerial Image

Hi everyone,
I hope all are doing well.

I have a general question to ask. I am new to the field of image segmentation and am working on aerial image segmentation. I would like to know whether we need to convert the mask into a one-channel (mode L) image, or whether we can train directly on the RGB mask. The dataset I am using is the ISPRS Potsdam dataset, where the masks are RGB.
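A common approach (a sketch; the color table below is an assumption to verify against the ISPRS documentation): map each RGB color in the Potsdam masks to an integer class id, so the network trains on a single-channel label image.

import numpy as np

COLOR_TO_CLASS = {
    (255, 255, 255): 0,   #impervious surfaces
    (0, 0, 255): 1,       #building
    (0, 255, 255): 2,     #low vegetation
    (0, 255, 0): 3,       #tree
    (255, 255, 0): 4,     #car
    (255, 0, 0): 5,       #clutter/background
}

def rgb_to_label(mask_rgb):
    label = np.zeros(mask_rgb.shape[:2], dtype=np.uint8)
    for color, cls in COLOR_TO_CLASS.items():
        label[np.all(mask_rgb == color, axis=-1)] = cls
    return label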

fine tuning

Hello, are these models based on freezing the encoder and replacing it with backbones (which is called fine-tuning), or is it a whole new architecture with transfer learning that is worth fine-tuning after training? Thanks

Binary focal loss

The loss starts at 0.03 and the Jaccard coefficient does not increase, staying at 0.018. What could possibly be wrong?
The code:
model:
#Imports needed by the code below
import os
import random
from datetime import datetime

import cv2
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from PIL import Image
from tensorflow.keras import layers, models
from tensorflow.keras import backend as K
from tensorflow.keras.optimizers import Adam
from focal_loss import BinaryFocalLoss

def conv_block(x, filter_size, size, dropout=0.6, batch_norm=True):
    conv = layers.Conv2D(size, (filter_size, filter_size), padding="same")(x)
    if batch_norm is True:
        conv = layers.BatchNormalization(axis=3)(conv)
    conv = layers.Activation("relu")(conv)

    conv = layers.Conv2D(size, (filter_size, filter_size), padding="same")(conv)
    if batch_norm is True:
        conv = layers.BatchNormalization(axis=3)(conv)
    conv = layers.Activation("relu")(conv)

    if dropout > 0:
        conv = layers.Dropout(dropout)(conv)

    return conv

def repeat_elem(tensor, rep):
    #Lambda function to repeat the elements of a tensor along axis 3
    #by a factor of rep. If tensor has shape (None, 256, 256, 3), this
    #returns a tensor of shape (None, 256, 256, 6) for rep=2.
    return layers.Lambda(lambda x, repnum: K.repeat_elements(x, repnum, axis=3),
                         arguments={'repnum': rep})(tensor)

def res_conv_block(x, filter_size, size, dropout, batch_norm=True):
    '''
    Residual convolutional layer.
    Two variants....
    Either put the activation function before the addition with the shortcut
    or after the addition (which would be as proposed in the original ResNet).

    1. conv - BN - Activation - conv - BN - Activation
                                          - shortcut - BN - shortcut+BN

    2. conv - BN - Activation - conv - BN
                                     - shortcut - BN - shortcut+BN - Activation

    Check fig 4 in https://arxiv.org/ftp/arxiv/papers/1802/1802.06955.pdf
    '''
    conv = layers.Conv2D(size, (filter_size, filter_size), padding='same')(x)
    if batch_norm is True:
        conv = layers.BatchNormalization(axis=3)(conv)
    conv = layers.Activation('relu')(conv)

    conv = layers.Conv2D(size, (filter_size, filter_size), padding='same')(conv)
    if batch_norm is True:
        conv = layers.BatchNormalization(axis=3)(conv)
    #conv = layers.Activation('relu')(conv)    #Activation before addition with shortcut
    if dropout > 0:
        conv = layers.Dropout(dropout)(conv)

    shortcut = layers.Conv2D(size, kernel_size=(1, 1), padding='same')(x)
    if batch_norm is True:
        shortcut = layers.BatchNormalization(axis=3)(shortcut)

    res_path = layers.add([shortcut, conv])
    res_path = layers.Activation('relu')(res_path)    #Activation after addition with shortcut (original residual block)
    return res_path

def gating_signal(input, out_size, batch_norm=True):
    """
    Resize the down-layer feature map to the same dimension as the up-layer
    feature map using a 1x1 conv.
    :return: the gating feature map with the same dimension as the up layer feature map
    """
    x = layers.Conv2D(out_size, (1, 1), padding='same')(input)
    if batch_norm:
        x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    return x

def attention_block(x, gating, inter_shape):
    shape_x = K.int_shape(x)
    shape_g = K.int_shape(gating)

    #Get the x signal to the same shape as the gating signal
    theta_x = layers.Conv2D(inter_shape, (2, 2), strides=(2, 2), padding='same')(x)  # 16
    shape_theta_x = K.int_shape(theta_x)

    #Get the gating signal to the same number of filters as inter_shape
    phi_g = layers.Conv2D(inter_shape, (1, 1), padding='same')(gating)
    upsample_g = layers.Conv2DTranspose(inter_shape, (3, 3),
                                        strides=(shape_theta_x[1] // shape_g[1], shape_theta_x[2] // shape_g[2]),
                                        padding='same')(phi_g)  # 16

    concat_xg = layers.add([upsample_g, theta_x])
    act_xg = layers.Activation('relu')(concat_xg)
    psi = layers.Conv2D(1, (1, 1), padding='same')(act_xg)
    sigmoid_xg = layers.Activation('sigmoid')(psi)
    shape_sigmoid = K.int_shape(sigmoid_xg)
    upsample_psi = layers.UpSampling2D(size=(shape_x[1] // shape_sigmoid[1], shape_x[2] // shape_sigmoid[2]))(sigmoid_xg)  # 32

    #upsample_psi = repeat_elem(upsample_psi, shape_x[3])

    y = layers.multiply([upsample_psi, x])

    result = layers.Conv2D(shape_x[3], (1, 1), padding='same')(y)
    result_bn = layers.BatchNormalization()(result)
    return result_bn

def Attention_ResUNet(input_shape, NUM_CLASSES=1, dropout_rate=0.8, batch_norm=True):
    '''
    Residual U-Net, with attention
    '''
    # network structure
    FILTER_NUM = 64 # number of basic filters for the first layer
    FILTER_SIZE = 3 # size of the convolutional filter
    UP_SAMP_SIZE = 2 # size of upsampling filters
    # input data
    inputs = layers.Input(input_shape, dtype=tf.float32)
    axis = 3

    # Downsampling layers
    # DownRes 1, double residual convolution + pooling
    conv_128 = res_conv_block(inputs, FILTER_SIZE, FILTER_NUM, dropout_rate, batch_norm)
    pool_64 = layers.MaxPooling2D(pool_size=(2,2))(conv_128)
    # DownRes 2
    conv_64 = res_conv_block(pool_64, FILTER_SIZE, 2*FILTER_NUM, dropout_rate, batch_norm)
    pool_32 = layers.MaxPooling2D(pool_size=(2,2))(conv_64)
    # DownRes 3
    conv_32 = res_conv_block(pool_32, FILTER_SIZE, 4*FILTER_NUM, dropout_rate, batch_norm)
    pool_16 = layers.MaxPooling2D(pool_size=(2,2))(conv_32)
    # DownRes 4
    conv_16 = res_conv_block(pool_16, FILTER_SIZE, 8*FILTER_NUM, dropout_rate, batch_norm)
    pool_8 = layers.MaxPooling2D(pool_size=(2,2))(conv_16)
    # DownRes 5, convolution only
    conv_8 = res_conv_block(pool_8, FILTER_SIZE, 16*FILTER_NUM, dropout_rate, batch_norm)

    # Upsampling layers
    # UpRes 6, attention gated concatenation + upsampling + double residual convolution
    gating_16 = gating_signal(conv_8, 8*FILTER_NUM, batch_norm)
    att_16 = attention_block(conv_16, gating_16, 8*FILTER_NUM)
    up_16 = layers.UpSampling2D(size=(UP_SAMP_SIZE, UP_SAMP_SIZE), data_format="channels_last")(conv_8)
    up_16 = layers.concatenate([up_16, att_16], axis=axis)
    up_conv_16 = res_conv_block(up_16, FILTER_SIZE, 8*FILTER_NUM, dropout_rate, batch_norm)
    # UpRes 7
    gating_32 = gating_signal(up_conv_16, 4*FILTER_NUM, batch_norm)
    att_32 = attention_block(conv_32, gating_32, 4*FILTER_NUM)
    up_32 = layers.UpSampling2D(size=(UP_SAMP_SIZE, UP_SAMP_SIZE), data_format="channels_last")(up_conv_16)
    up_32 = layers.concatenate([up_32, att_32], axis=axis)
    up_conv_32 = res_conv_block(up_32, FILTER_SIZE, 4*FILTER_NUM, dropout_rate, batch_norm)
    # UpRes 8
    gating_64 = gating_signal(up_conv_32, 2*FILTER_NUM, batch_norm)
    att_64 = attention_block(conv_64, gating_64, 2*FILTER_NUM)
    up_64 = layers.UpSampling2D(size=(UP_SAMP_SIZE, UP_SAMP_SIZE), data_format="channels_last")(up_conv_32)
    up_64 = layers.concatenate([up_64, att_64], axis=axis)
    up_conv_64 = res_conv_block(up_64, FILTER_SIZE, 2*FILTER_NUM, dropout_rate, batch_norm)
    # UpRes 9
    gating_128 = gating_signal(up_conv_64, FILTER_NUM, batch_norm)
    att_128 = attention_block(conv_128, gating_128, FILTER_NUM)
    up_128 = layers.UpSampling2D(size=(UP_SAMP_SIZE, UP_SAMP_SIZE), data_format="channels_last")(up_conv_64)
    up_128 = layers.concatenate([up_128, att_128], axis=axis)
    up_conv_128 = res_conv_block(up_128, FILTER_SIZE, FILTER_NUM, dropout_rate, batch_norm)

    # 1x1 convolutional layer
    conv_final = layers.Conv2D(NUM_CLASSES, kernel_size=(1,1))(up_conv_128)
    conv_final = layers.BatchNormalization(axis=axis)(conv_final)
    conv_final = layers.Activation('sigmoid')(conv_final)  #Change to softmax for multichannel

    # Model integration
    model = models.Model(inputs, conv_final, name="AttentionResUNet")
    return model

input_shape = (256,256,1)
model=Attention_ResUNet(input_shape, NUM_CLASSES=1, dropout_rate=0.6, batch_norm=True)
#model.summary()

DATA:

image_directory = '/content/drive/MyDrive/MRA/new/train/images/'
mask_directory = '/content/drive/MyDrive/MRA/new/train/masks/'

SIZE = 256
image_dataset = []  #Many ways to handle data; you can use pandas. Here, we are using a list format.
mask_dataset = []

images = sorted(os.listdir(image_directory))
for i, image_name in enumerate(images):   #enumerate adds a counter and returns the enumerate object
    if (image_name.split('.')[1] == 'png'):
        image = cv2.imread(image_directory + image_name, 1)
        image = Image.fromarray(image)
        image = image.resize((SIZE, SIZE))
        image_dataset.append(np.array(image))

masks = sorted(os.listdir(mask_directory))
for i, image_name in enumerate(masks):
    if (image_name.split('.')[1] == 'png'):
        image = cv2.imread(mask_directory + image_name, 0)
        image = Image.fromarray(image)
        image = image.resize((SIZE, SIZE))
        mask_dataset.append(np.array(image))

#Normalize images
image_dataset = np.array(image_dataset) / 255.
#Do not normalize masks, just rescale to 0 to 1.
mask_dataset = np.expand_dims((np.array(mask_dataset)), 3) / 255.

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(image_dataset, mask_dataset, test_size=0.05, random_state=0)

#Sanity check, view a few images
image_number = random.randint(0, len(X_train) - 1)
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.imshow(np.reshape(X_train[image_number], (256, 256, 3)), cmap='gray')
plt.subplot(122)
plt.imshow(np.reshape(y_train[image_number], (256, 256)), cmap='gray')
plt.show()

TRAINING:

IMG_HEIGHT = X_train.shape[1]
IMG_WIDTH = X_train.shape[2]
IMG_CHANNELS = X_train.shape[3]
num_labels = 1  #Binary
input_shape = (IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS)
batch_size = 16

'''
Attention Residual Unet
'''
att_res_unet_model = Attention_ResUNet(input_shape)

att_res_unet_model.compile(optimizer=Adam(lr=1e-2), loss=BinaryFocalLoss(gamma=2),
                           metrics=['accuracy', jacard_coef])   #jacard_coef must be defined elsewhere

#att_res_unet_model.compile(optimizer=Adam(lr=1e-3), loss='binary_crossentropy',
#                           metrics=['accuracy', jacard_coef])

#print(att_res_unet_model.summary())

start3 = datetime.now()
att_res_unet_history = att_res_unet_model.fit(X_train, y_train,
                                              verbose=1,
                                              batch_size=batch_size,
                                              validation_data=(X_test, y_test),
                                              shuffle=False,
                                              epochs=50)
stop3 = datetime.now()

#Execution time of the model
execution_time_AttResUnet = stop3 - start3
#print("Attention ResUnet execution time is: ", execution_time_AttResUnet)

att_res_unet_model.save('AttResUnet.hdf5')

Docker on Ubuntu is not working smoothly

Hello Sreeni,

Thank you for the awesome tutorial videos.
I followed the YouTube tutorial to make a container on my Ubuntu system and faced a few issues that did not happen in the tutorial. Though I applied the suggestions found on the web, the problem was not solved, and I would appreciate it if you could guide me here.
In the beginning, the terminal generated this error:
_no such option: --no-cashe-dir The command '/bin/sh -c pip install --no-cashe-dir -r requirements.txt' returned a non-zero code: 2_
when running the last command in the Dockerfile:
RUN pip install --no-cashe-dir -r requirements.txt

And when running sudo docker run -v for mapping, I faced a long-standing issue: _docker: invalid reference format_

I could not solve this problem by looking at the web and thought maybe you might figure out what went wrong.

Respectfully
Nilla
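A likely culprit worth checking: pip's option is spelled --no-cache-dir, not --no-cashe-dir, which is exactly what the "no such option" error complains about, so the Dockerfile line should read RUN pip install --no-cache-dir -r requirements.txt. The later "docker: invalid reference format" error usually means the image name/tag is malformed, or a -v path containing spaces was not quoted.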

Need help with RGB images

Hello, I need help with reading RGB images for multiclass segmentation with U-Net.
I tried using the code you provided in video lesson 208, but I don't know what I'm doing wrong.
I was able to read images in RGB, but I can't seem to predict the results. This is the error that I got:

"ValueError: Input 0 of layer conv2d_19 is incompatible with the layer: expected axis -1 of input shape to have value 3 but received input with shape [None, 128, 128, 1]"

test_img_norm = test_img[:, :, 0][:, :, None]

Can you please explain what the line above does and how I can get it to read an RGB image for prediction?
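What that line does: test_img[:, :, 0] keeps only channel 0 (dropping the other two), and [:, :, None] adds the channel axis back, producing a single-channel (H, W, 1) input. For a model trained on RGB inputs, keep all three channels and only add a batch axis; a sketch, where model and test_img are your own variables:

import numpy as np

test_img_input = np.expand_dims(test_img, 0)        #(1, H, W, 3)
prediction = model.predict(test_img_input)
predicted_img = np.argmax(prediction, axis=3)[0]    #(H, W) class map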

Unable to use class_weights: `class_weight` not supported for 3+ dimensional targets.

I have 9 classes, and one is for background. But when I pass class_weight to model.fit(), an error pops up:
`class_weight` not supported for 3+ dimensional targets.

I'm using the following lines to calculate class_weights:

classweights = class_weight.compute_class_weight('balanced', classes=np.unique(y), y=y_reshaped)
class_weights = {}
for i in range(n_classes):
    class_weights[i] = classweights[i]
print(class_weights)
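This is a known Keras limitation for per-pixel (3+ dimensional) targets. One common workaround, as a sketch: fold the class weights into the loss instead of passing class_weight to fit(). This assumes y_true is one-hot encoded and y_pred holds softmax probabilities.

import tensorflow as tf

def weighted_cce(weights):                 #weights: one float per class
    w = tf.constant(weights, dtype=tf.float32)
    def loss(y_true, y_pred):              #y_true one-hot, shape (B, H, W, C)
        cce = -tf.reduce_sum(y_true * tf.math.log(y_pred + 1e-7), axis=-1)
        pix_w = tf.reduce_sum(y_true * w, axis=-1)   #weight of each pixel's class
        return tf.reduce_mean(cce * pix_w)
    return loss

model.compile(optimizer='adam', loss=weighted_cce(list(class_weights.values())))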

Query on intersection over union metric

Sir,
I am Kavitha, a research scholar from India. For calculating the IoU metric, it is the intersection of the predicted and ground-truth bounding boxes divided by their union. My doubt is: why do we consider the union of the predicted and ground truth? Is it not sufficient to intersect the predicted and ground truth? Can you justify this? Thanks in advance.

Regards,
Kavitha

Landcover data

Please upload the landcover data.
The link you shared is not working.
Thanks

Train SRGAN with high-resolution images of different dimensions

Good morning,
thank you very much for the great work.
I am running the SRGAN code.
I was able to run it using high-resolution images all of the same size (e.g. 240x240 pixels), changing the hr_shape and VGG shape values in the code.
I was wondering if it is possible to train the model using high-resolution images of different sizes, for example some with dimensions of 240x240, others of 100x100, etc.

Thank you very much

nii to tiff

How can I convert and save .nii files to TIFF?
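A sketch using nibabel and tifffile (both pip-installable); the file names are placeholders:

import nibabel as nib
import numpy as np
import tifffile

vol = nib.load('scan.nii').get_fdata()                 #works for .nii and .nii.gz
tifffile.imwrite('scan.tif', vol.astype(np.float32))   #whole volume as a multi-page TIFF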

identify less greenery area from satellite image

Dear Sir,
I am an architecture student. For my thesis, I need to identify areas with less greenery from a given satellite image, to help maintain a balanced ecosystem. I have gone through your videos and used the U-Net-based segmentation code, but I could not achieve the desired result. Can you guide me on how to go about it? Which video and code should I refer to?

Error in VAE sample code

I got the following error when I run the jupyter notebook of your MNIST VAE sample:

TypeError: You are passing KerasTensor(type_spec=TensorSpec(shape=(), dtype=tf.float32, name=None), name='tf.math.reduce_sum/Sum:0', description="created by layer 'tf.math.reduce_sum'"), an intermediate Keras symbolic input/output, to a TF API that does not allow registering custom dispatchers, such as tf.cond, tf.function, gradient tapes, or tf.map_fn. Keras Functional model construction only supports TF API calls that do support dispatching, such as tf.math.add or tf.reshape. Other APIs cannot be called directly on symbolic Keras inputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer call and calling that layer on this symbolic input/output.

The error happens at:
(screenshot of the failing cell)

I am using the latest version of TF (2.7.0) and Keras (2.8.0). I also tried older versions of both TF and Keras with no luck.
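One workaround in the spirit of the error message (a sketch): compute the KL term inside a custom Keras layer and register it with add_loss, instead of calling TF ops directly on symbolic tensors. The scaling of the KL term (sum vs. mean over latent dimensions) is up to you.

import tensorflow as tf
from tensorflow import keras

class KLDivergenceLayer(keras.layers.Layer):
    def call(self, inputs):
        z_mean, z_log_var = inputs
        kl = -0.5 * tf.reduce_mean(
            1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
        self.add_loss(kl)          #registered as a model loss
        return inputs

Usage inside the encoder, before sampling: z_mean, z_log_var = KLDivergenceLayer()([z_mean, z_log_var]).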

264 - Image outlier detection using alibi-detect

It's me again.

After:
image = cv2.imread(image_directory + 'good/' + image_name)

And after:
image = cv2.imread(image_directory + 'bad/' + image_name)

Add:
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

This converts from OpenCV's BGR to Pillow's RGB, to show the real colors of the images.

Thanks!

2D U-net model training Brats 2020

Hi, I have a problem with importing the image loader function in the third part.
How do I import a function from another file?
Also, the validation folder has no masks, so if we want to use the validation folder to validate our model, how should I do that?
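On the first question, a sketch (the module and function names here are placeholders; use whatever the tutorial's loader file is actually called): put the file, e.g. image_loader.py, next to your script and import from it directly.

from image_loader import imageLoader   #image_loader.py in the same folder as this script

On the second question: since the BraTS validation folder ships without masks, a common approach is to hold out part of the training data (which does have masks) as your validation split.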
