
ayoolaolafenwa / pixellib

1.0K stars · 43 watchers · 261 forks · 166.32 MB

Visit PixelLib's official documentation https://pixellib.readthedocs.io/en/latest/

License: MIT License

Python 100.00%
computer-vision machine-learning artificial-intelligence image-segmentation semantic-segmentation instance-segmentation video-segmentation deeplab deeplearning maskr-cnn

pixellib's People

Contributors

ayoolaolafenwa, elbruno, fmorenovr, footballdaniel, khanfarhan10, mmphego, prateekralhan


pixellib's Issues

How to implement batch predict?

Hi Ayoola, many thanks for creating this library. How do I implement batch prediction? How can this library process hundreds of images at once?
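PixelLib's segmentImage (as used in the issues further down this page) handles one image per call, so the simplest route to hundreds of images is to load the model once and loop over a folder. A minimal sketch; the model file, class names, and folder names are placeholders:

import glob
import os

import pixellib
from pixellib.instance import custom_segmentation

# Placeholder model, classes, and folders -- substitute your own.
seg = custom_segmentation()
seg.inferConfig(num_classes=2, class_names=["BG", "butterfly", "squirrel"])
seg.load_model("mask_rcnn_model.h5")

input_dir = "images"       # folder holding the hundreds of images
output_dir = "segmented"   # where the annotated copies are written
os.makedirs(output_dir, exist_ok=True)

for path in glob.glob(os.path.join(input_dir, "*.jpg")):
    out_name = os.path.join(output_dir, os.path.basename(path))
    # The model is loaded once above; each call segments a single image.
    segmask, output = seg.segmentImage(path, show_bboxes=True,
                                       output_image_name=out_name)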

Why is PixelLib so slow on CPU?

I saw @ayoolaolafenwa's comment on 3 Jun
"Inference time for performing Semantic Segmentation with PixelLib occurs between 7-8 seconds. This is fast enough for obtaining state of the art results with Semantic Segmentation".

But it's not 8 seconds at all. It takes close to 1 minute on a CPU machine with 6 GB RAM.

How to print the segmentation mask values

Great package, I'm amazed!
My question: I went through your tutorial and got the code below, but it didn't work for me. I want to print the instance segmentation mask values. My code and output are given below (a sketch for reading the mask values follows the output).
Another question is how to serve this through an API.

import pixellib
from pixellib.instance import custom_segmentation
import cv2

instance_seg = custom_segmentation()
instance_seg.inferConfig(num_classes= 2, class_names= ["BG", "butterfly", "squirrel"])
instance_seg.load_model("/content/mask_rcnn_models/mask_rcnn_model.027-0.335725.h5")
segmask, output = instance_seg.segmentImage("/content/Nature/test/butterfly (10).jpg", show_bboxes= True, output_image_name="e_out.jpg")
cv2.imwrite("img.jpg", output)
print(segmask.get('rois'))
for item in segmask.items():
    print(item)

The output mask is all False; I don't get the values. Please let me know what I'm missing. The output is:

[[ 42  32 172 232]]
('rois', array([[ 42,  32, 172, 232]], dtype=int32))
('class_ids', array([1], dtype=int32))
('scores', array([0.98511064], dtype=float32))
('masks', array([[[False],
        [False],
        [False],
        ...,
        [False],
        [False],
        [False]],

       [[False],
        [False],
        [False],
        ...,
        [False],
        [False],
        [False]],

       [[False],
        [False],
        [False],
        ...,
        [False],
        [False],
        [False]],

       ...,

       [[False],
        [False],
        [False],
        ...,
        [False],
        [False],
        [False]],

       [[False],
        [False],
        [False],
        ...,
        [False],
        [False],
        [False]],

       [[False],
        [False],
        [False],
        ...,
        [False],
        [False],
        [False]]]))
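For what it's worth, the all-False entries printed above are just the corners of a large boolean array of shape (height, width, number of instances); the True pixels sit inside the detected box. A minimal sketch for inspecting the values, assuming segmask is the dictionary returned by segmentImage as shown above:

import numpy as np

masks = segmask["masks"]                # boolean array, shape (H, W, num_instances)
class_ids = segmask["class_ids"]
scores = segmask["scores"]

for i in range(masks.shape[-1]):
    mask_i = masks[:, :, i]                      # 2-D boolean mask of instance i
    pixel_count = int(np.count_nonzero(mask_i))  # number of True (object) pixels
    ys, xs = np.where(mask_i)                    # pixel coordinates covered by the mask
    print(f"instance {i}: class_id={class_ids[i]}, score={scores[i]:.3f}, "
          f"mask pixels={pixel_count}")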

Change detection threshold for rcnn segmentation

How can we adjust the detection threshold at test time? For the program below, is there some way to pass in a threshold score value?

segment_image = custom_segmentation()
segment_image.inferConfig(num_classes= 1, class_names= ["bg","vol"])
segment_image.load_model("/content/drive/My Drive/Segmentation/mask_rcnn_model.034-0.509586.h5")
segment_image.segmentImage("/content/drive/My Drive/Segmentation/tn1.jpg", show_bboxes=True, output_image_name= "/content/drive/My Drive/Segmentation/n1.jpg")
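Whether inferConfig exposes a test-time threshold is not confirmed on this page (the training config, modelConfig, does take a detection_threshold argument, per another issue below), so a version-independent workaround is to capture the values returned by segmentImage and filter them by score. A minimal sketch continuing from the snippet above; the 0.8 cut-off is a placeholder:

import numpy as np

segmask, output = segment_image.segmentImage(
    "/content/drive/My Drive/Segmentation/tn1.jpg",
    show_bboxes=True,
    output_image_name="/content/drive/My Drive/Segmentation/n1.jpg")

score_threshold = 0.8                       # hypothetical cut-off, tune as needed
keep = segmask["scores"] >= score_threshold
filtered = {
    "rois":      segmask["rois"][keep],
    "class_ids": segmask["class_ids"][keep],
    "scores":    segmask["scores"][keep],
    "masks":     segmask["masks"][..., keep],   # masks are stacked on the last axis
}
# The saved output image is drawn before this filtering; the filter only
# applies to the arrays collected in `filtered`.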

Got an error when I deployed my custom instance segmentation to the Heroku platform service

Hi everyone,
when I deployed my custom instance segmentation to the Heroku platform service I got this error:

File "/app/app.py", line 21, in
2021-01-01T08:54:37.177557+00:00 app[web.1]: from pixellib.instance import custom_segmentation
2021-01-01T08:54:37.177558+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/pixellib/instance.py", line 1, in
2021-01-01T08:54:37.177559+00:00 app[web.1]: import cv2
2021-01-01T08:54:37.177559+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/cv2/init.py", line 5, in
2021-01-01T08:54:37.177560+00:00 app[web.1]: from .cv2 import *

but it works very well on my local Flask server on my PC.

Thanks for the help.

Error when checking input while training

Hello,
Thank you for sharing this amazing work. I want to use PixelLib for custom training and segmentation. However, when I start training the model, I receive an error:

ValueError: Error when checking input: expected input_image_meta to have shape (18,) but got array with shape (19,)

Here is my configuration

train_maskrcnn = instance_custom_training()
train_maskrcnn.modelConfig(network_backbone = "resnet101", num_classes= 5, batch_size = 2,
                           detection_threshold=0.45)
train_maskrcnn.load_pretrained_model("../models/pretrained_models/mask_rcnn_coco.h5")
train_maskrcnn.load_dataset("../data/interim/pixelib_fastrcnn/")
train_maskrcnn.train_model(num_epochs = 100, augmentation=True, layers='all',
                        path_trained_models = ".test_model")

Have you encountered a similar error? If not, how can I prevent it?

Many thanks.

Run error

When I run the line
segment_image = semantic_segmentation()
it raises the error:
ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 2048, 64, 2048), (None, 256, 64, 1), (None, 256, 64, 1), (None, 256, 64, 1), (None, 256, 64, 1)]
How can I solve this problem?
Hope to get your reply soon.

Custom Augmentations in PixelLib

I'm not quite sure how to implement a custom augmentation pipeline in PixelLib. I have used the albumentations library with great success, but since PixelLib is implemented with imgaug, could you show a demo of augmenting with both simple and complex transforms?

What I understand works with imgaug:

seq = iaa.Sequential([
    iaa.Crop(px=(1, 16), keep_size=False),
    iaa.Fliplr(0.5),
    iaa.GaussianBlur(sigma=(0, 3.0))
])

What I used in albumentations:

transform_finale = A.Compose([
    A.CLAHE (clip_limit=4.0, tile_grid_size=(8, 8), always_apply=False, p=1),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
    
])

Could you also provide support for albumentations?

Thanks and any help is much appreciated.

load_model error

Hello, here is my experiment:
import pixellib
from pixellib.semantic import semantic_segmentation
segment_image = semantic_segmentation()
segment_image.load_pascalvoc_model("deeplabv3_xception_tf_dim_ordering_tf_kernels.h5")
segment_image.segmentAsPascalvoc(r"C:\Users\xww\Desktop\test.jpg", output_image_name = r"D:\unity_python\ma_1.jpg")

I get this error:
Unable to open file (unable to open file: name = 'deeplabv3_xception_tf_dim_ordering_tf_kernels.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

Where can I find the weights file "deeplabv3_xception_tf_dim_ordering_tf_kernels.h5"?

thank you

Deploying a trained model to TensorFlow Serving

I am trying to deploy a trained custom model to a TensorFlow Serving container, but when I do I end up with this signature:

The given SavedModel SignatureDef contains the following input(s):
  inputs['input_anchors'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, -1, 4)
      name: serving_default_input_anchors:0
  inputs['input_image'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, -1, -1, 3)
      name: serving_default_input_image:0
  inputs['input_image_meta'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 15)
      name: serving_default_input_image_meta:0

This means that each time I would need to provide a kind of pre-processing function to format these inputs. When I look at the code, it seems quite complex to fix the configs and export them in order to build everything we need.

Do you know an easy way of deploying models trained with PixelLib to TensorFlow Serving?

It would be great if you could add this export feature somehow.

Thanks

OpenCV !ssize.empty() in function 'resize'

I'm trying to execute:

change_bg.change_video_bg(single_vid, "bg.jpg", frames_per_second=10, output_video_name=output_vid + new_vid,
                          detect="car")

However, I'm getting the following error:

Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/snap/pycharm-professional/240/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/snap/pycharm-professional/240/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/n/Documents/Background_removal/src/pixellib_main_vid.py", line 18, in <module>
    detect="car")
  File "/home/n/anaconda3/envs/background/lib/python3.7/site-packages/pixellib/tune_bg.py", line 301, in change_video_bg
    img = cv2.resize(img, (h,w))
cv2.error: OpenCV(4.5.1) /tmp/pip-req-build-hj027r8z/opencv/modules/imgproc/src/resize.cpp:4051: error: (-215:Assertion failed) !ssize.empty() in function 'resize'

What might be the cause of this?

Problem with version 0.4.5.

Dear ayoolaolafenwa,
I was checking your examples and found a problem with the latest version. This import is not working:
from pixellib.custom_train import instance_custom_training
I pinned the version to 0.4.0 and it worked correctly.

By the way, thanks for the project.

ValueError: Shapes (1, 1, 256, 21) and (19, 256, 1, 1) are incompatible

Hello, here is my code:

import pixellib
from pixellib.semantic import semantic_segmentation
segment_image = semantic_segmentation()
segment_image.load_pascalvoc_model("deeplabv3_xception_tf_dim_ordering_tf_kernels_cityscapes.h5")
segment_image.segmentAsPascalvoc("time.jpg", output_image_name="out.jpg")

My TensorFlow version is 2.3.0.
What should I do?

Semantic Segmentation with Ade20K

Many thanks for creating this library.
I am fairly new to ML and I am trying to do semantic segmentation using the ADE20K dataset. I was able to obtain the segmented image, classes, and masks. For the project I am working on, I need to know the percentage of pixels that belong to each class. In my specific example, the image gets segmented into 12 classes. Is there a way to obtain what percentage of the pixels corresponds to each of these 12 classes?
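What the ADE20K call returns is not spelled out on this page, so the sketch below only assumes you can reduce the raw model output to a 2-D label map with one class index per pixel; from there the percentages follow from a pixel count:

import numpy as np

def class_percentages(label_map):
    # label_map: 2-D array of per-pixel class indices (assumed obtained from
    # the raw segmentation output, however your version exposes it)
    classes, counts = np.unique(label_map, return_counts=True)
    return {int(c): 100.0 * n / label_map.size for c, n in zip(classes, counts)}

# Toy example: a 2x3 label map containing classes 0 and 5, split 50/50
toy = np.array([[0, 0, 5],
                [5, 5, 0]])
print(class_percentages(toy))   # {0: 50.0, 5: 50.0}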

load_model

Hi,
kindly help me remove this error:

runfile('E:/MCSS/PixelLib-master/demo.py', wdir='E:/MCSS/PixelLib-master')
Reloaded modules: pixellib, pixellib.instance, pixellib.mask_rcnn, pixellib.utils, pixellib.config
Traceback (most recent call last):

File "", line 1, in
runfile('E:/MCSS/PixelLib-master/demo.py', wdir='E:/MCSS/PixelLib-master')

File "C:\Users\dell\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 704, in runfile
execfile(filename, namespace)

File "C:\Users\dell\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

File "E:/MCSS/PixelLib-master/demo.py", line 14, in
segment_image.load_model('mask_rcnn_coco.h5')

File "E:\MCSS\PixelLib-master\pixellib\instance.py", line 25, in load_model
self.model.load_weights(model_path, by_name= True)

File "E:\MCSS\PixelLib-master\pixellib\mask_rcnn.py", line 2089, in load_weights
from tensorflow.python.keras.saving import hdf5_format

ModuleNotFoundError: No module named 'tensorflow.python.keras.saving'

Running on GPU

I run the semantic segmentation on CPU without any errors, but when I run it on GPU for the same video I get the following error: "UnboundLocalError: local variable 'raw_labels' referenced before assignment." Tensorflow-gpu 2.4.0 is installed.

Why so slow?

DeepLabV3+ shouldn't need 8 seconds per image, so why is PixelLib so slow? Does it use the GPU?

Custom training: epoch 1 restarts again

Epoch 1/30
/root/anaconda3/envs/pixellib/lib/python3.7/site-packages/skimage/transform/_warps.py:830: FutureWarning: Input image dtype is bool. Interpolation is not defined with bool data type. Please set order to 0 or explicitely cast input image to another data type. Starting from version 0.19 a ValueError will be raised instead of this warning.
  order = _validate_interpolation_order(image.dtype, order)
[the skimage FutureWarning above is emitted again before every step below; repeated warnings trimmed]
/root/anaconda3/envs/pixellib/lib/python3.7/site-packages/tensorflow_core/python/framework/indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
2020-12-28 10:02:29.703215: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-12-28 10:02:30.194037: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
  1/100 [..............................] - ETA: 47:41 - loss: 4.9516 - rpn_class_loss: 0.1558 - rpn_bbox_loss: 1.6405 - mrcnn_class_loss: 1.7892 - mrcnn_bbox_loss: 0.7086 - mrcnn_mask_loss: 0.6575
  2/100 [..............................] - ETA: 45:15 - loss: 5.1448 - rpn_class_loss: 0.2000 - rpn_bbox_loss: 1.9165 - mrcnn_class_loss: 1.7027 - mrcnn_bbox_loss: 0.7267 - mrcnn_mask_loss: 0.5988
  3/100 [..............................] - ETA: 50:40 - loss: 5.4883 - rpn_class_loss: 0.2283 - rpn_bbox_loss: 2.4004 - mrcnn_class_loss: 1.5654 - mrcnn_bbox_loss: 0.7351 - mrcnn_mask_loss: 0.5591
  4/100 [>.............................] - ETA: 53:20 - loss: 5.1602 - rpn_class_loss: 0.2307 - rpn_bbox_loss: 2.2657 - mrcnn_class_loss: 1.4151 - mrcnn_bbox_loss: 0.7080 - mrcnn_mask_loss: 0.5406

Starts again from epoch 1.

 99/100 [============================>.] - ETA: 26s - loss: 3.4030 - rpn_class_loss: 0.2289 - rpn_bbox_loss: 1.8715 - mrcnn_class_loss: 0.5229 - mrcnn_bbox_loss: 0.3867 - mrcnn_mask_loss: 0.3930
Epoch 1/30
[skimage FutureWarning repeated between the steps below; repeated warnings trimmed]
  1/100 [..............................] - ETA: 53:58 - loss: 8.0097 - rpn_class_loss: 0.3314 - rpn_bbox_loss: 3.5320 - mrcnn_class_loss: 2.7404 - mrcnn_bbox_loss: 0.9378 - mrcnn_mask_loss: 0.4682
  2/100 [..............................] - ETA: 52:49 - loss: 7.3174 - rpn_class_loss: 0.3083 - rpn_bbox_loss: 2.7523 - mrcnn_class_loss: 2.8233 - mrcnn_bbox_loss: 0.9591 - mrcnn_mask_loss: 0.4745
  3/100 [..............................] - ETA: 55:52 - loss: 7.1300 - rpn_class_loss: 0.2889 - rpn_bbox_loss: 2.6260 - mrcnn_class_loss: 2.7988 - mrcnn_bbox_loss: 0.9497 - mrcnn_mask_loss: 0.4667
  4/100 [>.............................] - ETA: 58:37 - loss: 7.2551 - rpn_class_loss: 0.3070 - rpn_bbox_loss: 2.7443 - mrcnn_class_loss: 2.7764 - mrcnn_bbox_loss: 0.9308 - mrcnn_mask_loss: 0.4967
  5/100 [>.............................] - ETA: 1:13:32 - loss: 7.2018 - rpn_class_loss: 0.3089 - rpn_bbox_loss: 2.6986 - mrcnn_class_loss: 2.7592 - mrcnn_bbox_loss: 0.9382 - mrcnn_mask_loss: 0.4968
  6/100 [>.............................] - ETA: 1:09:59 - loss: 7.2722 - rpn_class_loss: 0.3149 - rpn_bbox_loss: 2.7582 - mrcnn_class_loss: 2.7657 - mrcnn_bbox_loss: 0.9481 - mrcnn_mask_loss: 0.4853
  7/100 [=>............................] - ETA: 1:12:10 - loss: 7.1821 - rpn_class_loss: 0.3109 - rpn_bbox_loss: 2.6810 - mrcnn_class_loss: 2.7698 - mrcnn_bbox_loss: 0.9508 - mrcnn_mask_loss: 0.4695
  8/100 [=>............................] - ETA: 1:09:01 - loss: 7.2226 - rpn_class_loss: 0.3056 - rpn_bbox_loss: 2.7524 - mrcnn_class_loss: 2.7584 - mrcnn_bbox_loss: 0.9392 - mrcnn_mask_loss: 0.4669
  9/100 [=>............................] - ETA: 1:09:59 - loss: 7.2241 - rpn_class_loss: 0.3076 - rpn_bbox_loss: 2.7430 - mrcnn_class_loss: 2.7643 - mrcnn_bbox_loss: 0.9442 - mrcnn_mask_loss: 0.4650
 10/100 [==>...........................] - ETA: 1:08:16 - loss: 7.3216 - rpn_class_loss: 0.3094 - rpn_bbox_loss: 2.8192 - mrcnn_class_loss: 2.7728 - mrcnn_bbox_loss: 0.9390 - mrcnn_mask_loss: 0.4812
 11/100 [==>...........................] - ETA: 1:06:36 - loss: 7.2653 - rpn_class_loss: 0.3190 - rpn_bbox_loss: 2.7430 - mrcnn_class_loss: 2.7848 - mrcnn_bbox_loss: 0.9413 - mrcnn_mask_loss: 0.4773
 12/100 [==>...........................] - ETA: 1:04:45 - loss: 7.3337 - rpn_class_loss: 0.3177 - rpn_bbox_loss: 2.7954 - mrcnn_class_loss: 2.7964 - mrcnn_bbox_loss: 0.9404 - mrcnn_mask_loss: 0.4838
 13/100 [==>...........................] - ETA: 1:03:42 - loss: 7.2974 - rpn_class_loss: 0.3260 - rpn_bbox_loss: 2.7536 - mrcnn_class_loss: 2.8046 - mrcnn_bbox_loss: 0.9374 - mrcnn_mask_loss: 0.4757

All weights are saved from epoch 1 only.

Failed to Create/Open: xception_pascalvoc.pb

Hello,

I'm new to PixelLib and have just installed it, following the instructions in the official documentation. I'm using Windows 10 with Python 3.7.3.

When I run this example: https://gist.github.com/ayoolaolafenwa/019448eb8693337c5e7c70a08041b801#file-image_detect_person-py

I get the following error. Is a file missing?

Traceback (most recent call last):
  File "blur_person.py", line 5, in <module>
    change_bg.load_pascalvoc_model("xception_pascalvoc.pb")
  File "C:\Python37\lib\site-packages\pixellib\tune_bg.py", line 32, in load_pascalvoc_model
    graph_def = tf.compat.v1.GraphDef.FromString(file_handle.read())
  File "C:\Python37\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 117, in read
    self._preread_check()
  File "C:\Python37\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 80, in _preread_check
    compat.path_to_str(self.__name), 1024 * 512)
tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: xception_pascalvoc.pb : The system cannot find the file specified.

CUDA_ERROR_OUT_OF_MEMORY: out of memory

I use CUDA 11.0 and tensorflow 2.4.1, and I get an out-of-memory error for both the testing and the training code, even with batch_size = 1.

The following configuration did not solve the problem either:

from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

config = ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True
# config.gpu_options.per_process_gpu_memory_fraction = 0.33
session = InteractiveSession(config=config)

How to use the annotation from tools other than labelme

Hi, thank you Ayoola Olafenwa and team for the continued support and for this beneficial library that speeds up custom training from scratch.

Which section of the code / what changes need to be made if, instead of individual JSON files for each image from LabelMe, a single COCO annotation for the whole dataset (e.g. from COCO Annotator) is to be used with PixelLib for training?

This would primarily skip the step of accumulating all the individual LabelMe JSONs into one JSON.

Thanks,
Arun

Is there a way to return a transparent background?

Hello, I've been going through the docs and I see alter_bg offers color_bg, image_bg, and even gray_bg. Is there a way to return a transparent background?

Appreciate the help, and I love the project. I'm new to Python and this got me up and running within a few days.
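The alter_bg modes mentioned above (color_bg, image_bg, gray_bg) all fill the background; whether this PixelLib version has a transparent-output option is not confirmed here, so one workaround is to build an RGBA image yourself from a foreground mask (for example one obtained from the segmentation output). A minimal sketch:

import cv2
import numpy as np

def make_transparent(image_bgr, foreground_mask):
    # image_bgr:       H x W x 3 uint8 image, e.g. from cv2.imread
    # foreground_mask: H x W boolean array, True on the object
    #                  (assumed obtained from your segmentation output)
    bgra = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2BGRA)
    bgra[:, :, 3] = np.where(foreground_mask, 255, 0).astype(np.uint8)
    return bgra

# img = cv2.imread("person.jpg")
# out = make_transparent(img, mask)            # mask from your segmentation step
# cv2.imwrite("person_transparent.png", out)   # PNG keeps the alpha channel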

Error while training

ValueError: Error when checking input: expected input_image_meta to have shape (17,) but got array with shape (16,)

I get this when trying to add a few samples of insect images tagged with LabelMe to the Nature dataset, so it becomes a 3-class problem. It works fine with the 2-class Nature case.

Unable to load model in Docker container; it shows no error and never finishes loading

When I use it with a GPU in a Docker container it shows:
E tensorflow/stream_executor/cuda/cuda_driver.cc:1244] could not retrieve CUDA device count: CUDA_ERROR_NOT_INITIALIZED: initialization error

When I run it without a GPU, it hangs on the load-model line:
segment_video.load_ade20k_model("deeplabv3_xception65_ade20k.h5")

Only show mask in instance segmentation

I'm new to ML; thanks for providing the great PixelLib package.

Is it possible to keep only the mask of the instance segmentation?

The result would look like semantic segmentation output: only a black background and the mask.
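One way to get that effect, assuming you capture the dictionary returned by segmentImage (see the mask-printing issue above): take the union of the boolean instance masks and black out everything else. A minimal sketch:

import numpy as np

def mask_only_image(image, masks):
    # image: H x W x 3 array, the same image that was segmented
    # masks: H x W x N boolean array, as returned in segmask["masks"]
    #        by segmentImage
    combined = np.any(masks, axis=-1)              # union of all instance masks
    return np.where(combined[..., None], image, 0).astype(image.dtype)

# Usage sketch (paths and model are placeholders):
#   segmask, output = instance_seg.segmentImage("test.jpg", show_bboxes=False,
#                                               output_image_name="seg.jpg")
#   cv2.imwrite("mask_only.jpg",
#               mask_only_image(cv2.imread("test.jpg"), segmask["masks"]))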

mask_rcnn_coco.h5, ValueError: You are trying to load a weight file containing 233 layers into a model with 293 layers.

My code:

import pixellib
from pixellib.semantic import semantic_segmentation

segment_image = semantic_segmentation()
segment_image.load_pascalvoc_model("./mask_rcnn_coco.h5")
segment_image.segmentAsPascalvoc("../ImageSegmentation_data/dogcat1.jpg",
                                 output_image_name = "dogcat1_segment.jpg")

Error:

File "D:\ProgramFiles\Anaconda\envs\ml_env_20191207\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)

File "D:\ProgramFiles\Anaconda\envs\ml_env_20191207\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

File "E:/CV/ImageSegmentation/ImageSegmentation_code/5行代码快速实现图像分割.py", line 23, in
segment_image.load_pascalvoc_model("./mask_rcnn_coco.h5")

File "D:\ProgramFiles\Anaconda\envs\ml_env_20191207\lib\site-packages\pixellib\semantic.py", line 17, in load_pascalvoc_model
self.model.load_weights(model_path)

File "D:\ProgramFiles\Anaconda\envs\ml_env_20191207\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 181, in load_weights
return super(Model, self).load_weights(filepath, by_name)

File "D:\ProgramFiles\Anaconda\envs\ml_env_20191207\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 1177, in load_weights
saving.load_weights_from_hdf5_group(f, self.layers)

File "D:\ProgramFiles\Anaconda\envs\ml_env_20191207\lib\site-packages\tensorflow_core\python\keras\saving\hdf5_format.py", line 677, in load_weights_from_hdf5_group
' layers.')

ValueError: You are trying to load a weight file containing 233 layers into a model with 293 layers.

Is it possible to remove classifications?

I'm pretty new to ML and I'm looking to use this in a project. I've decided to use the mask_rcnn_coco model, but it has a lot of classifications that I don't really need. I was wondering if it would be possible to remove classifications? Would that speed anything up?

For instance, I only need to detect the objects in this set: {'airplane', 'bicycle', 'boat', 'car', 'motorbus', 'bus', 'motorcycle', 'train', 'truck', 'umbrella'}

Would it be possible to configure the model to lose the weights for all the other object classifications?
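Whether this PixelLib version has a built-in way to restrict the detected classes is not confirmed here. Filtering the returned detections afterwards works with the documented return values, though it will not speed anything up, since the model still evaluates every class (dropping the other class heads would require retraining). A minimal sketch; the class-id set is hypothetical and must be mapped from your model's class-name list:

import numpy as np

# Hypothetical ids: map the class names you need to the ids your model reports
# in segmask["class_ids"] (via the COCO class-name list your setup uses).
WANTED_IDS = {2, 3, 4, 6, 7, 8, 9, 28}

def keep_wanted(segmask, wanted_ids=WANTED_IDS):
    keep = np.isin(segmask["class_ids"], list(wanted_ids))
    return {
        "rois":      segmask["rois"][keep],
        "class_ids": segmask["class_ids"][keep],
        "scores":    segmask["scores"][keep],
        "masks":     segmask["masks"][..., keep],
    }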

Checkpoints stop saving/checkpoint questions

Hi, I was curious about how checkpoints work; I think I have an idea of what's going on, but some clarification would be nice.

When training my model (85 training and 10 testing images), it stops producing checkpoints after a certain number of epochs, 3 or 4 (and another at 32). I'm just curious as to why it does this. I'm currently at ~200 epochs and no additional checkpoints have been written.

Some clarification on the checkpoint names might also be useful as well.
We have mask_rcnn_model.{epoch number}-{value}.h5. What is {value}?

Thanks!

Running from the file code.py

I created the file D:\instance\code.py and pasted this code:

import pixellib
from pixellib.custom_train import instance_custom_training

vis_img = instance_custom_training()
vis_img.load_dataset("Nature")
vis_img.visualize_sample()

And got the following error:

from pixellib.custom_train import instance_custom_training
ImportError: cannot import name 'instance_custom_training' from partially initialized module 'pixellib.custom_train' (most likely due to a circular import)

ValueError while using .bmp image files for training

I was trying to use .bmp image files for training, but it throws the error below:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-21-a39c632a3e07> in <module>()
      3 
      4 vis_img = instance_custom_training()
----> 5 vis_img.load_dataset("/content/Polsar-project/main_Data512")
      6 vis_img.visualize_sample()

9 frames
<__array_function__ internals> in amin(*args, **kwargs)

/usr/local/lib/python3.7/dist-packages/numpy/core/fromnumeric.py in _wrapreduction(obj, ufunc, method, axis, dtype, out, **kwargs)
     85                 return reduction(axis=axis, out=out, **passkwargs)
     86 
---> 87     return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
     88 
     89 

ValueError: zero-size array to reduction operation minimum which has no identity

The code I used is below (I am using a custom build, which is why the imports look like this):

import PixelLib
from PixelLib.pixellib.custom_train import instance_custom_training

vis_img = instance_custom_training()
vis_img.load_dataset("/content/Polsar-project/main_Data512")
vis_img.visualize_sample()

I have created a demo dataset (I can't share the full one), which is available here: https://github.com/soumya997/demo_dataset

I have tried different Stack Overflow answers, but those don't seem to work. Yes, I could convert the images to another format, but I assume converting the .bmp files to formats like .jpg or .png will result in information loss.

Here is a Colab notebook instance of the same: https://colab.research.google.com/drive/1rafDzfqD_N0a3pCVLBaXDGVJ-LxUWHDj?usp=sharing

It would be really helpful if someone could help me out with this.
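While the .bmp loader issue is open, one pragmatic note: PNG compression is lossless, so converting the .bmp files to .png does not discard pixel information (unlike JPEG). A minimal conversion sketch; the directory path is a placeholder, and the matching LabelMe .json files would also need their imagePath entries updated to the new filenames:

import glob
import os

import cv2

src_dir = "path/to/bmp/images"   # placeholder: point this at your image folder(s)
for bmp_path in glob.glob(os.path.join(src_dir, "*.bmp")):
    img = cv2.imread(bmp_path, cv2.IMREAD_UNCHANGED)   # keep original channels/bit depth
    png_path = os.path.splitext(bmp_path)[0] + ".png"
    cv2.imwrite(png_path, img)                         # PNG is lossless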
