
intel / unet

297 stars · 16 watchers · 125 forks · 89.69 MB

U-Net Biomedical Image Segmentation

License: Apache License 2.0

Languages: Python 11.30%, Shell 0.14%, Jupyter Notebook 88.56%
Topics: u-net, deep-learning, artificial-intelligence-algorithms, tensorflow, keras

unet's Introduction

DISCONTINUATION OF PROJECT

This project will no longer be maintained by Intel.
Intel has ceased development of and contributions to this project, including, but not limited to, maintenance, bug fixes, new releases, and updates.
Intel no longer accepts patches to this project.
If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.

Deep Learning Medical Decathlon Demos for Python*

U-Net Biomedical Image Segmentation with Medical Decathlon Dataset.

This repository contains 2D and 3D U-Net TensorFlow scripts for training models using the Medical Decathlon dataset (http://medicaldecathlon.com/).

[Example segmentation predictions: pred152_3D, pred195]

Citation

David Ojika, Bhavesh Patel, G. Anthony Reina, Trent Boyer, Chad Martin and Prashant Shah. "Addressing the Memory Bottleneck in AI Model Training", Workshop on MLOps Systems, Austin TX (2020), held in conjunction with the Third Conference on Machine Learning and Systems (MLSys). https://arxiv.org/abs/2003.08732

unet's People

Contributors

davenso, karkadad, mas-dse-greina, mattsonthieme, ravi9, sfblackl-intel, shailensobhee, tonyreina


unet's Issues

'Model' object has no attribute 'load_model'

I tried to run evaluate_model.py, but I ran into the following problem.

Data format = channels_last
Traceback (most recent call last):
File "evaluate_model.py", line 51, in
model = unet_model.model.load_model(args.saved_model, custom_objects=unet_model.custom_objects)
AttributeError: 'Model' object has no attribute 'load_model'
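
The immediate cause is that load_model is a module-level function in Keras, not a method on a Model instance. A minimal sketch of the usual fix (file path assumed; with compile=False no custom_objects mapping is needed for inference-only use):

    import tensorflow as tf

    # load_model lives in tf.keras.models, not on the Model object itself.
    # compile=False skips deserializing the custom dice losses/metrics, so
    # the model loads for inference without a custom_objects mapping.
    model = tf.keras.models.load_model("3d_unet_decathlon.hdf5", compile=False)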

data download from google drive

I tried to download the dataset (Task 1) from the link provided in the notebook, but the links don't seem to work. Could you please update them so that the large dataset can be downloaded easily from the command line on Linux?

Thanks

use GPU issue

Hi,
Very good job, and thank you for sharing.
I want to know whether we can use a GPU for the tensor computation. Thank you.
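
For what it's worth, the stock scripts here target CPU TensorFlow. A quick way to check whether a GPU build is active (TF 1.x-era API, still present though deprecated in TF 2):

    import tensorflow as tf

    # True only when a CUDA-capable GPU and a GPU-enabled TensorFlow build
    # (e.g. tensorflow-gpu for TF 1.x) are both available; op placement on
    # the GPU then happens automatically.
    print(tf.test.is_gpu_available())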

issue regarding training phase

I am using a virtual machine with Ubuntu and OpenVINO 2021.2.


WARNING:tensorflow:From /home/owais/anaconda3/envs/decathlon/lib/python3.6/site-packages/tensorflow_core/python/data/util/random_seed.py:58: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:OMP_NUM_THREADS is no longer used by the default Keras config. To configure the number of threads, use tf.config.threading APIs.
Traceback (most recent call last):
File "train.py", line 170, in
args.batch_size, args.epochs)
File "train.py", line 141, in train_and_predict
callbacks=model_callbacks)
File "/home/owais/anaconda3/envs/decathlon/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 727, in fit
use_multiprocessing=use_multiprocessing)
File "/home/owais/anaconda3/envs/decathlon/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 675, in fit
steps_name='steps_per_epoch')
File "/home/owais/anaconda3/envs/decathlon/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 169, in model_iteration
ins = _prepare_feed_values(model, inputs, targets, sample_weights, mode)
File "/home/owais/anaconda3/envs/decathlon/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 535, in _prepare_feed_values
extract_tensors_from_dataset=True)
File "/home/owais/anaconda3/envs/decathlon/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2471, in _standardize_user_data
exception_prefix='input')
File "/home/owais/anaconda3/envs/decathlon/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 517, in standardize_input_data
standardize_single_array(x, shape) for (x, shape) in zip(data, shapes)
File "/home/owais/anaconda3/envs/decathlon/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 517, in
standardize_single_array(x, shape) for (x, shape) in zip(data, shapes)
File "/home/owais/anaconda3/envs/decathlon/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 442, in standardize_single_array
if (x.shape is not None and len(x.shape) == 1 and
File "/home/owais/anaconda3/envs/decathlon/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_shape.py", line 827, in len
raise ValueError("Cannot take the length of shape with unknown rank.")
ValueError: Cannot take the length of shape with unknown rank.
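
This particular ValueError usually means the tf.data pipeline lost its static shape information (e.g. after a tf.py_function-style loader), so Keras cannot validate the input. A minimal, self-contained sketch of the common workaround (the shapes are illustrative, not the repo's exact dimensions):

    import numpy as np
    import tensorflow as tf

    # set_shape pins the static shape of each dataset element so Keras can
    # validate the input instead of failing on "unknown rank".
    def _fix_shape(img, msk):
        img.set_shape([128, 128, 128, 4])
        msk.set_shape([128, 128, 128, 1])
        return img, msk

    imgs = np.zeros((2, 128, 128, 128, 4), dtype=np.float32)
    msks = np.zeros((2, 128, 128, 128, 1), dtype=np.float32)
    ds = tf.data.Dataset.from_tensor_slices((imgs, msks)).map(_fix_shape).batch(1)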

SyntaxError: invalid syntax

Hello,

When I start training the model, I get this error.

(decathlon) sivajyothi@cougar-U:~/unet/3D$ python train.py --data_path /home/sivajyothi/unet/3D/data/Task01_BrainTumour/
Traceback (most recent call last):
File "train.py", line 19, in
from dataloader import DataGenerator
File "/home/sivajyothi/unet/3D/dataloader.py", line 352
imgs = np.zeros((self.batch_size, *self.dim, self.n_in_channels))
^
SyntaxError: invalid syntax

As you said, I untarred the dataset and provided its path.

The Task01_BrainTumour dataset consists of imagesTr, labelsTr, imagesTs, and a dataset.json file.
Should I make any changes to prepare the dataset? Can you please help me with this?
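
For the record, the line that fails is valid Python 3.5+ only: star-unpacking inside a tuple literal (PEP 448) is a SyntaxError on older interpreters, so this usually points at the wrong Python being on the path rather than a dataset problem. A small sketch of the two equivalent spellings:

    import numpy as np

    batch_size, dim, n_in_channels = 4, (64, 64, 64), 1

    # Python >= 3.5 (PEP 448) -- the form used in dataloader.py:
    imgs = np.zeros((batch_size, *dim, n_in_channels))

    # Equivalent spelling that also parses on older interpreters:
    imgs_compat = np.zeros((batch_size,) + tuple(dim) + (n_in_channels,))
    assert imgs.shape == imgs_compat.shape == (4, 64, 64, 64, 1)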

Predict multiple output masks in 2D

Hello,
Can you give a code example for what you mentioned: "To predict multiple output masks some modification of the output layer to the model (e.g. more output layers for the sigmoid mask) is required"? Say, for 3 outputs (0: background, 1: edema, 2: non-enhancing tumor).

Also, does the loss function need to be adjusted?

Thank you in advance
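
One possible shape of the change, sketched under the assumption of a 2D model with one-hot masks (this is not the repo's exact code): replace the single-channel sigmoid output with an n-class softmax head, and switch to a multi-class loss such as categorical cross-entropy or an averaged per-class Dice loss.

    import tensorflow as tf
    from tensorflow.keras import layers

    # One channel per class (0: background, 1: edema, 2: non-enhancing tumor).
    def segmentation_head(features, n_classes=3):
        return layers.Conv2D(n_classes, (1, 1), activation="softmax",
                             name="prediction")(features)

    # The loss changes accordingly: categorical cross-entropy on one-hot
    # masks, or a multi-class Dice loss averaged over the class channels.
    def multiclass_dice_loss(y_true, y_pred, smooth=1.0):
        axes = (1, 2)  # sum over the spatial dimensions only
        intersection = tf.reduce_sum(y_true * y_pred, axis=axes)
        union = tf.reduce_sum(y_true + y_pred, axis=axes)
        dice = (2.0 * intersection + smooth) / (union + smooth)
        return 1.0 - tf.reduce_mean(dice)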

Why does 2D UNET convert data to HDF5 when 3D UNET does not?

Hi sjain-stanford:
Thanks for sharing your code. I have three questions.
1. Have you written any relevant papers?
2. Why does the 2D U-Net convert data to HDF5 when the 3D U-Net does not?
3. Are you using a GPU? I see the TensorFlow here is the CPU version. If I want to use a GPU, do I just need to install tensorflow-gpu?

Best wish!

Converting Keras model to TF model.

I tried to run "CUDA_VISIBLE_DEVICES=None python3 convert_keras_to_tensorflow_serving_model.py". It gives the following error.

Loading saved Keras model.
Traceback (most recent call last):
File "convert_keras_to_tensorflow_serving_model.py", line 87, in
"dice_coef": dice_coef, "dice_coef_loss": dice_coef_loss})
File "/home/avx/venv/TF3_env/lib/python3.5/site-packages/keras/engine/saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "/home/avx/venv/TF3_env/lib/python3.5/site-packages/keras/engine/saving.py", line 312, in _deserialize_model
sample_weight_mode=sample_weight_mode)
File "/home/avx/venv/TF3_env/lib/python3.5/site-packages/keras/engine/training.py", line 139, in compile
loss_function = losses.get(loss)
File "/home/avx/venv/TF3_env/lib/python3.5/site-packages/keras/losses.py", line 133, in get
return deserialize(identifier)
File "/home/avx/venv/TF3_env/lib/python3.5/site-packages/keras/losses.py", line 114, in deserialize
printable_module_name='loss function')
File "/home/avx/venv/TF3_env/lib/python3.5/site-packages/keras/utils/generic_utils.py", line 165, in deserialize_keras_object
':' + function_name)
ValueError: Unknown loss function:combined_dice_ce_loss
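
The error says the saved model references combined_dice_ce_loss, which load_model cannot deserialize unless every custom loss and metric is listed in custom_objects. A sketch of the usual fix (the import path and file name are assumptions based on the repo layout):

    import keras

    # These functions are defined in the repo's model code; the module
    # name "model" is assumed here for illustration.
    from model import dice_coef, dice_coef_loss, combined_dice_ce_loss

    model = keras.models.load_model(
        "unet_model_for_decathlon.hdf5",  # path assumed
        custom_objects={"dice_coef": dice_coef,
                        "dice_coef_loss": dice_coef_loss,
                        "combined_dice_ce_loss": combined_dice_ce_loss})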

Bug report: dataloader.py does not read all slices

Apparently, there is a bug in the algorithm behind the function generate_batch_from_files in the file dataloader.py in the 2D directory. It does not read all slices of some files.
Below you see some printout from inside the function generate_batch_from_files for the first few iterations for a batch_size of 320 and num_slices_per_scan of 155.
For this setup, each time a queue of images is created, three volumes of 155 slices are read, resulting in a stack of 465 slices, which is larger than the batch size. The stack must therefore be cropped to 320 slices to match the batch size.
When idy=0, the stack of images is cropped from the end, meaning the first 320 slices are kept and the rest are thrown away. When idy=30, the stack of images is cropped from the beginning, meaning that the last 320 slices are kept and the rest are thrown away. Each time a new set of 3 files is read and the leftover from the last step is not used anymore.

If this repository is still active, I would really appreciate your reply.

NUM_QUEUED_IMAGES: 3
idz=0 idx=0 idy=0
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_089.nii.gz
idz=1 idx=1 idy=0
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_071.nii.gz
idz=2 idx=2 idy=0
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_183.nii.gz

Epoch 1/30
idz=0 idx=3 idy=320
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_046.nii.gz
idz=1 idx=4 idy=320
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_182.nii.gz
idz=2 idx=5 idy=320
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_408.nii.gz

idz=0 idx=7 idy=0
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_072.nii.gz
idz=1 idx=8 idy=0
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_145.nii.gz
idz=2 idx=9 idy=0
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_102.nii.gz

  1/187 [..............................] - ETA: 0s - loss: 7.2098 - dice_coef: 1.3537e-04 - soft_dice_coef: 0.0010after yield: idx 10 idy 320
idz=0 idx=10 idy=320
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_431.nii.gz
idz=1 idx=11 idy=320
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_233.nii.gz
idz=2 idx=12 idy=320
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_476.nii.gz

  2/187 [..............................] - ETA: 36:50 - loss: 6.6567 - dice_coef: 4.5922e-04 - soft_dice_coef: 0.0021after yield: idx 14 idy 0
idz=0 idx=14 idy=0
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_055.nii.gz
idz=1 idx=15 idy=0
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_118.nii.gz
idz=2 idx=16 idy=0
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_037.nii.gz

  3/187 [..............................] - ETA: 48:27 - loss: 6.6684 - dice_coef: 3.6537e-04 - soft_dice_coef: 0.0021after yield: idx 17 idy 320
idz=0 idx=17 idy=320
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_058.nii.gz
idz=1 idx=18 idy=320
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_143.nii.gz
idz=2 idx=19 idy=320
filename /home/pkhateri/Documents/data/decathlon/Task01_BrainTumour/./labelsTr/BRATS_298.nii.gz
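
If the diagnosis above is right, the core fix is to carry the leftover slices over to the next batch instead of discarding them. A minimal, repo-independent sketch of that strategy:

    import numpy as np

    # Keep a rolling buffer of slices and emit fixed-size batches from it,
    # so no slices are lost between file reads.
    def batches_from_scans(scans, batch_size):
        buffer = []
        for scan in scans:                    # scan: array of shape (155, H, W)
            buffer.extend(scan)
            while len(buffer) >= batch_size:
                yield np.stack(buffer[:batch_size])
                buffer = buffer[batch_size:]  # leftover stays for the next batch

    # Example: three 155-slice scans -> 465 slices -> one 320-slice batch,
    # with 145 slices retained for the next batch instead of being dropped.
    scans = [np.zeros((155, 144, 144)) for _ in range(3)]
    for batch in batches_from_scans(scans, 320):
        print(batch.shape)   # (320, 144, 144)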

The Build Fails

Hi, thank you for sharing your work online; it's been very instructive. Just wanted to say that the Docker build fails when trying to run build_samples.sh. I was able to get the build to complete by commenting that command out, but I'm not sure yet whether that's a good thing (how necessary is it to run build_samples.sh?). The error messages are included below.

Step 16/30 : RUN /bin/bash -c "${OPENVINO_DIR}/inference_engine/samples/build_samples.sh"
 ---> Running in e45c057c80ba

Setting environment variables for building samples...
[setupvars.sh] OpenVINO environment initialized
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for C++ include unistd.h
-- Looking for C++ include unistd.h - found
-- Looking for C++ include stdint.h
-- Looking for C++ include stdint.h - found
-- Looking for C++ include sys/types.h
-- Looking for C++ include sys/types.h - found
-- Looking for C++ include fnmatch.h
-- Looking for C++ include fnmatch.h - found
-- Looking for C++ include stddef.h
-- Looking for C++ include stddef.h - found
-- Check size of uint32_t
-- Check size of uint32_t - done
-- Looking for strtoll
-- Looking for strtoll - found
-- Found InferenceEngine: /opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/lib/intel64/libinference_engine.so (Required is at least version "2.1") 
-- Performing Test HAVE_CPUID_INFO
-- Performing Test HAVE_CPUID_INFO - Success
-- Host CPU features:
--   3DNOW not supported
--   3DNOWEXT not supported
--   ABM not supported
--   ADX not supported
--   AES supported
--   AVX supported
--   AVX2 supported
--   AVX512CD not supported
--   AVX512F not supported
--   AVX512ER not supported
--   AVX512PF not supported
--   BMI1 supported
--   BMI2 supported
--   CLFSH supported
--   CMPXCHG16B supported
--   CX8 supported
--   ERMS supported
--   F16C supported
--   FMA supported
--   FSGSBASE supported
--   FXSR supported
--   HLE not supported
--   INVPCID not supported
--   LAHF supported
--   LZCNT supported
--   MMX supported
--   MMXEXT not supported
--   MONITOR not supported
--   MOVBE supported
--   MSR supported
--   OSXSAVE supported
--   PCLMULQDQ supported
--   POPCNT supported
--   PREFETCHWT1 not supported
--   RDRAND supported
--   RDSEED not supported
--   RDTSCP not supported
--   RTM not supported
--   SEP supported
--   SHA not supported
--   SSE supported
--   SSE2 supported
--   SSE3 supported
--   SSE4.1 supported
--   SSE4.2 supported
--   SSE4a not supported
--   SSSE3 supported
--   SYSCALL supported
--   TBM not supported
--   XOP not supported
--   XSAVE supported
-- TBB include: /opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/external/tbb/include
-- TBB Release lib: /opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/external/tbb/lib/libtbb.so
-- TBB Debug lib: /opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/external/tbb/lib/libtbb_debug.so
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Configuring done
-- Generating done
-- Build files have been written to: /root/inference_engine_samples_build
Scanning dependencies of target gflags_nothreads_static
[  1%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags.cc.o
[  2%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags_reporting.cc.o
Scanning dependencies of target format_reader
[  3%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags_completions.cc.o
[  5%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/MnistUbyte.cpp.o
[  6%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/format_reader.cpp.o
[  7%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/opencv_wraper.cpp.o
[  8%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/bmp.cpp.o
Scanning dependencies of target ie_cpu_extension
[ 10%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_detectionoutput.cpp.o
[ 11%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_fill.cpp.o
[ 12%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_psroi.cpp.o
[ 14%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_squeeze.cpp.o
[ 15%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_one_hot.cpp.o
[ 16%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_powerfile.cpp.o
[ 17%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_reverse_sequence.cpp.o
[ 19%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_space_to_depth.cpp.o
[ 20%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_reorg_yolo.cpp.o
[ 21%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_gather.cpp.o
[ 23%] Linking CXX static library ../../intel64/Release/lib/libgflags_nothreads.a
[ 23%] Built target gflags_nothreads_static
[ 24%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_range.cpp.o
[ 25%] Linking CXX shared library ../../intel64/Release/lib/libformat_reader.so
[ 25%] Built target format_reader
[ 26%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_resample.cpp.o
[ 28%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_grn.cpp.o
[ 29%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_priorbox_clustered.cpp.o
[ 30%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_strided_slice.cpp.o
[ 32%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_region_yolo.cpp.o
[ 33%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_roifeatureextractor_onnx.cpp.o
[ 34%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_scatter.cpp.o
[ 35%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_argmax.cpp.o
[ 37%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_reduce.cpp.o
[ 38%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_gather_tree.cpp.o
[ 39%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_base.cpp.o
[ 41%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_priorgridgenerator_onnx.cpp.o
[ 42%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_math.cpp.o
[ 43%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_shuffle_channels.cpp.o
[ 44%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_unsqueeze.cpp.o
[ 46%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_topkrois_onnx.cpp.o
[ 47%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_proposal.cpp.o
[ 48%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_depth_to_space.cpp.o
[ 50%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_normalize.cpp.o
[ 51%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_detectionoutput_onnx.cpp.o
[ 52%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_ctc_greedy.cpp.o
[ 53%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_simplernms.cpp.o
[ 55%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_priorbox.cpp.o
[ 56%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/common/simple_copy.cpp.o
[ 57%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_pad.cpp.o
[ 58%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_proposal_onnx.cpp.o
[ 60%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_select.cpp.o
[ 61%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_list.cpp.o
[ 62%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_interp.cpp.o
[ 64%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_sparse_fill_empty_rows.cpp.o
[ 65%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_broadcast.cpp.o
[ 66%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_log_softmax.cpp.o
[ 67%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_mvn.cpp.o
[ 69%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_unique.cpp.o
[ 70%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_topk.cpp.o
[ 71%] Building CXX object ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/ext_non_max_suppression.cpp.o
[ 73%] Linking CXX shared library ../intel64/Release/lib/libcpu_extension.so
[ 73%] Built target ie_cpu_extension
Scanning dependencies of target hello_nv12_input_classification
Scanning dependencies of target speech_sample
Scanning dependencies of target object_detection_sample_ssd
Scanning dependencies of target style_transfer_sample
Scanning dependencies of target classification_sample_async
Scanning dependencies of target benchmark_app
[ 74%] Building CXX object object_detection_sample_ssd/CMakeFiles/object_detection_sample_ssd.dir/main.cpp.o
[ 75%] Building CXX object hello_nv12_input_classification/CMakeFiles/hello_nv12_input_classification.dir/main.cpp.o
[ 76%] Building CXX object speech_sample/CMakeFiles/speech_sample.dir/main.cpp.o
Scanning dependencies of target hello_reshape_ssd
[ 78%] Building CXX object benchmark_app/CMakeFiles/benchmark_app.dir/main.cpp.o
Scanning dependencies of target hello_classification
[ 79%] Building CXX object style_transfer_sample/CMakeFiles/style_transfer_sample.dir/main.cpp.o
[ 80%] Building CXX object classification_sample_async/CMakeFiles/classification_sample_async.dir/main.cpp.o
[ 82%] Building CXX object hello_reshape_ssd/CMakeFiles/hello_reshape_ssd.dir/main.cpp.o
[ 83%] Building CXX object hello_classification/CMakeFiles/hello_classification.dir/main.cpp.o
c++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-5/README.Bugs> for instructions.
make[2]: *** [benchmark_app/CMakeFiles/benchmark_app.dir/main.cpp.o] Error 4
benchmark_app/CMakeFiles/benchmark_app.dir/build.make:62: recipe for target 'benchmark_app/CMakeFiles/benchmark_app.dir/main.cpp.o' failed
CMakeFiles/Makefile2:351: recipe for target 'benchmark_app/CMakeFiles/benchmark_app.dir/all' failed
make[1]: *** [benchmark_app/CMakeFiles/benchmark_app.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 84%] Linking CXX executable ../intel64/Release/hello_classification
[ 84%] Built target hello_classification
[ 85%] Linking CXX executable ../intel64/Release/style_transfer_sample
[ 85%] Built target style_transfer_sample
[ 87%] Linking CXX executable ../intel64/Release/speech_sample
[ 88%] Linking CXX executable ../intel64/Release/hello_nv12_input_classification
[ 89%] Linking CXX executable ../intel64/Release/classification_sample_async
[ 89%] Built target speech_sample
[ 89%] Built target hello_nv12_input_classification
[ 89%] Built target classification_sample_async
[ 91%] Linking CXX executable ../intel64/Release/object_detection_sample_ssd
[ 91%] Built target object_detection_sample_ssd
[ 92%] Linking CXX executable ../intel64/Release/hello_reshape_ssd
[ 92%] Built target hello_reshape_ssd
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
Error on or near line 71; exiting with status 1
The command '/bin/sh -c /bin/bash -c "${OPENVINO_DIR}/inference_engine/samples/build_samples.sh"' returned a non-zero code: 1
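
For anyone hitting this: "c++: internal compiler error: Killed (program cc1plus)" almost always means the compiler process was killed by the kernel's OOM killer rather than a genuine compiler bug. The likely fix is to give the Docker build more memory (or swap), or to build the samples with fewer parallel jobs. Note that only benchmark_app failed here; skipping build_samples.sh should only cost you the bundled OpenVINO sample binaries such as benchmark_app.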

Training suspends in last batch of the epoch - HDF5 selections.py issue?

Thanks for this repo. I ran the steps to download and preprocess the data. Training also starts, but it fails in the last batch of the first epoch. I'm using TF 1.14 (not 1.15 as prescribed in the README); however, I doubt that has anything to do with this error. Could this be an h5py version issue?

[UPDATE]: Before training, I set USE_KERAS_API to False, forcing it to pick tf.keras. Could this be the same issue referred to in settings.py?

Error:

------------------------------
Fitting model with training data ...
------------------------------
Step 3, training the model started at 2020-06-16 12:51:03.365806
Train on 62930 samples, validate on 4960 samples
Epoch 1/40
62848/62930 [============================>.] - ETA: 1s - loss: 0.7630 - acc: 0.9642 - dice_coef: 0.6290 - soft_dice_coef: 0.2496
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/scratch/sambhavj/anaconda3/envs/tf1.14/lib/python3.6/site-packages/h5py/_hl/selections.py in select(shape, args, dsid)
     84             try:
---> 85                 int(a)
     86                 if isinstance(a, np.ndarray) and a.shape == (1,):

TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'

Here's the complete error stack:

------------------------------
Fitting model with training data ...
------------------------------
Step 3, training the model started at 2020-06-16 12:51:03.365806
Train on 62930 samples, validate on 4960 samples
Epoch 1/40
62848/62930 [============================>.] - ETA: 1s - loss: 0.7630 - acc: 0.9642 - dice_coef: 0.6290 - soft_dice_coef: 0.2496
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/scratch/sambhavj/anaconda3/envs/tf1.14/lib/python3.6/site-packages/h5py/_hl/selections.py in select(shape, args, dsid)
     84             try:
---> 85                 int(a)
     86                 if isinstance(a, np.ndarray) and a.shape == (1,):

TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
/scratch/sambhavj/anaconda3/envs/tf1.14/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    345           else:
--> 346             ins_batch = slice_arrays(ins, batch_ids)
    347         except TypeError:

/scratch/sambhavj/anaconda3/envs/tf1.14/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py in slice_arrays(arrays, start, stop)
    530         start = start.tolist()
--> 531       return [None if x is None else x[start] for x in arrays]
    532     else:

/scratch/sambhavj/anaconda3/envs/tf1.14/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py in <listcomp>(.0)
    530         start = start.tolist()
--> 531       return [None if x is None else x[start] for x in arrays]
    532     else:

/data/unet-intelai/2D/data.py in __getitem__(self, key)
    158         """
--> 159         data = super().__getitem__(key)
    160         self.idx += 1

/scratch/sambhavj/anaconda3/envs/tf1.14/lib/python3.6/site-packages/tensorflow/python/keras/utils/io_utils.py in __getitem__(self, key)
    113     else:
--> 114       return self.data[idx]
    115 

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

/scratch/sambhavj/anaconda3/envs/tf1.14/lib/python3.6/site-packages/h5py/_hl/dataset.py in __getitem__(self, args)
    552         # Perform the dataspace selection.
--> 553         selection = sel.select(self.shape, args, dsid=self.id)
    554 

/scratch/sambhavj/anaconda3/envs/tf1.14/lib/python3.6/site-packages/h5py/_hl/selections.py in select(shape, args, dsid)
     89                 sel = FancySelection(shape)
---> 90                 sel[args]
     91                 return sel

/scratch/sambhavj/anaconda3/envs/tf1.14/lib/python3.6/site-packages/h5py/_hl/selections.py in __getitem__(self, args)
    366                     if any(fst >= snd for fst, snd in adjacent):
--> 367                         raise TypeError("Indexing elements must be in increasing order")
    368 

TypeError: Indexing elements must be in increasing order

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input-24-41921214fca5> in <module>
     23               validation_data=(imgs_validation, msks_validation),
     24               verbose=1, shuffle="batch",
---> 25               callbacks=model_callbacks)
     26 
     27 print("Total time elapsed for training = {} seconds".format(time.time() - start_time))

/scratch/sambhavj/anaconda3/envs/tf1.14/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    778           validation_steps=validation_steps,
    779           validation_freq=validation_freq,
--> 780           steps_name='steps_per_epoch')
    781 
    782   def evaluate(self,

/scratch/sambhavj/anaconda3/envs/tf1.14/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    407           validation_in_fit=True,
    408           prepared_feed_values_from_dataset=(val_iterator is not None),
--> 409           steps_name='validation_steps')
    410       if not isinstance(val_results, list):
    411         val_results = [val_results]

/scratch/sambhavj/anaconda3/envs/tf1.14/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    346             ins_batch = slice_arrays(ins, batch_ids)
    347         except TypeError:
--> 348           raise TypeError('TypeError while preparing batch. '
    349                           'If using HDF5 input data, '
    350                           'pass shuffle="batch".')

TypeError: TypeError while preparing batch. If using HDF5 input data, pass shuffle="batch".
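
On the h5py question: the immediate constraint is reproducible with h5py alone, independent of Keras. Its fancy indexing only accepts increasing index lists, which is exactly what the batch slicing violated here. A minimal demonstration:

    import h5py
    import numpy as np

    with h5py.File("demo.h5", "w") as f:
        f["x"] = np.arange(10)

    with h5py.File("demo.h5", "r") as f:
        print(f["x"][[1, 4, 7]])    # fine: indices are increasing
        try:
            f["x"][[7, 1, 4]]       # h5py rejects unsorted index lists
        except TypeError as e:
            print(e)                # "Indexing elements must be in increasing order"
        # A common workaround when order doesn't matter for the batch:
        idx = [7, 1, 4]
        print(f["x"][sorted(idx)])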

inference_openvino.py low dice score

Hi experts,

After following the guide to train the model, I ran the 2D inference_openvino.py. However, the result images in the "inference_examples_openvino" folder are not correct (there is no predicted region at all), and I'm wondering how I can fix this issue!

Thank you!

Malcolm

[ INFO ] Loading U-Net model to the plugin
[ INFO ] Loading network files:
../openvino_models/FP32/saved_model.xml
../openvino_models/FP32/saved_model.bin
[ INFO ] Loading model to the plugin
[ INFO ] Image #22: Dice score = 1.0000
[ INFO ] Image #40: Dice score = 0.0011
[ INFO ] Image #56: Dice score = 0.0013
[ INFO ] Image #61: Dice score = 0.0029
[ INFO ] Image #400: Dice score = 0.0007
[ INFO ] Image #1100: Dice score = 1.0000
[ INFO ] Image #2229: Dice score = 0.0011
[ INFO ] Image #3136: Dice score = 0.0011
[ INFO ] Image #4385: Dice score = 0.0009
[ INFO ] Could not open font file /usr/share/fonts/truetype/noto/NotoColorEmoji.ttf: In FT2Font: Could not set the fontsize
[ INFO ] generated new fontManager
[ INFO ] Plotting the predictions and saving to png files. Please wait...
Saved file: inference_examples_openvino/pred_group0
Saved file: inference_examples_openvino/pred_group1
Saved file: inference_examples_openvino/pred_group2

[Image: pred_group0]

Train error for any dataset asides Tumor

Hello

I am trying to train on a different task. While the Task01 BrainTumour training works, the others don't. I get the error below.

File ".../medical-decathlon/2D/dataloader.py", line 204, in generate_batch_from_files img = img[:,:,:,0] # Just take FLAIR channel (channel 0) IndexError: too many indices for array: array is 3-dimensional, but 4 were indexed

Is there an argument to specify, or has this issue been noticed before?

@ravi9 @tonyreina
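
A sketch of the kind of guard that avoids this, assuming dataloader.py's convention that BrainTumour volumes are 4-D with the modalities last, while single-modality tasks load as 3-D:

    import numpy as np

    def select_channel(img, channel=0):
        if img.ndim == 4:
            return img[:, :, :, channel]   # multi-modal: pick one channel
        return img                         # single-modality: use as-is

    print(select_channel(np.zeros((240, 240, 155, 4))).shape)  # (240, 240, 155)
    print(select_channel(np.zeros((512, 512, 90))).shape)      # (512, 512, 90)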

hidden file

Hello:
I used my own dataset, and the following error occurred:

FileNotFoundError: No such file or no access: '/home/zhengsc/Desktop/unet/3D/Task100_Colon/./imagesTr/colon_212.nii.gz'

I really don't have this hidden file.
I looked at the contest data, and it had this file.
How do I generate this hidden file?
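
The "./" in that path is not a hidden file; it is the relative path recorded in dataset.json, from which the loader builds its file list. Any custom dataset therefore needs a dataset.json whose entries point at real files. A quick validation sketch (paths taken from the error message; the "training"/"image"/"label" keys follow the Medical Decathlon format):

    import json
    import os

    data_path = "/home/zhengsc/Desktop/unet/3D/Task100_Colon"
    with open(os.path.join(data_path, "dataset.json")) as f:
        experiment = json.load(f)

    # Report every listed image/label that is missing on disk.
    for entry in experiment["training"]:
        for key in ("image", "label"):
            p = os.path.join(data_path, entry[key])
            if not os.path.exists(p):
                print("missing:", p)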

UnboundLocalError: local variable 'experiment_data' referenced before assignment

Hello,

When I evaluate the model, I'm facing this issue. Can you please help me resolve it?

(decathlon) sivajyothi@cougar-U:~/unet/3D$ python evaluate_model.py --data_path /home/sivajyothi/data/Task01_BrainTumour/ --saved_model /home/sivajyothi/unet/3D/saved_model/3d_unet_decathlon.hdf5

Using TensorFlow backend.
Started script on 2019-11-11 00:26:31.874186
OMP: Warning #181: OMP_PROC_BIND: ignored because KMP_AFFINITY has been defined
Loading images and masks from test set
File /home/sivajyothi/data/Task01_BrainTumour/dataset.json doesn't exist. It should be part of the Decathlon directory
Traceback (most recent call last):
File "evaluate_model.py", line 77, in
**validation_data_params)
File "/home/sivajyothi/unet/3D/dataloader.py", line 83, in init
self.list_IDs = self.create_file_list()
File "/home/sivajyothi/unet/3D/dataloader.py", line 146, in create_file_list
self.output_channels = experiment_data["labels"]
UnboundLocalError: local variable 'experiment_data' referenced before assignment

Thank you.
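
The log line above the traceback is the real cause: dataset.json was never found, so experiment_data is only ever assigned inside the branch that successfully reads the file, and the later reference fails. A clearer version of that section would fail fast (a sketch; data_path is taken from the command line above):

    import json
    import os

    data_path = "/home/sivajyothi/data/Task01_BrainTumour"
    json_filename = os.path.join(data_path, "dataset.json")

    # Fail fast with an actionable message instead of an UnboundLocalError.
    if not os.path.exists(json_filename):
        raise FileNotFoundError(
            "{} is missing; it ships with every Decathlon task".format(json_filename))

    with open(json_filename) as f:
        experiment_data = json.load(f)
    output_channels = experiment_data["labels"]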

Prediction volume files have a low dimension and dice coefficient

First of all, thanks for your great work, Tony. I really appreciate it.

So far I've trained the net with a patch_size of (64,64,64) and a batch_size of 4 using your provided train.py file, due to graphics card limitations. Subsequently, I executed the evaluate_model.py file to test the net on your provided test data of 37 volumes. The training, validation, and test data are all volumes of dimensions 240x240x155 after a certain MRI channel has been picked.

However, so far I have not been able to generate prediction volumes that meet the expected standards. In evaluate_model.py I cannot work out how the patch_size applies to this model. I've tested the following parameters without success:

"dim": (240,240,155) yields an error regarding dimensions (I can provide the detailed error message if necessary)

"dim": (64,64,64) yields a dice coefficient that meets the expected standards; however, the prediction volume file is cropped to dimensions 64x64x64

"dim": (144,144,144) yields a dice coefficient that is far below (<0.2) the expected value; the prediction volume file has dimensions 144x144x144

I have not altered the remaining model parameters.

Can you tell me how to create prediction volume files that achieve the expected dice coefficient (>0.8 on some volume files) without cropping the test volume? I would like the prediction volume to have dimensions 240x240x155, just like the corresponding test volume.

Best regards
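
One common way to get full-size predictions from a patch-trained model is sliding-window inference: tile the volume, predict each tile, and average the overlaps. This is a repo-independent sketch (evaluate_model.py may do something different), with a dummy predictor so it runs stand-alone:

    import numpy as np

    def starts(length, patch, stride):
        s = list(range(0, max(length - patch, 0) + 1, stride))
        if s[-1] != length - patch:
            s.append(length - patch)        # make sure the border is covered
        return s

    def predict_full_volume(predict_patch, volume, patch=64, stride=32):
        out = np.zeros(volume.shape, np.float32)
        counts = np.zeros(volume.shape, np.float32)
        for z in starts(volume.shape[0], patch, stride):
            for y in starts(volume.shape[1], patch, stride):
                for x in starts(volume.shape[2], patch, stride):
                    tile = volume[z:z+patch, y:y+patch, x:x+patch]
                    out[z:z+patch, y:y+patch, x:x+patch] += predict_patch(tile)
                    counts[z:z+patch, y:y+patch, x:x+patch] += 1.0
        return out / counts   # average the overlapping predictions

    # Dummy "model" so the sketch runs stand-alone:
    vol = np.random.rand(240, 240, 155).astype(np.float32)
    pred = predict_full_volume(lambda t: t > 0.5, vol)
    print(pred.shape)   # (240, 240, 155) -- same shape as the input volume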

inference_openvino.py: data type false

I tried to run "python inference_openvino.py ", setting the datapath to dataset.json folder and model input to FP32 folder. I see the following error and no output result is produced. The error is produced by line 67: "validation_generator = DataGenerator(False, args.data_path,
**validation_data_params)"

/usr/lib/python3.5/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
[ INFO ] Loading U-Net model to the plugin
[ INFO ] Loading network files:
./FP32/3d_unet_decathlon.xml
./FP32/3d_unet_decathlon.bin
[ INFO ] Batch size = 1
error with type of data: False
[ INFO ] 0 started
[ INFO ] 0 finished
[ INFO ] Partial batch left over in dataset

issue regarding change in dataset

I am using the U-Net provided by Intel and successfully used it for BraTS, but whenever I change the dataset, the error below occurs at the stage where I run inference using the IR files:

Data batch channel count (4) does not match filter input channel count (1).

(decathlon) E:\code3\unet-master\2D>python plot_openvino_inference_examples.py --device MYRIAD
2021-01-31 12:28:08.797145: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2021-01-31 12:28:08.797355: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "plot_openvino_inference_examples.py", line 137, in
seed=args.seed)
File "E:\code3\unet-master\2D\dataloader.py", line 39, in init
self.file_list = tf.io.gfile.glob(self.dirName)
File "C:\Users\owais\anaconda3\envs\decathlon\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 371, in get_matching_files_v2
compat.as_bytes(pattern))
tensorflow.python.framework.errors_impl.NotFoundError: FindFirstFile failed for: ../data/decathlon/Task01_BrainTumour/2D_model/testing : The system cannot find the path specified.
; No such process

(decathlon) E:\code3\unet-master\2D>python plot_openvino_inference_examples.py --device MYRIAD
2021-01-31 12:29:54.101117: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2021-01-31 12:29:54.107419: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "plot_openvino_inference_examples.py", line 147, in
net = ie.read_network(model=path_to_xml_file, weights=path_to_bin_file)
File "ie_api.pyx", line 261, in openvino.inference_engine.ie_api.IECore.read_network
File "ie_api.pyx", line 285, in openvino.inference_engine.ie_api.IECore.read_network
RuntimeError: Check 'Dimension::merge(merged_channel_count, data_channel_count, filter_input_channel_count)' failed at C:\j\workspace\private-ci\ie\build-windows-vs2019@2\b\repos\openvino\ngraph\core\src\validation_util.cpp:344:
While validating node 'v1::Convolution Convolution_2 (mrimages[0]:f16{1,4,128,128}, StatefulPartitionedCall/2DUNet_Brats_Decathlon/encodeAa/Conv2D/Transpose5219_const[0]:f16{32,1,3,3}) -> (dynamic?)' with friendly_name 'Convolution_2':
Data batch channel count (4) does not match filter input channel count (1).
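
The validation message is the whole story: the converted IR's first convolution has single-channel filters (the 2D model takes only the FLAIR channel, as dataloader.py's comment above notes), while the new data still carries 4 modalities. Either re-export the IR for the channel count you trained with, or reduce the input to the one channel the model expects. A tiny sketch of the latter (NCHW layout as in the error message; the channel index is an assumption):

    import numpy as np

    img = np.zeros((1, 4, 128, 128), dtype=np.float32)  # NCHW, 4 MRI modalities
    flair_only = img[:, 0:1, :, :]                      # keep channel 0
    print(flair_only.shape)                             # (1, 1, 128, 128)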

Dice Score

I successfully trained the 3D U-Net model for 30 epochs with a crop size of (64,64,64) and a batch size of 8, but the training dice score was only 0.56, and the average test dice was 0.4.
Could you please help me solve this problem?
And could you also explain the usage of OpenVINO?
gita

3D visualization

Hi, the 3D visualization GIF is very nice. Could you please show me how you made it? Thanks
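
The repo doesn't document how the GIF was produced, but one straightforward way to make a similar animation is to overlay the predicted mask on each slice with matplotlib and save the frames with the pillow writer. A self-contained sketch with stand-in data:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib import animation

    vol = np.random.rand(155, 240, 240)    # stand-in for an MRI volume
    msk = (vol > 0.95).astype(float)       # stand-in for a predicted mask

    fig, ax = plt.subplots()
    im_v = ax.imshow(vol[0], cmap="bone")
    im_m = ax.imshow(np.ma.masked_where(msk[0] == 0, msk[0]),
                     cmap="autumn", alpha=0.6)
    ax.axis("off")

    def update(i):
        # Advance both the background slice and the mask overlay.
        im_v.set_data(vol[i])
        im_m.set_data(np.ma.masked_where(msk[i] == 0, msk[i]))
        return im_v, im_m

    anim = animation.FuncAnimation(fig, update, frames=vol.shape[0], interval=50)
    anim.save("prediction.gif", writer="pillow")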
