idealo / image-quality-assessment

Convolutional Neural Networks to predict the aesthetic and technical quality of images.

Home Page: https://idealo.github.io/image-quality-assessment/

License: Apache License 2.0

Languages: Shell 13.21%, Python 85.73%, Dockerfile 1.06%
Topics: nima, neural-network, tensorflow, keras, mobilenet, aws, image-quality-assessment, convolutional-neural-networks, deep-learning, computer-vision

image-quality-assessment's Introduction

Image Quality Assessment


This repository provides an implementation of an aesthetic and technical image quality model based on Google's research paper "NIMA: Neural Image Assessment". You can find a quick introduction on their Research Blog.

NIMA consists of two models that aim to predict the aesthetic and technical quality of images, respectively. The models are trained via transfer learning, where ImageNet pre-trained CNNs are used and fine-tuned for the classification task.

For more information on how we used NIMA for our specific problem, see our two blog posts:

The provided code allows you to use any of the pre-trained models in Keras. We further provide Docker images for local CPU training and remote GPU training on AWS EC2, as well as pre-trained models on the AVA and TID2013 datasets.

Read the full documentation at: https://idealo.github.io/image-quality-assessment/.

Image quality assessment is compatible with Python 3.6 and is distributed under the Apache 2.0 license. We welcome all kinds of contributions, especially new model architectures and/or hyperparameter combinations that improve the performance of the currently published models (see Contribute).

Trained models

[Figure: predictions from the aesthetic model]
[Figure: predictions from the technical model]

We provide trained models, for both aesthetic and technical classification, that use MobileNet as the base CNN. The models and their respective config files are stored under models/MobileNet. They achieve the following performance:

Model                 Dataset   EMD     LCC     SRCC
MobileNet aesthetic   AVA       0.071   0.626   0.609
MobileNet technical   TID2013   0.107   0.652   0.675
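
Here, EMD is the earth mover's distance between the predicted and ground-truth score distributions (the training loss proposed in the NIMA paper, with r=2), and LCC/SRCC are the linear and Spearman rank correlation coefficients of the mean scores. A minimal NumPy sketch of the EMD for two normalized 10-bin distributions:

import numpy as np

def earth_movers_distance(p, q):
    # EMD with r = 2: root mean squared difference of the two CDFs
    cdf_p, cdf_q = np.cumsum(p), np.cumsum(q)
    return np.sqrt(np.mean((cdf_p - cdf_q) ** 2))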

Getting started

  1. Install jq

  2. Install Docker

  3. Build the Docker image

    docker build -t nima-cpu . -f Dockerfile.cpu

In order to train remotely on AWS EC2

  1. Install Docker Machine

  2. Install AWS Command Line Interface

Predict

In order to run predictions on a single image or a batch of images, run the prediction script:

  1. Single image file

    ./predict  \
    --docker-image nima-cpu \
    --base-model-name MobileNet \
    --weights-file $(pwd)/models/MobileNet/weights_mobilenet_technical_0.11.hdf5 \
    --image-source $(pwd)/src/tests/test_images/42039.jpg
  2. All image files in a directory

    ./predict  \
    --docker-image nima-cpu \
    --base-model-name MobileNet \
    --weights-file $(pwd)/models/MobileNet/weights_mobilenet_technical_0.11.hdf5 \
    --image-source $(pwd)/src/tests/test_images
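
If you prefer to skip Docker, the container entrypoint ultimately runs the evaluater.predict module (see the issues below). A minimal sketch, assuming the dependencies from src/requirements.txt are installed and the repository root is your working directory:

    PYTHONPATH=$(pwd)/src python -m evaluater.predict \
    --base-model-name MobileNet \
    --weights-file $(pwd)/models/MobileNet/weights_mobilenet_technical_0.11.hdf5 \
    --image-source $(pwd)/src/tests/test_images/42039.jpg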

Train locally on CPU

  1. Download dataset (see instructions under Datasets)

  2. Run the local training script (e.g. for TID2013 dataset)

    ./train-local \
    --config-file $(pwd)/models/MobileNet/config_technical_cpu.json \
    --samples-file $(pwd)/data/TID2013/tid_labels_train.json \
    --image-dir /path/to/image/dir/local

This will start a training container from the Docker image nima-cpu and create a timestamped train job folder under train_jobs, where the trained model weights and logs will be stored. The --image-dir argument requires the path of the image directory on your local machine.

In order to stop the last launched container run

    CONTAINER_ID=$(docker ps -l -q)
    docker container stop $CONTAINER_ID

In order to stream logs from the last launched container run

    CONTAINER_ID=$(docker ps -l -q)
    docker logs $CONTAINER_ID --follow

Train remotely on AWS EC2

  1. Configure your AWS CLI. Ensure that your account has limits for GPU instances and read/write access to the S3 bucket specified in the config file

    aws configure
  2. Launch an EC2 instance with Docker Machine. Choose an Ubuntu AMI based on your region (https://cloud-images.ubuntu.com/locator/ec2/). For example, to launch a p2.xlarge EC2 instance named ec2-p2, run (NB: change the region, VPC ID and AMI ID to match your setup)

    docker-machine create --driver amazonec2 \
                          --amazonec2-region eu-west-1 \
                          --amazonec2-ami ami-58d7e821 \
                          --amazonec2-instance-type p2.xlarge \
                          --amazonec2-vpc-id vpc-abc \
                          ec2-p2
  3. ssh into EC2 instance

    docker-machine ssh ec2-p2
  4. Update NVIDIA drivers and install nvidia-docker (see this blog post for more details)

    # update NVIDIA drivers
    sudo add-apt-repository ppa:graphics-drivers/ppa -y
    sudo apt-get update
    sudo apt-get install -y nvidia-375 nvidia-settings nvidia-modprobe
    
    # install nvidia-docker
    wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
    sudo dpkg -i /tmp/nvidia-docker_1.0.1-1_amd64.deb && rm /tmp/nvidia-docker_1.0.1-1_amd64.deb
  5. Download the dataset to the EC2 instance (see instructions under Datasets). We recommend saving the AMI with the downloaded data for future use.

  6. Run the remote EC2 training script (e.g. for AVA dataset)

    ./train-ec2 \
    --docker-machine ec2-p2 \
    --config-file $(pwd)/models/MobileNet/config_aesthetic_gpu.json \
    --samples-file $(pwd)/data/AVA/ava_labels_train.json \
    --image-dir /path/to/image/dir/remote

The training progress will be streamed to your terminal. After the training has finished, the train outputs (logs and best model weights) will be stored on S3 in a timestamped folder. The S3 output bucket can be specified in the config file. The --image-dir argument requires the path of the image directory on your remote instance.

Contribute

We welcome all kinds of contributions and will publish the performance of new models in the performance table under Trained models.

For example, to train a new aesthetic NIMA model based on InceptionV3 ImageNet weights, you just have to change the base_model_name parameter in the config file models/MobileNet/config_aesthetic_gpu.json to "InceptionV3". You can also control all major hyperparameters in the config file, like learning rate, batch size, or dropout rate.
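
For illustration, a minimal excerpt of such a config (the keys match the sample config reproduced in the issues below; the values here are only indicative):

{
  "base_model_name": "InceptionV3",
  "n_classes": 10,
  "batch_size": 64,
  "learning_rate_dense": 0.001,
  "learning_rate_all": 0.0000003,
  "dropout_rate": 0.75
}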

See the Contribution guide for more details.

Datasets

This project uses two datasets to train the NIMA model:

  1. AVA used for aesthetic ratings (data)
  2. TID2013 used for technical ratings

For training on AWS EC2 we recommend building a custom AMI with the AVA images stored on it. This has proven much more viable than copying the entire dataset from S3 to the instance for each training job.

Label files

The train script requires JSON label files in the following format:

[
  {
    "image_id": "231893",
    "label": [2,8,19,36,76,52,16,9,3,2]
  },
  {
    "image_id": "746672",
    "label": [1,2,7,20,38,52,20,11,1,3]
  },
  ...
]

The label for each image is the normalized or un-normalized frequency distribution of ratings 1-10.
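
The mean score of a sample is the expectation of this distribution. The repository ships a helper for this in src/utils/utils.py (calc_mean_score, imported by the prediction code), and the sample TF Serving client near the end of this page defines the same logic, reproduced here as a minimal sketch:

import numpy as np

def normalize_labels(labels):
    labels_np = np.array(labels)
    return labels_np / labels_np.sum()

def calc_mean_score(score_dist):
    # normalize the (possibly un-normalized) frequency distribution,
    # then take the expectation over the ratings 1..10
    score_dist = normalize_labels(score_dist)
    return (score_dist * np.arange(1, 11)).sum()

print(calc_mean_score([2, 8, 19, 36, 76, 52, 16, 9, 3, 2]))  # ~5.12 for the first sample above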

For the AVA dataset these frequency distributions are given in the raw data files. For the TID2013 dataset we inferred the normalized frequency distribution, i.e. probability distribution, by finding the maximum entropy distribution that satisfies the mean score. The code to generate the TID2013 labels can be found under data/TID2013/get_labels.py.
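
For intuition, here is a minimal sketch of the max-entropy idea (not the repository's implementation, which uses the maxentropy package): over the ratings 1..10 with a fixed mean, the maximum entropy distribution has the form p_i ∝ exp(λ·i), and the multiplier λ can be found with a one-dimensional root search.

import numpy as np
from scipy.optimize import brentq

def max_entropy_distribution(mean_score, scores=np.arange(1, 11)):
    # solve for the multiplier lam such that the Gibbs distribution
    # p_i ~ exp(lam * i) has the desired mean
    def mean_error(lam):
        w = np.exp(lam * scores)
        return (w * scores).sum() / w.sum() - mean_score
    lam = brentq(mean_error, -10.0, 10.0)
    p = np.exp(lam * scores)
    return p / p.sum()

print(max_entropy_distribution(7.0).round(3))  # skewed towards high ratings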

For both datasets we provide train and test set label files stored under

data/AVA/ava_labels_train.json
data/AVA/ava_labels_test.json

and

data/TID2013/tid2013_labels_train.json
data/TID2013/tid2013_labels_test.json

For the AVA dataset we randomly assigned 90% of samples to the train set and 10% to the test set, and throughout training a 5% validation set is split from the training set to evaluate performance after each epoch. For the TID2013 dataset we split the train/test sets by reference image, to ensure that no reference image, or any of its distortions, appears in both the train and test sets.
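
Such a grouped split can be reproduced, for example, with scikit-learn's GroupShuffleSplit. The sketch below is hypothetical: it assumes each distorted image_id encodes its reference image as a prefix before the first underscore, which you should verify against the actual label files.

import json
from sklearn.model_selection import GroupShuffleSplit

with open('data/TID2013/tid2013_labels_train.json') as f:
    samples = json.load(f)

# hypothetical: derive the reference image from the image_id prefix
groups = [s['image_id'].split('_')[0] for s in samples]

# keep all distortions of a reference image on the same side of the split
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(samples, groups=groups))
train_samples = [samples[i] for i in train_idx]
test_samples = [samples[i] for i in test_idx]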

Serving NIMA with TensorFlow Serving

TensorFlow versions of both the technical and aesthetic MobileNet models are provided, along with the script to generate them from the original Keras files, under the contrib/tf_serving directory.

There is also an already configured TFS Dockerfile that you can use.

To get predictions from the aesthetic or technical model:

  1. Build the NIMA TFS Docker image

    docker build -t tfs_nima contrib/tf_serving
  2. Run a NIMA TFS container

    docker run -d --name tfs_nima -p 8500:8500 tfs_nima
  3. Install the Python dependencies to run the TF Serving sample client

    virtualenv -p python3 contrib/tf_serving/venv_tfs_nima
    source contrib/tf_serving/venv_tfs_nima/bin/activate
    pip install -r contrib/tf_serving/requirements.txt
  4. Get predictions from the aesthetic or technical model by running the sample client

    python -m contrib.tf_serving.tfs_sample_client --image-path src/tests/test_images/42039.jpg --model-name mobilenet_aesthetic
    python -m contrib.tf_serving.tfs_sample_client --image-path src/tests/test_images/42039.jpg --model-name mobilenet_technical
    

Cite this work

Please cite Image Quality Assessment in your publications if this is useful for your research. Here is an example BibTeX entry:

@misc{idealods2018imagequalityassessment,
  title={Image Quality Assessment},
  author={Christopher Lennan and Hao Nguyen and Dat Tran},
  year={2018},
  howpublished={\url{https://github.com/idealo/image-quality-assessment}},
}

Maintainers

Copyright

See LICENSE for details.

image-quality-assessment's People

Contributors

bmachin, clennan, datitran, gosia-malgosia, nareddyt


image-quality-assessment's Issues

Upgrade to TF 2.0

  1. It would be good to use TensorFlow 2.0, in particular tf.keras, since Keras won't be actively maintained any more.
  2. Also, we need to pin the TF version in the Dockerfiles; latest causes problems when running via Docker.

I have computed histograms and confusion matrix on validation dataset

I am attempting to train the model on AVA myself and encountered very low prediction quality. Digging further, I found the predictions to be very strange, so I started to investigate the pretrained models and am asking for help.

Let me present my analysis using your model, which has very similar performance to mine.

At first, let's take a look at a histogram of predicted mean scores and standard deviations (Fig. 9 of the original paper):

[Figure: histogram of predicted mean scores and standard deviations from the original paper]

Here is yours, built on the validation dataset you provided:

[Screenshot: the same histogram for the provided model on the validation dataset]

Mine looks similar as well. From the histogram one might conclude that the model does not output scores > 7 or < 3, but otherwise mirrors the real distribution well.

However, when I started to check images for my model manually, I found that the scores seem very scattered and often look totally inadequate.

Let's compute the accuracy of predicted scores as Mean(1 - |m' - m|/m), where m' is the predicted score and m is the ground truth. For your model it is 91.1%, for my model it is 90.8%. Seems good so far, but remember that for a score of 5 this means a mean error of 5 - 0.9*5 = 0.5, and it is even bigger in the tails of the dataset distribution. The standard deviation of the differences between scores is 0.37.
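
For reference, a minimal NumPy sketch of the two metrics above (pred and true being arrays of predicted and ground-truth mean scores):

import numpy as np

def accuracy(pred, true):
    # Mean(1 - |m' - m| / m) as defined above
    return np.mean(1.0 - np.abs(pred - true) / true)

def score_diff_std(pred, true):
    # standard deviation of the score differences
    return np.std(pred - true)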

But let's compute the confusion matrix (mine is similar to yours):


Got labels: 25548
[[   0    0    0    0    0    0    0    0    0    0]
 [   0    0    0    0    0    0    0    0    0    0]
 [   0   18  100   84    9    0    0    0    0    0]
 [   1   21  518 3027 1832  178    2    0    0    0]
 [   0    7  144 3472 9527 2660  129    0    0    0]
 [   0    0    3  147 1683 1786  198    2    0    0]
 [   0    0    0    0    0    0    0    0    0    0]
 [   0    0    0    0    0    0    0    0    0    0]
 [   0    0    0    0    0    0    0    0    0    0]
 [   0    0    0    0    0    0    0    0    0    0]]
accuracy:0.9111294033307482
standard deviation of score differences:0.3676722622236499

And to compare with, let's randomly shuffle predictions:

random shuffle:
[[   0    0    0    0    0    0    0    0    0    0]
 [   0    0    0    0    0    0    0    0    0    0]
 [   0    0    5   53  109   40    4    0    0    0]
 [   0    9  188 1440 2874  992   76    0    0    0]
 [   1   31  460 4201 8144 2903  197    2    0    0]
 [   0    6  112 1036 1924  689   52    0    0    0]
 [   0    0    0    0    0    0    0    0    0    0]
 [   0    0    0    0    0    0    0    0    0    0]
 [   0    0    0    0    0    0    0    0    0    0]
 [   0    0    0    0    0    0    0    0    0    0]]
accuracy:0.8577966426127535
standard deviation of score differences:0.5579391415989003

We can see that score 5 still dominates because of its abundance in the dataset. The others are not much different from the model predictions. Accuracy is 0.86. Also, when I tried to create a balanced validation set with the score ranges <4, 4-7 and >7 represented equally, I got 83% accuracy and a 0.5 std of differences, which is equal to the result above.

I've shared all the code on gist:
https://gist.github.com/hcl14/d641f82922ce11cee0164b16e6786dfb

Also here are correlation coefficients for scores predictions:

Pearson: (0.6129669517144579, 0.0)
Spearman: SpearmanrResult(correlation=0.5949598193491837, pvalue=0.0)

The paper also reports values of 0.5-0.6.

It would be great to hear some insights on this. Is it overfitting caused by the Adam optimizer, and did we really need to optimize via SGD with lr=10^-7 as in the paper?

ImportError: libcuda.so.1: cannot open shared object file: No such file or directory

I built the Docker image nima-gpu with docker build -t nima-gpu . -f Dockerfile.gpu

Sending build context to Docker daemon  188.7MB
Step 1/8 : FROM tensorflow/tensorflow:latest-gpu-py3
 ---> 20a4b7fa03e7
Step 2/8 : RUN apt-get update && apt-get install -y --no-install-recommends       bzip2       g++       git       graphviz       libgl1-mesa-glx       libhdf5-dev       openmpi-bin       wget &&     rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 9a8f03e4404d
Step 3/8 : COPY src /src
 ---> Using cache
 ---> 00047f6005a3
Step 4/8 : COPY entrypoints /src/entrypoints
 ---> Using cache
 ---> 8f7e8ad022b1
Step 5/8 : WORKDIR /src
 ---> Using cache
 ---> e1fef7ec3a14
Step 6/8 : RUN pip install -r requirements.txt
 ---> Using cache
 ---> 6c1371f032e4
Step 7/8 : ENV PYTHONPATH='/src/:$PYTHONPATH'
 ---> Using cache
 ---> cdde560f51de
Step 8/8 : ENTRYPOINT ["entrypoints/entrypoint.train.gpu.sh"]
 ---> Using cache
 ---> 1fe2ce0c9ad9
Successfully built 1fe2ce0c9ad9
Successfully tagged nima-gpu:latest

But when I score a demo image with nima-gpu, I get a Traceback:

./predict --docker-image nima-gpu --base-model-name MobileNet --weights-file $(pwd)/models/MobileNet/weights_mobilenet_technical_0.11.hdf5 --image-source $(pwd)/src/tests/test_images/42039.jpg

How can I fix this? Thanks.

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/usr/lib/python3.5/imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "/usr/lib/python3.5/imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/src/evaluater/predict.py", line 6, in <module>
    from utils.utils import calc_mean_score, save_json
  File "/src/utils/utils.py", line 4, in <module>
    import keras
  File "/usr/local/lib/python3.5/dist-packages/keras/__init__.py", line 3, in <module>
    from . import utils
  File "/usr/local/lib/python3.5/dist-packages/keras/utils/__init__.py", line 6, in <module>
    from . import conv_utils
  File "/usr/local/lib/python3.5/dist-packages/keras/utils/conv_utils.py", line 9, in <module>
    from .. import backend as K
  File "/usr/local/lib/python3.5/dist-packages/keras/backend/__init__.py", line 84, in <module>
    from .tensorflow_backend import *
  File "/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py", line 5, in <module>
    import tensorflow as tf
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/usr/lib/python3.5/imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "/usr/lib/python3.5/imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions.  Include the entire stack trace
above this error message when asking for help.

./train-local: line 34: jq: command not found

when I use the CPU to train a model, something goes wrong:

localhost:image-quality-assessment-master chenjingwei$ ./train-local \
--docker-image nima-cpu \
--config-file $(pwd)/models/MobileNet/config_mobilenet_technical.json \
--samples-file $(pwd)/data/TID2013/tid_labels_train.json \
--image-dir $(pwd)/data/test
./train-local: line 34: jq: command not found
"docker run" requires at least 1 argument.
See 'docker run --help'.

How can I solve this? Thanks!

is it possible to train model to get Text Readability of image from "Image Quality Assessment" ?

The problem I am trying to solve is finding the "Text Readability of an Image".
I am using Tesseract to extract text from images, but I am not able to understand what parameters Tesseract considers for good/bad text extraction.

In order to analyse the images I am using "Image Quality Assessment" to get a technical image quality score.

The models do not focus on how good the text in the image is, i.e. readability. Can I train this model on the images I have, and if yes, how?

IndexError: list index out of range----when I run predict.py

Hi, when I run the source file "predict.py", an error is reported:

File "predict.py", line 55, in main
sample[i] = calc_mean_score(predictions[i])#'mean_score_prediction'
IndexError: list index out of range

my parameter settings below:

localhost:src chenjingwei$ python predict.py \
--base-model-name MobileNet \
--weights-file ../models/MobileNet/weights_mobilenet_aesthetic_0.07.hdf5 \
--image-source tests/test_images

and the source code is:

# calc mean scores and add to samples

for i, sample in enumerate(samples):
    sample[i] = calc_mean_score(predictions[i])#'mean_score_prediction'

is anything wrong?

thanks.

Retraining on specific dataset to improve predictions

Hey,

you retrained the NIMA model on your own hotel images and obtained a significant increase in accuracy for aesthetic and technical quality predictions, right? We are about to adapt this approach and would like to refine your model on our specific dataset (and then make it available to the community again). Currently we are trying to figure out the setup for labeling the images. What is your experience on how many images need to be labeled to effectively train the CNNs? And did you use specific software to do so?

Best
Sven

Support for tensorboard

As someone who is not too experienced with TensorFlow and ML, it would be awesome if you could include support for TensorBoard in the repo.

Trained models of other networks

Do you offer trained model files for other networks, such as ResNet or NASNet?
I would like to test those with my custom data.

Running with Python itself

While running it with plain Python, as you mentioned in the answer to another issue, I am getting the error Error while finding module specification for 'evaluater.predict' (ModuleNotFoundError: No module named 'evaluater') when running entrypoint.predict.cpu.sh

quality of technical model predictions

Hi,

Thank you for sharing your very simple and clean implementation, although I must admit I'm a little surprised that this kind of rather basic transfer learning is published by Google as a research paper and a great finding (by the way, am I the only one with the impression that Facebook researchers have recently done a better job in the image recognition field?).
Nevertheless, I've run predictions from the technical model on my dataset and I'm a little disappointed with the results:

  • the order of predictions does not exactly match my human judgement
  • the range of predictions is rather tight (from 4.3 to 5.3), which may indicate that the distribution is concentrated around the mean and the model only recognizes heavy distortions

I'll try to post here my examples.

But I'm curious, @clennan, what was your impression from testing the model on idealo data? I suppose the images on your site maintain some quality standard, and in general you're interested in minor distortions.
And a second question: why MobileNet and not something more accurate?

Regards,

Example code to load pretrained model in keras

Hi,

Would it be possible to provide example code that loads the pretrained models and weights (the config json and weights hdf5 files inside the MobileNet folder) into a Keras model, and shows how to apply it to predict a score for an input image, please?

I haven't used Keras before, but I did some searching on how to load the models and tried model_from_json. However, this gave the error "Improper config format" (see the ipython notebook attached).

I tried the Docker image first; building worked, but the prediction failed in the Docker Quickstart Terminal and gave the following error: "module 'tensorflow' has no attribute 'get_default_graph'"

thanks in advance for your help!

Kind regards,

Dimitri

idealo_test.zip

"maxentropy.skmaxent" in the file get_labels.py

Hi, I have one more question.
There is a line of code "from maxentropy.skmaxent import MinDivergenceModel" in the file get_labels.py, but I cannot find any related file or code for "maxentropy".

Error while loading the model from json file in Keras

Error while loading the model from json file in Keras
with open('config_mobilenet_aesthetic.json', 'r') as f:
    model = model_from_json(f.read())
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
in ()
1 # Model reconstruction from JSON file
2 with open('config_mobilenet_aesthetic.json', 'r') as f:
----> 3 model = model_from_json(f.read())
4
5 #model = model_from_json('config_mobilenet_aesthetic.json')

/usr/local/lib/python2.7/dist-packages/keras/engine/saving.pyc in model_from_json(json_string, custom_objects)
488 A Keras model instance (uncompiled).
489 """
--> 490 config = json.loads(json_string)
491 from ..layers import deserialize
492 return deserialize(config, custom_objects=custom_objects)

/usr/lib/python2.7/json/init.pyc in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
337 parse_int is None and parse_float is None and
338 parse_constant is None and object_pairs_hook is None and not kw):
--> 339 return _default_decoder.decode(s)
340 if cls is None:
341 cls = JSONDecoder

/usr/lib/python2.7/json/decoder.pyc in decode(self, s, _w)
362
363 """
--> 364 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
365 end = _w(s, end).end()
366 if end != len(s):

/usr/lib/python2.7/json/decoder.pyc in raw_decode(self, s, idx)
380 obj, end = self.scan_once(s, idx)
381 except StopIteration:
--> 382 raise ValueError("No JSON object could be decoded")
383 return obj, end

ValueError: No JSON object could be decoded

Error with Deepbinner: ImportError: libcuda.so.1: cannot open shared object file: No such file or directory Failed to load the native TensorFlow runtime.

Hello.
I'm trying to use Deepbinner to demultiplex, by native barcodes, the data of several patients I've obtained through MinION sequencing. I've encountered this error and I don't understand what could be happening. Any help? Thank you.

(Deepbinner) [ugm@et8 Lecturas_30_05_2019]$ deepbinner classify --native fast5_pass/ > classifications
Using TensorFlow backend.
Traceback (most recent call last):
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/imp.py", line 243, in load_module
    return load_dynamic(name, filename, file)
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/imp.py", line 343, in load_dynamic
    return _load(spec)
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/bin/deepbinner", line 10, in <module>
    sys.exit(main())
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/deepbinner/deepbinner.py", line 59, in main
    from .classify import classify
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/deepbinner/classify.py", line 24, in <module>
    from keras.models import load_model
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/keras/__init__.py", line 3, in <module>
    from . import utils
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/keras/utils/__init__.py", line 6, in <module>
    from . import conv_utils
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/keras/utils/conv_utils.py", line 9, in <module>
    from .. import backend as K
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/keras/backend/__init__.py", line 89, in <module>
    from .tensorflow_backend import *
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 5, in <module>
    import tensorflow as tf
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/imp.py", line 243, in load_module
    return load_dynamic(name, filename, file)
  File "/programas/anaconda/3-4.4.0/envs/Deepbinner/lib/python3.6/imp.py", line 343, in load_dynamic
    return _load(spec)
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory

Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors for some common reasons and solutions. Include the entire stack trace above this error message when asking for help.

SyntaxError in the file "src/handlers/data_generator.py"

Hi, when I run the source code "train.py", an error is reported:

File "/work/imageQuality/image-quality-assessment-master/src/handlers/data_generator.py", line 39
X = np.empty((len(batch_samples), *self.img_crop_dims, 3))

Does np.empty take 3 parameters or 2 in this call?

I'm just learning Python, so maybe this is a very simple question for you.
Thanks!

Activate citations

Hi, I'm an undergraduate student from University Carlos III of Madrid and I am using your code, especially your trained weights for MobileNet, for my final project. I would like to be able to cite your GitHub repository correctly. Here is a guide that explains how to create a DOI for your repository: https://help.github.com/en/articles/referencing-and-citing-content.

I don't know if this is possible, but it would be nice to put a proper citation in my project.

Thanks in advance.

OSError: Unable to open file

I'm trying to evaluate around 3600 test images:

./predict  --docker-image nima-cpu --base-model-name MobileNet --weights-file $(pwd)/models/MobileNet/weights_mobilenet_technical_0.11.hdf5 --image-source $(pwd)/src/tests/test_images

and the error:

Using TensorFlow backend.
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-06-13 09:24:55.396005: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-06-13 09:24:55.415707: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2208000000 Hz
2019-06-13 09:24:55.417119: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x4daabd0 executing computations on platform Host. Devices:
2019-06-13 09:24:55.417176: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/src/evaluater/predict.py", line 73, in <module>
    main(**args.__dict__)
  File "/src/evaluater/predict.py", line 44, in main
    nima.nima_model.load_weights(weights_file)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py", line 2658, in load_weights
    with h5py.File(filepath, mode='r') as f:
  File "/usr/local/lib/python3.5/dist-packages/h5py/_hl/files.py", line 394, in __init__
    swmr=swmr)
  File "/usr/local/lib/python3.5/dist-packages/h5py/_hl/files.py", line 170, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 85, in h5py.h5f.open
OSError: Unable to open file (file read failed: time = Thu Jun 13 09:24:57 2019
, filename = '/src/weights.hdf5', file descriptor = 3, errno = 21, error message = 'Is a directory', buf = 0x7ffff603d840, total read size = 8, bytes this sub-read = 8, bytes actually read = 18446744073709551615, offset = 0)

Thanks.

Wrong inferred distribution using 'get_labels.py'

Using the settings in the current version of get_labels.py (i.e., samplespace is [0:1:10] and algorithm='CG'), if we input mean=7 the returned distribution is wrong (i.e., all probabilities equal to 0.1). We can check this by printing model.expectations(), which will somehow be 4.5, far from what we expect (i.e., 7).

FYI, three ways to solve this problem:

  1. set 'samplespace' to [1:1:10];
  2. set 'algorithm' to 'BFGS';
  3. manually modify the MOS values that equal 7 (e.g., changing the score to 7.1).

Unable to run the prediction script

I am using Windows, and when I try to run the prediction script, I get the error:

C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: error while creating mount source path '/c/Users/Prawigya/Desktop/acm/image-quality-assessment-master/src/tests/test_images/42039.jpg': mkdir /c: file exists.

The prediction script used:

./predict \
--docker-image nima-cpu \
--base-model-name MobileNet \
--weights-file /$(pwd)/models/MobileNet/weights_mobilenet_technical_0.11.hdf5 \
--image-source /$(pwd)/src/tests/test_images

Dataset with average score only

Hi !
First of all, I'd like to point out that I'm a beginner in ML, so please excuse me if my question does not make sense or the vocabulary is not appropriate.

I would like to train the aesthetic model on my own dataset to get better results. I don't have labels with a distribution of scores from 1 to 10, but a single score (a percentage from 0 to 100) for each photo. Is there a way to use it anyway? If yes, do you have any tips on how to achieve this?

Thanks !

GPU training results in docker error

When running ./train-ec2 with a verified running EC2 GPU instance, the DOCKER_REMOTE_RUN command fails with this error:

nvidia-docker: command not found

Here is the full log:

Using default tag: latest
latest: Pulling from idealo/nima-gpu
Digest: sha256:8503017ed9427fa63d321911e620c01bb897d9bfd5494be4af1ea8b3e6d9bb7f
Status: Image is up to date for idealo/nima-gpu:latest
gpu-aesthetic-config.json                                                                              100%  480    11.5KB/s   00:00
labels_train.json                                                                                      100%  392KB   1.3MB/s   00:00
sudo: nvidia-docker: command not found
exit status 1

about guidance

I wonder if there is any guidance on how to use this project; I am quite confused by its structure.

Predict script number of images limit

Hi, is there a limit to how many images the predict script can generate ratings for? I have a dataset of a couple of thousand images, and the predict script runs on only 800 or so. It doesn't load anything beyond that number. I checked my resource utilization, and it isn't even using 10% of the memory. Is there any way I can increase this limit to cover the whole dataset?

Thanks!

Accuracy of results seems odd.

For some reason I'm getting odd values, and I'm not sure if it's me or the model. Take a look and let me know what you think...

[Screenshot: prediction results]

I was expecting a score of around 9.8+ on 1.jpg, the one with the tree and blue sky. It's from the NIMA paper.

Additionally, what scores or results are you getting? Maybe it's just something on my end...

how to convert to .mlmodel file

I'd like to convert the hdf5 model to an iOS .mlmodel. It seems to work, but the result from the .mlmodel is not correct. I suspect the reason is that I didn't get the preprocessing parameters right, so can somebody give advice on what parameters I should pass to the call:
apple_model = coremltools.converters.keras.convert(nima.nima_model,input_names=['image'],image_input_names='image')
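
One thing worth checking (a hedged sketch, not a verified fix): Keras MobileNet's preprocess_input maps pixels to [-1, 1] via x/127.5 - 1, which in coremltools terms corresponds to image_scale=2/255 with a per-channel bias of -1:

apple_model = coremltools.converters.keras.convert(
    nima.nima_model,
    input_names=['image'],
    image_input_names='image',
    image_scale=2 / 255.0,  # matches x / 127.5
    red_bias=-1.0,
    green_bias=-1.0,
    blue_bias=-1.0,
)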

need info to build config and samples json

Can anyone explain the following config?

{
  "train_env": "TID2013",
  "s3_bucket": "ds-hotel-image-assessment",
  "docker_image": "idealo/nima-gpu",
  "base_model_name": "MobileNet",
  "existing_weights": null,
  "n_classes": 10,
  "batch_size": 64,
  "epochs_train_dense": 25,
  "learning_rate_dense": 0.001,
  "decay_dense": 0,
  "epochs_train_all": 25,
  "learning_rate_all": 0.0000003,
  "decay_all": 0,
  "dropout_rate": 0.75,
  "multiprocessing_data_load": true,
  "num_workers_data_load": 8,
  "img_format": "bmp"
}

build cpu dockerfile problem with TF version?

Hi,

I built Dockerfile.cpu, and when I run predict:

./predict  --docker-image nima-cpu --base-model-name MobileNet --weights-file $(pwd)/models/MobileNet/weights_mobilenet_technical_0.11.hdf5 --image-source /mnt/videos/data/frames/videos/1116l.mp4/001.jpg
Using TensorFlow backend.
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/src/evaluater/predict.py", line 73, in <module>
    main(**args.__dict__)
  File "/src/evaluater/predict.py", line 43, in main
    nima.build()
  File "/src/handlers/model_builder.py", line 35, in build
    self.base_model = BaseCnn(input_shape=(224, 224, 3), weights=self.weights, include_top=False, pooling='avg')
  File "/usr/local/lib/python3.6/dist-packages/keras/applications/mobilenet.py", line 248, in MobileNet
    img_input = Input(shape=input_shape)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/topology.py", line 1457, in Input
    input_tensor=tensor)
  File "/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/topology.py", line 1319, in __init__
    name = prefix + '_' + str(K.get_uid(prefix))
  File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 68, in get_uid
    graph = tf.get_default_graph()
AttributeError: module 'tensorflow' has no attribute 'get_default_graph'

How to use it only by python

Could you please tell me how to use it with plain Python only? Either creating my own model or using your model is fine for me.
Since I can only use Python on my computer, I would appreciate some advice.

label generation

Hi, I am confused about the label generation; if you could help, I would appreciate it. When I open the get_labels.py script, I don't understand the parameter 'source-file-mean'. How can I generate labels for my data? And what do the ten numbers of a label mean?

Can I use this model to evaluate GAN model outputs?

Since the GAN model is not very stable yet, we often get terribly generated images, just like the following one.

[Example of a poorly generated image]

My question is whether we can evaluate these images with NIMA on my own dataset; it's so hard to create my own AVA-style dataset.

Unable to find image 'nima-cpu:latest' locally

I tried to score a demo picture but got an error:

./predict --docker-image nima-cpu --base-model-name MobileNet --weights-file $(pwd)/models/MobileNet/weights_mobilenet_technical_0.11.hdf5 --image-source $(pwd)/src/tests/test_images/42039.jpg

Does someone know how to deal with this issue? Thanks.

Unable to find image 'nima-cpu:latest' locally
docker: Error response from daemon: pull access denied for nima-cpu, repository does not exist or may require 'docker login'.
See 'docker run --help'.

how did you create the mobilenet visualizations?

In this NVIDIA article (linked below) about your work you show layer visualizations of MobileNet. I tried to do that in a project of mine but it failed with multiple libraries (e.g. lucid: tensorflow/lucid#68). So I was wondering how you did it.

article: https://devblogs.nvidia.com/deep-learning-hotel-aesthetics-photos/?nvid=em-ded-63644&mkt_tok=eyJpIjoiT1dGaE5ERm1NbVV5TmpreSIsInQiOiJOK0pzQnBxclNwUWNKNmNoNUNYbCtDVUlibzAzM21uYnNsRTBvZFhnWlk0RnJwZmltZHdRTFZTXC9Idk5vQXg2K3hjWE94cE9UU2NnV0xWVk1tWnhHSGhLUVZMbW84R1Z0QUN3RVNVaWdqS05yR1ZaR0FEbmFCMGlmNTZ5bndMNGcifQ%3D%3D

Illegal Instruction when running ./predict

Hi there,
I have just downloaded your models and installed all the requirements, but it did not work when I tried predicting images with

./predict \
--docker-image nima-cpu \
--base-model-name MobileNet \
--weights-file $(pwd)/models/MobileNet/weights_mobilenet_technical_0.11.hdf5 \
--image-source $(pwd)/src/tests/test_images/42039.jpg

It then showed an Illegal instruction error in entrypoint.predict.cpu.sh:
entrypoints/entrypoint.predict.cpu.sh: line 12: 6 Illegal instruction (core dumped) python -m evaluater.predict --base-model-name $BASE_MODEL_NAME --weights-file $WEIGHTS_FILE --image-source $IMAGE_SOURCE

Does the model support *.png images?

I'm getting an exception while trying to assess a PNG image:

(master)$ ./predict  --docker-image nima-cpu --base-model-name MobileNet --weights-file $(pwd)/models/MobileNet/weights_mobilenet_technical_0.11.hdf5 --image-source $(pwd)/src/tests/test_images/src.png
Using TensorFlow backend.
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-03-27 14:57:13.408342: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-03-27 14:57:13.413560: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2598115000 Hz
2019-03-27 14:57:13.414336: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x3def930 executing computations on platform Host. Devices:
2019-03-27 14:57:13.414620: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 401, in get_index
    return _SHARED_SEQUENCES[uid][i]
  File "/src/handlers/data_generator.py", line 80, in __getitem__
    X, y = self.__data_generator(batch_samples)
  File "/src/handlers/data_generator.py", line 94, in __data_generator
    img = utils.load_image(img_file, self.img_load_dims)
  File "/src/utils/utils.py", line 39, in load_image
    return np.asarray(keras.preprocessing.image.load_img(img_file, target_size=target_size))
  File "/usr/local/lib/python3.5/dist-packages/keras/preprocessing/image.py", line 387, in load_img
    img = pil_image.open(path)
  File "/usr/local/lib/python3.5/dist-packages/PIL/Image.py", line 2543, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/src/src.jpg'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 578, in get
    inputs = self.queue.get(block=True).get()
  File "/usr/lib/python3.5/multiprocessing/pool.py", line 608, in get
    raise self._value
FileNotFoundError: [Errno 2] No such file or directory: '/src/src.jpg'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/src/evaluater/predict.py", line 73, in <module>
    main(**args.__dict__)
  File "/src/evaluater/predict.py", line 51, in main
    predictions = predict(nima.nima_model, data_generator)
  File "/src/evaluater/predict.py", line 30, in predict
    return model.predict_generator(data_generator, workers=8, use_multiprocessing=True, verbose=1)
  File "/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 2522, in predict_generator
    generator_output = next(output_generator)
  File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 584, in get
    six.raise_from(StopIteration(e), e)
  File "<string>", line 3, in raise_from
StopIteration: [Errno 2] No such file or directory: '/src/src.jpg'

The image is in the same directory as the test images. The model works perfectly with the test JPGs.

(master)$ ./predict  --docker-image nima-cpu --base-model-name MobileNet --weights-file $(pwd)/models/MobileNet/weights_mobilenet_technical_0.11.hdf5 --image-source $(pwd)/src/tests/test_images/42039.jpg
Using TensorFlow backend.
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-03-27 14:56:57.667077: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-03-27 14:56:57.671900: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2598115000 Hz
2019-03-27 14:56:57.672253: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x3c08380 executing computations on platform Host. Devices:
2019-03-27 14:56:57.672511: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>

1/1 [==============================] - 1s 810ms/step
[
  {
    "image_id": "42039",
    "mean_score_prediction": 4.647609725594521
  }
]

Are there any suggestions?

Got an error when training on CPU

I use this for training:
./train-local --config-file $(pwd)/models/MobileNet/config_mobilenet_technical.json --samples-file $(pwd)/data/TID2013/tid_labels_train.json --image-dir ~/Downloads/image-quality-assessment-master/training_data/distorted_images/

Got error message like:

Using TensorFlow backend.
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/src/trainer/train.py", line 6, in
from sklearn.model_selection import train_test_split
ModuleNotFoundError: No module named 'sklearn'

I am pretty sure sklearn is installed, and I can even run from sklearn.model_selection import train_test_split in Python directly.
I tried on both macOS and Ubuntu and got the same error.

Jenkins Deployment

Please help on this issue:

You must specify a region. You can also configure your region by running "aws configure".
You must specify a region. You can also configure your region by running "aws configure".
/tmp/jenkins6556285910827171418.sh: line 25: jq: command not found
Exception ignored in: <_io.TextIOWrapper name='' mode='w' encoding='UTF-8'>
BrokenPipeError: [Errno 32] Broken pipe
entered existing service
/tmp/jenkins6556285910827171418.sh: line 31: jq: command not found
Exception ignored in: <_io.TextIOWrapper name='' mode='w' encoding='UTF-8'>
BrokenPipeError: [Errno 32] Broken pipe
/tmp/jenkins6556285910827171418.sh: line 32: [: =: unary operator expected
usage: aws [options] [ ...] [parameters]
To see help text, you can run:

Test in other image type

Hi, I have changed the image type from jpg to bmp in /src/evaluate/predict, but it can still only test jpg images. How can I make it test bmp images?

Keras to TensorFlow Serving model migration script + client

Hi! I just thought that I'd share this simple script for migrating the NIMA model to a Tensorflow version that can be used in TF Serving. Hopefully it saves someone some time :)

Keras to TF migrating script:

import keras.backend as K
from keras.applications.mobilenet import DepthwiseConv2D, relu6
from keras.utils.generic_utils import CustomObjectScope
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model.signature_def_utils_impl import \
    predict_signature_def

from src.handlers.model_builder import Nima

EXPORT_PATH = './nima_tf/1'
BASE_MODEL_NAME = 'MobileNet'
WEIGHTS_FILE = '../models/MobileNet/weights_mobilenet_technical_0.11.hdf5'


# Load model and weights
nima = Nima(BASE_MODEL_NAME, weights=None)
nima.build()
nima.nima_model.load_weights(WEIGHTS_FILE)

# Tell keras that this will be used for making predictions
K.set_learning_phase(0)

# https://github.com/keras-team/keras/issues/7431#issuecomment-334959500
with CustomObjectScope({'relu6': relu6, 'DepthwiseConv2D': DepthwiseConv2D}):
    builder = saved_model_builder.SavedModelBuilder(EXPORT_PATH)
    signature = predict_signature_def(
        inputs={'input_image': nima.nima_model.input},
        outputs={'quality_prediction': nima.nima_model.output}
    )

    builder.add_meta_graph_and_variables(
        sess=K.get_session(),
        tags=[tag_constants.SERVING],
        signature_def_map={'image_quality': signature}
    )
    builder.save()

print(f'TF model exported to: {EXPORT_PATH}')

Here's also a sample client:

from __future__ import absolute_import, division, print_function

import tensorflow as tf
import keras
import numpy as np
from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

TFS_HOST = 'localhost'
TFS_PORT = 9000


def normalize_labels(labels):
    labels_np = np.array(labels)
    return labels_np / labels_np.sum()


def calc_mean_score(score_dist):
    score_dist = normalize_labels(score_dist)
    return (score_dist*np.arange(1, 11)).sum()


def get_image_quality_predictions(image_path):
    # Load and preprocess image
    image = np.asarray(keras.preprocessing.image.load_img(image_path, target_size=(224, 224)))
    image = keras.applications.mobilenet.preprocess_input(image)

    # Run through model
    channel = implementations.insecure_channel(TFS_HOST, TFS_PORT)
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'nima'
    request.model_spec.signature_name = 'image_quality'

    request.inputs['input_image'].CopyFrom(
        tf.contrib.util.make_tensor_proto(np.expand_dims(image, 0)))

    response = stub.Predict(request, 10.0)
    result = round(calc_mean_score(response.outputs['quality_prediction'].float_val), 2)

    return result

TFS config file:

model_config_list: {
  config: {
    name: "nima",
    base_path: "/path/to/exported_nima_tf",
    model_platform: "tensorflow"
  }
}

And TFS command:
tensorflow_model_server --port=9000 --model_config_file=/path/to/tfs_config.cfg
