cntk-hotel-pictures-classificator's Introduction

Table of contents

  1. Project description
  2. Results and learnings
    2.1. Initial assumptions
    2.2. Dataset
    2.3. Training and evaluation results
    2.4. Using the model
  3. Run sample
    3.1. Setup
    3.2. Train and evaluate the model
  4. Code highlights
  5. Use with custom dataset
    5.1. Setup
    5.2. Prepare data
    5.3. Tag images
    5.4. Download pretrained model and create mappings for custom dataset
    5.5. Run training
    5.6. Deploy your model



1. Project description

[back to the top]

This POC uses CNTK 2.1 to train a model for multiclass classification of images. Our model is able to recognize specific objects (e.g. toilet, tap, sink, bed, lamp, pillow) connected with the picture types we are looking for. It plays a big role in a process which will be used to classify pictures from different hotels and determine whether a picture shows a bathroom, bedroom, hotel front, swimming pool, bar, etc. That final classification will be made based on the objects detected in those pictures.

What can you find inside:

  • How to train a multiclass classifier for images using CNTK (Cognitive Toolkit) and Faster R-CNN
  • Training using Transfer Learning with pretrained AlexNet model
  • How to prepare and label images in a dataset used for training and testing the model
  • Working example with all the data and pretrained models

If you would like to know how to use such a model, you can check this project to find out how to write a simple RESTful, Python-based web service and deploy it to Azure Web Apps with your own model.



2. Results and learnings

[back to the top]

Disclaimer: This POC and all the learnings you can find below are an outcome of close cooperation between Microsoft and Hotailors. Our combined team spent a total of 3 days to prepare and label the data, fine-tune parameters and train the model.


2.1. Initial assumptions

[back to the top]

  • Due to limited time and human resources we decided to create this POC for just 2 of the almost 20 different types of pictures we would like to classify in the final product

  • Each type of picture (e.g. bedroom, bathroom, bar, lobby, hotel front, restaurant) can consist of different objects (e.g. toilet, sink, tap, towel, bed, lamp, curtain, pillow) which are strongly connected with that specific picture type.

  • For our POC we used 2 picture types with 4 objects/classes each:

    bedroom   bathroom
    -------   --------
    pillow    tap
    bed       sink
    curtain   towel
    lamp      toilet
  • At this time we focused only on detecting those specific objects for each picture type. The outcomes of evaluation should later be analyzed either by some simple algorithm or by another model to match an image with one of the picture types we are looking for (a sketch of such a simple algorithm follows below)
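
To make this concrete, here is a minimal sketch (not part of this repo) of what such a simple algorithm could look like: it counts how many detected objects belong to each picture type and picks the type with the most votes. The object lists per picture type are taken from the table above.

    # Hypothetical "detected objects -> picture type" rule based on majority vote
    OBJECTS_PER_TYPE = {
        "bedroom":  {"pillow", "bed", "curtain", "lamp"},
        "bathroom": {"tap", "sink", "towel", "toilet"},
    }

    def guess_picture_type(detected_labels):
        """detected_labels: class names returned by the object detection model."""
        votes = {picture_type: sum(label in objects for label in detected_labels)
                 for picture_type, objects in OBJECTS_PER_TYPE.items()}
        best_type, best_votes = max(votes.items(), key=lambda kv: kv[1])
        return best_type if best_votes > 0 else "unknown"

    print(guess_picture_type(["bed", "pillow", "pillow", "lamp"]))  # -> bedroom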



2.2. Dataset

[back to the top]

  • We wanted to be as close as possible to real world scenarios, so our dataset consists of real pictures from different hotels all over the world. The images were provided by the Hotailors team

  • In our POC we used images scaled to a maximum of 1000 px on the longer side

  • Every picture usually contains multiple types of the objects we are looking for

  • We used a total of 113 images to train and test our model, of which we used:

    • 82 images in the positive set for training the model. We have a roughly 50/50 split between bathroom and bedroom pictures

      Bathroom positive sample Bedroom positive sample
    • 11 images in the negative set for training the model. Those images should not contain any of the objects that we are interested in detecting

      Negative sample 1 Negative sample 2
    • 20 images in the testImages set for testing and evaluating the model. We have a roughly 50/50 split between bathroom and bedroom pictures

      Bathroom test sample Bedroom test sample
  • After we tagged all of the images from the HotailorPOC2 dataset we analyzed them to verify how many tagged objects we have per class. It is suggested to use about 20-30% of all data in a dataset as test data. Looking at our numbers below we did quite OK, but there's still some room for improvement (the last column can be recomputed as shown in the snippet after the table)

    object/class | # tagged in positive/train set | # tagged in test set | % of class's tagged objects in test set
    sink         | 46                             | 10                   | 18
    pillow       | 98                             | 27                   | 22
    toilet       | 34                             | 7                    | 17
    lamp         | 69                             | 18                   | 21
    curtain      | 78                             | 16                   | 17
    towel        | 30                             | 14                   | 32
    tap          | 44                             | 9                    | 17
    bed          | 53                             | 12                   | 18
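
    The last column can be recomputed from the two count columns; a small sketch with the numbers copied from the table above:

      # Share of each class's tagged objects that landed in the test set
      counts = {  # class: (positive/train set, test set)
          "sink": (46, 10), "pillow": (98, 27), "toilet": (34, 7), "lamp": (69, 18),
          "curtain": (78, 16), "towel": (30, 14), "tap": (44, 9), "bed": (53, 12),
      }
      for name, (train, test) in counts.items():
          print("{:>8}: {}% of its tagged objects are in the test set".format(
              name, round(100.0 * test / (train + test))))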



2.3. Training and evaluation results

[back to the top]

  • After training and evaluating our model we achieved the following results:

    Evaluating Faster R-CNN model for 20 images.
    Number of rois before non-maximum suppression: 550
    Number of rois  after non-maximum suppression: 87
    AP for            sink = 0.4429
    AP for          pillow = 0.1358
    AP for          toilet = 0.8095
    AP for            lamp = 0.5404
    AP for         curtain = 0.7183
    AP for           towel = 0.0000
    AP for             tap = 0.1111
    AP for             bed = 0.8333
    Mean AP = 0.4489
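
    For reference, the reported Mean AP is simply the arithmetic mean of the eight per-class AP values above:

      # Mean AP = arithmetic mean of the per-class average precision values
      aps = [0.4429, 0.1358, 0.8095, 0.5404, 0.7183, 0.0000, 0.1111, 0.8333]
      print(round(sum(aps) / len(aps), 4))  # -> 0.4489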
    
  • As you can see above, some of the results are not too good. For example, the average precision for pillow and tap on the test set is extremely low, and for towel it even shows 0.0000, which may indicate some problems with our dataset or tagged objects. We will definitely need to look into it and check whether we are able to somehow improve those results

  • Even though the Mean Average Precision values are not perfect, we were still able to get some decent results:


  • Some of the results include mistakes. But those clearly look like anomalies which should be fairly easy to catch in the further classification of picture type

    The picture below shows how our model classified a single region (yellow) as a bed object although it's clearly not there:

    Another picture shows how our model classified a single region as a towel object although it's clearly not there:

  • Of course sometimes there are some really ugly results which may be hard to use for further classification:

    The next picture shows that our model wasn't able to find any objects. We need to verify whether it's because of wrongly tagged data in HotailorPOC2 or some kind of issue with the Region Proposal Network simply not finding any regions of interest for further classification



2.4. Using the model

[back to the top]

The final model will be used in the form of a web service running on Azure, which is why I prepared a sample RESTful web service written in Python using the Flask module. This web service makes use of our trained model and provides an API which takes images as input for evaluation and returns either a cloud of tags or tagged images. The project also describes how to easily deploy this web service to Azure Web Apps with a custom Python environment and the required dependencies.
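
To give an idea of the shape of such a service, here is a minimal sketch assuming Flask; the endpoint name, response format and helper function are hypothetical, not the actual service code linked below:

    # Minimal, hypothetical sketch of an image-evaluation endpoint with Flask
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def evaluate_image(image_bytes):
        # Placeholder: in the real service the trained Faster R-CNN model would be
        # loaded and run here, returning a list of detected class names ("tags").
        return []

    @app.route("/api/evaluate", methods=["POST"])
    def evaluate():
        image_bytes = request.files["image"].read()
        return jsonify({"tags": evaluate_image(image_bytes)})

    if __name__ == "__main__":
        app.run()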

You can find the running web service hosted on Azure Web Apps here, and the project with code and deployment scripts can be found on GitHub.

Demo

Sample request and response in Postman: Demo



3. Run sample

3.1. Setup

[back to the top]

  • Download content of this repo

    You can either clone this repo or just download it and unzip it to some folder

  • Setup Python environment

    In order for the scripts to work you should have a proper Python environment. If you don't already have it set up then you should follow one of the online tutorials. To set up the Python environment and all the dependencies required by CNTK on my local Windows machine, I used the scripted setup tutorial for Windows. If you're using Linux then you might want to look into one of these tutorials. Just bear in mind that this project was developed and tested with CNTK 2.1 and it wasn't tested with any other version.

    Even after setting up the Python environment properly you might still see some errors when running the Python scripts. Most of those errors are related to missing modules or some 3rd party frameworks and tools (e.g. GraphViz). Missing modules can easily be pip-installed, and most of the required ones can be found in the requirements.txt files in each folder with Python scripts.

    Please report if you find any errors or missing modules, thanks!

  • Download hotel pictures dataset (HotailorPOC2) and pretrained AlexNet model used for Transfer Learning

    Go to the Detection/FasterRCNN folder in the location where you unzipped this repo and run install_data_and_model.py. It will automatically download the HotailorPOC2 dataset and the pretrained AlexNet model, and it will generate the mapping files required to train the model.
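
    For example, from the root folder of the unzipped repo:

      cd Detection/FasterRCNN
      python install_data_and_model.py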

3.2. Train and evaluate the model using HotailorPOC2 sample dataset

[back to the top]

After you go through the setup steps you can start training your model.

In order to do it you need to run the FasterRCNN.py script located in Detection/FasterRCNN.

I'm working on Windows 10 so I run the script from the Anaconda Command Prompt, which should have been installed during the setup steps.

Bear in mind that training the model might take a lot of time depending on the type of machine you are using and whether you're using a GPU or CPU.

python FasterRCNN.py

TIP: If you don't own a machine with a powerful GPU you can use one of the ready-to-go Data Science Virtual Machine images in Azure.

When the training and evaluation are completed, you should see something similar to this:

Evaluating Faster R-CNN model for 20 images.
Number of rois before non-maximum suppression: 550
Number of rois  after non-maximum suppression: 87
AP for            sink = 0.4429
AP for          pillow = 0.1358
AP for          toilet = 0.8095
AP for            lamp = 0.5404
AP for         curtain = 0.7183
AP for           towel = 0.0000
AP for             tap = 0.1111
AP for             bed = 0.8333
Mean AP = 0.4489

The trained model, the neural network topology and the evaluated images (with plotted results) can later be found in the Output folder located in Detection/FasterRCNN.



4. Code highlights

[back to the top]

  • config.py - most of the variables are set in this file

    • These variables are responsible for choosing the dataset that will be used to train the model. The most important variables here are:

      __C.CNTK.DATASET = "HotailorPOC2"   
      
      [..]  
      
      if __C.CNTK.DATASET == "HotailorPOC2": #name of your dataset. Must match the name set with property '__C.CNTK.DATASET'
          __C.CNTK.MAP_FILE_PATH = "../../DataSets/HotailorPOC2" # dataset directory
          __C.CNTK.NUM_TRAIN_IMAGES = 82 # number of images in 'positive' folder
          __C.CNTK.NUM_TEST_IMAGES = 20 # number of images in 'testImages' folder
          __C.CNTK.PROPOSAL_LAYER_PARAMS = "'feat_stride': 16\n'scales':\n - 4 \n - 8 \n - 12"
    • IMAGE_WIDTH and IMAGE_HEIGHT are used to determine the input size of images used for training and later on for evaluation:

      __C.CNTK.IMAGE_WIDTH = 1000
      __C.CNTK.IMAGE_HEIGHT = 1000
    • BASE_MODEL defines which pretrained model should be used for transfer learning. Currently we used only AlexNet. In the future we want to test it with VGG16 to check if we can get better results than with AlexNet

      __C.CNTK.BASE_MODEL = "AlexNet" # "VGG16" or "AlexNet" or "VGG19"
  • requirements.txt

    • It holds all the dependencies required by my scripts and the CNTK libraries to work. It can be used with the pip install command to quickly install all the required dependencies (more here)

      matplotlib==1.5.3
      numpy==1.13.3
      cntk==2.1
      easydict==1.6
      Pillow==4.3.0
      utils==0.9.0
      PyYAML==3.12
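
      For example, a typical way to install them, run from the folder that contains the requirements.txt file:

        pip install -r requirements.txt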
      
  • install_data_and_model.py

    • This script does 3 things:
      • Downloads pretrained model specified in config.py which will be later used for transfer learning:

        #downloads pretrained model pointed out in config.py that will be used for transfer learning
        sys.path.append(os.path.join(base_folder, "..", "..",  "PretrainedModels"))
        from models_util import download_model_by_name
        download_model_by_name(cfg["CNTK"].BASE_MODEL)
      • Downloads and unzips our sample HotailorPOC2 dataset:

        #downloads hotel pictures classificator dataset (HotailorPOC2)
        #comment out the lines below if you're using a custom dataset
        sys.path.append(os.path.join(base_folder, "..", "..",  "DataSets", "HotailorPOC2"))
        from download_HotailorPOC2_dataset import download_dataset
        download_dataset()    
      • Creates mappings and metadata for dataset:

        #generates metadata for dataset required by FasterRCNN.py script
        print("Creating mapping files for data set..")
        create_mappings(base_folder)
  • FasterRCNN.py

    • We use this script for training and testing the model. It makes use of specific variables in config.py. This script comes unmodified from the original CNTK repository on GitHub (version 2.1)



5. Use with custom dataset

[back to the top]

Although this project was prepared specifically for the Hotailors case, it's based on one of the standard examples from the original CNTK repository on GitHub and thus it can easily be reused in any other scenario. You just need to follow the steps below:

5.1. Setup

[back to the top]

Follow steps 1 and 2 from the setup instructions.

5.2. Prepare data

[back to the top]

  • Gather data for your dataset

    • Think about what types of objects you would like to classify and prepare some images with those objects. The more the better, but usually you should get some decent results even with 30-40+ samples per object. Remember that a single image can have multiple objects (it was exactly like that in our case)

    • Make sure to use only good quality images in a specific resolution

    • The resolution we used for our project was 1000x1000 px, but you can easily lower it depending on your scenario and needs. Just make sure to scale your images to the one specific resolution you will be working with. In our case the original images were much larger than 1000x1000 px, but we scaled them down so that the longer side of each image is 1000 px (see the resizing sketch after this list)

    • It's not recommended to go beyond 1000x1000 px
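
    Below is a minimal resizing sketch, assuming Pillow (already listed in requirements.txt); the file paths are just placeholders:

      # Scale an image so that its longer side becomes 1000 px (aspect ratio preserved)
      from PIL import Image

      def resize_to_max_side(src_path, dst_path, max_side=1000):
          img = Image.open(src_path)
          scale = float(max_side) / max(img.size)    # img.size is (width, height)
          new_size = (int(round(img.width * scale)), int(round(img.height * scale)))
          img.resize(new_size, Image.BILINEAR).save(dst_path)

      resize_to_max_side("raw/bathroom_01.jpg", "../../DataSets/MyDataset/positive/bathroom_01.jpg")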


  • Create a dataset

    Create a new folder in the DataSets directory, name it whatever your dataset's name is, and inside that newly created folder create 3 more folders for your images (the resulting layout is sketched after this list):

    • negative

      Here you must add images which don't include any of the objects you will be looking for. The more the better, but don't get crazy here; 10 to 20 images should be more than enough. Those images will be used during training to show our model what is not interesting for us and should be treated as background

    • positive

      Here you must add images that will be used to teach our model what kind of objects it should look for. The more the better, but we should be able to see some results with 30-40+ images per class/object we would like to detect. Just bear in mind that one image can have more than one object/class.

    • testImages

      Those images will be used for testing your trained model and to evaluate the AP (Average Precision) for each class. Just take 20-30 percent of the images from the positive folder and put them here. It's very important, though, not to duplicate any images between the positive and testImages folders, as it may corrupt the results
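
    The resulting layout should look roughly like this ('MyDataset' is just an example name):

      DataSets/
        MyDataset/
          negative/      <- background-only images
          positive/      <- training images containing the objects to detect
          testImages/    <- held-out images used for testing and evaluation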


5.3. Tag images

[back to the top]

In order to make your custom dataset ready to be used for training, you will need to create some metadata with the coordinates of the objects and their names (classes).

Currently the best tool for tagging images is the Visual Object Tagging Tool, but for this project I used simple Python scripts that can be found in the original CNTK 2.1 GitHub repository (mine were fine-tuned a bit):

  • C1_DrawBboxesOnImages.py - allows you to draw bounding boxes for all the objects which are interesting to you (the objects you wish to recognize).

    There is one variable you will need to change before running this script:

    #change it to your images directory. Run this script separately for each folder
    imgDir = "../../DataSets/HotailorPOC2/testImages"

    An important thing to mention here is to run this script only for positive and testImages. You don't need to do it for negative because there's actually nothing to tag there.

    After successfully running the script you should see something like this:

    C1

    Now just use your mouse to draw bounding boxes for every object. Some keyboard shortcuts should be helpful here:

    "u" - erases the last bounding box you drew

    "n" - moves you to the next image in the current folder

    "s" - skips the current image and deletes all the bounding boxes for that image


  • C2_AssignLabelsToBboxes.py - allows you to review every bounding box you've marked with the C1 script and label it with the proper class name.

    Before running this script, change these 2 variables:

    #change it to your images directory. Run this script separately for each folder
    imgDir = "../../DataSets/HotailorPOC2/testImages"
    
    #change it to your classes names
    classes = ["curtain", "pillow", "bed", "lamp", "toilet", "sink", "tap", "towel"]

    Again, same as in C1, run this script only for positive and testImages.

    C2


  • C3_VisualizeBboxes.py - I made this script based on C2 just to visualize the bounding boxes for each image in a dataset. It's very helpful when you are looking for mistakes within your dataset.

    Be sure to change the imgDir variable to your directory:

    #change it to your images directory. Run this script separately for each folder
    imgDir = "../../DataSets/HotailorPOC2/testImages"

    Running the C3 script will visualize bounding boxes for every image in the directory and you should be able to see if everything is marked correctly:

    C3


5.4. Download pretrained model and create mappings for custom dataset

[back to the top]

In order to train the model we use transfer learning, and for that we need a pretrained model. For this sample we use the AlexNet model.

To download the model and create the class and file mappings you can use the install_data_and_model.py script and simply follow these steps:

  • Make sure to change the variables in your config.py file and set the __C.CNTK.MAP_FILE_PATH variable to the proper directory:

    if __C.CNTK.DATASET == "HotailorPOC2": #name of your dataset. Must match the name set with property '__C.CNTK.DATASET'
        __C.CNTK.MAP_FILE_PATH = "../../DataSets/HotailorPOC2" # your dataset directory
        __C.CNTK.NUM_TRAIN_IMAGES = 82 # number of images in 'positive' folder
        __C.CNTK.NUM_TEST_IMAGES = 20 # number of images in 'testImages' folder
        __C.CNTK.PROPOSAL_LAYER_PARAMS = "'feat_stride': 16\n'scales':\n - 4 \n - 8 \n - 12"
  • Open the install_data_and_model.py script and comment out these lines:

    #downloads hotel pictures classificator dataset (HotailorPOC2)
    #comment out the lines below if you're using a custom dataset
    sys.path.append(os.path.join(base_folder, "..", "..",  "DataSets", "HotailorPOC2"))
    from download_HotailorPOC2_dataset import download_dataset
    download_dataset()
  • Run the install_data_and_model.py script. Bear in mind that downloading the pretrained model may take a few minutes or even more, depending on your internet connection.

At this point your custom dataset should be ready for training.


5.5. Run training

[back to the top]

  • Change variables

    Edit the config.py script and change the following variables:

    • Change value of __C.CNTK.DATASET:

      # set it to your custom dataset name
      __C.CNTK.DATASET = "HotailorPOC2" 
    • Change the values of __C.CNTK.IMAGE_WIDTH and __C.CNTK.IMAGE_HEIGHT to match your custom dataset's image resolution:

      # set it to your custom datasets images resolution
      __C.CNTK.IMAGE_WIDTH = 1000
      __C.CNTK.IMAGE_HEIGHT = 1000 
    • Change the values in the following code to match your dataset name, your dataset's directory location and the number of images in your positive and testImages folders (a small helper for counting the images is sketched after this list):

      if __C.CNTK.DATASET == "HotailorPOC2": #name of your dataset. Must match the name set with property '__C.CNTK.DATASET'
          __C.CNTK.MAP_FILE_PATH = "../../DataSets/HotailorPOC2" # your dataset directory
          __C.CNTK.NUM_TRAIN_IMAGES = 82 # number of images in 'positive' folder
          __C.CNTK.NUM_TEST_IMAGES = 20 # number of images in 'testImages' folder

  • Train and test your model with FasterRCNN.py script

    Run the FasterRCNN.py script and wait until the training and testing finish.

    Training may take even a couple of hours depending on your hardware setup. It's best to use high-performing GPUs for this kind of purpose.

    TIP: If you don't own a machine with a powerful GPU you can use one of the ready-to-go Data Science Virtual Machine images in Azure.

    If you're not satisfied with the training results, try fine-tuning the variables and cleaning your dataset if necessary, and then rerun the training.
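
    As mentioned above, a small helper like this (not part of the repo) can be used to count the images for NUM_TRAIN_IMAGES and NUM_TEST_IMAGES; the dataset path is just an example:

      # Count image files in the dataset folders to fill NUM_TRAIN_IMAGES / NUM_TEST_IMAGES
      import os

      def count_images(folder, exts=(".jpg", ".jpeg", ".png")):
          return sum(1 for f in os.listdir(folder) if f.lower().endswith(exts))

      dataset_dir = "../../DataSets/HotailorPOC2"   # change to your dataset directory
      print("NUM_TRAIN_IMAGES =", count_images(os.path.join(dataset_dir, "positive")))
      print("NUM_TEST_IMAGES  =", count_images(os.path.join(dataset_dir, "testImages")))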


5.6. Deploy your model

[back to the top]

When you're satisfied with your model and would like to learn how to use it with a RESTful Python web service and deploy it to Azure Web Apps, check out this repository.


cntk-hotel-pictures-classificator's Issues

ImportError: No module named 'utils.cython_modules.cython_bbox'

I know this error has been widely discussed, and here are some specifics I found for this project.

  1. The default cython binaries in the directory 'Detection/utils/cython_modules/' only support particular Python versions, 3.4 (Linux) and 3.5 (Windows). So anyone who gets an error like the one in the title should check whether their Python version is suitable.

  2. The latest cython module binaries have been updated in the CNTK repo. But after copying the corresponding binaries for Python 3.5 (Linux), i.e. cpu_nms.cpython-35m.so and cython_bbox.cpython-35m.so, into the directory of the current project 'Detection/utils/cython_modules/', it still reports the same error as the title during compilation, while the program runs well if I change my Python to 3.4 via an Anaconda virtual environment and run the same project, i.e. same directory and same program.

  3. Another solution for this problem is to recompile the above cython modules ourselves following the instructions from the CNTK guide. The lucky thing is that the program then passes compilation without any errors, but unfortunately it prints some unknown error info during training.

All of my testing was run on the latest DSVM for Linux.
Default Python: Python 3.5 with CNTK 2.2
Python 3.4 Virtual Environment: Python 3.4 with CNTK 2.1

Hard coding concerning the model architecture

There are several places where you hard-coded the network architecture configuration in the main program.

They are:

  1. The input dimension of Fast R-CNN, which is coded as [4096, ]. But it should be possible to use other numbers, e.g. if I want to use a ResNet that ends with dimension [1000, ].
  2. The spatial_scale of the RoiPooling layer, which is coded as 1/16. But in my understanding, this ratio is related to the actual total stride in the convolutional layers.

Insufficient Memory: bad_alloc

Hello,
when I was trying to train the model with the VGG16 base model, I got a bad_alloc error related to insufficient memory allocation on my DSVM with 4 GB of memory.

Basically, this is not a programming error. But I think pointing out the minimum resource requirements, like memory, will ease a lot of pain for people whose training aborts after running for a long time due to this problem.

Finally, here are some experiment results collected from the Linux DSVM:

  1. VGG16 + e2e: above 4GB but below 8GB
  2. VGG16 + 4-stage: around 17GB

Best wishes!

Name mismatching for the base model VGG16

Hello,

There is a name-mismatch error when I want to switch the base model to VGG16. Basically, the problem is that you define the name of the model yourself in the config file, e.g. VGG16, while the real name of the VGG model referenced by the download is VGG16_ImageNet_Caffe. Then, after the download finishes, the program names the model VGG16, but the other part of the main program loads the model via the name VGG16_ImageNet_Caffe.

Obviously, the problem will also happen when loading and using other base models if the program-defined name is different from the name defined by others.

Best wishes!
Lin.

Font 'Arial' is not default for Linux

Hello sir,

in your file 'Detection/FasterRCNN/plot_helper.py', you use the font 'Arial' to draw text, but it's not a default font on my Linux (Azure DSVM for Linux with Ubuntu 16.04).

I am not sure if the solution for fixing this error should be included in the Readme 'Setup' section to tell Linux users to install the 'Arial' font before running the model. If anyone else has encountered this problem too, you can find help at the link below: https://askubuntu.com/questions/651441/how-to-install-arial-font-in-ubuntu

Default font size may be still missing

I saw your new commit. You set the font to FreeMono if Arial is not found. But the font-not-found problem still happens when I try to run the program on the Azure DSVM or DLVM.

So I think a better way is to mention this problem in the Readme so that people will know everything they need before the program runs successfully.

Insufficient number of colors in the plot_helper

In plot_helper.py, the colors for drawing boxes currently only support up to 15 classes, defined by five base colors and their 3 variants.

But it is not enough for me when I train and evaluate on my own custom dataset, which contains about 50 classes.

list index out of range

I was training with 17 images and testing with 3 images, but I'm getting the error below. Is it because objects were identified incorrectly (the mAP is zero), causing the error below?

warnings.warn(WARNING_MSG_GPU_ONLY % ('GPU-Specific', 'https://docs.microsoft.com/en-us/cognitive-toolkit/Setup-Windows-Python#optional-gpu-specific-packages'))
Selected CPU as the process wide default device.
Using the following parameters:
Flip image : True
Train conv layers: True
Random seed : 3
Momentum per MB : 0.9
E2E epochs : 20
Loading existing model from D:\Dropbox (eClerx Services Ltd.)\Navdeep.Singh05\Documents\navdeep\coe work\Hilton\cntk custom hotels\Detection\FasterRCNN\Output\faster_rcnn_eval_AlexNet_e2e.model
D:\Dropbox (eClerx Services Ltd.)\Navdeep.Singh05\Documents\navdeep\coe work\Hilton\cntk custom hotels\Detection\FasterRCNN..\utils\rpn\proposal_layer.py:33: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
layer_params = yaml.load(self.param_str_)
Evaluating Faster R-CNN model for 3 images.
Number of rois before non-maximum suppression: 0
Number of rois after non-maximum suppression: 0
AP for shacks = 0.0000
AP for pool = 0.0000
Mean AP = 0.0000
D:\Dropbox (eClerx Services Ltd.)\Navdeep.Singh05\Documents\navdeep\coe work\Hilton\cntk custom hotels\Detection\FasterRCNN..\utils\rpn\proposal_layer.py:33: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
layer_params = yaml.load(self.param_str_)
Plotting results from Faster R-CNN model for 3 images.
roiScores min: 0.46174436807632446, max: 1.0, threshold: 0.1
reset decision threshold to: 0.23087218403816223
Traceback (most recent call last):
File "FasterRCNN.py", line 750, in
bgrPlotThreshold=cfg["CNTK"].RESULTS_BGR_PLOT_THRESHOLD)
File "D:\Dropbox (eClerx Services Ltd.)\Navdeep.Singh05\Documents\navdeep\coe work\Hilton\cntk custom hotels\Detection\FasterRCNN\plot_helpers.py", line 186, in eval_and_plot_faster_rcnn
decisionThreshold=bgrPlotThreshold)
File "D:\Dropbox (eClerx Services Ltd.)\Navdeep.Singh05\Documents\navdeep\coe work\Hilton\cntk custom hotels\Detection\FasterRCNN\plot_helpers.py", line 95, in visualizeResultsFaster
text = classes[label]
IndexError: list index out of range
