mustafamerttunali / deep-learning-training-gui

Train and run predictions with pre-trained deep learning models through a GUI (web app). No more endless parameters, no more manual data preprocessing.

License: MIT License

deep-learning tensorflow flask python tensorboard gui computer-vision image-classification api mobilenetv2

deep-learning-training-gui's Introduction

logo

Description

My goal is to simplify the installation and training of pre-trained deep learning models through a GUI (or, if you prefer, a web app) without writing extra code. Point it at your dataset, start training right away, and monitor progress with TensorBoard or the DLTGUI tool. No more endless parameters, no more manual data preprocessing.

While developing this application, I was inspired by the DIGITS system developed by NVIDIA.

  • You can train image classification models without any hassle.
  • It is easy to train an image classification model, save it, and make predictions from the saved model.
  • Only a few parameters!
  • You can train on top of pre-trained models.
  • Object detection is not part of 1.0 yet, but future versions will make it much easier to train and use object detection algorithms.
  • You can train your model on GPU or CPU.
  • Parallel operation is possible.
  • You won't need a second terminal or an extra script to run TensorBoard.

In the words of Stephen Hawking:

Science is beautiful when it makes simple explanations of phenomena or connections between different observations. Examples include the double helix in biology and the fundamental equations of physics.

Guide - YouTube Video (Coming Soon)

Before Training

Updates

DLTGUI Version 1.0.9

  • Bug fixes (fixed a problem with displaying the heatmap on CUDA >= 10.0).

DLTGUI Version 1.0.8

  • Bug fixes.

DLTGUI Version 1.0.7

  • Many bugs have been fixed.

  • You can now fine-tune your model, which makes it easy to increase its accuracy.

  • You can now see which parts of an image your model focuses on while classifying (class activation map / heatmap, available for MobileNetV2 only); see the sketch below.
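
For a rough idea of how such a class activation heatmap can be computed with Grad-CAM on a Keras MobileNetV2 model, see the sketch below. This is a generic illustration, not the project's actual implementation; the layer name "out_relu" is assumed to be the last convolutional activation of tf.keras.applications.MobileNetV2 (check model.summary() to confirm).

    import numpy as np
    import tensorflow as tf

    def grad_cam_heatmap(model, image, last_conv_layer_name="out_relu"):
        # `image` is a preprocessed float array of shape (224, 224, 3).
        # Model mapping the input image to the last conv activations and the predictions.
        grad_model = tf.keras.models.Model(
            model.inputs,
            [model.get_layer(last_conv_layer_name).output, model.output])

        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(image[np.newaxis, ...])
            top_score = tf.reduce_max(preds[0])  # score of the predicted class

        # Gradient of the top score w.r.t. the conv feature maps, averaged over
        # the spatial dimensions to get one weight per channel.
        grads = tape.gradient(top_score, conv_out)
        weights = tf.reduce_mean(grads, axis=(0, 1, 2))

        # Weighted sum of the feature maps, kept positive and normalised to [0, 1].
        heatmap = tf.reduce_sum(conv_out[0] * weights, axis=-1)
        heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)
        return heatmap.numpy()

The returned array can then be resized to the input image and overlaid with Matplotlib to produce a heatmap like the one shown in the GUI.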

DLTGUI Version 1.0.6

  • Bug fixes.

DLTGUI Version 1.0.5

  • Now you can do data augmentation using Augmentor; a minimal example follows below.
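
As a rough illustration of what such a pipeline looks like with the Augmentor library (the path and the choice of operations here are only an example, not necessarily the exact pipeline DLTGUI builds):

    import Augmentor

    # Build an augmentation pipeline over one class folder of the dataset.
    p = Augmentor.Pipeline("datasets/flower_photos/daisy")
    p.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
    p.flip_left_right(probability=0.5)
    p.zoom_random(probability=0.5, percentage_area=0.8)

    # Generate 200 augmented images.
    p.sample(200)

By default Augmentor writes the generated images into an "output" sub-folder of the source directory.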

DLTGUI Version 1.0.4

  • Now you can choose CPU or GPU before the training.
  • You can choose the activation function for single-class training (Sigmoid and ReLU [new]).
  • Added SimpleCNNModel.
  • Bug fixes.

DLTGUI Version 1.0.2

  • Fixed the single-class problem; you can now train a one-class model.
  • Added sigmoid as an activation function and binary_crossentropy as a loss function (see the sketch below).
  • Added new functions to DLTGUI (prepare_data, sigmoid and more).
  • Added a new example dataset.
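
For reference, a single-class (binary) setup with sigmoid and binary_crossentropy looks roughly like this in Keras; the exact head DLTGUI attaches may differ:

    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                             input_shape=(224, 224, 3))
    base.trainable = False  # use as a frozen feature extractor

    model = tf.keras.Sequential([
        base,
        # Single output unit with sigmoid -> probability of the positive class.
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])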

DLTGUI Version 1.0.1:

  • Now you can use InceptionV3, VGG16, VGG19 and NASNetMobile models. [Image Classification]

Getting started

Prerequisites

  • Anaconda 64-bit
  • Python 3.7.3
  • TensorFlow 2.0.1
  • CUDA and cuDNN (minimum CUDA 10.0, required for GPU usage)
  • NumPy 1.16.4
  • Matplotlib
  • PIL (Pillow)
  • subprocess (Python standard library)
  • pathlib (Python standard library)
  • Augmentor

Available models

  • MobileNetV2
  • Inception V3
  • VGG16
  • VGG19
  • NASNetMobile
  • SimpleCnnModel
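
All of these except SimpleCnnModel (the project's own small CNN) correspond to classes in tf.keras.applications, so the backbones can be loaded with ImageNet weights along these lines (a sketch, not the project's exact loading code):

    import tensorflow as tf

    PRETRAINED = {
        "MobileNetV2":  tf.keras.applications.MobileNetV2,
        "InceptionV3":  tf.keras.applications.InceptionV3,
        "VGG16":        tf.keras.applications.VGG16,
        "VGG19":        tf.keras.applications.VGG19,
        "NASNetMobile": tf.keras.applications.NASNetMobile,
    }

    # Load the chosen backbone without its ImageNet classification head,
    # so a new head matching the dataset's class count can be attached.
    base = PRETRAINED["MobileNetV2"](include_top=False, weights="imagenet",
                                     pooling="avg", input_shape=(224, 224, 3))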

Dataset Folder Structure

The following is an example of how a dataset should be structured. Before training a deep learning model, put your entire dataset under the datasets directory, with one sub-folder per class.

datasets/
├── example_dataset/
│   └── cat/
│       ├── img_1.jpg/png
│       └── img_2.jpg/png
└── flower_photos/
    ├── daisy/
    ├── dandelion/
    ├── roses/
    ├── sunflowers/
    └── tulips/

This layout is for image classification.
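
With this layout, the classes and the train/validation split can be derived from the folder names alone. A minimal sketch using Keras' ImageDataGenerator, which infers one class per sub-folder (not necessarily the exact code DLTGUI runs):

    import tensorflow as tf

    # Hold out 20% of the images for validation.
    datagen = tf.keras.preprocessing.image.ImageDataGenerator(
        rescale=1.0 / 255, validation_split=0.2)

    train_gen = datagen.flow_from_directory(
        "datasets/flower_photos", target_size=(224, 224),
        batch_size=32, subset="training")
    val_gen = datagen.flow_from_directory(
        "datasets/flower_photos", target_size=(224, 224),
        batch_size=32, subset="validation")

    print(train_gen.class_indices)  # e.g. {'daisy': 0, 'dandelion': 1, ...}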

Usage

Page - Home

  1. Clone this repo.
  2. cd Deep-Learning-Training-GUI
  3. On your conda terminal: pip install -r requirements.txt
  4. Set up your dataset directory as shown above.
  5. Once your dataset is in place, go to the terminal and run python app.py. You can access the application at localhost:5000.
  6. Now you will see the home page. (A minimal sketch of the Flask entry point that serves it follows below.)
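
For orientation, app.py is a Flask application, and serving the GUI on port 5000 boils down to something like the following minimal sketch (the template name index.html is assumed; this is not the project's actual app.py):

    from flask import Flask, render_template

    app = Flask(__name__)

    @app.route("/")
    def home():
        # Render the home page template of the GUI (template name assumed).
        return render_template("index.html")

    if __name__ == "__main__":
        # Flask's default port is 5000, hence localhost:5000.
        app.run(host="0.0.0.0", port=5000)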

Home

Page - Training - Parameters

Training

  1. You must enter the path where your dataset is located. For example, to select the flower_photos folder inside datasets, write datasets/flower_photos into the form field.
  2. Split the dataset - specify what percentage of the data should be held out for testing.
  3. Pre-trained Models - choose which pre-trained model to use (MobileNetV2 was the only option in the first release; later versions added InceptionV3, VGG16, VGG19 and NASNetMobile - see Available models above).
  4. CPU / GPU - Specify whether you want to train on the GPU or CPU (the first version automatically runs on the GPU).
  5. Number of Classes - Using the flower_photos example again: there are 5 separate folders under flower_photos, so the class count is 5. When you train on your own dataset, create one folder per class.
  6. Batch Size - The number of training samples fed to the network in a single step. If you have a 1080 Ti or better GPU, you can set it to 64 or 128. The larger the batch size, the less noisy the gradient updates the model learns from.
  7. Epochs - The number of times the full training set is shown to the network. With 10 epochs, the training data is passed through the model 10 times. (A sketch of how these settings map onto a Keras training setup follows below.)
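
To make the parameters above concrete, here is roughly how they translate into a Keras transfer-learning setup, assuming a MobileNetV2 backbone; this is an illustrative sketch, not DLTGUI's exact training code:

    import tensorflow as tf

    DATASET_DIR = "datasets/flower_photos"   # 1. dataset path
    VALIDATION_SPLIT = 0.2                   # 2. split ratio
    NUM_CLASSES = 5                          # 5. one per class folder
    BATCH_SIZE = 32                          # 6. batch size
    EPOCHS = 10                              # 7. epochs

    datagen = tf.keras.preprocessing.image.ImageDataGenerator(
        rescale=1.0 / 255, validation_split=VALIDATION_SPLIT)
    train_gen = datagen.flow_from_directory(
        DATASET_DIR, target_size=(224, 224),
        batch_size=BATCH_SIZE, subset="training")
    val_gen = datagen.flow_from_directory(
        DATASET_DIR, target_size=(224, 224),
        batch_size=BATCH_SIZE, subset="validation")

    # 3. Pre-trained backbone, frozen for plain transfer learning.
    base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                             pooling="avg", input_shape=(224, 224, 3))
    base.trainable = False

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # 4. CPU/GPU: TensorFlow picks the GPU automatically if one is visible.
    model.fit(train_gen, validation_data=val_gen, epochs=EPOCHS)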

Training and TensorBoard

When you start training, you can access TensorBoard without running any script in a terminal! Check localhost:6006.
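
Behind the scenes, the "no second terminal" behaviour can be achieved by logging with a Keras TensorBoard callback and launching TensorBoard from the application itself via subprocess (which is presumably why subprocess is listed in the prerequisites). A hedged sketch of the general idea, not the project's exact code:

    import subprocess
    import tensorflow as tf

    LOG_DIR = "logs"  # assumed log directory

    # Write training metrics for TensorBoard during model.fit(...).
    tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir=LOG_DIR)

    # Launch TensorBoard in the background on port 6006 (no extra terminal needed).
    subprocess.Popen(["tensorboard", "--logdir", LOG_DIR, "--port", "6006"])

    # ... then pass the callback to training:
    # model.fit(train_gen, epochs=EPOCHS, callbacks=[tensorboard_cb])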

Training-Live

Prediction

Prediction

Result

Result

Contributing

Contributions with example scripts for other frameworks (PyTorch or Caffe2) and other pre-trained models are welcome!

Guidelines

Coming soon.

Contributors

To-Do List

  • Release 5 pre-trained models.
  • Choosing CPU or GPU before the training.
  • Choosing the activation function for single-class training. (Sigmoid and ReLU)
  • Data Augmentation
  • Fine-Tuning
  • Heatmap on predicted images.
  • Object Detection - Mask RCNN.

References 📚

deep-learning-training-gui's Issues

[Feature request] Add documentation on the functional differences between the available pretrained models

As it stands, there are 6 pre-trained models available for fine-tuning:

  • MobileNetV2
  • InceptionV3
  • VGG16
  • VGG19
  • NASNetMobile
  • SimpleCnnModel

However, as far as I can see, the functional differences between these models are not communicated within the web UI. While broad inferences can be made from their names, there is room for clearer communication of the pros and cons of each model.

Really like this idea, hope it is well maintained.
