
KotlinDL: High-level Deep Learning API in Kotlin


KotlinDL is a high-level Deep Learning API written in Kotlin and inspired by Keras. Under the hood, it uses TensorFlow Java API and ONNX Runtime API for Java. KotlinDL offers simple APIs for training deep learning models from scratch, importing existing Keras and ONNX models for inference, and leveraging transfer learning for tailoring existing pre-trained models to your tasks.

This project aims to make Deep Learning easier for JVM and Android developers and simplify deploying deep learning models in production environments.

Here's an example of what the classic LeNet convolutional neural network looks like in KotlinDL:

private const val EPOCHS = 3
private const val TRAINING_BATCH_SIZE = 1000
private const val NUM_CHANNELS = 1L
private const val IMAGE_SIZE = 28L
private const val SEED = 12L
private const val TEST_BATCH_SIZE = 1000
private const val NUMBER_OF_CLASSES = 10 // 10 digit classes in MNIST; used by the last Dense layer

private val lenet5Classic = Sequential.of(
    Input(
        IMAGE_SIZE,
        IMAGE_SIZE,
        NUM_CHANNELS
    ),
    Conv2D(
        filters = 6,
        kernelSize = intArrayOf(5, 5),
        strides = intArrayOf(1, 1, 1, 1),
        activation = Activations.Tanh,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Zeros(),
        padding = ConvPadding.SAME
    ),
    AvgPool2D(
        poolSize = intArrayOf(1, 2, 2, 1),
        strides = intArrayOf(1, 2, 2, 1),
        padding = ConvPadding.VALID
    ),
    Conv2D(
        filters = 16,
        kernelSize = intArrayOf(5, 5),
        strides = intArrayOf(1, 1, 1, 1),
        activation = Activations.Tanh,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Zeros(),
        padding = ConvPadding.SAME
    ),
    AvgPool2D(
        poolSize = intArrayOf(1, 2, 2, 1),
        strides = intArrayOf(1, 2, 2, 1),
        padding = ConvPadding.VALID
    ),
    Flatten(), // 3136
    Dense(
        outputSize = 120,
        activation = Activations.Tanh,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Constant(0.1f)
    ),
    Dense(
        outputSize = 84,
        activation = Activations.Tanh,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Constant(0.1f)
    ),
    Dense(
        outputSize = NUMBER_OF_CLASSES,
        activation = Activations.Linear,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Constant(0.1f)
    )
)


fun main() {
    val (train, test) = mnist()
    
    lenet5Classic.use {
        it.compile(
            optimizer = Adam(clipGradient = ClipGradientByValue(0.1f)),
            loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
            metric = Metrics.ACCURACY
        )
    
        it.logSummary()
    
        it.fit(dataset = train, epochs = EPOCHS, batchSize = TRAINING_BATCH_SIZE)
    
        val accuracy = it.evaluate(dataset = test, batchSize = TEST_BATCH_SIZE).metrics[Metrics.ACCURACY]
    
        println("Accuracy: $accuracy")
    }
}


Library Structure

KotlinDL consists of several modules:

  • kotlin-deeplearning-api api interfaces and classes
  • kotlin-deeplearning-impl implementation classes and utilities
  • kotlin-deeplearning-onnx inference with ONNX Runtime
  • kotlin-deeplearning-tensorflow learning and inference with TensorFlow
  • kotlin-deeplearning-visualization visualization utilities
  • kotlin-deeplearning-dataset dataset classes

The kotlin-deeplearning-tensorflow and kotlin-deeplearning-dataset modules are available only for the desktop JVM, while the other artifacts can also be used on Android.

How to configure KotlinDL in your project

To use KotlinDL in your project, ensure that mavenCentral is added to the repositories list:

repositories {
    mavenCentral()
}

Then add the necessary dependencies to your build.gradle file. Use the kotlin-deeplearning-onnx module for inference with ONNX Runtime in desktop and Android projects:

dependencies {
    implementation 'org.jetbrains.kotlinx:kotlin-deeplearning-onnx:[KOTLIN-DL-VERSION]'
}

To use the full power of KotlinDL in your project for JVM, add the following dependencies to your build.gradle file:

dependencies {
    implementation 'org.jetbrains.kotlinx:kotlin-deeplearning-tensorflow:[KOTLIN-DL-VERSION]'
    implementation 'org.jetbrains.kotlinx:kotlin-deeplearning-onnx:[KOTLIN-DL-VERSION]'
    implementation 'org.jetbrains.kotlinx:kotlin-deeplearning-visualization:[KOTLIN-DL-VERSION]'
}
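
If your build uses the Gradle Kotlin DSL instead, the same dependencies translate directly. Below is a minimal build.gradle.kts sketch using the coordinates above; replace [KOTLIN-DL-VERSION] with the version you need:

// build.gradle.kts -- Kotlin DSL form of the Groovy snippet above
dependencies {
    implementation("org.jetbrains.kotlinx:kotlin-deeplearning-tensorflow:[KOTLIN-DL-VERSION]")
    implementation("org.jetbrains.kotlinx:kotlin-deeplearning-onnx:[KOTLIN-DL-VERSION]")
    implementation("org.jetbrains.kotlinx:kotlin-deeplearning-visualization:[KOTLIN-DL-VERSION]")
}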

The latest KotlinDL version is 0.5.0.

For more details, as well as for pom.xml and build.gradle.kts examples, please refer to the Quick Start Guide.

Working with KotlinDL in Jupyter Notebook

You can work with KotlinDL interactively in Jupyter Notebook with the Kotlin kernel. To do so, add the required dependencies to your notebook:

@file:DependsOn("org.jetbrains.kotlinx:kotlin-deeplearning-tensorflow:[KOTLIN-DL-VERSION]")
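
The training example at the top of this README also relies on the mnist() dataset helper, which lives in the kotlin-deeplearning-dataset artifact (see Library Structure). If that artifact is not pulled in transitively by your KotlinDL version, it can be added to the notebook in the same way; a sketch:

@file:DependsOn("org.jetbrains.kotlinx:kotlin-deeplearning-dataset:[KOTLIN-DL-VERSION]")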

For more details on installing Jupyter Notebook and adding the Kotlin kernel, check out the Quick Start Guide.

Working with KotlinDL in Android projects

KotlinDL supports inference of ONNX models on the Android platform. To use KotlinDL in your Android project, add the following dependency to your build.gradle file:

dependencies {
    implementation ("org.jetbrains.kotlinx:kotlin-deeplearning-onnx:[KOTLIN-DL-VERSION]")
}

For more details, please refer to the Quick Start Guide.

Documentation

Examples and tutorials

You do not need any prior deep learning experience to start using KotlinDL. We are working on extensive documentation to help you get started. In the meantime, please feel free to check out the tutorials we have prepared.

For more inspiration, take a look at the code examples in this repository and Sample Android App.

Running KotlinDL on GPU

To enable training and inference on a GPU, please read the TensorFlow GPU Support page and install the CUDA framework to run calculations on a GPU device.

Note that only NVIDIA devices are supported.

You will also need to add the following dependencies to your project if you wish to leverage a GPU:

  compile 'org.tensorflow:libtensorflow:1.15.0'
  compile 'org.tensorflow:libtensorflow_jni_gpu:1.15.0'

On Windows, additional runtime distributions are required; see the TensorFlow GPU Support page mentioned above for details.

For inference of ONNX models on a CUDA device, you will also need to add the following dependencies to your project:

  api 'com.microsoft.onnxruntime:onnxruntime_gpu:1.12.1'

To find more info about ONNXRuntime and CUDA version compatibility, please refer to the ONNXRuntime CUDA Execution Provider page.
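
Putting the pieces together, a Gradle Kotlin DSL dependency block for GPU-enabled ONNX inference might look like the sketch below. It reuses the coordinates shown above (with implementation in place of api, which requires the java-library plugin); verify that the onnxruntime_gpu version matches your CUDA installation:

// build.gradle.kts -- sketch: KotlinDL ONNX module plus the GPU build of ONNX Runtime;
// see the ONNX Runtime CUDA Execution Provider page for the version matching your CUDA toolkit
dependencies {
    implementation("org.jetbrains.kotlinx:kotlin-deeplearning-onnx:[KOTLIN-DL-VERSION]")
    implementation("com.microsoft.onnxruntime:onnxruntime_gpu:1.12.1")
}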

Logging

By default, the API module uses the kotlin-logging library, which keeps logging separate from any specific logger implementation.

You can use any JVM logging library that provides a Simple Logging Facade for Java (SLF4J) binding, such as Logback or Log4j/Log4j2.

If you wish to use Log4j2, add the following dependencies and a log4j2.xml configuration file to the src/main/resources folder of your project:

  implementation 'org.apache.logging.log4j:log4j-api:2.16.0'
  implementation 'org.apache.logging.log4j:log4j-core:2.16.0'
  implementation 'org.apache.logging.log4j:log4j-slf4j-impl:2.16.0'

<Configuration status="WARN">
    <Appenders>
        <Console name="STDOUT" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>

    <Loggers>
        <Root level="debug">
            <AppenderRef ref="STDOUT" level="DEBUG"/>
        </Root>
        <Logger name="io.jhdf" level="off" additivity="true">
            <appender-ref ref="STDOUT" />
        </Logger>
    </Loggers>
</Configuration>

If you wish to use Logback, add the following dependency and a logback.xml configuration file to the src/main/resources folder of your project:

  compile 'ch.qos.logback:logback-classic:1.2.3'

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="info">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>

These configuration files can be found in the examples module.

Fat Jar issue

There is a known Stack Overflow question and TensorFlow issue with Fat Jar creation and execution on Amazon EC2 instances.

java.lang.UnsatisfiedLinkError: /tmp/tensorflow_native_libraries-1562914806051-0/libtensorflow_jni.so: libtensorflow_framework.so.1: cannot open shared object file: No such file or directory

Although the issue describing this problem was closed in the TensorFlow 1.14 release, it was not fully fixed, and an additional line in the build script is still required.

One simple solution is to add the TensorFlow version to the Jar's manifest. Below are examples of Gradle build tasks for Fat Jar creation, in both the Groovy and Kotlin DSL.

// build.gradle

task fatJar(type: Jar) {
    manifest {
        attributes 'Implementation-Version': '1.15'
    }
    classifier = 'all'
    from { configurations.runtimeClasspath.collect { it.isDirectory() ? it : zipTree(it) } }
    with jar
}

// build.gradle.kts

plugins {
    kotlin("jvm") version "1.5.31"
    id("com.github.johnrengelman.shadow") version "7.0.0"
}

tasks {
    shadowJar {
        manifest {
            attributes(Pair("Main-Class", "MainKt"))
            attributes(Pair("Implementation-Version", "1.15"))
        }
    }
}

Limitations

Currently, only a limited set of deep learning architectures is supported. Here's the list of available layers:

  • Core layers:
    • Input, Dense, Flatten, Reshape, Dropout, BatchNorm.
  • Convolutional layers:
    • Conv1D, Conv2D, Conv3D;
    • Conv1DTranspose, Conv2DTranspose, Conv3DTranspose;
    • DepthwiseConv2D;
    • SeparableConv2D.
  • Pooling layers:
    • MaxPool1D, MaxPool2D, MaxPool3D;
    • AvgPool1D, AvgPool2D, AvgPool3D;
    • GlobalMaxPool1D, GlobalMaxPool2D, GlobalMaxPool3D;
    • GlobalAvgPool1D, GlobalAvgPool2D, GlobalAvgPool3D.
  • Merge layers:
    • Add, Subtract, Multiply;
    • Average, Maximum, Minimum;
    • Dot;
    • Concatenate.
  • Activation layers:
    • ELU, LeakyReLU, PReLU, ReLU, Softmax, ThresholdedReLU;
    • ActivationLayer.
  • Cropping layers:
    • Cropping1D, Cropping2D, Cropping3D.
  • Upsampling layers:
    • UpSampling1D, UpSampling2D, UpSampling3D.
  • Zero padding layers:
    • ZeroPadding1D, ZeroPadding2D, ZeroPadding3D.
  • Other layers:
    • Permute, RepeatVector.

The TensorFlow 1.15 Java API is currently used for the layer implementations, but the project will switch to TensorFlow 2.x in the near future. This, however, does not affect the high-level API. Inference with TensorFlow models is currently supported only on the desktop JVM.

Contributing

Read the Contributing Guidelines.

Reporting issues/Support

Please use GitHub issues for filing feature requests and bug reports. You are also welcome to join the #kotlindl channel in the Kotlin Slack.

Code of Conduct

This project and the corresponding community are governed by the JetBrains Open Source and Community Code of Conduct. Please make sure you read it.

License

KotlinDL is licensed under the Apache 2.0 License.
