xxxnell / flex

Probabilistic deep learning for data streams.

License: MIT License

Scala 76.27% Python 23.42% Shell 0.23% HTML 0.08%
scala functional-programming probability probability-density-function probability-distribution statistics data-stream

flex's Introduction

Flex


Flex is a probabilistic deep learning library for data streams. It has the following features:

  • Fast. Flex provides probabilistic deep learning that is fast enough to solve real-world problems.
  • Typesafe and Functional. Types and pure functions make the code easy to understand and maintain.
  • Easy. You can program with minimal knowledge of probability theory.

Today, neural networks are widely used to solve problems in many areas. However, classical neural networks have limitations when you want to include uncertainty in the model. For example, suppose that the input and training data contain a lot of noise. If you need to detect whether the data contains false positives or false negatives, the model should represent how reliable the input and the output are. Probabilistic deep learning, also known as the Bayesian neural network, addresses this: it treats both input and output as probability distributions, and it is one of the best approaches for representing uncertainty. However, Bayesian neural networks are usually so computationally slow that they cannot readily be applied to real-world problems. Flex is fast enough to make applying Bayesian neural networks to real-world problems practical.
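The core idea can be sketched in a few lines of plain Scala. This is illustrative only, not the Flex API: a Bayesian neuron keeps a distribution over its weight, so repeated forward passes yield a predictive distribution instead of a point estimate, and the spread of that distribution quantifies uncertainty.

```scala
// Illustrative sketch only, not the Flex API.
val rng = new scala.util.Random(42)

// Prior over the weight: standard normal, mean zero and variance one.
def sampleWeight(): Double = rng.nextGaussian()

val x = 0.5 // a single (possibly noisy) input
val outputs = Seq.fill(10000)(math.tanh(sampleWeight() * x))

val mean = outputs.sum / outputs.size
val variance = outputs.map(y => (y - mean) * (y - mean)).sum / outputs.size
// `variance` quantifies how uncertain the prediction is.
```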

Getting Started

WIP. Flex is published to Maven Central and built for Scala 2.12, so you can add the following to your build.sbt:

libraryDependencies ++= Seq(
  "com.xxxnell" %% "flex-core",
  "com.xxxnell" %% "flex-chain"
).map(_ % "0.0.5")

Then, you need to import the context of Flex.

import flex.implicits._
import flex.chain.implicits._

Building a Model

We will use 3 hidden layers with 10 neurons each.

val (kin, kout) = (20, 10)
val (l0, l1, l2, l3) = (784, 10, 10, 1)
val (k0, k1, k2, k3) = (20, 20, 20, 20)
val model0 = Complex
  .empty(kin, kout)
  .addStd(l0 -> k0, l0 * l1 -> k1, l1 * l2 -> k2, l2 * l3 -> k3)
  .map { case x1 :: z1 :: rem => z1.reshape(l1, l0).mmul(x1).tanh :: rem }
  .map { case h1 :: z2 :: rem => z2.reshape(l2, l1).mmul(h1).tanh :: rem }
  .map { case h2 :: z3 :: rem => z3.reshape(l3, l2).mmul(h2) :: rem }

First, construct an empty model using Complex.empty. Second, add the variables to be used in this neural network with addStd; the prior of each variable is a standard normal distribution with a mean of zero and a variance of one. Third, define the transformation of each layer using the map operation. In this example, tanh is used as the activation function.
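To make the three `map` steps concrete, here is a plain-Scala sketch (not Flex code) of what each step computes on point values: reshape the weight variable into a matrix, multiply by the previous activation, and apply tanh everywhere except the output layer.

```scala
// Plain-Scala stand-in for the model's forward pass; Flex performs the
// same shape of computation on distributions rather than point values.
type Vec = Array[Double]
type Mat = Array[Array[Double]] // row-major, shape (rows = out, cols = in)

def mmul(w: Mat, x: Vec): Vec =
  w.map(row => row.zip(x).map { case (a, b) => a * b }.sum)
def tanhV(v: Vec): Vec = v.map(math.tanh)

// Tiny stand-in sizes for (l0, l1, l2, l3) = (784, 10, 10, 1).
val x1: Vec = Array(1.0, 0.5)
val z1: Mat = Array(Array(0.1, -0.2), Array(0.3, 0.4)) // shape (l1, l0)
val z2: Mat = Array(Array(0.5, -0.5), Array(0.2, 0.1)) // shape (l2, l1)
val z3: Mat = Array(Array(1.0, -1.0))                  // shape (l3, l2)

val h1 = tanhV(mmul(z1, x1))
val h2 = tanhV(mmul(z2, h1))
val out = mmul(z3, h2) // the last layer has no activation, as in the model
```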

Contributing

Contributions are always welcome. Any kind of contribution, such as writing a unit test, documentation, bug fix, or implementing the algorithm of Flex in another language, is helpful. It is also possible to make academic collaboration works. If you need some help, please contact me via email or twitter.

The master branch of this repository contains the latest stable release of Flex. In general, pull requests should be submitted from a separate feature branch starting from the develop branch.

For more detail, see the contributing documentation.

License

All code of Flex is available to you under the MIT license.

Copyright the maintainers.

flex's People

Contributors

kailuowang, kmsiapps, leifwickland, namukpark, tenkeyless, xxxnell


flex's Issues

Memo `icdf`

Computing the icdf is an expensive operation. Therefore, cache the icdf to improve performance.
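A minimal memoization sketch of the idea (stand-in code, not Flex's implementation): compute the expensive function once per argument and serve cached results afterwards. The logistic quantile function stands in here for the real icdf.

```scala
import scala.collection.mutable

// Wrap a function so each distinct argument is computed only once.
def memoize[A, B](f: A => B): A => B = {
  val cache = mutable.Map.empty[A, B]
  a => cache.getOrElseUpdate(a, f(a))
}

var calls = 0
// Toy icdf: the logistic quantile function stands in for the real one.
val slowIcdf: Double => Double = { p => calls += 1; math.log(p / (1 - p)) }
val icdf = memoize(slowIcdf)

icdf(0.5)
icdf(0.5) // cache hit: slowIcdf is not called again
icdf(0.9)
```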

Support for comprehension

Currently, only Sketch has working monad operations, so if you include any Dist other than Sketch in a for comprehension, it will not work properly.
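This is why full monad operations are needed: a for comprehension desugars to `flatMap` and `map` calls, shown here with Option as a stand-in for Dist.

```scala
// A for comprehension over two values...
val sum: Option[Int] = for {
  a <- Option(1)
  b <- Option(2)
} yield a + b

// ...desugars to exactly this call chain; any Dist that lacks a working
// flatMap/map cannot participate in the comprehension.
val sum2: Option[Int] = Option(1).flatMap(a => Option(2).map(b => a + b))
```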

Experiment with large scale `ConcatSmoothingPs`

When Sketch estimates the density, the resulting KL-divergence value is too low because the boundary is not processed properly. Therefore, as a way of smoothing the edges, use a large-scale ConcatSmoothingPs and then re-examine the KL-divergence when performing deepUpdate.

How about adding CODE_OF_CONDUCT.md to flip?

Type of issue

  • Doc

Description

  • I think an open-source project should have a code of conduct. How about adding CODE_OF_CONDUCT.md to flip?
    You could add CODE_OF_CONDUCT.md from the link below.

Implement `SelectiveSketch`

Implement SelectiveSketch, which performs deepUpdate selectively, only when there is a discrepancy between the temporarily collected sample data and the distribution recorded by Sketch.
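A hedged sketch of the selective idea (all names hypothetical, not Flex's API): buffer incoming samples and trigger the expensive deep update only when they disagree with the recorded distribution beyond some threshold. A simple mean-shift check stands in for a real discrepancy measure.

```scala
// Discrepancy between buffered samples and a summary of the recorded
// distribution; a real implementation would use a stronger statistic.
def discrepancy(samples: Seq[Double], recordedMean: Double): Double =
  math.abs(samples.sum / samples.size - recordedMean)

def shouldDeepUpdate(samples: Seq[Double],
                     recordedMean: Double,
                     threshold: Double): Boolean =
  discrepancy(samples, recordedMean) > threshold

val recordedMean = 0.0 // summary of what Sketch has recorded so far
val drifted = shouldDeepUpdate(Seq(5.0, 6.0, 7.0), recordedMean, 1.0)
val stable  = shouldDeepUpdate(Seq(-0.1, 0.2, 0.05), recordedMean, 1.0)
```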

Add various samplings

Flip seems to be able to compose various sampling methodologies such as MCMC or Gibbs sampling.

Abstract and separate `sampling`

The sampling algorithm has now been independently packaged to separate the sampling methods. However, the legacy code is tightly coupled, so it has to be replaced with the new one.

See cmapForEqualSpaceCumCorr of EqualSpaceCdfUpdate

sbt task to execute all experiments

So far, I had to call runMain to run each implemented experiment. However, as the number of experiments increases, it is no longer practical to run them one by one. Therefore, an sbt task that executes all experiments is needed.
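One possible shape for such a task, sketched against sbt 1.x (the key name and the package filter are assumptions, not existing code):

```scala
// build.sbt sketch: discover every main class under flex.experiment and
// run them in sequence.
lazy val experiment = taskKey[Unit]("Run all experiments")

experiment := {
  val cp    = (Runtime / fullClasspath).value.map(_.data)
  val run   = (Compile / runner).value
  val log   = streams.value.log
  val mains = (Compile / discoveredMainClasses).value
    .filter(_.startsWith("flex.experiment"))
  mains.foreach(main => run.run(main, cp, Nil, log))
}
```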

`icdfPlot` in `updateCmap` of `EqualSpaceCdfUpdate` doesn't return infinity at 0 and 1.

Theoretically, the inverse cdf (quantile function) returns ±∞ at 0 and 1. However, due to the limitations of the way Sketch treats boundaries, this function only returns a finite large value there.

For now, we take the approach of artificially removing the two values of the boundaries, but we need a more sophisticated way of getting a new Cmap in this function.

Plot with Measurable

Currently, plot contains primitive records only. However, in some cases, a plot with a measurable range, or RangeM, would be useful.

Should sampling return Option?

Currently, sampling of SamplingDist returns an Option of DensityPlot for the empty-structure Sketch. However, it could return DensityPlot.empty instead of None.
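A toy sketch of the two designs (DensityPlot here is a stand-in case class, not Flex's type): returning Option versus a total function with an empty default.

```scala
// Stand-in for Flex's DensityPlot.
case class DensityPlot(records: List[(Double, Double)])
object DensityPlot { val empty: DensityPlot = DensityPlot(Nil) }

// Current design: signal "no data" with None.
def samplingOpt(records: List[(Double, Double)]): Option[DensityPlot] =
  if (records.isEmpty) None else Some(DensityPlot(records))

// Proposed design: always return a plot, possibly empty.
def samplingTotal(records: List[(Double, Double)]): DensityPlot =
  if (records.isEmpty) DensityPlot.empty else DensityPlot(records)
```

The total version saves callers a pattern match, at the cost of making "no data yet" indistinguishable from a genuinely empty plot.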

`bind` returns NaN

bind returns NaN for this configuration:

    val samplingNo = 50

    implicit val conf: SketchConf = SketchConf(
      startThreshold = 50,
      thresholdPeriod = 100,
      boundaryCorr = 0.1,
      decayFactor = 0,
      queueSize = 30,
      cmapSize = samplingNo,
      cmapNo = 5,
      cmapStart = Some(-10d),
      cmapEnd = Some(10),
      counterSize = samplingNo
    )

For more detail, see the code.

Execute all experiment codes in CLI

The sbt experiment command in root should execute all experiments (cf. the flex.experiment package). However, for now, only the one experiment given by arg0 is executed. Therefore, the experiment command without an argument must run all experiment code. See Tasks.

Modularize `smoothing`

smoothing operations are used in several places; their use in UpdateCmap and DeepUpdate is especially important.

As part of refactoring the smoothing operation, several methods should be applicable dynamically.

Improve KL-divergence accuracy

When calculating the KL-divergence, the boundary vanishes, so the calculation does not include it. Therefore, when the sampling number is too small (fewer than 100) or the ratio of the boundary is too high (greater than 0.01), the numerical KL-divergence result is inaccurate.
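The effect can be shown numerically with toy histogram data (not Flex code): dropping the boundary bins biases the estimate, because the remaining mass no longer sums to one and the boundary's contribution to the divergence is lost.

```scala
// Discrete KL divergence over histogram bins.
def kl(p: Seq[Double], q: Seq[Double]): Double =
  p.zip(q).collect { case (pi, qi) if pi > 0 && qi > 0 =>
    pi * math.log(pi / qi)
  }.sum

val p = Seq(0.05, 0.20, 0.50, 0.20, 0.05) // true mass per bin
val q = Seq(0.10, 0.25, 0.30, 0.25, 0.10) // estimated mass per bin

val full       = kl(p, q)
val noBoundary = kl(p.drop(1).dropRight(1), q.drop(1).dropRight(1))
// `full` and `noBoundary` differ: the boundary bins carry real divergence.
```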

Apply ND4J

ND4J, or N-Dimensional Arrays for Java, is a scientific computing library for the JVM. It is meant to be used in production environments, which means routines are designed to run fast with minimal RAM requirements.
It would be better to replace the array computation with ND4J.
