
buildtensorflow's People

Contributors

karanchahal, uditarora


buildtensorflow's Issues

Activation Layer

This issue tracks the development of activation layers, such as sigmoid and ReLU, that are plug-and-play with the rest of the model.

Optimiser Layer

This issue tracks the development of the optimiser layer. Optimisers like RMSProp and SGD will extend from this base.

I met a weird error

D:\VScode\test_vscode_01\TFTEST>g++ -std=c++11 main.cpp -o main
In file included from buildTensorflow.h:1,
from main.cpp:1:
types/tensor.h:21:10: fatal error: operations/operation.h: No such file or directory
#include "operations/operation.h"
compilation terminated.
Sir, I downloaded your code, but I get this error and I don't know how to resolve it.

Broadcasting Support for N-Dimensional Array

In this project, we use an ND array to represent the underlying data of a Tensor. The Matrix class represents the ND array.

As of now, we have little to no support for broadcasting as one would expect from ND arrays (like numpy arrays).

So for now we can perform various ops like addition, multiplication, and division only on tensors of the same size, with the op applied between elements at the same position. That needs to change: when we move on to implementing ops like softmax, we need support for applying operations across tensors of unequal size.

For example, in the softmax operation we have a sum variable, which is the sum of all elements in the data, and the softmax procedure dictates that we divide each element by this sum.

This is trivially implementable for a one-dimensional array: division by a scalar solves our problem.

However, the problem becomes complicated when we have a mini-batch of examples over which we want to apply softmax along an axis. Dividing by the per-row sum needs some form of broadcasting.

This issue will track the rules of broadcasting for our project. Most likely we want broadcasting to be very intuitive and similar to numpy's functionality.

Memory Leaks in creating Tensors and Ops

A lot of the time we construct objects on the heap and never explicitly delete them, which leads to a ton of memory leaks. We need to debug and solve this. One way is to manually keep track of these pointers; another is to use smart pointers and let ownership semantics handle deallocation.

Tests for Various Tensor Operations

We should have a tests module that verifies the working of each Tensor operation. The current list of operations are:

  1. Multiplication
  2. Addition
  3. Division
  4. Exponentiation
  5. Dot Product

We can then add these tests to the CI/CD process so that they are triggered on each pull request.

Backward Propagation Pointer Bug

When we overload operators so that complex operations can be written in a single expression, something like

Tensor<float> a;
Tensor<float> b;
Tensor<float> c;
Tensor<float> d;
Tensor<float> e = a*b + c*d;
e.backward();

we get several garbage-value Tensor objects when we debug the backOp of e. This is a very puzzling bug.

Also, we cannot perform operations where there is a temporary on the right-hand side.

Tensor<float> e = a*b + c*d;

is an example of that, where a*b is a temporary.

There are bugs in the operator overloading, and fixing them is a medium-priority item. It would be very good to get this fixed, but for now we can proceed using the assembly-style, one-op-per-line approach.

Loss layer

This issue tracks the development of the Loss layer (MSE, cross entropy, etc.).

Dataset layer.

This issue tracks the development of the Dataset layer.

I'm a little inspired by the simplicity of TensorFlow Datasets and would like to model it like that.

Also, we want all the good stuff like prefetching on multiple threads to remove any data-loading bottlenecks.

Need some good thought on this.

Maybe something like this ?

auto dataset = new Dataset("mnist");
for (auto example : *dataset) {
    auto input = example->input;
    auto targets = example->targets;
}

Add nd-array support

This issue tracks the development of v0.1 of the nd array support for this project.

Currently we will support the following things:

  1. Creation of an nd-shape array
  2. Arithmetic operations on nd-shape arrays

The aim of the API is to look somewhat like this:

auto val = new vector<int>({1, 2, 3, 4});
auto shape = new vector<int>({2, 2});
Matrix<int> m(val, shape);
cout << m << endl;
m = m + m;
m = m - m;

In the future we want to add broadcasting, but not in this release.
We'll try to be as memory-efficient as possible, with as few memory allocations as we can, and responsibly avoid memory leaks by writing destructors.

Dropout layer

This issue tracks the development of the dropout layer

Dense Layer Tracker

Now we can get to building the interesting deep learning layers! With our base of modular automatic differentiation set up, and some operations running on the GPU, we are set!

We've come a long way, and let's appreciate that. But there is so much more to be done!

This issue tracks the development of the Dense layer (fully connected layer).
