karanchahal / buildtensorflow
A lightweight deep learning framework made with ❤️
This issue tracks the development of the activation layers, like sigmoid and ReLU, that are plug-and-play with the rest of the Model.
This issue tracks the development of the optimiser layer. Optimisers like RMSProp and SGD will extend from this.
D:\VScode\test_vscode_01\TFTEST>g++ -std=c++11 main.cpp -o main
In file included from buildTensorflow.h:1,
from main.cpp:1:
types/tensor.h:21:10: fatal error: operations/operation.h: No such file or directory
#include "operations/operation.h"
compilation terminated.
Sir, I downloaded your code, but I get this error and don't know how to resolve it.
In this project, we use an ND array to represent the underlying data of a Tensor. The Matrix class represents the ND array.
As of now, we have little to no support for broadcasting as one would expect from ND arrays (like numpy arrays).
So for now we can perform various ops like addition, multiplication, and division on tensors of the same size, with each op applied between the two elements that sit in the same position. However, that needs to change: when we move on to implement ops like softmax, we need support for applying operations across tensors of unequal size.
For example, in the softmax operation, we have a sum variable which is the sum of all elements in the data, and the softmax procedure dictates that we divide each element by this sum.
This is trivially implementable if we have a single-dimension array: division by a scalar solves our problem.
However, the problem becomes complicated when we have a mini-batch of examples over which we want to apply softmax along an axis. Dividing by the sum needs some form of broadcasting.
This issue will track the rules of broadcasting for our project. Most likely we want broadcasting to be very intuitive and similar to numpy's functionality.
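The simple single-dimension case described above can be sketched as follows. This is a hedged illustration over a plain std::vector, not the project's Tensor API; the scalar divide at the end is the trivial "broadcast" that becomes non-trivial once an axis is involved.

```cpp
#include <vector>
#include <cmath>

// Sketch: softmax over a flat array. Dividing every element by the
// scalar `sum` is the easy broadcasting case; batched softmax along
// an axis needs real broadcasting rules.
std::vector<float> softmax(const std::vector<float>& x) {
    std::vector<float> out(x.size());
    float sum = 0.0f;
    for (float v : x) sum += std::exp(v);      // sum of exponentials
    for (size_t i = 0; i < x.size(); ++i)
        out[i] = std::exp(x[i]) / sum;         // scalar "broadcast" divide
    return out;
}
```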
Go through the entire code and refactor/cleanup if required.
A lot of the time we are constructing objects on the heap and not explicitly deleting them. This leads to a ton of memory leaks. We need to debug this and solve it. One way is to manually keep track of these pointers. Another way is to use smart pointers and let memory be handled that way.
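The smart-pointer route could look something like this minimal sketch. `Node` and `add` are placeholder names, not the project's API; the point is that shared_ptr ownership means intermediate graph nodes never need an explicit delete.

```cpp
#include <memory>
#include <vector>

// Sketch: graph nodes held in shared_ptr. Each output keeps its
// parents alive; everything is freed automatically when the last
// reference goes out of scope, with no manual delete anywhere.
struct Node {
    std::vector<std::shared_ptr<Node>> parents;
    float value;
};

std::shared_ptr<Node> add(const std::shared_ptr<Node>& a,
                          const std::shared_ptr<Node>& b) {
    auto out = std::make_shared<Node>();
    out->value = a->value + b->value;
    out->parents = {a, b};  // ownership tracked by reference counting
    return out;
}
```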
We should have a tests module that verifies the working of each Tensor operation. The current list of operations are:
We can then add these tests to the CI/CD DevOps process so that they are triggered on each pull request.
When we try to overload operators so that complex operations can be done in a single expression, something like:
Tensor<float> a;
Tensor<float> b;
Tensor<float> c;
Tensor<float> d;
Tensor<float> e = a*b + c*d;
e.backward();
we get several garbage-value Tensor objects when we debug the backOp of e. This is a very puzzling bug.
Also, we cannot perform operations where there is a temporary on the right-hand side.
Tensor<float> e = a*b + c*d;
is an example of that, where a*b is a temporary.
There are bugs in the operator overloading, and that is a medium-priority item. It would be very good to get it fixed, but we can proceed using the assembly-style, one-op-per-line approach.
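The one-op-per-line workaround can be sketched like this. The Tensor below is a minimal value-only stand-in so the snippet compiles in isolation; the real project's Tensor builds an autodiff graph.

```cpp
// Minimal stand-in Tensor; the real one records ops for backward().
template <typename T>
struct Tensor {
    T val;
    Tensor operator*(const Tensor& o) const { return {val * o.val}; }
    Tensor operator+(const Tensor& o) const { return {val + o.val}; }
};

// One op per line: every intermediate result is bound to a named
// Tensor, so no temporary ever appears on the right-hand side.
template <typename T>
Tensor<T> combine(Tensor<T> a, Tensor<T> b, Tensor<T> c, Tensor<T> d) {
    Tensor<T> ab = a * b;    // first product, named
    Tensor<T> cd = c * d;    // second product, named
    Tensor<T> e  = ab + cd;  // sum of named intermediates
    return e;
}
```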
This issue tracks the development of the Loss layer (MSE, cross entropy, etc.)
This issue tracks the development of the Dataset layer.
I'm a little inspired by the simplicity of Tensorflow Datasets and would like to model it like that.
Also all that good stuff like prefetching and multiple threads to remove any data-loading bottlenecks.
Need some good thought on this.
Maybe something like this?
Dataset dataset("mnist");
for (auto& i : dataset) {
    auto input = i.input;
    auto targets = i.targets;
}
Build out the transpose operation so that we can perform backProp of the dot product.
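A transpose over a flat row-major buffer can be sketched as below; this is a hedged illustration over std::vector, not the project's Matrix class. It is the operation needed because the gradient of a dot product C = A·B involves the transposed operand (e.g. dC·Bᵀ).

```cpp
#include <vector>

// Sketch: transpose a rows×cols matrix stored row-major in a flat
// buffer. out[c][r] = m[r][c], with 2D indices flattened manually.
std::vector<int> transpose(const std::vector<int>& m, int rows, int cols) {
    std::vector<int> out(m.size());
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            out[c * rows + r] = m[r * cols + c];  // swap row/col indices
    return out;
}
```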
This issue tracks the development of v0.1 of the nd array support for this project.
Currently we will support the following things:
The aim of the API is to look somewhat like this:
auto val = new vector<int>({1, 2, 3, 4});
auto shape = new vector<int>({2, 2});
Matrix<int> m(val, shape);
cout << m << endl;
m = m + m;
m = m - m;
In the future we want to try to add broadcasting, but not in this release.
We'll try to be as memory-efficient as possible, with as few memory allocations as we can, and responsibly avoid memory leaks by writing destructors.
This issue tracks the development of the batch normalisation layer.
This issue tracks the development of the dropout layer
Now we can get to building the interesting deep learning layers! Now that our base of modular automatic differentiation is set up, with some operations being performed on the GPU, we are set!
We've come a long way and let's appreciate that. But so much more to be done !
This issue tracks the development of the Dense Layer (Fully Connected layer).