libfann / fann


Official github repository for Fast Artificial Neural Network Library (FANN)

License: GNU Lesser General Public License v2.1


fann's Introduction

Fast Artificial Neural Network Library


Fast Artificial Neural Network (FANN) Library is a free open source neural network library, which implements multilayer artificial neural networks in C with support for both fully connected and sparsely connected networks.

Cross-platform execution in both fixed and floating point is supported. It includes a framework for easy handling of training data sets. It is easy to use, versatile, well documented, and fast.

Bindings to more than 15 programming languages are available.

An easy to read introduction article and a reference manual accompany the library, with examples and recommendations on how to use it.

Several graphical user interfaces are also available for the library.

FANN Features

  • Multilayer Artificial Neural Network Library in C
  • Backpropagation training (RPROP, Quickprop, Batch, Incremental)
  • Evolving topology training which dynamically builds and trains the ANN (Cascade2)
  • Easy to use (create, train and run an ANN with just three function calls)
  • Fast (up to 150 times faster execution than other libraries)
  • Versatile (possible to adjust many parameters and features on-the-fly)
  • Well documented (An easy to read introduction article, a thorough reference manual, and a 50+ page university report describing the implementation considerations etc.)
  • Cross-platform (configure script for Linux and UNIX, DLL files for Windows; project files for MSVC++ and Borland compilers are also reported to work)
  • Several different activation functions implemented (including stepwise linear functions for that extra bit of speed)
  • Easy to save and load entire ANNs
  • Several easy to use examples
  • Can use both floating point and fixed point numbers (actually both float, double and int are available)
  • Cache optimized (for that extra bit of speed)
  • Open source, but can still be used in commercial applications (licensed under the LGPL)
  • Framework for easy handling of training data sets
  • Graphical Interfaces
  • Language Bindings to a large number of different programming languages
  • Widely used (approximately 100 downloads a day)

To Install

On Linux

From Source

First you'll want to clone the repository:

git clone https://github.com/libfann/fann.git

Once that's finished, navigate to the root directory; in this case it would be ./fann:

cd ./fann

Then run CMake:

cmake .

After that, build and install the library with elevated privileges:

sudo make install

That's it! If everything went right, you should see a lot of text, and FANN should be installed. On Linux you may also need to run sudo ldconfig afterwards so the dynamic linker picks up the newly installed shared library.

Building fann - Using vcpkg

You can download and install fann using the vcpkg dependency manager:

git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install fann

The fann port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.

To Learn More

To get started with FANN, go to the FANN help site, which includes links to all the available resources.

For more information about FANN, please refer to the FANN website.

fann's People

Contributors

afck, andersfylling, anubisthejackle, bmwiedemann, bridgerrholt, bukka, codemercenary, criptych, drdub, gasagna, glumb, joelself, jschueller, kaizhu256, ksuszka, lasote, lilywangl, maxkellermann, mikayex, mjunix, nathanepstein, saraneuhaus, shapedsundew9, sigill, slyshyko, steffennissen, tay10r, varunagrawal, yukota, zasdfgbnm


fann's Issues

RPROP algorithm

Comparing the fann_update_weights_irpropm function with the pseudo code in the paper about the algorithm (Igel and Hüsken, 2000, citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.1332) there is a difference in the treatment of slope = 0, which happens by definition after a sign change.

In the paper, the step size is decreased and the weight is not modified. In the next epoch, the reduced step size is applied.

In FANN, the step size is decreased and the weight is modified with the reduced step size as if the slope were positive. In the next epoch, the step size is increased again.

Is there a reason for the differences or is it a bug?

Maybe a bug?

Hello!
Doing a code review of the fann source code, I found this:

FANN_EXTERNAL void FANN_API fann_scale_input( struct fann *ann, fann_type *input_vector )
{
	unsigned cur_neuron;
	if(ann->scale_mean_in == NULL)
	{
		fann_error( (struct fann_error *) ann, FANN_E_SCALE_NOT_PRESENT );
		return;
	}
...
FANN_EXTERNAL void FANN_API fann_descale_input( struct fann *ann, fann_type *input_vector )
{
	unsigned cur_neuron;
	if(ann->scale_mean_in == NULL)
	{
		fann_error( (struct fann_error *) ann, FANN_E_SCALE_NOT_PRESENT );
		return;
	}

And this probable copy & paste mistake:

FANN_EXTERNAL void FANN_API fann_scale_output( struct fann *ann, fann_type *output_vector )
{
	unsigned cur_neuron;
	if(ann->scale_mean_in == NULL) <<<<<????? copy&paste mistake ?????? scale_mean_out
	{
		fann_error( (struct fann_error *) ann, FANN_E_SCALE_NOT_PRESENT );
		return;
	}
...
FANN_EXTERNAL void FANN_API fann_descale_output( struct fann *ann, fann_type *output_vector )
{
	unsigned cur_neuron;
	if(ann->scale_mean_in == NULL) <<<<<????? copy&paste mistake ?????? scale_mean_out
	{
		fann_error( (struct fann_error *) ann, FANN_E_SCALE_NOT_PRESENT );
		return;
	}

Cheers!

compiletest Target

Hi,

What is the intent of the compiletest target in the examples folder?

There are several compile errors when running this target, all due to the -ansi option, which forces conformance with a very old version of the C and C++ standards that the code does not conform to (though it does conform to more modern standards).

I have "fixed" these issues by adding the appropriate standard version to the gcc/g++ command lines and a very minor tweak in fann_io.c which should still mean the code is compatible with standards compliant compilers. I was going to set up a pull request but then it occurred to me that it may be your intent that this target fails?

The comment says "compiletest is used to test whether the library will compile easily in other compilers" but does it really mean "compliant with C90/C++98"?

Thx

Why multiple binaries?

This is more just a question about the design of this library. Why was it designed to create multiple binaries (i.e. libfloatfann.so, libdoublefann.so, etc)? Perhaps I am somewhat ignorant of your logic but I always have thought that a single binary is preferred for a library. This isn't meant as a criticism but more as a curiosity question. If you could please explain this I would appreciate it.

fann_create_shortcut seems to create fully connected networks.

The documentation says that fann_create_shortcut creates a network "which is not fully connected", but in fann.c:457, connection_rate is set to 1.

Is it correct that it does, in fact, create a connection from every neuron to every neuron in every later layer?

Is there a reason why the combination (a fann_create_sparse_shortcut function with a connection_rate parameter) wouldn't make sense?

Fann GPU

Hi so I saw on the website there is a tutorial for fann_gpu but there is no file fann_gpu.h. I downloaded the linked tar on the blog and attempted to install it but nothing works still. Is this still supported and is there anything newer since the blog post?

example simple_train/simple_test does not give right output on Windows 7


The xor.data file uses a polarized version of the binary numbers: 0, 1 -> -1, 1 respectively.
The output of the trained neural net over the 4 inputs [-1,-1], [1,1], [-1,1], [1,-1] varies wildly between training runs.

Why is this so? Can't we make your example more reproducible?

Compiler = GCC 4.9.2, Dev-C++ on Windows 7.

Sparsely connected net is not sparse

Hello!
I'm trying to build a sparsely connected network. I have a specific connection structure, so I create a fully connected net and then change its connections. It seems that I can't change the number of connections. Please show me an example of creating a sparse neural net and setting its connections manually.

Here is my code:

// Set sparse connections
{
unsigned int nweights0 = get_total_connections();
std::vector<FANN::connection> conns;
conns = /* some value */; // conns.size() < nweights0 !!
set_weight_array(&conns[0], conns.size());
}

// Check connection count
{
unsigned int nweights1 = get_total_connections(); // nweights1 == nweights0, why?
std::vector<FANN::connection> conns(nweights1);
get_connection_array(&conns[0]); // conns.size() == nweights0..
}

So, I set my own connection array, and the number of connections is less than in a fully connected net with the same structure.
But when I get the connection array back, its size is as if the network were fully connected.
Am I setting the connections the right way? Am I getting them the right way?

question: fann for time series outliers

Hi,

I hope this is not a stupid question - I am still learning...

I am trying to find a way of predicting outliers for generating health alarm notifications on time series data, with http://my-netdata.io (I am the founder of netdata - an open source performance monitoring solution).

Is this something that can be done with fann?

So, the idea is:

  1. netdata collects around 2000 - 5000 different metrics per server per second, so given enough time we can have very detailed datasets.

  2. Ideally, I would like to have one ANN for each metric collected. Each ANN should continuously learn from new data as it is collected. If this cannot be done with fann, netdata could spawn threads at regular intervals to train the ANNs with newly collected data and load the trained ANNs into its memory.

  3. health checks performed by netdata should somehow test newly collected data against the ANNs to find out whether the collected data are outliers. If they are, netdata could raise health alarms.

multithreading support

Hello!

I have seen some training data merge functions in fann, so I wondered how much multithreading support there is for FANN?

Is FANN thread-safe? I.e., does it use any static or global variables, etc.?

Is there internal multithreading support like training on multiple threads without the user doing anything related to organizing the single threads/per thread data?

Maybe add this information to the README.md file.

Division by 0 error when all train samples for an input neuron are same value.

Hi,

When all of the training samples for a given input neuron are the same, then 'ann->scale_deviation_in' obtains a value of 0.0, causing a division-by-zero error on line 1034 of 'fann_train_data.c'.

I've added an 'if' guard against the div-by-0 in 'fann_scale_input()' (and similar in 'fann_descale_input()') ...

for( cur_neuron = 0; cur_neuron < ann->num_input; cur_neuron++ )
	if(ann->scale_deviation_in[ cur_neuron ] != 0.0)
		input_vector[ cur_neuron ] =
			(
				( input_vector[ cur_neuron ] - ann->scale_mean_in[ cur_neuron ] )
				/ ann->scale_deviation_in[ cur_neuron ]
				- ( (fann_type)-1.0 ) /* This is old_min */
			)
			* ann->scale_factor_in[ cur_neuron ]
			+ ann->scale_new_min_in[ cur_neuron ];

Does this seem a reasonable solution to the problem?

Regards.

example error data files url

train_data = fann_read_train_from_file("../datasets/mushroom.train");

This path does not work; it should start with ../.. instead.

(The same goes for the test file.)
Please check the other examples too.

Coding style - indent

Currently there is quite a mix up of different coding styles. It would be great if it could be a bit more consistent.

I would like to start with indent. All library C files and most header files use mostly tabs so I think we should keep using tabs. Currently the main exception is the cpp headers (fann_data_cpp.h and fann_train_data_data.h) and tests. If there are no objections I would like to make it consistent and use tabs everywhere! Hope that it's ok with everyone!

Ping @libfann/core

Installation doesn't compile

Hi,

I followed the installation guide in the readme file, and at the make step the compiler throws an error:

/home/jakob/fann/tests/fann_test.cpp: In member function ‘virtual void FannTest_CreateStandardThreeLayers_Test::TestBody()’:
/home/jakob/fann/tests/fann_test.cpp:64:67: error: taking address of temporary array
AssertCreateAndCopy(net, 3, (unsigned int[]) {2, 3, 4}, 11, 25);
^
/home/jakob/fann/tests/fann_test.cpp: In member function ‘virtual void FannTest_CreateSparseFourLayers_Test::TestBody()’:
/home/jakob/fann/tests/fann_test.cpp:93:69: error: taking address of temporary array
AssertCreateAndCopy(net, 4, (unsigned int[]){2, 3, 4, 5}, 17, 31);
^
/home/jakob/fann/tests/fann_test.cpp: In member function ‘virtual void FannTest_CreateSparseFourLayersUsingCreateMethod_Test::TestBody()’:
/home/jakob/fann/tests/fann_test.cpp:98:69: error: taking address of temporary array
AssertCreateAndCopy(net, 4, (unsigned int[]){2, 3, 4, 5}, 17, 31);
^
/home/jakob/fann/tests/fann_test.cpp: In member function ‘virtual void FannTest_CreateShortcutFourLayers_Test::TestBody()’:
/home/jakob/fann/tests/fann_test.cpp:121:69: error: taking address of temporary array
AssertCreateAndCopy(net, 4, (unsigned int[]){2, 3, 4, 5}, 15, 83);
^
/home/jakob/fann/tests/fann_test.cpp: In member function ‘virtual void FannTest_CreateShortcutFourLayersUsingCreateMethod_Test::TestBody()’:
/home/jakob/fann/tests/fann_test.cpp:127:69: error: taking address of temporary array
AssertCreateAndCopy(net, 4, (unsigned int[]){2, 3, 4, 5}, 15, 83);
^
/home/jakob/fann/tests/fann_test.cpp: In member function ‘virtual void FannTest_CreateFromFile_Test::TestBody()’:
/home/jakob/fann/tests/fann_test.cpp:151:76: error: taking address of temporary array
AssertCreateAndCopy(netToBeLoaded, 3, (unsigned int[]){2, 3, 4}, 11, 25);
^
/home/jakob/fann/tests/fann_test.cpp: In member function ‘virtual void FannTest_CreateFromFileUsingCreateMethod_Test::TestBody()’:
/home/jakob/fann/tests/fann_test.cpp:161:66: error: taking address of temporary array
AssertCreateAndCopy(net, 3, (unsigned int[]){2, 3, 4}, 11, 25);

but after changing all the (unsigned int[]) casts to (const unsigned int[]) in fann_test.cpp, the file compiled fine.
In fann_test_train.cpp there is the same problem, but there you have to change the cast to (fann_type*)(const fann_type[]) to fix it.
I can create a pull request if you want.

Best,
Jakob

Python binding location?

Hello,

With older versions described here: [FANN Python install](http://leenissen.dk/fann/html/files2/installation-txt.html#Python_Install), there used to be a python folder in the root of the FANN project.

Has the python binding been removed from the main project? Or has it been moved to a separate repository (maybe this one: [FANN python binding](https://github.com/orso82/python-fann))?

Thanks.

Is it possible to convert a net to a C function?

I need a small answering machine. I have a trained network. Is there any script or other way to convert my network into a single C function?
For example, if I train the xor example, is it possible to produce

float function(float a, float b) {}

I don't need training or the whole library; I only need the answer.

GPU compiling issues

I downloaded the FANN library written by Seth that runs on the GPU:
http://leenissen.dk/fann/forum/viewtopic.php?f=2&t=658

I am running Ubuntu with OpenGL and GLUT on an NVIDIA graphics card. I tried to compile with fann_gpu.h and hit a brick wall here.

Here is the example code

gpu.c

#include "fann/src/gpu/fann_gpu.h"

layer *A, *B, *C, *D, *E;
float *input, *weight_matrix, *mask_matrix;
int i,j;
double start, end, run_time;

//Dummy function for filling weights and mask with dummy data; it preserves the texture sizes, which are multiples of 4.
void fillWeights(layer *target);

//dummy vector data
void fillVector(layer *target);

int main(int argc, char **argv){
char *error;
int n = 5;

//start OpenGL
initOpenGL();

//testing system compatibility
if ((error = test()) != 0){
printf("Error: %s\n", error);
return -1;
}

//initializing system.
if (!init()){
printf("Init not successful...");
return -1;
}

//create layers using the sigmoid_sum fragment program.
A = generateLayer("sigmoid_sum_masked.fp", 4, 40, 0);
B = generateLayer("sigmoid_sum_masked.fp", 40, 16, 0);
C = generateLayer("sigmoid_sum_masked.fp", 40, 22, 16);
D = generateLayer("sigmoid_sum_masked.fp", 38, 5, 0);
E = generateLayer(0, 5, 0, 0);

setOutput(A, B);
setInput(C, A);
setOutput(B, D);
setOutput(C, D);
setOutput(D, E);

//dummy values.
//fill vectors with values.

fillWeights(A);
copyWeightsToTexture(weight_matrix, A);
copyMaskToTexture(mask_matrix, A);
free(weight_matrix);
free(mask_matrix);

fillWeights(B);
copyWeightsToTexture(weight_matrix, B);
copyMaskToTexture(mask_matrix, B);
free(weight_matrix);
free(mask_matrix);

fillWeights(C);
copyWeightsToTexture(weight_matrix, C);
copyMaskToTexture(mask_matrix, C);
free(weight_matrix);
free(mask_matrix);

fillWeights(D);
copyWeightsToTexture(weight_matrix, D);
copyMaskToTexture(mask_matrix, D);
free(mask_matrix);
free(weight_matrix);

//Execute the network n times.
while (n-->0){
fillVector(A);
copyVectorToTexture(input, A);
run(A);
run(B);
run(C);
run(D);
printLayer(E);
free(input);
}

//clean up
destroyLayer(A);
destroyLayer(B);
destroyLayer(C);
destroyLayer(D);
destroyLayer(E);

return 0;
}

//Dummy function for filling weights and mask with dummy data; it preserves the texture sizes, which are multiples of 4.
void fillWeights(layer *target){
weight_matrix = calloc(target->out_size * target->size, sizeof(float));
mask_matrix = calloc(target->out_size * target->size, sizeof(float));

for(i=0; i<target->out_size; i++){
for(j=0; j<target->size; j++){
weight_matrix[j+i*target->size] = RAND_UNI;
//weight_matrix[j+i*target->size] = j+i*target->size;
//weight_matrix[j+i*target->size] = 1.0f;
mask_matrix[j+i*target->size] = 1;
}
}
}

//dummy vector data
void fillVector(layer *target){
input = malloc(sizeof(float)*target->size);
for(i=0; i<target->size; i++){
input[i] = RAND_UNI;
//input[i] = i;
//input[i] = 1;
}
}

Compiling this file gives the following. Am I missing something here?

pbu@pbu-desktop:~/Desktop$ gcc gpu.c -o gpu -lm -lfann -lglut -lGLU -lGL -lm
In file included from gpu.c:1:0:
fann/src/gpu/fann_gpu.h:77:1: error: unknown type name ‘PFNGLGETPROGRAMIVPROC’
PFNGLGETPROGRAMIVPROC glGetProgramiv;
^
fann/src/gpu/fann_gpu.h:78:1: error: unknown type name ‘PFNGLCREATEPROGRAMPROC’
PFNGLCREATEPROGRAMPROC glCreateProgram;
^
fann/src/gpu/fann_gpu.h:79:1: error: unknown type name ‘PFNGLCREATEPROGRAMOBJECTARBPROC’
PFNGLCREATEPROGRAMOBJECTARBPROC glCreateProgram2;
^
fann/src/gpu/fann_gpu.h:80:1: error: unknown type name ‘PFNGLDELETEPROGRAMPROC’
PFNGLDELETEPROGRAMPROC glDeleteProgram;
^
fann/src/gpu/fann_gpu.h:81:1: error: unknown type name ‘PFNGLGETSHADERIVPROC’
PFNGLGETSHADERIVPROC glGetShaderiv;
^
fann/src/gpu/fann_gpu.h:82:1: error: unknown type name ‘PFNGLCREATESHADERPROC’
PFNGLCREATESHADERPROC glCreateShader;
^
fann/src/gpu/fann_gpu.h:83:1: error: unknown type name ‘PFNGLDELETESHADERPROC’
PFNGLDELETESHADERPROC glDeleteShader;
^
fann/src/gpu/fann_gpu.h:84:1: error: unknown type name ‘PFNGLSHADERSOURCEPROC’
PFNGLSHADERSOURCEPROC glShaderSource;
^
fann/src/gpu/fann_gpu.h:85:1: error: unknown type name ‘PFNGLCOMPILESHADERPROC’
PFNGLCOMPILESHADERPROC glCompileShader;
^
fann/src/gpu/fann_gpu.h:86:1: error: unknown type name ‘PFNGLATTACHSHADERPROC’
PFNGLATTACHSHADERPROC glAttachShader;
^
fann/src/gpu/fann_gpu.h:87:1: error: unknown type name ‘PFNGLGETSHADERINFOLOGPROC’
PFNGLGETSHADERINFOLOGPROC glGetShaderInfoLog;
^
fann/src/gpu/fann_gpu.h:88:1: error: unknown type name ‘PFNGLGETPROGRAMINFOLOGPROC’
PFNGLGETPROGRAMINFOLOGPROC glGetProgramInfoLog;
^
fann/src/gpu/fann_gpu.h:89:1: error: unknown type name ‘PFNGLLINKPROGRAMPROC’
PFNGLLINKPROGRAMPROC glLinkProgram;
^
fann/src/gpu/fann_gpu.h:90:1: error: unknown type name ‘PFNGLUSEPROGRAMPROC’
PFNGLUSEPROGRAMPROC glUseProgram;
^
fann/src/gpu/fann_gpu.h:91:1: error: unknown type name ‘PFNGLGETUNIFORMLOCATIONPROC’
PFNGLGETUNIFORMLOCATIONPROC glGetUniformLocation;
^
fann/src/gpu/fann_gpu.h:92:1: error: unknown type name ‘PFNGLUNIFORM1FPROC’
PFNGLUNIFORM1FPROC glUniform1f;
^
fann/src/gpu/fann_gpu.h:93:1: error: unknown type name ‘PFNGLUNIFORM1IPROC’
PFNGLUNIFORM1IPROC glUniform1i;
^
fann/src/gpu/fann_gpu.h:96:1: error: unknown type name ‘PFNGLGENFRAMEBUFFERSEXTPROC’
PFNGLGENFRAMEBUFFERSEXTPROC glGenFramebuffersEXT;
^
fann/src/gpu/fann_gpu.h:97:1: error: unknown type name ‘PFNGLBINDFRAMEBUFFEREXTPROC’
PFNGLBINDFRAMEBUFFEREXTPROC glBindFramebufferEXT;
^
fann/src/gpu/fann_gpu.h:98:1: error: unknown type name ‘PFNGLFRAMEBUFFERTEXTURE2DEXTPROC’
PFNGLFRAMEBUFFERTEXTURE2DEXTPROC glFramebufferTexture2DEXT;
^
fann/src/gpu/fann_gpu.h:99:1: error: unknown type name ‘PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC’
PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC glCheckFramebufferStatusEXT;
^
fann/src/gpu/fann_gpu.h:113:1: error: unknown type name ‘GLuint’
GLuint output_fb;
^
fann/src/gpu/fann_gpu.h:114:1: error: unknown type name ‘GLuint’
GLuint glutWindowHandle;
^
fann/src/gpu/fann_gpu.h:122:2: error: unknown type name ‘GLuint’
GLuint input_texture;
^
fann/src/gpu/fann_gpu.h:123:2: error: unknown type name ‘GLuint’
GLuint original_texture;
^
fann/src/gpu/fann_gpu.h:124:2: error: unknown type name ‘GLuint’
GLuint weight_texture;
^
fann/src/gpu/fann_gpu.h:125:2: error: unknown type name ‘GLuint’
GLuint mask_texture;
^
fann/src/gpu/fann_gpu.h:126:2: error: unknown type name ‘GLuint’
GLuint program;
^
fann/src/gpu/fann_gpu.h:131:2: error: unknown type name ‘GLuint’
GLuint out_texture;
^
fann/src/gpu/fann_gpu.h:135:2: error: unknown type name ‘GLint’
GLint shader_input_vector;
^
fann/src/gpu/fann_gpu.h:136:2: error: unknown type name ‘GLint’
GLint shader_weight_matrix;
^
fann/src/gpu/fann_gpu.h:137:2: error: unknown type name ‘GLint’
GLint shader_mask_matrix;
^
fann/src/gpu/fann_gpu.h:143:1: error: unknown type name ‘GLenum’
GLenum texTarget;
^
fann/src/gpu/fann_gpu.h:144:1: error: unknown type name ‘GLenum’
GLenum texInternalFormat;
^
fann/src/gpu/fann_gpu.h:145:1: error: unknown type name ‘GLenum’
GLenum texFormat;
^
In file included from gpu.c:1:0:
fann/src/gpu/fann_gpu.h:193:1: error: unknown type name ‘GLuint’
GLuint attachShaderProgram(char* source_file, layer * target);
^
fann/src/gpu/fann_gpu.h:329:15: error: unknown type name ‘GLuint’
void printLog(GLuint obj);
^
fann/src/gpu/fann_gpu.h:331:1: error: unknown type name ‘GLuint’
void setupTexture(const GLuint texID, GLuint width, GLuint height, GLenum tex_target, GLenum tex_format, GLenum internal_format);
^
fann/src/gpu/fann_gpu.h:331:39: error: unknown type name ‘GLuint’
void setupTexture(const GLuint texID, GLuint width, GLuint height, GLenum tex_target, GLenum tex_format, GLenum internal_format);
^
fann/src/gpu/fann_gpu.h:331:53: error: unknown type name ‘GLuint’
void setupTexture(const GLuint texID, GLuint width, GLuint height, GLenum tex_target, GLenum tex_format, GLenum internal_format);
^
fann/src/gpu/fann_gpu.h:331:68: error: unknown type name ‘GLenum’
void setupTexture(const GLuint texID, GLuint width, GLuint height, GLenum tex_target, GLenum tex_format, GLenum internal_format);
^
fann/src/gpu/fann_gpu.h:331:87: error: unknown type name ‘GLenum’
void setupTexture(const GLuint texID, GLuint width, GLuint height, GLenum tex_target, GLenum tex_format, GLenum internal_format);
^
fann/src/gpu/fann_gpu.h:331:106: error: unknown type name ‘GLenum’
void setupTexture(const GLuint texID, GLuint width, GLuint height, GLenum tex_target, GLenum tex_format, GLenum internal_format);
^

<fann_cpp.h> set_train_data + destroy_train = heap error

In the C++ wrapper:
When using set_train_data to set a training_data object, I always had a heap error when destroying said object (both from .destroy_train() and from the destructor). This did not happen when I used read_train_from_file.

I believe it's due to the way set_train_data organizes the dynamic arrays. When it copies the double pointers for input and output, it creates a new fann_train_data struct and allocates memory for the double pointers (making them pointers to dynamic arrays of pointers to dynamic arrays of data, just like the given input and output double pointers). It then packs all the instances for input, and for output, into one big dynamic array each, and makes each pointer in the fann_train_data double-pointer arrays point to a different section of that big array. That makes sense: why scatter the data and then point to it, when you can keep it together and point to the corresponding sections?

The problem is that when you go to delete the dynamic arrays (all but the last one in each array of pointers), it looks as if you wrote past the predetermined bounds of the array, because there is a value recorded right next to its end. You didn't really write out of bounds, but to the allocator it looks like you did.

I'm not an expert in C++, but that's my guess. I could be completely wrong though. Also, I found a temporary fix:
In the fann_cpp.h header: in the destroy_train() member, I replaced:

    fann_destroy_train(train_data);

with

    if (train_data->input != NULL)
        fann_safe_free(train_data->input[0]);
    if (train_data->output != NULL)
        fann_safe_free(train_data->output[0]);
    fann_safe_free(train_data->input);
    fann_safe_free(train_data->output);
    fann_safe_free(train_data);

That fix makes the destructor work for objects that use set_train_data, but it made objects that use read_train_from_file throw the same error when the destructor was called.

Let me know what you think. I'd like to know if my theory is correct or not.
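The flattened layout described above can be sketched in a few lines (the names here are mine, not FANN's): only row 0's pointer owns the big data block, so freeing each row separately, or freeing only the row-pointer array, corrupts or leaks the heap.

```c
#include <stdlib.h>

/* FANN-style flattened 2-D allocation: one contiguous data block,
 * with an array of row pointers aimed into it. */
float **alloc_flat(size_t rows, size_t cols)
{
    float **p = malloc(rows * sizeof(float *));
    float *block = calloc(rows * cols, sizeof(float));
    for (size_t i = 0; i < rows; i++)
        p[i] = block + i * cols;   /* every row points into ONE block */
    return p;
}

void free_flat(float **p)
{
    free(p[0]);   /* frees the single data block */
    free(p);      /* then the row-pointer array; never free p[1], p[2], ... */
}
```

Mixing this layout with a destructor that assumes one allocation per row produces exactly the "looks like you wrote out of bounds" heap error described above.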

Compiling fann

[ 71%] Built target gtest_main
Scanning dependencies of target fann_tests
[ 78%] Building CXX object tests/CMakeFiles/fann_tests.dir/main.cpp.o
[ 85%] Building CXX object tests/CMakeFiles/fann_tests.dir/fann_test.cpp.o
[ 92%] Building CXX object tests/CMakeFiles/fann_tests.dir/fann_test_data.cpp.o
[100%] Building CXX object tests/CMakeFiles/fann_tests.dir/fann_test_train.cpp.o
In file included from /root/src/fann/tests/fann_test_train.cpp:1:
/root/src/fann/tests/fann_test_train.h:8: error: a brace-enclosed initializer is not allowed here before ‘{’ token
/root/src/fann/tests/fann_test_train.h:12: error: ISO C++ forbids initialization of member ‘xorInput’
/root/src/fann/tests/fann_test_train.h:12: error: making ‘xorInput’ static
/root/src/fann/tests/fann_test_train.h:12: error: invalid in-class initialization of static data member of non-integral type ‘fann_type [8]’
/root/src/fann/tests/fann_test_train.h:13: error: a brace-enclosed initializer is not allowed here before ‘{’ token
/root/src/fann/tests/fann_test_train.h:17: error: ISO C++ forbids initialization of member ‘xorOutput’
/root/src/fann/tests/fann_test_train.h:17: error: making ‘xorOutput’ static
/root/src/fann/tests/fann_test_train.h:17: error: invalid in-class initialization of static data member of non-integral type ‘fann_type [4]’
/root/src/fann/tests/fann_test_train.cpp: In member function ‘virtual void FannTestTrain_TrainOnDateSimpleXor_Test::TestBody()’:
/root/src/fann/tests/fann_test_train.cpp:17: error: ‘xorInput’ was not declared in this scope
/root/src/fann/tests/fann_test_train.cpp:17: error: ‘xorOutput’ was not declared in this scope
make[2]: *** [tests/CMakeFiles/fann_tests.dir/fann_test_train.cpp.o] Error 1
make[1]: *** [tests/CMakeFiles/fann_tests.dir/all] Error 2
make: *** [all] Error 2

What to do?

Error when constructing training_data object.

Hello.
I stumbled on a pointer that is not initialized to NULL.

Code:

FANN::training_data* td = NULL;  // Pointer to training_data

FANN::training_data trainData; // Initializing new object
trainData.set_train_data(20, num_input, input, num_output, output); // Filling it with 20 training sets

td = new FANN::training_data(trainData); // Constructing object and copying trainData

However, in the constructor (fann_cpp.h):

training_data(const training_data &data)
{
      destroy_train();   // <- first the train_data is destroyed
      if (data.train_data != NULL)
     {
         train_data = fann_duplicate_train_data(data.train_data);
      }
}

The debugger shows that train_data is an uninitialized pointer; therefore, the NULL check in the destroy_train() function is useless.

void destroy_train()
{
   if (train_data != NULL)  // Unfortunately, it may be uninitialized
   {
       fann_destroy_train(train_data);
       train_data = NULL;
   }
}

Solution:
Just like in the default constructor, initialize train_data to NULL

training_data() : train_data(NULL)
{
}

and/or removing the destroy_train() call, since for a freshly constructed object train_data can only be empty before copying.

training_data(const training_data &data) : train_data(NULL)
{
   destroy_train();
   if (data.train_data != NULL)
   {
       train_data = fann_duplicate_train_data(data.train_data);
   }
}

Bug in fann_cpp.h -- the fix is provided in the comment!

Seg faults with fann_cpp.h on OSX and RH Linux. Here is the fix:

Change the neural_net copy constructor signature (line 860) to:

neural_net(const neural_net& other) : ann(NULL)

instead of:

neural_net(const neural_net& other)

I/O functions on file descriptors

Is there any reason why the fd save/load functions are internal?
I'd like to be able to print the network to stdout or serialise it and save it in a database.

some functions of train_data creation are not defined

I tried three functions, fann_create_train, fann_create_train_array and fann_create_train_pointer_array, in my code, but all end with an "undefined reference" error.
Then I tried pkg-config fann --libs --cflags, ending with the error: undefined symbol: fann_create_train.
Can anyone tell me how to deal with this problem? Thanks.

Compile time training

Good day and Regards FANN,

Recently I contacted Geoffrey Hinton, a highly regarded expert in the field of neural network programming and research, who works with Google right now.

I'v been asking how useful might be compile time trained neural networks. How much of calculation performance boost might it give. Probably you had some real experiments in this field. Wouldn't you be so kind, please, to publish results if any.

Normalizing input data range

Is the input data used to train ANNs in this library required to fall within some range? If so, is the required range [0,1] or [-1,1]?

Error handling inside fann

1) As I understand it, there is no way for client code to obtain errors that may occur in some API calls (e.g. fann_read_train_from_file -> FANN_E_CANT_OPEN_CONFIG_R and FANN_E_CANT_ALLOCATE_MEM, fann_create_standard -> FANN_E_CANT_ALLOCATE_MEM, and so on).

2) Also, as specified in the fann_error function, if the first argument is NULL the error string is printed to stderr (or to a file global to all threads), but this is not thread safe.

Bug: overflow-error in Windows with 64-bit-fannfloat.dll

I had an overflow error with the 64-bit fannfloat.dll.
rprop_increase_factor and rprop_decrease_factor were close to 0.
Fixed it with the patch from ache7 in issue #85:

fann_train.c

void fann_update_weights_irpropm(struct fann *ann, unsigned int first_weight, unsigned int past_end)
{
	fann_type *train_slopes = ann->train_slopes;
	fann_type *weights = ann->weights;
	fann_type *prev_steps = ann->prev_steps;
	fann_type *prev_train_slopes = ann->prev_train_slopes;

	double prev_step, next_step, slope, prev_slope, same_sign;

	float increase_factor = ann->rprop_increase_factor;	/*1.2; */
	float decrease_factor = ann->rprop_decrease_factor;	/*0.5; */
	float delta_min = ann->rprop_delta_min;	/*0.0; */
	float delta_max = ann->rprop_delta_max;	/*50.0; */

	unsigned int i = first_weight;

	for(; i != past_end; i++)
	{
		/* prev_step may not be zero because then the training will stop */
		prev_step = fann_max(prev_steps[i], (fann_type) 0.0001);
		slope = train_slopes[i];
		prev_slope = prev_train_slopes[i];
		same_sign = prev_slope * slope;

		/* fix from https://github.com/libfann/fann/issues/85 */
		next_step = prev_step;
		if(same_sign > 0)
			next_step = fann_min(prev_step * increase_factor, delta_max);
		else if(same_sign < 0)
		{
			next_step = fann_max(prev_step * decrease_factor, delta_min);
			slope = 0;
		}

		if(slope < 0)
		{
			weights[i] -= next_step;
			if(weights[i] < -1500)
				weights[i] = -1500;
		}
		else if(slope > 0)
		{
			weights[i] += next_step;
			if(weights[i] > 1500)
				weights[i] = 1500;
		}

		/* original code, kept for reference:
		if(same_sign >= 0.0)
			next_step = fann_min(prev_step * increase_factor, delta_max);
		else
		{
			next_step = fann_max(prev_step * decrease_factor, delta_min);
			slope = 0;
		}

		if(slope < 0)
		{
			weights[i] -= next_step;
			if(weights[i] < -1500)
				weights[i] = -1500;
		}
		else
		{
			weights[i] += next_step;
			if(weights[i] > 1500)
				weights[i] = 1500;
		}
		*/

		/* update global data arrays */
		prev_steps[i] = next_step;
		prev_train_slopes[i] = slope;
		train_slopes[i] = 0.0;
	}
}

Seems like a tricky compiler bug?

FANN-2.2.0 bug on Mac OS X 10.9.3

xor_sample.cpp hangs in the create_standard call and eventually segfaults.


First I built the library using cmake, then compiled the example:

c++ examples/xor_sample.cpp -I src/include src/libfloatfann.dylib

Executing the output produces a segmentation fault.

Has the owner of this repo retired / disappeared?

I'm curious, since this repo hasn't had much activity in the last few years, and neither has the owner's profile.

I'm not sure what happened, but are there forks out there that try to keep this library updated and maintained?

Here's my fork: https://github.com/sciencefyll/fann

The only thing I've done is make it usable as a git submodule and modify the headers so I can include them as such in my own projects:
#include "fann/fann.h"

Which I think is a lot cleaner. It still needs some work, though.

How to continue training using the FANN library?

I want to load my pretrained model and continue training on my old dataset, but I do not want to retrain the whole model from scratch.

I'm trying to use

$ann = fann_create_from_file($model);

And then:

fann_train_on_file($ann, 'my_old_data.data', ... other params);

But my old MSE is not preserved after one epoch of new training.

Before training: 0.0072

After 1 epoch new training: 0.14627

P.S. Sorry for my bad English.

decimal separator ","

Why is my decimal separator ","?

Max epochs    10000. Desired error: 0,0001000000.
Epochs            1. Current error: 0,2481191903. Bit fail 32.
Epochs           32. Current error: 0,0000684304. Bit fail 0.

What can I do to make the decimal separator "."?

Compiling FANN for MCUs

Hi,

Is it possible to compile FANN for some MCUs such as the ARM Cortex-M0? We just want to run the recognition part on the MCU after a model has been trained on the PC side.

Thanks.

Setting activation steepness with doublefann.h

When running the code below I get "FANN Error 17: Index 1070805811 is out of bound." when including "doublefann.h", but no error and the expected output when including "fann.h" or "floatfann.h" instead.

This seems to be a bug, although I haven't been able to put my finger on it.
I used the current GitHub source code, built with MinGW.

//#include "fann.h" // this works
//#include "floatfann.h" // this works
#include "doublefann.h" // this fails

int main()
{
    struct fann *ann = fann_create_standard(3,1,10,1);
    printf("default activation steepness = %f\n", fann_get_activation_steepness(ann, 1, 0));
    fann_set_activation_steepness_layer(ann, (fann_type) 0.3, 1);
    printf("new activation steepness = %f\n", fann_get_activation_steepness(ann, 1, 0));
    fann_destroy(ann);
    return 0;
}

64-bit runtime transition issue: "Implicit conversion loses integer precision"

Compiling FANN for (some) 64-bit runtimes causes multiple warnings, because the code assumes that ints and pointers are the same size.

Here is one example (from fann.c):

ann->num_output = (ann->last_layer - 1)->last_neuron - (ann->last_layer - 1)->first_neuron - 1;

Apple's Xcode marks this as "Implicit conversion loses integer precision: 'long' to 'unsigned int'".

fann_train_on_file does not set error on bad input training file data

An error is logged by the parsing function, but that function has no access to the network struct that carries the error state. I'm trying to generate exceptions in fannj for FANN error conditions and discovered that the user cannot detect all errors programmatically, i.e. without watching console output or inconveniently redirecting the error output.

Installation failed: "file INSTALL cannot find compat_time.h"

Hello,

When I run cmake . it works OK. But when I run sudo make install, I get this:

[ 25%] Built target doublefann                     
[ 50%] Built target fann
[ 75%] Built target fixedfann
[100%] Built target floatfann
Install the project...
-- Install configuration: ""
-- Up-to-date: /usr/local/lib/pkgconfig/fann.pc
-- Installing: /usr/local/lib/libfloatfann.so.2.2.0
-- Installing: /usr/local/lib/libfloatfann.so.2
-- Installing: /usr/local/lib/libfloatfann.so
-- Installing: /usr/local/lib/libdoublefann.so.2.2.0
-- Installing: /usr/local/lib/libdoublefann.so.2
-- Installing: /usr/local/lib/libdoublefann.so
-- Installing: /usr/local/lib/libfixedfann.so.2.2.0
-- Installing: /usr/local/lib/libfixedfann.so.2
-- Installing: /usr/local/lib/libfixedfann.so
-- Installing: /usr/local/lib/libfann.so.2.2.0
-- Installing: /usr/local/lib/libfann.so.2
-- Installing: /usr/local/lib/libfann.so
-- Installing: /usr/local/include/fann.h
-- Installing: /usr/local/include/doublefann.h
-- Installing: /usr/local/include/fann_internal.h
-- Installing: /usr/local/include/floatfann.h
-- Installing: /usr/local/include/fann_data.h
-- Installing: /usr/local/include/fixedfann.h
CMake Error at src/include/cmake_install.cmake:36 (FILE):
  file INSTALL cannot find "/home/dan/load/fann/src/include/compat_time.h".
Call Stack (most recent call first):
  src/cmake_install.cmake:197 (INCLUDE)
  cmake_install.cmake:41 (INCLUDE)


Makefile:65: recipe for target 'install' failed
make: *** [install] Error 1


unrecognized command line option ‘-std=c++14’

When compiling the latest git version, I get the following error. This is odd, because from what I can tell the test for C++14 support is correctly applied.

[ 76%] Building CXX object tests/CMakeFiles/fann_tests.dir/main.cpp.o
c++: error: unrecognized command line option ‘-std=c++14’
make[2]: *** [tests/CMakeFiles/fann_tests.dir/main.cpp.o] Error 1
make[1]: *** [tests/CMakeFiles/fann_tests.dir/all] Error 2
make: *** [all] Error 2

FANN Error 11

import java.util.ArrayList;
import java.util.List;

import com.googlecode.fannj.ActivationFunction;
import com.googlecode.fannj.Fann;
import com.googlecode.fannj.Layer;

public class FANNeuro {

    /**
     * @param args
     */
    public static void main(String[] args) {
        // TODO Auto-generated method stub

        List<Layer> layers =  new ArrayList<Layer>();
        layers.add(Layer.create(5, ActivationFunction.FANN_SIGMOID_SYMMETRIC));

        Fann fann =  new Fann(layers);
    }

}

I get 'FANN Error 11: Unable to allocate memory' as output. I've read that there is a bug in the 'fann_create_standard_array' function, which is used in the 'Fann' constructor. Would you fix it?

Incorrect loop unrolling in fann_run()

The algorithm for loop unrolling seems incorrect:

in fann.c, in function fann_run(fann*, fann_type*), at about line 610 there is this unrolled loop:

                /* unrolled loop start */
                i = num_connections & 3;    /* same as modulo 4 */
                switch (i)
                {
                    case 3:
                        neuron_sum += fann_mult(weights[2], neurons[2].value);
                    case 2:
                        neuron_sum += fann_mult(weights[1], neurons[1].value);
                    case 1:
                        neuron_sum += fann_mult(weights[0], neurons[0].value);
                    case 0:
                        break;
                }

                for(; i != num_connections; i += 4)
                {
                    neuron_sum +=
                        fann_mult(weights[i], neurons[i].value) +
                        fann_mult(weights[i + 1], neurons[i + 1].value) +
                        fann_mult(weights[i + 2], neurons[i + 2].value) +
                        fann_mult(weights[i + 3], neurons[i + 3].value);
                }
                /* unrolled loop end */

The problem I see is that, if the number of connections is not divisible by 4 and the remainder modulo 4 equals 2, then the output of one neuron per layer is ignored and not propagated forward.

Quick example:
  1. num_connections := 18, with the net being fully connected, so:
  2. i = num_connections & 3 = 18 & 3 = 2

The switch would run case 2, computing the partial sum for neurons[1]; then, the subsequent for loop would resume from i = 2, apparently skipping neurons[0]. (Unless the fall-through from case 2 into case 1 is what handles neurons[0]?)
