
encog-dotnet-core's Introduction

Encog Machine Learning Framework

Note: The name "Encog Core" causes some confusion with Microsoft .Net Core. This version of Encog has nothing to do with Microsoft's .Net Core release. I am tempted to change the name of the Repo and NuGet, but that would be a massive breaking change at this point.


Encog is a pure Java/C# machine learning framework that I created back in 2008 to support genetic programming, NEAT/HyperNEAT, and other neural network technologies. Originally, Encog was created to support research for my master's degree and early books. The neural network aspects of Encog proved popular: it has been used by a number of people and is cited by 952 academic papers in Google Scholar. I created Encog at a time when there were few well-developed frameworks such as TensorFlow, Keras, or DeepLearning4J (these are the frameworks I work with the most these days for neural networks).

Encog continues to be developed (and bugs fixed) for the types of models not covered by the large frameworks, and to provide a pure non-GPU Java/C# implementation of several classic neural networks. Because it is pure Java/C#, Encog's source code can be much simpler to adapt for cases where you want to implement a neural network yourself from scratch. Some of the less mainstream technologies supported by Encog include NEAT, HyperNEAT, and genetic programming. Encog has minimal support for computer vision; computer vision is a fascinating topic, but it has never been a research interest of mine.

Encog supports a variety of advanced algorithms, as well as support classes to normalize and process data. Machine learning algorithms such as Support Vector Machines, Neural Networks, Bayesian Networks, Hidden Markov Models, Genetic Programming and Genetic Algorithms are supported. Most Encog training algorithms are multi-threaded and scale well to multicore hardware.

Encog continues to be developed and is used in my own research for areas where I need Java and that are not covered by Keras. However, for larger-scale cutting-edge work, where I do not need to implement the technology from scratch, I use Keras/TensorFlow.

Simple C# XOR Example in Encog

using System;
using ConsoleExamples.Examples;
using Encog.Engine.Network.Activation;
using Encog.ML.Data;
using Encog.ML.Data.Basic;
using Encog.ML.Train;
using Encog.Neural.Networks;
using Encog.Neural.Networks.Layers;
using Encog.Neural.Networks.Training.Propagation.Resilient;

namespace Encog.Examples.XOR
{
    public class XORHelloWorld : IExample
    {
        /// <summary>
        /// Input for the XOR function.
        /// </summary>
        public static double[][] XORInput = {
            new[] {0.0, 0.0},
            new[] {1.0, 0.0},
            new[] {0.0, 1.0},
            new[] {1.0, 1.0}
        };

        /// <summary>
        /// Ideal output for the XOR function.
        /// </summary>
        public static double[][] XORIdeal = {
            new[] {0.0},
            new[] {1.0},
            new[] {1.0},
            new[] {0.0}
        };

        public static ExampleInfo Info
        {
            get
            {
                var info = new ExampleInfo(
                    typeof(XORHelloWorld),
                    "xor",
                    "Simple XOR with backprop, no factories or helper functions.",
                    "This example shows how to train an XOR with no factories or helper functions.");
                return info;
            }
        }

        #region IExample Members

        /// <summary>
        /// Program entry point.
        /// </summary>
        /// <param name="app">Holds arguments and other info.</param>
        public void Execute(IExampleInterface app)
        {
            // create a neural network, without using a factory
            var network = new BasicNetwork();
            network.AddLayer(new BasicLayer(null, true, 2));
            network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
            network.AddLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
            network.Structure.FinalizeStructure();
            network.Reset();

            // create training data
            IMLDataSet trainingSet = new BasicMLDataSet(XORInput, XORIdeal);

            // train the neural network
            IMLTrain train = new ResilientPropagation(network, trainingSet);

            int epoch = 1;

            do
            {
                train.Iteration();
                Console.WriteLine(@"Epoch #" + epoch + @" Error:" + train.Error);
                epoch++;
            } while (train.Error > 0.01);

            // test the neural network
            Console.WriteLine(@"Neural Network Results:");
            foreach (IMLDataPair pair in trainingSet)
            {
                IMLData output = network.Compute(pair.Input);
                Console.WriteLine(pair.Input[0] + @"," + pair.Input[1]
                                  + @", actual=" + output[0] + @",ideal=" + pair.Ideal[0]);
            }
        }

        #endregion
    }
}

encog-dotnet-core's People

Contributors

anders9ustafsson, bdrupieski, brannonking, djnicholson, firestrand, fxmozart, heatonresearch, jeffheaton, jeroldhaas, kedrzu, leonidvolkov, matthemsteger, mileythomas, primebydesign, reyoung, romiko, seemasingh


encog-dotnet-core's Issues

Encog.ML.Data.Basic.BasicMLDataPairCentroid.Add() & .Remove()

Hi,
The _value.Count should not be used in this calculation:

_value[i] = ((_value[i] * _value.Count) + a[i]) / (_value.Count + 1);

because _value.Count is the number of attributes of each data point, not the number of the cluster's data points, which is what the formula requires. The same problem exists in the Remove scenario.
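The fix the report asks for can be sketched as a running mean that tracks the cluster's point count separately from the attribute count. This is a minimal standalone sketch with hypothetical names (CentroidSketch, _size), not Encog's actual BasicMLDataPairCentroid class:

```csharp
using System;

// Running-mean centroid: the incremental update uses the number of
// points currently in the cluster (_size), never the attribute count
// (_value.Length), which is the mix-up described in the report.
public class CentroidSketch
{
    private readonly double[] _value;
    private int _size; // number of points currently in the cluster

    public CentroidSketch(double[] first)
    {
        _value = (double[])first.Clone();
        _size = 1;
    }

    public void Add(double[] a)
    {
        for (int i = 0; i < _value.Length; i++)
            _value[i] = (_value[i] * _size + a[i]) / (_size + 1);
        _size++;
    }

    public void Remove(double[] a)
    {
        for (int i = 0; i < _value.Length; i++)
            _value[i] = (_value[i] * _size - a[i]) / (_size - 1);
        _size--;
    }

    public double[] Value { get { return _value; } }
}
```

With two 2-attribute points {0,0} and {2,4}, the centroid correctly becomes {1,2}; using the attribute count (2) in place of the cluster size would only happen to agree when the two counts coincide.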

Training convergence issue

Hi,

first of all, thanks for providing a powerful neural network implementation for free!

I'm struggling with convergence of training on relatively simple auto-encoder problems. The auto-encoder is represented as a BasicNetwork with 3 input nodes (+bias), 1 hidden layer with 2 nodes (+bias), and 3 output nodes. In the tests below, the training data are such that the third value of each pattern is a linear combination of the other values, specifically input[i][2] = (2 * input[i][0] + input[i][1] + 1) / 5. The ideal value (training goal) equals the input. The training algorithm is supposed to find a representation of my data in the hidden layer that preserves all information, which is obviously quite simple.

I've written a JUnit 4 based test driver to explain my problem, see below. The test code sets up a network, compiles training data, trains the network and then tests if training error is reasonably low. Training succeeds reliably when using ActivationTANH in the output layer (testTrain_TanhTanh). Same results with QuickPropagation. I've also tried to duplicate the training set and add some noise (to overcome any potential issue resulting from the singular data matrix) but this hasn't changed anything.

My environment is NetBeans 8 with JUnit 4 on Win7/64. I'm using the Java version of Encog 3.2 in conjunction with the commons-math library version 3.2 on Java8/x64.

Test code:

import org.encog.Encog;
import org.encog.engine.network.activation.ActivationFunction;
import org.encog.engine.network.activation.ActivationLinear;
import org.encog.engine.network.activation.ActivationTANH;
import org.encog.mathutil.error.ErrorCalculation;
import org.encog.mathutil.error.ErrorCalculationMode;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.Propagation;
import org.encog.neural.networks.training.propagation.quick.QuickPropagation;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;
import static org.junit.Assert.*;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

/**
 * Code to test the Encog framework.
 */
public class EncogTest {

    private MLDataSet trainingSet;

    @Before
    public void setUp() {
        ErrorCalculation.setMode(ErrorCalculationMode.ESS);

        // Compile training data:
        double[][] input = { // 3rd is always (2*first + second + 1) / 5
            { -1, -1, -.4 },
            { -1,  0, -.2 },
            { -1,  1,  0 },
            {  0, -1,  0 },
            {  0,  0,  .2 },
            {  0,  1,  .4 },
            {  1, -1,  .4 },
            {  1,  0,  .6 },
            {  1,  1,  .8 }
        };
        trainingSet = new BasicMLDataSet(input, input);
    }

    @After
    public void tearDown() {
        Encog.getInstance().shutdown();
    }

    public static BasicNetwork createAutoEncoder(ActivationFunction hiddenAct,
            ActivationFunction outputAct) {
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 3));
        network.addLayer(new BasicLayer(hiddenAct, true, 2));
        network.addLayer(new BasicLayer(outputAct, false, 3));
        network.getStructure().finalizeStructure();
        network.reset();
        return network;
    }

    public double measureConvergence(BasicNetwork network) {
        Propagation trainingAlg = new ResilientPropagation(network, trainingSet);  // or QuickPropagation
        trainingAlg.iteration(500);
        return trainingAlg.getError();
    }

    @Test
    public void testTrain_TanhTanh() {
        BasicNetwork network = createAutoEncoder(new ActivationTANH(), new ActivationTANH());
        double error = measureConvergence(network);
        assertEquals("Training fails to converge", 0, error, .1);
    }

    @Test
    public void testTrain_LinearTanh() {
        BasicNetwork network = createAutoEncoder(new ActivationLinear(), new ActivationTANH());
        double error = measureConvergence(network);
        assertEquals("Training fails to converge", 0, error, .1);
    }

    @Test
    public void testTrain_TanhLinear() {
        BasicNetwork network = createAutoEncoder(new ActivationTANH(), new ActivationLinear());
        double error = measureConvergence(network);
        assertEquals("Training fails to converge", 0, error, .1);
    }

    @Test
    public void testTrain_LinearLinear() {
        BasicNetwork network = createAutoEncoder(new ActivationLinear(), new ActivationLinear());
        double error = measureConvergence(network);
        assertEquals("Training fails to converge", 0, error, .1);
    }
}

Thanks for looking after this,
Jürgen

Prune

When I try Prune (the market example) it now runs, but it displays Current: N/A BEST: N/A?

The rest runs fine.

Bug in Encog(tm) Core v3.3 - .Net Version

After some debugging I have found a bug in AnalystWizard.cs 3.3, private void ExpandTimeSlices(), approx. line 700:

// swap back in
oldList.Clear(); // oldList is cleared and then used in a foreach ...

// Original line: foreach (AnalystField item in oldList)
// Corrected line:
foreach (AnalystField item in newList) // correct list!
{
    oldList.Add(item);
}

These lines are only executed if wizard.LagWindowSize > 0.

EncogCmd Wizard: Multiple target fields

Hi,
it would be useful if multiple target fields could be specified in the EncogCmd Wizard, perhaps comma separated, for example:

Enter value for [target field] (default=): ideal1, ideal2, ideal3

Currently, only one may be specified and a change requires manual editing of .ega file and running Encog tasks again.

Encog DotNet 3.3 Multithread Error

I am getting what seems to be an Encog (3.x) threading / workload error...

I've been using Encog CS 3.1, 3.2 and 3.3 with VS.NET 2015 on two servers, each with dual X5400-series 4-core/4-thread Xeons (8 cores/8 threads per system) without a problem. One has 32 GB RAM and the other 64 GB (though I am only actually seeing 1 busy thread, but that's another story...).

I recently tried the exact same code, compiled (exe) and in the VS IDE, on a dual X7500-series 8-core/16-thread Xeon server (16 cores/32 threads, 64 GB total) and I get this error (with the Encog CS pre-compiled DLL straight from GitHub):

System.OverflowException (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089): Arithmetic operation resulted in an overflow.
at Encog.Util.Concurrency.DetermineWorkload..ctor(Int32 threads, Int32 workloadSize)
at Encog.Neural.Networks.Training.Propagation.Propagation.Init()
at Encog.Neural.Networks.Training.Propagation.Propagation.CalculateGradients()
at Encog.Neural.Networks.Training.Propagation.Propagation.ProcessPureBatch()
at Encog.Neural.Networks.Training.Propagation.Propagation.Iteration()
at EncogConsole.modEncog.ElmanTypeA(Boolean boolErrorVerbose, Boolean boolTestOutput) in C:\Users\Administrator\Documents\Visual Studio 2015\Projects\NormalizedConsole_v4B\EncogConsole\modEncog.vb:line 126
at EncogConsole.modEncog.Main() in C:\Users\Administrator\Documents\Visual Studio 2015\Projects\NormalizedConsole_v4B\EncogConsole\modEncog.vb:line 35
at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()

The thing here is that the code is exactly the same, with the exact same datasets, etc. I tried different code that runs fine on the first two X5400 servers (albeit on what seems to be a single thread) and still hit the same problem (on the X7500 server). All machines are running W2K8R2 with the latest patches, etc.

The code is written in VB but, as said before, works just fine, except on this higher-thread-count server.

What gives?

Bug in TrainFlatNetworkResilient.cs

It appears that the private variable "_lastError" is never set, yet it is used in a comparison.

Should this be moved to TrainFlatNetworkProp.cs as a protected internal or readonly field? Possibly set it just before CurrentError is calculated?

I am using: encog-dotnet-core-3.0.1

SVM IndexOutOfRangeException

Hello,
running the latest EncogCmd (built from git sources) on the data below, I get the following exception:

Unhandled Exception: System.IndexOutOfRangeException: Index was outside the bounds of the array.
at Encog.MathUtil.LIBSVM.Cache.get_data(Int32 index, Single[][] data, Int32 len) in D:\oss\encog-dotnet-core\encog-core-cs\MathUtil\LIBSVM\svm.cs:line 118
at Encog.MathUtil.LIBSVM.SVC_Q.get_Q(Int32 i, Int32 len) in D:\oss\encog-dotnet-core\encog-core-cs\MathUtil\LIBSVM\svm.cs:line 1282
at Encog.MathUtil.LIBSVM.Solver.Solve(Int32 l, Kernel Q, Double[] b_, SByte[] y_, Double[] alpha_, Double Cp, Double Cn, Double eps, SolutionInfo si, Int32 shrinking) in D:\oss\encog-dotnet-core\encog-core-cs\MathUtil\LIBSVM\svm.cs:line 645
at Encog.MathUtil.LIBSVM.Solver_NU.Solve(Int32 l, Kernel Q, Double[] b, SByte[] y, Double[] alpha, Double Cp, Double Cn, Double eps, SolutionInfo si, Int32 shrinking) in D:\oss\encog-dotnet-core\encog-core-cs\MathUtil\LIBSVM\svm.cs:line 1018
at Encog.MathUtil.LIBSVM.svm.solve_nu_svc(svm_problem prob, svm_parameter param, Double[] alpha, SolutionInfo si) in D:\oss\encog-dotnet-core\encog-core-cs\MathUtil\LIBSVM\svm.cs:line 1486
at Encog.MathUtil.LIBSVM.svm.svm_train_one(svm_problem prob, svm_parameter param, Double Cp, Double Cn) in D:\oss\encog-dotnet-core\encog-core-cs\MathUtil\LIBSVM\svm.cs:line 1612
at Encog.MathUtil.LIBSVM.svm.svm_train(svm_problem prob, svm_parameter param) in D:\oss\encog-dotnet-core\encog-core-cs\MathUtil\LIBSVM\svm.cs:line 2149
at Encog.ML.SVM.Training.SVMTrain.Iteration() in D:\oss\encog-dotnet-core\encog-core-cs\ML\SVM\Training\SVMTrain.cs:line 249
at Encog.ML.SVM.Training.SVMSearchTrain.Iteration() in D:\oss\encog-dotnet-core\encog-core-cs\ML\SVM\Training\SVMSearchTrain.cs:line 297
at Encog.App.Analyst.Commands.CmdTrain.PerformTraining(IMLTrain train, IMLMethod method, IMLDataSet trainingSet) in D:\oss\encog-dotnet-core\encog-core-cs\App\Analyst\Commands\CmdTrain.cs:line 222
at Encog.App.Analyst.Commands.CmdTrain.ExecuteCommand(String args) in D:\oss\encog-dotnet-core\encog-core-cs\App\Analyst\Commands\CmdTrain.cs:line 116
at Encog.App.Analyst.EncogAnalyst.ExecuteTask(AnalystTask task) in D:\oss\encog-dotnet-core\encog-core-cs\App\Analyst\EncogAnalyst.cs:line 539
at Encog.App.Analyst.EncogAnalyst.ExecuteTask(String name) in D:\oss\encog-dotnet-core\encog-core-cs\App\Analyst\EncogAnalyst.cs:line 572
at EncogCmd.EncogCmd.AnalystCommand() in D:\oss\encog-dotnet-core\EncogCmd\EncogCmd.cs:line 166
at EncogCmd.EncogCmd.Main(String[] args) in D:\oss\encog-dotnet-core\EncogCmd\EncogCmd.cs:line 307

This problem was also reported here:

http://www.heatonresearch.com/node/2368 (partly fixed?)
http://www.heatonresearch.com/node/2398 (no replies)


data.csv:

i1,i2,i3,i4,i5,i6,i7,i8,i9,y
0.0003,1.666666667,1,0.000124023,-0.000225,-0.000704,-0.001492,-0.001547,2.07E-05,1
0.0003,1,1,0.000348262,0.000155,-0.000344,-0.001144,-0.0012415,5.80E-05,1
0.0006,2.25,10,0.000234719,-0.000195,-0.003012,-0.003864,-0.004737,3.91E-05,0
0.0004,2,3,5.80E-05,-2.00E-05,-0.002582,-0.003959,-0.0048975,9.66E-06,1
0.0004,1.5,1,0.00039255,0.00042,-0.002062,-0.003513,-0.004471,6.54E-05,0
0.0002,1.5,0.5,0.000156911,0.00041,-0.0018,-0.003533,-0.0045565,2.62E-05,0
0.0003,1.666666667,1,6.20E-05,-0.000135,-0.000922,-0.0034,-0.0045225,1.03E-05,0
0.0003,1,1,0.000449639,0.00033,-0.000272,-0.002828,-0.00396,7.49E-05,0
0.0006,1.2,2,0.000376355,-0.00017,-0.001406,-0.004004,-0.005835,6.27E-05,0
-1.00E-04,4,0.166666667,0.000236876,-0.00013,-0.001478,-0.004007,-0.005891,3.95E-05,0
0.0008,1.714285714,1.5,-0.000290168,-0.00089,-0.003022,-0.005279,-0.0079045,-4.84E-05,1
0.0005,1.2,0.5,0.000288856,-0.000135,-0.002226,-0.004466,-0.0071965,4.81E-05,0
0.0002,2.666666667,0.666666667,0.000246055,0.000455,0.00085,-0.00101,-0.0044385,4.10E-05,0
0.0012,1.153846154,0.25,0.000585271,0.00062,0.00124,-0.000548,-0.004005,9.75E-05,0
-1.00E-04,2.5,6,-0.000184848,-0.000125,0.00069,-0.000868,-0.0042985,-3.08E-05,1
0.0011,1,1,0.000784416,0.0009,0.001764,0.000259,-0.0031595,0.000130736,1
0.0012,1.25,0.25,0.000791247,0.00112,0.002248,0.00105,-0.0022235,0.000131875,1
0.0008,1,1,-2.17E-05,-0.000665,-0.00093,0.00021,-0.002472,-3.61E-06,1
0.0007,1,1,0.000657545,0.000395,-0.000198,0.001018,-0.001595,0.000109591,1

Performance optimization for NormalizedField.DeNormalize

Hi,

I've taken a look at the DeNormalize method of the NormalizedField class. I think we can improve the performance of this method with the following code:

public double DeNormalize(double v)
{
    return ((v - _normalizedLow)*(_actualHigh - _actualLow)
            /(_normalizedHigh - _normalizedLow)) + _actualLow;
}

I've run some benchmarks and it seems that with this equation the result is about 20% quicker. Can someone confirm my implementation?
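One way to sanity-check the proposed formula is a round trip: it should be the exact inverse of the standard range-mapping Normalize. The sketch below is self-contained with arbitrary range values mirroring NormalizedField's field names, not Encog's actual class:

```csharp
using System;

// Round-trip check: DeNormalize (the proposed formula) should invert
// the standard range-mapping Normalize for any value in range.
public class RangeMap
{
    private readonly double _actualLow = -5, _actualHigh = 15;       // arbitrary actual range
    private readonly double _normalizedLow = -1, _normalizedHigh = 1; // arbitrary normalized range

    public double Normalize(double v)
    {
        return (v - _actualLow) / (_actualHigh - _actualLow)
               * (_normalizedHigh - _normalizedLow) + _normalizedLow;
    }

    // The proposed simplification from this issue.
    public double DeNormalize(double v)
    {
        return (v - _normalizedLow) * (_actualHigh - _actualLow)
               / (_normalizedHigh - _normalizedLow) + _actualLow;
    }
}
```

If DeNormalize(Normalize(x)) returns x (up to floating-point rounding) across the range, the rewrite is algebraically equivalent and the speedup is just fewer multiplications.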

Thanks

Concurrency bugs in PruneIncremental

The PruneIncremental class runs into errors when running with multiple threads. The _hiddenCounts and _pattern are shared across threads.

The hidden counts are modified by each thread in IncreaseHiddenCounts when a RequestNextTask call occurs. The same _pattern object is used by all threads to generate a network. So multiple threads trying to generate a network at the same time can lead to unintended hidden layers being created.

I think these objects could use a lock around the critical sections.

        /// <summary>
        /// Generate a network according to the current hidden layer counts.
        /// </summary>
        ///
        /// <returns>The network based on current hidden layer counts.</returns>
        private BasicNetwork GenerateNetwork()
        {
            BasicNetwork network = null;
            lock (_pattern)
            {
                _pattern.Clear();

                lock (this._hiddenCounts.SyncRoot)
                {
                    foreach (int element in _hiddenCounts)
                    {
                        if (element > 0)
                        {
                            _pattern.AddHiddenLayer(element);
                        }
                    }
                }
                network = (BasicNetwork)_pattern.Generate();
            }
            return network;
        }


        /// <summary>
        /// Increase the hidden layer counts according to the hidden layer
        /// parameters. Increase the first hidden layer count by one, if it is maxed
        /// out, then set it to zero and increase the next hidden layer.
        /// </summary>
        ///
        /// <returns>False if no more increases can be done, true otherwise.</returns>
        private bool IncreaseHiddenCounts()
        {
            lock (_hiddenCounts.SyncRoot)
            {
                int i = 0;
                do
                {
                    HiddenLayerParams param = _hidden[i];
                    _hiddenCounts[i]++;

                    // is this hidden layer still within the range?
                    if (_hiddenCounts[i] <= param.Max)
                    {
                        return true;
                    }

                    // increase the next layer if we've maxed out this one
                    _hiddenCounts[i] = param.Min;
                    i++;
                } while (i < _hiddenCounts.Length);
            }
            // can't increase anymore, we're done!

            return false;
        }

Encog.ML.Data.Versatile.NormalizationHelper is not serializable

When you try to serialize an instance of the versatile data normalization helper, you get the following exception:

Type 'Encog.ML.Data.Versatile.NormalizationHelper' in Assembly 'encog-core-cs, Version=3.3.0.0, Culture=neutral, PublicKeyToken=3e882172b12155d4' is not marked as serializable.

Issue with VersatileMLDataSet.Analyze()

The Analyze() method throws an IndexOutOfRangeException when attempting to calculate the standard deviation for more than one output (specifically the second output column), but works fine when there is only one output.

Forecast ahead

How can I modify the Temporal code in the PredictSunspot.cs sample to get an array of predicted data, for example 10 points ahead, like I've done with R?

Column of zeroes produces NaNs

Hi,
If there is a column of zeroes in the input data that is to be normalized, the output becomes all NaNs (not a number). This is because of a division by zero in the Encog.Util.Arrayutil.NormalizedField.Normalize() method:

    public double Normalize(double v)
    {
        return ((v - _actualLow)/(_actualHigh - _actualLow))   // if _actualHigh and _actualLow are both zero -> NaN
               *(_normalizedHigh - _normalizedLow)
               + _normalizedLow;
    }

There is already a FixSingleValue() method that would fix this problem, but it is not used (at all). The same applies to the Java code.
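The guard the report asks for can be sketched in a few lines: when the observed range collapses, map every value to the midpoint of the normalized range rather than dividing by zero. This is a hypothetical standalone class (GuardedNormalizer), not Encog's NormalizedField or its FixSingleValue():

```csharp
using System;

// Range normalizer with a zero-range guard: a constant column (e.g. all
// zeroes) has _actualHigh == _actualLow, so the usual formula divides
// by zero; instead we return the normalized range's midpoint.
public class GuardedNormalizer
{
    private readonly double _actualLow, _actualHigh;
    private readonly double _normalizedLow, _normalizedHigh;

    public GuardedNormalizer(double actualLow, double actualHigh,
                             double normalizedLow, double normalizedHigh)
    {
        _actualLow = actualLow; _actualHigh = actualHigh;
        _normalizedLow = normalizedLow; _normalizedHigh = normalizedHigh;
    }

    public double Normalize(double v)
    {
        if (_actualHigh == _actualLow) // zero range: a column of constants
            return (_normalizedHigh + _normalizedLow) / 2.0;
        return (v - _actualLow) / (_actualHigh - _actualLow)
               * (_normalizedHigh - _normalizedLow) + _normalizedLow;
    }
}
```

For an all-zero column normalized to [-1, 1], this returns 0 for every row instead of NaN; non-degenerate columns are unaffected.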

EncogAnalyst.Save() Issue with Equilateral Classes

It seems that EncogAnalyst.Save(FileInfo) method is not saving the customized data, especially classes.

Eg. With following code:

'Analyst
Dim analyst = New EncogAnalyst()
'Wizard
Dim wizard = New AnalystWizard(analyst)
Dim BaseFile As FileInfo = FileUtil.CombinePath(New FileInfo(CSV_EXPORTS_PATH), "baseFile.csv")
wizard.Wizard(BaseFile, True, AnalystFileFormat.DecpntComma)

And then customizing one of the fields,

analyst.Script.Normalize.NormalizedFields(0).Classes.Clear()
analyst.Script.Normalize.NormalizedFields(0).Classes.Add(New Encog.Util.Arrayutil.ClassItem("CLASS_1",0))
analyst.Script.Normalize.NormalizedFields(0).Classes.Add(New Encog.Util.Arrayutil.ClassItem("CLASS_2",1))
analyst.Script.Normalize.NormalizedFields(0).Classes.Add(New Encog.Util.Arrayutil.ClassItem("CLASS_3",2))
analyst.Script.Normalize.NormalizedFields(0).Classes.Add(New Encog.Util.Arrayutil.ClassItem("CLASS_4",3))
analyst.Script.Normalize.NormalizedFields(0).Classes.Add(New Encog.Util.Arrayutil.ClassItem("CLASS_5",4))
analyst.Script.Normalize.NormalizedFields(0).Classes.Add(New Encog.Util.Arrayutil.ClassItem("CLASS_6",5))
analyst.Script.Normalize.NormalizedFields(0).Classes.Add(New Encog.Util.Arrayutil.ClassItem("CLASS_7",6))
analyst.Script.Normalize.NormalizedFields(0).Classes.Add(New Encog.Util.Arrayutil.ClassItem("CLASS_8",7))
analyst.Script.Normalize.NormalizedFields(0).Output = False
analyst.Script.Normalize.NormalizedFields(0).Action = Encog.Util.Arrayutil.NormalizationAction.Equilateral

Dim AnalystFile As FileInfo = FileUtil.CombinePath(New FileInfo(ENCOG_ANALYST_PATH), "baseFile_analyst.ega")
'save the analyst file
analyst.Save(AnalystFile)

This results in following config in .ega file

[DATA:CLASSES]
"field","code","name"
"TYPE_CLASS","CLASS_3","CLASS_3",1
"TYPE_CLASS","CLASS_2","CLASS_2",12
"TYPE_CLASS","CLASS_6","CLASS_6",33
"TYPE_CLASS","CLASS_8","CLASS_8",1

As you can see, the rest of the classes were ignored and only those classes actually present in the baseFile.csv provided to the wizard are reflected. This results in a "Can't determine class for:" error when this analyst file is loaded later and used to normalize the same type of data containing a different class, e.g. CLASS_1.

Kindly review.

ART-2 or ART-3

Will there ever be an adaptive resonance theory (ART) type 2 or 3 implementation, which would make it possible to work with continuous inputs?

It looks like a Cuda API change causes encog to abnormally terminate in the CUDA 5.0 run-time

I think that a Cuda 5.0 API change causes Encog to fail when executing in the Cuda 5.0 run-time environment.

The Cuda 5.0 release notes in
http://developer.download.nvidia.com/compute/cuda/5_0/rel/docs/CUDA_Toolkit_Release_Notes_And_Errata.txt
state that
"
** The use of a character string to indicate a device symbol, which was possible
with certain API functions, is no longer supported. Instead, the symbol should be
used directly.
"

The Encog module cuda_eval.cu includes the statement

cudaMemcpyToSymbol("cnet", &tempConstNet, sizeof(GPU_CONST_NETWORK));

which I believe will now fail in the Cuda 5.0 run-time.

I have posted on the Encog general discussion forum and on Stack Overflow that when I run the command "./encog benchmark /gpu:1" I get:

encog-core/cuda_eval.cu(286) : getLastCudaError() CUDA error : kernel launch failure : (13) invalid device symbol.

Since I am neither a Cuda nor much of a C programmer, I hesitate to fiddle with the code, and I'd appreciate any assistance in getting things working.
If you have some code change suggestions, I could try them out.

--Thanks
Rick

Missing base call in ResilientPropagation.PostIteration

Hi,

I'm new to Encog and have been trying to play a little with it. I've tried some training configurations and got stuck while trying to use ResilientPropagation with an IEndTrainingStrategy: the strategy's PostIteration() method was never called.

I don't know if it's a real issue (maybe using an IEndTrainingStrategy with ResilientPropagation is not an intended use case), but maybe you should consider calling the base method when overriding virtual ones.

In ResilientPropagation.cs:

public override void PostIteration()
{
    _lastError = Error;
}

should be replaced with

public override void PostIteration()
{
    base.PostIteration();
    _lastError = Error;
}

Question about TrainerHelper.GenerateInputz

If I understand the TrainerHelper.GenerateInputz method correctly, it should take all items of a jagged array and return these elements as a one-dimensional array of doubles (right?).

If this is a correct interpretation, then I think that this line of code should use ArrayList.AddRange:

  al.AddRange((double[])doublear);

instead of ArrayList.Add.
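The distinction can be shown in a few standalone lines (not Encog code): Add stores the whole double[] as a single boxed element, while AddRange copies its items into the list one by one, which is what a jagged-array flattener needs.

```csharp
using System;
using System.Collections;

var al = new ArrayList();

al.Add(new[] {1.0, 2.0});
int countWithAdd = al.Count;        // 1: one element, the array itself

al.Clear();
al.AddRange(new[] {1.0, 2.0});
int countWithAddRange = al.Count;   // 2: the individual doubles

Console.WriteLine($"Add: {countWithAdd}, AddRange: {countWithAddRange}");
```

With Add, the resulting ArrayList holds nested arrays, so a later cast of each element to double would fail; AddRange produces the flat list of doubles the method's name suggests.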

Wrong property assignment in Backpropagation.cs

Hi,
in class Encog.Neural.Networks.Training.Propagation.Back.Backpropagation, there is a mistake in property assignment (line 145):

    public virtual double Momentum
    {
        get { return _momentum; }
        set { _learningRate = value; }   // should be: _momentum = value 
    }

Default error mode conflict ... encog-core-cs/MathUtil/Error/ErrorCalculation.cs

Hi,

In ErrorCalculation.cs there is a conflict as to the calculation mode ...

line 37: private static ErrorCalculationMode _mode = ErrorCalculationMode.MSE;
line 57: /// The default error mode for Encog is RMS.

The UpdateError() function later in the file adds the square of the delta to the global error so shouldn't the Calculate() function use RMS and take the square root to arrive at an accurate error value ?

Also, if it takes the square root, does that effectively eliminate negative error values, i.e. do they all become positive?

The changes seem to have appeared with v3.0 in 2011.

I don't fully understand this in depth, so clarification would be appreciated. :)

Thanks
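For reference, a minimal sketch of the relationship between the two modes, assuming UpdateError() accumulates squared deltas as described. Note that squaring already makes every contribution non-negative, with or without the final square root:

```csharp
using System;

class ErrorModeDemo
{
    static void Main()
    {
        double[] actual = { 0.9, 0.2 };
        double[] ideal  = { 1.0, 0.0 };

        double sumSq = 0;
        for (int i = 0; i < actual.Length; i++)
        {
            double delta = ideal[i] - actual[i];
            sumSq += delta * delta; // what UpdateError() accumulates
        }

        double mse = sumSq / actual.Length; // ErrorCalculationMode.MSE
        double rms = Math.Sqrt(mse);        // ErrorCalculationMode.RMS

        Console.WriteLine(Math.Abs(mse - 0.025) < 1e-9);   // True
        Console.WriteLine(Math.Abs(rms - 0.15811) < 1e-4); // True
    }
}
```

So MSE and RMS differ only by the square root; both are always non-negative, and which one Calculate() returns should match what the documentation comment claims.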

Licence is specific to Java version

The licence file says that it is specific to the Java version of Encog. Could this be taken out, or the applicable licence for the .NET project be added?

OverflowException in AnalyzedField.Analyze1

If the input value passed to analyze is larger than Int32.MaxValue, part of the AnalyzedField.Analyze1 method will throw an OverflowException:

...
if (Integer)
{
    try
    {
        int i = Int32.Parse(str); // throws OverflowException for values above Int32.MaxValue
        Max = Math.Max(i, Max);
        Min = Math.Min(i, Min);
        if (!accountedFor)
        {
            _total += i;
        }
    }
    catch (FormatException) // OverflowException is not caught, so it propagates
    {
        Integer = false;
        if (!Real)
        {
            Max = 0;
            Min = 0;
            StandardDeviation = 0;
        }
    }
}
...

Suggestion: change the supported type from Int32 to Int64 (long).
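A stand-alone sketch of the suggested change: Int64.Parse accepts values that make Int32.Parse overflow:

```csharp
using System;

class ParseOverflowDemo
{
    static void Main()
    {
        string str = "3000000000"; // larger than Int32.MaxValue (2147483647)

        try
        {
            int i = Int32.Parse(str);
            Console.WriteLine(i);
        }
        catch (OverflowException)
        {
            // This is what AnalyzedField.Analyze1 currently hits,
            // since only FormatException is caught there.
            Console.WriteLine("Int32.Parse overflowed");
        }

        long l = Int64.Parse(str); // fits comfortably in a long
        Console.WriteLine(l == 3000000000L);
    }
}
```

Switching the parse (and the Max/Min/_total fields) to long would avoid the exception for such inputs.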

Minor issue in BasicLayer

The summary says that the layer is created with a sigmoid function, but it is actually created with a hyperbolic tangent function. Am I missing something here, or is this a mistake?

/// <summary>
/// Construct this layer with a sigmoid activation function.
/// </summary>
public BasicLayer(int neuronCount) : this(new ActivationTANH(), true, neuronCount)
{
}

Training error - huge difference (encog 3.1.0)

I used the common XOR NN sample (training method ResilientPropagation, training until Error < 0.001) and got a huge error after training (ideal 0, actual value 0,989125420071542); see the output:

Epoch #1 Error:0,403222760807917
Epoch #2 Error:0,326979855722731
...
Epoch #42 Error:0,00152763617214056
Epoch #43 Error:0,000498892283437333
Neural Network Results:
0,0, actual=0,00861768412365147,ideal=0
1,0, actual=0,982667334534116,ideal=1
0,1, actual=0,998007704200434,ideal=1
1,1, actual=0,989125420071542,ideal=0 (this seems to be the error)

part of the source code (I used encog-dotnet-core-3.1.0):

    public static double[][] XOR_INPUT = {
      new double[2] { 0.0, 0.0 },
      new double[2] { 1.0, 0.0 },
      new double[2] { 0.0, 1.0 },
      new double[2] { 1.0, 1.0 } };

    public static double[][] XOR_IDEAL = {                                              
      new double[1] { 0.0 }, 
      new double[1] { 1.0 }, 
      new double[1] { 1.0 }, 
      new double[1] { 0.0 } };

        bool hugeError = false;

        BasicNetwork network = new BasicNetwork();
        network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 2));
        network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 10));
        network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 1));
        network.Structure.FinalizeStructure();
        network.Reset();

        IMLDataSet trainingSet = new BasicMLDataSet(XOR_INPUT, XOR_IDEAL);

        IMLTrain train = new ResilientPropagation(network, trainingSet);

        int epoch = 1;
        do
        {
            train.Iteration();
            Console.WriteLine("Epoch #" + epoch + " Error:" + train.Error);
            epoch++;
        } while ((epoch < 10000) && (train.Error > 0.001));

        Console.WriteLine("Neural Network Results:");
        foreach (IMLDataPair pair in trainingSet)
        {
            IMLData output = network.Compute(pair.Input);

            Console.WriteLine(pair.Input[0] + "," + pair.Input[1]
                + ", actual=" + output[0] + ",ideal=" + pair.Ideal[0]);
            if (Math.Abs(pair.Ideal[0] - output[0]) > 0.2)
            {
                Console.WriteLine("Huge error");
                hugeError = true;
            }
        }

Compute an item

Hi,

I am new to Encog.

I have already defined, trained, and evaluated my network. My inputs and outputs are all labels and were correctly normalized before training.

I now need to compute some items that come from a database. I have seen that Compute receives an IMLData and has an overload taking (double[] input, double[] output).

The problem is: how can I define my input for the Compute method? I need to generate the input on the fly.

Thanks
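One way to do this is to wrap the raw values in a BasicMLData. A sketch, assuming the Encog assemblies are referenced (LoadRowFromDatabase is a hypothetical helper standing in for your own data-access code):

```csharp
// Sketch only; not compilable without the Encog library.
using Encog.ML.Data;
using Encog.ML.Data.Basic;

double[] raw = LoadRowFromDatabase(); // hypothetical helper

// Apply the SAME normalization used during training before computing.
IMLData input = new BasicMLData(raw);
IMLData output = network.Compute(input);
double prediction = output[0]; // denormalize as needed
```

The key point is that each database row must pass through the identical normalization pipeline used for the training data, or the network will see values outside the range it was trained on.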

File Not In Project

Hi! This file is not in the project, there is no function to download it, no reference to where to find it, and no description of its columns so that I could create it myself. Here are the Encog project path and the file name:

encog-dotnet-core/ConsoleExamples/Examples/ForexExample/ForexMarketTrain.cs
"DB!EURUSD.Bar.Time.600.csv"

Thank you.

Add AnalystNormalizeCSV.ExtractFields()

Please add AnalystNormalizeCSV.ExtractFields() method variant with a list parameter:

public static double[] ExtractFields(EncogAnalyst analyst, IList<string> rawValues, CSVFormat format, int outputLength, bool skipOutput)

Usage scenario:

  1. Create .ega and .eg file using training data in Workbench.
  2. Use these files to classify live (non-CSV) data (.ega contains normalization information).

There already exists such a method, but it requires a CSV input. I prepared a solution here (together with an example and test data): http://dione.zcu.cz/~toman40/encog/ExtractFields-patch.zip

The provided example works well with the iris data, but throws a NullReferenceException with data7, which may indicate a bug in Encog.

Unit Tests Failing - TestHessian

Just FYI:

TestDualOutput() and TestSingleOutput() produce:

System.IndexOutOfRangeException: Index was outside the bounds of the array
at Encog.MathUtil.Matrices.Hessian.HessianFD.Init(BasicNetwork theNetwork, IMLDataSet theTraining) in HessianFD.cs: line 102

Elman Network - Negative and above 1 calculation values

I normalized the data but still get negative values and values above 1,

even when running the ElmanXOR sample as-is:

  • layers: 1 input, 5 hidden, 1 output
  • activation: ActivationSigmoid
  • training: LevenbergMarquardtTraining:
    a. Greedy
    b. HybridStrategy -> NeuralSimulatedAnnealing

Multiple threading problem with BasicGenericAlgorithm

Look at this code inside BasicGenericAlgorithm.cs:

int offspringIndex = Population.PopulationSize - offspringCount;
...
Parallel.For(0, countToMate, i =>
{
    IGenome mother = Population.Genomes[i];
    var fatherInt = (int)(ThreadSafeRandom.NextDouble() * matingPopulationSize);
    IGenome father = Population.Genomes[fatherInt];
    IGenome child1 = Population.Genomes[offspringIndex];
    IGenome child2 = Population.Genomes[offspringIndex + 1];

    var worker = new MateWorker(mother, father, child1, child2);

    worker.Run();

    offspringIndex += 2;
});

Here you parallelize the loop iterations, but updating offspringIndex inside the loop body is not thread safe. This can cause the following problem: while one thread is calculating the score for a child, another thread may be recreating the same child. In my case I saw a completely different Error for the method and Score for the best network.
Here is how I fixed it:

Parallel.For(0, countToMate, i =>
{
    IGenome mother = Population.Genomes[i];
    var fatherInt = (int)(ThreadSafeRandom.NextDouble() * matingPopulationSize);
    IGenome father = Population.Genomes[fatherInt];
    IGenome child1 = Population.Genomes[offspringIndex + i * 2];
    IGenome child2 = Population.Genomes[offspringIndex + i * 2 + 1];

    var worker = new MateWorker(mother, father, child1, child2);

    worker.Run();
});

Please apply this fix in your repository.
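The fix works because each iteration derives its slots from the loop variable instead of mutating a shared counter. A stand-alone sketch of the safe pattern (illustrative names, not Encog code):

```csharp
using System;
using System.Threading.Tasks;

class IndexPerIterationDemo
{
    static void Main()
    {
        const int countToMate = 1000;
        int offspringIndex = 0;            // base index, read-only inside the loop
        var claimed = new bool[2 * countToMate];

        // Safe pattern: each iteration computes its own pair of slots from i,
        // so no shared counter is mutated concurrently (unlike offspringIndex += 2).
        Parallel.For(0, countToMate, i =>
        {
            claimed[offspringIndex + i * 2] = true;
            claimed[offspringIndex + i * 2 + 1] = true;
        });

        // Every slot was claimed exactly once, with no races.
        Console.WriteLine(Array.TrueForAll(claimed, c => c));
    }
}
```

With the original `offspringIndex += 2` pattern, two threads can read the same index before either writes it back, so two iterations operate on the same pair of children.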

Unhandled exception in Encog.Util.File.ResourceLoader.CreateStream (ResourceLoader.cs)

Hello,

I would like to report an unhandled exception:

Encog.Util.File.ResourceLoader.CreateStream (ResourceLoader.cs)

    public static Stream CreateStream(String resource)
    {
        Stream result = null;
        Assembly[] assemblies = AppDomain.CurrentDomain.GetAssemblies();

        foreach (Assembly a in assemblies)
        {
            result = a.GetManifestResourceStream(resource); // this line causes exception
            if (result != null)
                break;
        }

        return result;
    }

resource = "Encog.Resources.analyst.csv"

The invoked member is not supported in a dynamic assembly.

at System.Reflection.Emit.InternalAssemblyBuilder.GetManifestResourceStream(String name)
at Encog.Util.File.ResourceLoader.CreateStream(String resource) in d:\dev\MyProject\encog-core-cs\Util\File\ResourceLoader.cs:line 54
at Encog.App.Analyst.Script.Prop.PropertyConstraints..ctor() in d:\dev\MyProject\encog-core-cs\App\Analyst\Script\Prop\PropertyConstraints.cs:line 62
at Encog.App.Analyst.Script.Prop.PropertyConstraints.get_Instance() in d:\dev\MyProject\encog-core-cs\App\Analyst\Script\Prop\PropertyConstraints.cs:line 132
at Encog.App.Analyst.Script.ScriptSave.SaveSubSection(EncogWriteHelper xout, String section, String subSection) in d:\dev\MyProject\encog-core-cs\App\Analyst\Script\ScriptSave.cs:line 273
at Encog.App.Analyst.Script.ScriptSave.Save(Stream stream) in d:\dev\MyProject\encog-core-cs\App\Analyst\Script\ScriptSave.cs:line 66
at Encog.App.Analyst.Script.AnalystScript.Save(Stream stream) in d:\dev\MyProject\encog-core-cs\App\Analyst\Script\AnalystScript.cs:line 333
at Encog.App.Analyst.EncogAnalyst.Save(Stream stream) in d:\dev\MyProject\encog-core-cs\App\Analyst\EncogAnalyst.cs:line 778
at Encog.App.Analyst.EncogAnalyst.Save(FileInfo file) in d:\dev\MyProject\encog-core-cs\App\Analyst\EncogAnalyst.cs:line 749
at MyProject.Normalize(FilesInfo filesInfo) in d:\dev\MyProject\EncogFrameworkLogic\AuxiliaryMethods.cs:line 68
at MyProject.TrainNetwork() in d:\dev\MyProject\EncogFrameworkLogic\MainClass.cs:line 37
at TALibAnalyzer.Form1.<button2_Click>b__0() in d:\dev\MyProject\Gui\Form1.cs:line 43
at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()

Best regards!

Nuget - NuSpec

Hi,

I notice the NuGet Gallery does not have the latest version. Could we add a NuSpec file to the C# core project and have the build server automatically run the unit tests, build, and then deploy to NuGet? That way the NuGet package would be fully automated.

I can help with this; we could use TeamCity, for example. :)

Possible inconsistent namespacing

It seems that your Greedy() class lives in the namespace Encog.ML.Train.Strategy, whilst your SmartLearningRate() lives in Encog.Neural.Networks.Training.Strategy.

Was this intentional?

Minor bug in ResourceLoader.CreateStream

A project containing a dynamic assembly (such as one using Entity Framework) causes an exception in the CreateStream() method of the ResourceLoader class. For example, if I access a database using Entity Framework and subsequently call an Encog method that invokes CreateStream(), the GetManifestResourceStream() call throws an exception (due to the Anonymously Hosted DynamicMethods Assembly). I've created a workaround/fix for this in the encog-dotnet-core-3.2.0-beta2 source code as follows:

public static Stream CreateStream(String resource)
{
    Stream result = null;
    Assembly[] assemblies = AppDomain.CurrentDomain.GetAssemblies();
    foreach (Assembly a in assemblies)
    {
        if (a.IsDynamic) // this is the fix
            continue;
        result = a.GetManifestResourceStream(resource);
        if (result != null)
            break;
    }
    return result;
}
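The same fix can be expressed more compactly with LINQ; a runnable sketch (behaviour unchanged — dynamic assemblies are skipped up front):

```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;

static class ResourceLoaderSketch
{
    // Skip dynamic assemblies, then return the first matching resource stream.
    public static Stream CreateStream(string resource)
    {
        return AppDomain.CurrentDomain.GetAssemblies()
            .Where(a => !a.IsDynamic)
            .Select(a => a.GetManifestResourceStream(resource))
            .FirstOrDefault(s => s != null);
    }

    static void Main()
    {
        // No such resource exists, so the method falls through to null
        // without ever touching a dynamic assembly.
        Console.WriteLine(CreateStream("no.such.resource") == null);
    }
}
```

Either form avoids calling GetManifestResourceStream on InternalAssemblyBuilder, which is what throws.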

Problem in C# with generated code from workbench 3.2.0

I have a problem in C# with code generated from Workbench 3.2.0.
When I try to debug the code, I get an exception with this stack trace:

in Encog.Persist.EncogDirectoryPersistence.LoadObject(Stream mask0)
in Encog.Persist.EncogDirectoryPersistence.LoadObject(FileInfo file)
in AddestramentoSOM.Program.Main(String[] args) in c:\Users\Carmine\Documents\Visual Studio 2012\Projects\AddestramentoSOM\AddestramentoSOM\Program.cs:line 25

and this message

Do not know how to read the object: SOM

I think I found where the problem is. The header of the .eg file is

encog,SOM,java,3.2.0,1,1412937518137

but in Encog.Persist.EncogDirectoryPersistence.LoadObject(Stream mask0), this part of the code

IEncogPersistor persistor = new PersistSOM();
string p = persistor.PersistClassString;

gives p the value "SOMNetwork". So when LoadObject(Stream mask0) reaches

IEncogPersistor p = PersistorRegistry.Instance.GetPersistor(name);

p ends up null, because name is "SOM" rather than "SOMNetwork", and I get the aforementioned exception.
