sciml / reservoircomputing.jl

Reservoir computing utilities for scientific machine learning (SciML)

Home Page: https://docs.sciml.ai/ReservoirComputing/stable/

License: MIT License

Julia 100.00%
reservoir-computing echo-state-networks rnn machine-learning differential-equations scientific-machine-learning sciml julia

reservoircomputing.jl's Introduction

ReservoirComputing.jl


ReservoirComputing.jl provides an efficient, modular, and easy-to-use implementation of reservoir computing models such as Echo State Networks (ESNs). For information on using this package, please refer to the stable documentation. Use the in-development documentation to take a look at not-yet-released features.

Quick Example

To illustrate the workflow of this library, we will show how to train an ESN to learn the dynamics of the Lorenz system. As a first step we need to gather the data. For generative prediction, the target data must be one step ahead of the training data:

using ReservoirComputing, OrdinaryDiffEq

#lorenz system parameters
u0 = [1.0, 0.0, 0.0]
tspan = (0.0, 200.0)
p = [10.0, 28.0, 8 / 3]

#define lorenz system
function lorenz(du, u, p, t)
    du[1] = p[1] * (u[2] - u[1])
    du[2] = u[1] * (p[2] - u[3]) - u[2]
    du[3] = u[1] * u[2] - p[3] * u[3]
end
#solve and take data
prob = ODEProblem(lorenz, u0, tspan, p)
data = solve(prob, ABM54(), dt = 0.02)

shift = 300
train_len = 5000
predict_len = 1250

#one step ahead for generative prediction
input_data = data[:, shift:(shift + train_len - 1)]
target_data = data[:, (shift + 1):(shift + train_len)]

test = data[:, (shift + train_len):(shift + train_len + predict_len - 1)]

Now that we have the data, we can initialize the ESN with the chosen parameters. Since this is a quick example, we change as few parameters as possible. For more detailed examples and explanations of the functions, please refer to the documentation.

input_size = 3
res_size = 300
esn = ESN(input_data, input_size, res_size;
    reservoir = rand_sparse(; radius = 1.2, sparsity = 6 / res_size),
    input_layer = weighted_init,
    nla_type = NLAT2())

The echo state network can now be trained and tested. If not specified otherwise, training defaults to ordinary least squares regression. The full range of training methods is detailed in the documentation.

output_layer = train(esn, target_data)
output = esn(Generative(predict_len), output_layer)

The prediction is returned as a matrix, output in the code above, containing the predicted trajectories. The results can now be easily plotted (for the actual script used to obtain this plot, please refer to the documentation):

using Plots
plot(transpose(output), layout = (3, 1), label = "predicted")
plot!(transpose(test), layout = (3, 1), label = "actual")

(figure lorenz_basic: predicted vs. actual Lorenz trajectories)

One can also visualize the phase space of the attractor and the comparison with the actual one:

plot(transpose(output)[:, 1],
    transpose(output)[:, 2],
    transpose(output)[:, 3],
    label = "predicted")
plot!(transpose(test)[:, 1], transpose(test)[:, 2], transpose(test)[:, 3], label = "actual")

(figure lorenz_attractor: predicted vs. actual attractor in phase space)

Citing

If you use this library in your work, please cite:

@article{JMLR:v23:22-0611,
  author  = {Francesco Martinuzzi and Chris Rackauckas and Anas Abdelrehim and Miguel D. Mahecha and Karin Mora},
  title   = {ReservoirComputing.jl: An Efficient and Modular Library for Reservoir Computing Models},
  journal = {Journal of Machine Learning Research},
  year    = {2022},
  volume  = {23},
  number  = {288},
  pages   = {1--8},
  url     = {http://jmlr.org/papers/v23/22-0611.html}
}

Acknowledgements

This project was made possible thanks to initial funding through the Google Summer of Code 2020 program. Francesco M. further acknowledges ScaDS.AI and RSC4Earth for supporting the current progress on the library.

reservoircomputing.jl's People

Contributors

00krishna, anasabdelr, arnostrouwen, chrisrackauckas, christopher-dg, cmerkatas, dependabot[bot], devmotion, github-actions[bot], jamesjscully, jay-sanjay, justtestingtho, kanav99, martinuzzifrancesco, mkg33, natema, pallharaldsson, ranocha, thazhemadam, viralbshah


reservoircomputing.jl's Issues

Update docs

It seems some improvements have not made their way into the API docs; stating them more clearly is necessary for clarity on the use of some functions. Example

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Division by zero in init_reservoir_givendeg()

When using init_reservoir_givendeg with a small res_size in combination with a low degree, it is possible to divide by zero.

In init_reservoir_givendeg rho_w can be zero if all eigenvalues of W are zero.

rho_w = maximum(abs.(eigvals(W)))

In that case init_reservoir_givendeg will return a W containing NaN or ±Inf.

My suggestion: raise an error if rho_w is zero, or regenerate W until rho_w is nonzero.

The same issue is true for init_reservoir_givensp.
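
A minimal sketch of the suggested fix, using a hypothetical helper (safe_spectral_rescale is not part of the package): regenerate W until its spectral radius is nonzero, so the rescaling step never divides by zero.

using LinearAlgebra

function safe_spectral_rescale(make_W; radius = 1.2, max_tries = 10)
    for _ in 1:max_tries
        W = make_W() #make_W draws a fresh random reservoir candidate
        rho_w = maximum(abs.(eigvals(Matrix(W)))) #spectral radius of the candidate
        rho_w > 0 && return W .* (radius / rho_w)
    end
    error("could not generate a reservoir with nonzero spectral radius")
end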

Feature request: interpolation

I'd like to be able to interpolate between two timepoints, where I have measurements before the first timepoint and after the second, with missing data in between. I could just forecast from the first timepoint, but that would not take into account information about the state at the beginning of the second timepoint. Is this something that can be easily accommodated using the fitting machinery?

Novel minimal reservoir computing

What kind of problems is it mostly used for?

Short time series modeling, showing improved results over NGRC (see #108).

Other implementations to know about

No open implementations are available at the moment.

References

Paper

TagBot issue

Describe the bug 🐞

TagBot is failing on the new version (0.9.5), as can be seen here. The version displayed on the project's page is 0.9.3, and adding the package installs 0.9.4.

Feed data to `train` instead of `ESN`

Pretty much the title: the goal is to separate the definition of the model from the actual training. We could also add an instantiate function that runs the data through the ESN to obtain the states without training.

Suggestion for figures: express time axis in Lyapunov time

Hey, awesome to see this in Julia! I remember first learning about these methods at a 2018/2019 meeting where Ed Ott presented their reservoir computing work for chaotic time series prediction, and we presented the work based on TimeseriesPrediction.jl (now published here: https://link.springer.com/article/10.1007/s00332-019-09588-7).

In chaotic time series prediction it is customary to express the time axis in units of the Lyapunov time, 1/λ, with λ the maximum Lyapunov exponent. For many famous systems (e.g. Lorenz-63) you can find its value online, or use the lyapunov function from DynamicalSystems.jl: https://juliadynamics.github.io/DynamicalSystems.jl/dev/chaos/lyapunovs/
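
For example, with the classic Lorenz-63 parameters the maximum Lyapunov exponent is λ ≈ 0.9056, so the time axis of the quick example above could be rescaled roughly as follows (a sketch; dt = 0.02 and predict_len come from the README example, and the λ value is the commonly cited one, not computed here):

lambda_max = 0.9056 #commonly cited value for Lorenz-63; or compute via DynamicalSystems.lyapunov
t_lyap = (0:(predict_len - 1)) .* 0.02 .* lambda_max #time axis in Lyapunov-time units
plot(t_lyap, transpose(output), layout = (3, 1), label = "predicted")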

[Breaking] Unify states modifications

At the moment we have nonlinear algorithms and padding/extension as separate entities, although they both manipulate the states. Since not all manipulations fit neatly into these categories, we should unify the alterations to the states.

Variations of the Echo State Networks

After working for a while on issue SciML/NeuralPDE.jl#34 and on the implementation of an echo state network, I have found a lot of interesting papers regarding possible modifications and improvements of the model.

A direct improvement has been shown by Lun et al. with the proposal of the Double Activation Function Echo State Network (DAF-ESN), capable of improving the results of the ESN in time series prediction.

One of the problems of the original implementation is the prediction of noisy data, examined in this paper by Prater. There it can be seen that using methods other than classical ridge regression for the construction of the output layer yields better results on noisy datasets.

Another problem tackled is robustness in the presence of outliers. Of all the proposed models, three in particular have shown interesting results in this direction:

  • SVESM : Support Vector Echo-State Machine, the basic idea of which consists in replacing the kernel trick with a reservoir trick.

  • ESGP: A Bayesian approach to the computation of the output matrix.

  • RESN: Improvement over the ESGP with a Laplace distribution instead of a Gaussian one.

The creation of the reservoir is also an interesting area of study. For an ESN to work properly, the echo state property (ESP) has to be guaranteed. The necessary condition for the ESP is that the spectral radius of the reservoir matrix be less than 1. The standard procedure is to create the matrix under determined schemes and then scale it to have spectral radius < 1. In this paper by Yang et al., a new method is proposed for meeting the ESP without scaling the reservoir weights, using singular value decomposition.
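
The standard scaling procedure mentioned above is, in essence, the following (a minimal sketch in plain Julia, not the package's internal code):

using LinearAlgebra, SparseArrays

W = sprandn(300, 300, 6 / 300) #random sparse reservoir candidate
rho = maximum(abs.(eigvals(Matrix(W)))) #current spectral radius (can be zero for very sparse draws, see the earlier issue)
W = W .* (0.9 / rho) #rescale so the spectral radius is below 1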

It is also known that the reservoir performs best when operating at the edge of criticality. An interesting algorithm to guarantee that this condition is met is proposed in the following paper, where we can also see the improvements this control yields.

All the approaches illustrated so far use a sparse random matrix as the reservoir. A curious alternative has been proposed by Yilmaz, in which the reservoir consists of cellular automata rules. Both one-dimensional and two-dimensional rules are explored, with good results.

These are just a few examples of the different improvements that can be found in the literature regarding Echo State Networks. The study of this model is still somewhat in its early days, but it is already showing better results in chaotic prediction than other machine learning methods, as shown by Chattopadhyay et al.

Change `nla()` to reduce allocations

At the moment the nla() function takes an array as input and returns the modified array as output, but this creates a lot of allocations inside for loops. A new function, nla!(), should take the array to be modified and a preallocated array to store the result in. This should remove unneeded allocations and speed up computations.
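
A hypothetical sketch of the proposed in-place version (the transform below, squaring odd-indexed entries, is illustrative only and not one of the actual NLAT definitions):

function nla!(out::AbstractVector, x::AbstractVector)
    @inbounds for i in eachindex(x)
        out[i] = isodd(i) ? x[i]^2 : x[i] #write the transformed states into the buffer
    end
    return out
end

#usage: allocate the buffer once, reuse it inside the loop
#(assumes `states` holds the current state vector)
buf = similar(states)
nla!(buf, states)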

Modeling multiple time series?

The current ESN allows one to model multiple correlated time series, but I was wondering: if one had multiple independent time series (assumed to be generated by the same process), perhaps of different lengths, how would one train an ESN on them using this framework?

Build on LuxCore

A lot of recent issues can be tackled by an internal refactor of the drivers and, more generally, of how the models are defined. Since this refactor is in the plans, it is worth exploring the possibility of building on LuxCore.jl to provide a familiar interface to SciML users. I will update this issue with ideas on how to build the components.

Add external dependencies for training strategies

Packages like LIBSVM or MLJLinearModels can be external (weak) dependencies.

Add a lower-level function that takes the place of _train, structured like train(algo::TrainingAlgo, states, target; kwargs...). The upper-level function still maintains train(esn::AbstractReservoirComputer, target_data, training_algo; kwargs...). Additionally, include a version with the signature train(esn::AbstractReservoirComputer, training_data, target_data, training_algo; kwargs...) to prepare for #202.
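
Spelled out as Julia stubs, the proposed hierarchy would look roughly like this (names taken from the issue text; bodies omitted in this sketch):

#new low-level entry point
function train(algo::TrainingAlgo, states, target; kwargs...) end
#current upper-level API, unchanged
function train(esn::AbstractReservoirComputer, target_data, algo; kwargs...) end
#additional version preparing for #202
function train(esn::AbstractReservoirComputer, train_data, target_data, algo; kwargs...) end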

Fitted states for ESN?

While there's an ESNpredict function, and it's possible to add the exact input state onto the state of the ESN, would it be possible to add an ESNfitted function that returns the (predicted) states for the input/training data?

Extend to count/binary data using GLMs?

I don't know much about ESNs beyond the basics, but I was wondering whether there are any barriers to switching from linear models to generalized linear models to allow time series of e.g. count data to be considered? This could involve using MLJGLMInterface.jl, though it doesn't have penalized models, or GLMNet.jl, which does have penalized models but may be more work to link, given the current codebase links to MLJLinearModels.jl.

Example from Readme non-working

Hey there,

I was just testing your reservoir implementation, and the example presented in the README is not working for me.

There are some typos there: a missing comma in the ESN call, and NLAT2() is used before the using ReservoirComputing.
The main problem, though, is that I don't get a proper result.

So the example from the readme:

using ParameterizedFunctions
using OrdinaryDiffEq
using ReservoirComputing


#lorenz system parameters
u0 = [1.0,0.0,0.0]
tspan = (0.0,200.0)
p = [10.0,28.0,8/3]
#define lorenz system
function lorenz(du,u,p,t)
    du[1] = p[1]*(u[2]-u[1])
    du[2] = u[1]*(p[2]-u[3]) - u[2]
    du[3] = u[1]*u[2] - p[3]*u[3]
end
#solve and take data
prob = ODEProblem(lorenz, u0, tspan, p)
sol = solve(prob, AB4(), dt=0.02)
v = sol.u
data = Matrix(hcat(v...))
shift = 1
train_len = 5000
predict_len = 1250
train = data[:, shift:shift+train_len-1]
test = data[:, train_len:train_len+predict_len-1]

approx_res_size = 300
radius = 1.2
degree = 6
activation = tanh
sigma = 0.1
beta = 0.0
alpha = 1.0
nla_type = NLAT2()
extended_states = false

esn = ESN(approx_res_size,
    train,
    degree,
    radius,
    activation, #default = tanh
    alpha, #default = 1.0
    sigma, #default = 0.1
    nla_type, #default = NLADefault(),
    extended_states #default = false
    )

W_out = ESNtrain(esn, beta)
output = ESNpredict(esn, predict_len, W_out)

using Plots
plot(transpose(output),layout=(3,1),label="predicted")
plot!(transpose(test),layout=(3,1), xlims=[0,500], label="actual")

Returns

(Each time I run it, the output looks different, but it is never close to the Lorenz attractor.)
(figure reservoirtest: the non-working prediction)

Access states during/after prediction

It could be useful to access the states during prediction, with either Generative() or Predictive(). This could be implemented as a boolean toggle save_states, in which case the output of the prediction would be a tuple with the prediction and the states.
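
Under the proposal, the call site might look like this (hypothetical keyword, not currently implemented):

output, states = esn(Generative(predict_len), output_layer; save_states = true)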

Error when loading in julia 1.6.0-rc1

Calling using ReservoirComputing gives the following errors:

julia> using ReservoirComputing
ERROR: InitError: UndefVarError: Params not defined
Stacktrace:
  [1] top-level scope
    @ none:1
  [2] eval
    @ ./boot.jl:360 [inlined]
  [3] define_typeinf_code2()
    @ IRTools.Inner ~/.julia/packages/IRTools/BpoqK/src/reflection/reflection.jl:21
  [4] __init__()
    @ IRTools.Inner ~/.julia/packages/IRTools/BpoqK/src/IRTools.jl:37
  [5] _include_from_serialized(path::String, depmods::Vector{Any})
    @ Base ./loading.jl:670
  [6] _require_search_from_serialized(pkg::Base.PkgId, sourcepath::String)
    @ Base ./loading.jl:756
  [7] _tryrequire_from_serialized(modkey::Base.PkgId, build_id::UInt64, modpath::String)
    @ Base ./loading.jl:685
  [8] _require_search_from_serialized(pkg::Base.PkgId, sourcepath::String)
    @ Base ./loading.jl:745
  [9] _tryrequire_from_serialized(modkey::Base.PkgId, build_id::UInt64, modpath::String)
    @ Base ./loading.jl:685
 [10] _require_search_from_serialized(pkg::Base.PkgId, sourcepath::String)
    @ Base ./loading.jl:745
 [11] _require(pkg::Base.PkgId)
    @ Base ./loading.jl:994
 [12] require(uuidkey::Base.PkgId)
    @ Base ./loading.jl:910
 [13] require(into::Module, mod::Symbol)
    @ Base ./loading.jl:897
during initialization of module Inner

julia> using ReservoirComputing
ERROR: InitError: LoadError: KeyError: key %9 not found
Stacktrace:
  [1] getindex
    @ ./dict.jl:482 [inlined]
  [2] substitute(p::IRTools.Inner.Pipe, x::Union{IRTools.Inner.NewVariable, IRTools.Inner.Variable})
    @ IRTools.Inner ~/.julia/packages/IRTools/BpoqK/src/ir/ir.jl:853
  [3] _broadcast_getindex_evalf
    @ ./broadcast.jl:648 [inlined]
  [4] _broadcast_getindex
    @ ./broadcast.jl:621 [inlined]
  [5] getindex
    @ ./broadcast.jl:575 [inlined]
  [6] copyto_nonleaf!(dest::Vector{Zygote.Alpha}, bc::Base.Broadcast.Broadcasted{Base.Broadcast.DefaultArrayStyle{1}, Tuple{Base.OneTo{Int64}}, typeof(IRTools.Inner.substitute), Tuple{Tuple{IRTools.Inner.Pipe}, Base.Broadcast.Extruded{Vector{Any}, Tuple{Bool}, Tuple{Int64}}}}, iter::Base.OneTo{Int64}, state::Int64, count::Int64)
    @ Base.Broadcast ./broadcast.jl:1078
  [7] copy
    @ ./broadcast.jl:930 [inlined]
  [8] materialize
    @ ./broadcast.jl:883 [inlined]
  [9] substitute(p::IRTools.Inner.Pipe, x::Expr)
    @ IRTools.Inner ~/.julia/packages/IRTools/BpoqK/src/ir/ir.jl:856
 [10] substitute(p::IRTools.Inner.Pipe, x::IRTools.Inner.Statement)
    @ IRTools.Inner ~/.julia/packages/IRTools/BpoqK/src/ir/ir.jl:855
 [11] iterate(p::IRTools.Inner.Pipe, ::Any)
    @ IRTools.Inner ~/.julia/packages/IRTools/BpoqK/src/ir/ir.jl:892
 [12] renumber(ir::Any)
    @ IRTools.Inner ~/.julia/packages/IRTools/BpoqK/src/passes/passes.jl:124
 [13] |>
    @ ./operators.jl:859 [inlined]
 [14] Zygote.Adjoint(ir::IRTools.Inner.IR; varargs::Nothing, normalise::Bool)
    @ Zygote ~/.julia/packages/Zygote/KNUTW/src/compiler/reverse.jl:287
 [15] top-level scope
    @ ~/.julia/packages/Zygote/KNUTW/src/precompile.jl:15
 [16] include(mod::Module, _path::String)
    @ Base ./Base.jl:386
 [17] include
    @ ~/.julia/packages/Zygote/KNUTW/src/Zygote.jl:1 [inlined]
 [18] precompile
    @ ~/.julia/packages/Zygote/KNUTW/src/Zygote.jl:46 [inlined]
 [19] (::Zygote.var"#1837#1838")()
    @ Zygote ~/.julia/packages/Requires/035xH/src/init.jl:11
 [20] __init__()
    @ Zygote ~/.julia/packages/Requires/035xH/src/init.jl:18
 [21] _include_from_serialized(path::String, depmods::Vector{Any})
    @ Base ./loading.jl:670
 [22] _require_search_from_serialized(pkg::Base.PkgId, sourcepath::String)
    @ Base ./loading.jl:756
 [23] _tryrequire_from_serialized(modkey::Base.PkgId, build_id::UInt64, modpath::String)
    @ Base ./loading.jl:685
 [24] _require_search_from_serialized(pkg::Base.PkgId, sourcepath::String)
    @ Base ./loading.jl:745
 [25] _require(pkg::Base.PkgId)
    @ Base ./loading.jl:994
 [26] require(uuidkey::Base.PkgId)
    @ Base ./loading.jl:910
 [27] require(into::Module, mod::Symbol)
    @ Base ./loading.jl:897
in expression starting at /home/gabrielbaraldi/.julia/packages/Zygote/KNUTW/src/precompile.jl:15
during initialization of module Zygote

If I call using ReservoirComputing for a third time it just works.

This is the version I am using

Julia Version 1.6.0-rc1
Commit a58bdd9010 (2021-02-06 15:49 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-11.0.1 (ORCJIT, skylake)

Tests are failing for Julia 1.7

Between 1.6 and 1.7 something changed in random number generation. The tests are currently run using the LTS version (1.6) but fail on 1.7.

Ideally, the tests should be robust enough to run on both versions.

RecursiveArrayTools v3 compatibility

I am receiving a MethodError when attempting to run the most basic example provided with ReservoirComputing.jl. The MethodError occurs when invoking ESN:


julia> esn = ESN(input_data;
reservoir = RandSparseReservoir(res_size, radius = 1.2, sparsity = 6 / res_size),
input_layer = WeightedLayer(),
nla_type = NLAT2())
ERROR: MethodError: no method matching size(::RecursiveArrayTools.DiffEqArray{Float64, 2, Vector{Vector{Float64}}, Vector{Float64}, Vector{Float64}, ODEFunction{true, SciMLBase.AutoSpecialize,

Full details including package and version numbers and terminal I/O is provided below.
I look forward to your feedback and guidance in resolving this issue!

nick@localhost-live:~$ julia
(Julia 1.9.4 startup banner; versioninfo() output follows)

julia> versioninfo()
Julia Version 1.9.4
Commit 8e5136fa297 (2023-11-14 08:46 UTC)
Build Info:
Official https://julialang.org/ release
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 12 × AMD Ryzen 5 1600 Six-Core Processor
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-14.0.6 (ORCJIT, znver1)
Threads: 1 on 12 virtual cores

(@v1.9) pkg> status
Status ~/.julia/environments/v1.9/Project.toml
[1dea7af3] OrdinaryDiffEq v6.63.0
[91a5bcdd] Plots v1.39.0
[731186ca] RecursiveArrayTools v3.1.0
[7c2d2b1e] ReservoirComputing v0.9.5

julia> using ReservoirComputing, OrdinaryDiffEq

julia> #lorenz system parameters
u0 = [1.0, 0.0, 0.0]
3-element Vector{Float64}:
1.0
0.0
0.0

julia> tspan = (0.0, 200.0)
(0.0, 200.0)

julia> p = [10.0, 28.0, 8 / 3]
3-element Vector{Float64}:
10.0
28.0
2.6666666666666665

julia> #define lorenz system
function lorenz(du, u, p, t)
du[1] = p[1] * (u[2] - u[1])
du[2] = u[1] * (p[2] - u[3]) - u[2]
du[3] = u[1] * u[2] - p[3] * u[3]
end
lorenz (generic function with 1 method)

julia> #solve and take data
prob = ODEProblem(lorenz, u0, tspan, p)
ODEProblem with uType Vector{Float64} and tType Float64. In-place: true
timespan: (0.0, 200.0)
u0: 3-element Vector{Float64}:
1.0
0.0
0.0

julia> data = solve(prob, ABM54(), dt = 0.02)
retcode: Success
Interpolation: 3rd order Hermite
t: 10001-element Vector{Float64}:
0.0
0.02
0.04
0.06
0.08
0.1
0.12000000000000001
0.14
0.16
0.18

199.84000000002843
199.86000000002844
199.88000000002845
199.90000000002846
199.92000000002847
199.94000000002848
199.9600000000285
199.9800000000285
200.0
u: 10001-element Vector{Vector{Float64}}:
[1.0, 0.0, 0.0]
[0.8680136241599999, 0.5116114289171798, 0.00464843405046073]
[0.846922029853721, 0.9721624694997062, 0.016723754482887952]
[0.9127463182632621, 1.4365930707725014, 0.03638911365118479]
[1.0547161536343206, 1.9492406388218464, 0.06682954770912672]
[1.27153139719415, 2.5500972526141688, 0.11416945073599366]
[1.569344101287243, 3.2792182194088135, 0.18865896722509617]
[1.9607159493604909, 4.179979661504116, 0.30675850312612357]
[2.4642887133184064, 5.301306601644064, 0.4946463038435349]
[3.1048014427360577, 6.698665411076851, 0.7936722317742436]

[-12.712535028050192, -7.991920612082245, 36.96745201041986]
[-11.636377668987166, -5.691148385936144, 36.66624998028994]
[-10.372587430370148, -3.7766460900344607, 35.77093362370175]
[-9.034083394619811, -2.325994687676812, 34.4844534202631]
[-7.718171209563215, -1.3249744565922135, 32.98711594710444]
[-6.496311609072913, -0.7054966719917979, 31.41147109034457]
[-5.412475842259617, -0.3802159215457954, 29.84101721524591]
[-4.486956988649644, -0.26529770021033905, 28.320947150047015]
[-3.722463627032463, -0.2914952957436106, 26.87140831430775]

julia> shift = 300
300

julia> train_len = 5000
5000

julia> predict_len = 1250
1250

julia> #one step ahead for generative prediction
input_data = data[:, shift:(shift + train_len - 1)]
t: 5000-element Vector{Float64}:
5.9799999999999605
5.99999999999996
6.01999999999996
6.039999999999959
6.059999999999959
6.079999999999958
6.099999999999958
6.1199999999999575
6.139999999999957
6.159999999999957

105.79999999999609
105.81999999999609
105.83999999999608
105.85999999999608
105.87999999999607
105.89999999999607
105.91999999999607
105.93999999999606
105.95999999999606
u: 5000-element Vector{Vector{Float64}}:
[-9.903289241214775, -9.018088588651105, 29.81517777140388]
[-9.691839850827593, -8.473011482849701, 29.93605690750639]
[-9.420486660590333, -7.939065463718113, 29.90843259211291]
[-9.104880950241373, -7.4449797108850175, 29.742036297886404]
[-8.762474089642671, -7.01314735768699, 29.454019305573695]
[-8.410935717615619, -6.658702746055927, 29.06608913077009]
[-8.066811666191718, -6.389828805905225, 28.601842075390245]
[-7.744566849246501, -6.2089080639741825, 28.084693027483976]
[-7.456053443994025, -6.11406609169808, 27.53654671977279]
[-7.210360712173874, -6.1007246144663565, 26.977154921025395]

[12.518922134483173, 10.108387400259105, 34.677601864758]
[11.905676310941601, 8.232629715112148, 35.05977814254804]
[11.071914312382939, 6.472053614446601, 34.882263966926644]
[10.092017839786998, 4.960342280071891, 34.24498656635422]
[9.04494324433941, 3.7654618681658447, 33.275347750522876]
[8.001895218860973, 2.8967891643506336, 32.09636572944009]
[7.018825886840775, 2.3239810199099424, 30.808116692367136]
[6.13395766702855, 1.9972030630363637, 29.48232612752881]
[5.3690197189757916, 1.862732065250123, 28.165385874059083]

julia> target_data = data[:, (shift + 1):(shift + train_len)]
t: 5000-element Vector{Float64}:
5.99999999999996
6.01999999999996
6.039999999999959
6.059999999999959
6.079999999999958
6.099999999999958
6.1199999999999575
6.139999999999957
6.159999999999957
6.179999999999956

105.81999999999609
105.83999999999608
105.85999999999608
105.87999999999607
105.89999999999607
105.91999999999607
105.93999999999606
105.95999999999606
105.97999999999605
u: 5000-element Vector{Vector{Float64}}:
[-9.691839850827593, -8.473011482849701, 29.93605690750639]
[-9.420486660590333, -7.939065463718113, 29.90843259211291]
[-9.104880950241373, -7.4449797108850175, 29.742036297886404]
[-8.762474089642671, -7.01314735768699, 29.454019305573695]
[-8.410935717615619, -6.658702746055927, 29.06608913077009]
[-8.066811666191718, -6.389828805905225, 28.601842075390245]
[-7.744566849246501, -6.2089080639741825, 28.084693027483976]
[-7.456053443994025, -6.11406609169808, 27.53654671977279]
[-7.210360712173874, -6.1007246144663565, 26.977154921025395]
[-7.013953305940451, -6.162911685030047, 26.423996356199307]

[11.905676310941601, 8.232629715112148, 35.05977814254804]
[11.071914312382939, 6.472053614446601, 34.882263966926644]
[10.092017839786998, 4.960342280071891, 34.24498656635422]
[9.04494324433941, 3.7654618681658447, 33.275347750522876]
[8.001895218860973, 2.8967891643506336, 32.09636572944009]
[7.018825886840775, 2.3239810199099424, 30.808116692367136]
[6.13395766702855, 1.9972030630363637, 29.48232612752881]
[5.3690197189757916, 1.862732065250123, 28.165385874059083]
[4.732497640190652, 1.8723908301054109, 26.88478061861816]

julia> test = data[:, (shift + train_len):(shift + train_len + predict_len - 1)]

t: 1250-element Vector{Float64}:
105.97999999999605
105.99999999999605
106.01999999999605
106.03999999999604
106.05999999999604
106.07999999999603
106.09999999999603
106.11999999999603
106.13999999999602
106.15999999999602

130.7999999999931
130.81999999999312
130.83999999999313
130.85999999999314
130.87999999999315
130.89999999999316
130.91999999999317
130.93999999999318
130.9599999999932
u: 1250-element Vector{Vector{Float64}}:
[4.732497640190652, 1.8723908301054109, 26.88478061861816]
[4.22349538680031, 1.987820915806937, 25.655613923905776]
[3.8353316780542905, 2.181372263125947, 24.485767387745945]
[3.5584604091713943, 2.4351927068305277, 23.37943997904429]
[3.3826188986546626, 2.739600649501364, 22.339370937107052]
[3.2982766454142727, 3.0913593199467937, 21.368185196603726]
[3.2975239355256036, 3.492142131634464, 20.46924395963976]
[3.3745467566607004, 3.947276445140908, 19.647275341036046]
[3.5258121339502693, 4.464737468128744, 18.908963446114747]
[3.7500536807441187, 5.054297346775937, 18.263604797307796]

[5.667284334594522, 7.920044709382794, 21.837292439416736]
[6.128504560099492, 8.495628927065395, 21.64604924227314]
[6.616442143819489, 9.132648149983781, 21.6160067677496]
[7.1357622016914535, 9.812756425953655, 21.762471295222465]
[7.686353276111834, 10.510880836564114, 22.099963839944905]
[8.262833251743796, 11.193794316023716, 22.639394570288623]
[8.853950777066231, 11.819492971445888, 23.384189012804384]
[9.442118229459183, 12.338166583539232, 24.3255347922894]
[10.003421378477109, 12.695609055900354, 25.43743234339266]

julia> res_size = 300
300

julia> esn = ESN(input_data;
reservoir = RandSparseReservoir(res_size, radius = 1.2, sparsity = 6 / res_size),
input_layer = WeightedLayer(),
nla_type = NLAT2())
ERROR: MethodError: no method matching size(::RecursiveArrayTools.DiffEqArray{Float64, 2, Vector{Vector{Float64}}, Vector{Float64}, Vector{Float64}, ODEFunction{true, SciMLBase.AutoSpecialize, FunctionWrappersWrappers.FunctionWrappersWrapper{Tuple{FunctionWrappers.FunctionWrapper{Nothing, Tuple{Vector{Float64}, Vector{Float64}, Vector{Float64}, Float64}}, FunctionWrappers.FunctionWrapper{Nothing, Tuple{Vector{ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}, Vector{Float64}, Float64}}, FunctionWrappers.FunctionWrapper{Nothing, Tuple{Vector{ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}, Vector{Float64}, Vector{Float64}, ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}}, FunctionWrappers.FunctionWrapper{Nothing, Tuple{Vector{ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}, Vector{Float64}, ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}}}, false}, LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, SymbolicIndexingInterface.SymbolCache{Nothing, Nothing, Nothing}}}, ::Int64)

Closest candidates are:
size(::Union{LinearAlgebra.QR, LinearAlgebra.QRCompactWY, LinearAlgebra.QRPivoted}, ::Integer)
@ LinearAlgebra ~/.julia/juliaup/julia-1.9.4+0.x64.linux.gnu/share/julia/stdlib/v1.9/LinearAlgebra/src/qr.jl:581
size(::Union{LinearAlgebra.QRCompactWYQ, LinearAlgebra.QRPackedQ}, ::Integer)
@ LinearAlgebra ~/.julia/juliaup/julia-1.9.4+0.x64.linux.gnu/share/julia/stdlib/v1.9/LinearAlgebra/src/qr.jl:583
size(::Union{LinearAlgebra.Cholesky, LinearAlgebra.CholeskyPivoted}, ::Integer)
@ LinearAlgebra ~/.julia/juliaup/julia-1.9.4+0.x64.linux.gnu/share/julia/stdlib/v1.9/LinearAlgebra/src/cholesky.jl:513
...

Stacktrace:
[1] ESN(train_data::RecursiveArrayTools.DiffEqArray{Float64, 2, Vector{Vector{Float64}}, Vector{Float64}, Vector{Float64}, ODEFunction{true, SciMLBase.AutoSpecialize, FunctionWrappersWrappers.FunctionWrappersWrapper{Tuple{FunctionWrappers.FunctionWrapper{Nothing, Tuple{Vector{Float64}, Vector{Float64}, Vector{Float64}, Float64}}, FunctionWrappers.FunctionWrapper{Nothing, Tuple{Vector{ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}, Vector{Float64}, Float64}}, FunctionWrappers.FunctionWrapper{Nothing, Tuple{Vector{ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}, Vector{Float64}, Vector{Float64}, ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}}, FunctionWrappers.FunctionWrapper{Nothing, Tuple{Vector{ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}, Vector{Float64}, ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1}}}}, false}, LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, SymbolicIndexingInterface.SymbolCache{Nothing, Nothing, Nothing}}}; variation::Default, input_layer::WeightedLayer{Float64}, reservoir::RandSparseReservoir{Float64, Float64}, bias::NullLayer, reservoir_driver::RNN{typeof(NNlib.tanh_fast), Float64}, nla_type::NLAT2, states_type::StandardStates, washout::Int64, matrix_type::Type)
@ ReservoirComputing ~/.julia/packages/ReservoirComputing/ptuHU/src/esn/echostatenetwork.jl:112
[2] top-level scope
@ REPL[17]:1

julia>

Adding capability of handling models with control inputs

Hi!

I was looking through the documentation and realized there was no implementation for creating a ESN model with control inputs. Wanted to also do a PR for this if it isn't already in the works (or if I somehow missed it).

Saving/Loading ESN via JLD2.jl gives error

I am training some ESNs and want to save them and load them again (preferably in another script) to make a prediction. After I failed to save/load with JLD.jl, I am now using JLD2.jl. Saving apparently works; however, when I load the ESN again and try to make the prediction, I get the following error:

ERROR: MethodError: objects of type JLD2.JLDFile{JLD2.MmapIO} are not callable.

A minimal example (basically the example from the docs plus saving/loading):

using OrdinaryDiffEq
using JLD2 

#define lorenz system
function lorenz!(du,u,p,t)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
end

#solve and take data
prob = ODEProblem(lorenz!, [1.0,0.0,0.0], (0.0,200.0))
data = solve(prob, ABM54(), dt=0.02)


#determine shift length, training length and prediction length
shift = 300
train_len = 5000
predict_len = 1250

#split the data accordingly
input_data = data[:, shift:shift+train_len-1]
target_data = data[:, shift+1:shift+train_len]
test_data = data[:,shift+train_len+1:shift+train_len+predict_len]


using ReservoirComputing

#define ESN parameters
res_size = 300
res_radius = 1.2
res_sparsity = 6/300
input_scaling = 0.1

#build ESN struct
esn = ESN(input_data;
    variation = Default(),
    reservoir = RandSparseReservoir(res_size, radius=res_radius, sparsity=res_sparsity),
    input_layer = WeightedLayer(scaling=input_scaling),
    reservoir_driver = RNN(),
    nla_type = NLADefault(),
    states_type = StandardStates())

#define training method
training_method = StandardRidge(0.0)

#obtain output layer
output_layer = train(esn, target_data, training_method)

#saving + loading 
jldsave("W_1.jld2"; output_layer)
jldsave("esn_1.jld2"; esn)
esn = jldopen("esn_1.jld2")
Wₒᵤₜ = jldopen("W_1.jld2")
output = esn(Generative(predict_len), output_layer)

Am I doing something wrong? Or is there an easier way of saving/loading the ESNs?

Can not run examples code

Dear ReservoirComputing developers,
I tried to run the Lorenz test case from the README.md, but I am facing an issue. I copied and pasted the code exactly, without any modification; however, the code throws an error at the line nla_type = NLAT2(): it cannot recognize NLAT2(). Attached is the exact screenshot of the error message.


I hope you can help me figure out what I should do to resolve this error. I tried changing the nla_type to NLADefault(), but it still shows an error similar to the one in the screenshot.
Regards,
Abrari

Incorporating prior knowledge into ESNs using hybrid ESN

Hi,

This paper demonstrates how prior knowledge of the dynamics can be incorporated into the ESN architecture:
https://arxiv.org/pdf/1803.04779.pdf

It had some interesting results produced in the paper below using a reduced order model of a physical process:
https://arxiv.org/pdf/2001.04027.pdf

The basic idea is that instead of having an input weight matrix of dims (D_r, M) into the reservoir (where D_r is the dimension of the reservoir and M is the dimension of the states), you have (D_r, M + K), where K is the output dimension of the prior knowledge model. The initialisation of the input weight connections is configured such that some fraction γ of the reservoir nodes are connected to the raw inputs, and the rest are connected to the prior knowledge model. The same is also done for the output readout.

They use an Erdős-Rényi network for the reservoir, and an additional modification to the reservoir state outputs that empirically helped their situation, but I don't think that will be needed in the general case.
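
A minimal sketch of the hybrid input layer described above (a hypothetical helper, not the package API):

function hybrid_input_layer(D_r, M, K; gamma = 0.5, sigma = 0.1)
    W_in = zeros(D_r, M + K)
    n_raw = round(Int, gamma * D_r) #fraction γ of reservoir nodes see the raw inputs
    for i in 1:D_r
        j = i <= n_raw ? rand(1:M) : M + rand(1:K) #one nonzero entry per row
        W_in[i, j] = sigma * (2rand() - 1) #uniform weight in [-σ, σ]
    end
    return W_in
end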

Easier GPU calculations and docs

At the moment GPU computation is possible with the package, though not as easy as it should be. The workflow should be that the user inputs the data defined as a CuArray with a chosen precision, and ReservoirComputing does everything else, creating the layers and reservoir as CuArrays with the right precision as well.

Of course, detailing one simple example in the docs should also be included in this work.
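
The intended workflow might look like this (a sketch assuming CUDA.jl; whether the current release propagates the array type everywhere is exactly what this issue is about):

using CUDA, ReservoirComputing

gpu_data = CuArray{Float32}(input_data) #the user moves the data to the GPU once
esn = ESN(gpu_data, 3, 300) #layers and reservoir should then be built as CuArrays too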

Update GPU example for CI

Using @example blocks in the docs doesn't currently allow for a GPU example; the Mackey-Glass forecasting script is still available here. Going forward, this example needs a way to run GPU compute through CI.

SVESM example error with data_prep function

I've been working through the svesm_svm_noisytraintest.jl example. I get an error when trying to run the data_prep function. All of the code I executed was copy-pasted from the example provided.

MethodError: no method matching embed(::Dataset{1,Float64}, ::Int64, ::Int64)
Closest candidates are:
embed(::AbstractArray{T,1}, ::Any, ::Any) where T at
.julia/packages/DelayEmbeddings/4VbBx/src/embeddings.jl:116
embed(::AbstractArray{T,1}, ::Any, ::Any, ::H) where {T, H} at
.julia/packages/DelayEmbeddings/4VbBx/src/embeddings.jl:116

Stacktrace:
[1] data_prep(::Array{Float64,2}, ::Int64, ::Int64, ::Int64, ::Int64) at ./In[52]:3
[2] top-level scope at In[59]:2
[3] include_string(::Function, ::Module, ::String, ::String) at ./loading.jl:1091

Add `using MKL` on the docs

After a Flux.jl discussion on Slack, I think we should at least indicate the possibility of using MKL to improve performance for CPU calculations. Instead of having using MKL in each example in the docs, it could be more useful to have something like a "performance tips" section on the main documentation page where we mention MKL.
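
The tip itself is a one-liner: loading MKL.jl swaps the default OpenBLAS backend for MKL on supported CPUs, and it is conventionally placed first.

using MKL #swap the BLAS backend before the heavy linear algebra
using ReservoirComputing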

Feature Request: Jacobian along Trajectory

I think it would be very good if we were able to calculate the Jacobian of the system along the trajectory. Among other advantages, this would allow us to find the Lyapunov exponent spectrum of the attractor. Is this a possibility? I would be willing to help with this in any way possible, if it is feasible.

What's a good way to deal with matrices containing Infs or NANs?

I have been using ReservoirComputing.jl for a while now, and one error I sometimes get is the one about a matrix containing Infs and NaNs. This happens, as far as I know, when the random matrix of the ESN can't be decomposed/factored. I have run an ESN successfully, only to have it crash when I increased/decreased the reservoir size (without changing anything else).

Is there a way to deal with this issue? Is there a known/common workaround?
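
A quick pre-training sanity check in plain Julia (assuming the ESN stores its harvested states in an esn.states field, as the docs' examples do):

if any(!isfinite, esn.states)
    error("reservoir states contain Inf/NaN; try another seed or a different reservoir size")
end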

Integrate with MLJ

Having a function to convert a ReservoirComputing model into an MLJ machine would give access to a broad spectrum of useful functions and methods.
