
scipy_ode's People

Contributors

aarchiba, bmcage, drhagen, nmayorov


scipy_ode's Issues

Support for complex-valued solvers

A complex-valued ODE can always be treated as a real-valued ODE of dimension 2n, but on the one hand user code can be much simpler if it doesn't have to do the conversion in both the RHS and the solution. On the other hand, solvers with built-in complex support can be more efficient. More to the point, scipy supports the complex-valued ODE solver ZVODE because a complex-valued solver was requested by users. So we should at least think about how one would fit complex output values into the design here.
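For reference, the 2n-dimensional real reformulation that user code currently has to do by hand can be sketched as follows (names are illustrative, not part of any existing API):

```python
import numpy as np

def complex_to_real_rhs(fun):
    """Wrap a complex RHS fun(t, y) so it operates on a real vector of
    length 2n: the first n entries are Re(y), the last n are Im(y)."""
    def real_fun(t, y_real):
        n = y_real.shape[0] // 2
        y = y_real[:n] + 1j * y_real[n:]
        dy = np.asarray(fun(t, y))
        return np.concatenate([dy.real, dy.imag])
    return real_fun

# Example: dy/dt = 1j*y, whose solution is y(t) = y0 * exp(1j*t).
rhs = complex_to_real_rhs(lambda t, y: 1j * y)
```

If the design supported complex y directly, this boilerplate (and the matching unpacking of the solution) would disappear from user code.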

Logic flaws in step selection

Putting it here so we don't forget.

In this line we multiply the theoretical factor by SAFETY, but the idea of SAFETY is to make milder corrections than the asymptotic error behavior dictates. So in this situation SAFETY must be applied the opposite way, as 1 / SAFETY.

Also, we should check that the step is still increasing (or decreasing) after being multiplied (or divided) by SAFETY.

There are several places to check.
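A hedged sketch of the adjustment being proposed, with illustrative constants (this is not the actual scipy_ode code):

```python
SAFETY = 0.9  # illustrative value

def next_step_factor(error_norm, order):
    """error_norm is the error measured relative to the tolerance
    (error_norm > 1 means the tolerance was exceeded)."""
    factor = (1.0 / error_norm) ** (1.0 / (order + 1))  # asymptotic factor
    if factor >= 1.0:
        # Growing the step: SAFETY damps the growth, but must not turn a
        # growth into a shrink.
        factor = max(1.0, factor * SAFETY)
    else:
        # Shrinking the step: 1 / SAFETY makes the shrink milder, but must
        # not turn a shrink into a growth.
        factor = min(1.0, factor / SAFETY)
    return factor
```

The max/min clamps implement the second concern above: the safety-adjusted factor never flips the direction of the correction.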

This repository form

It's not very important, but I suggest putting this repository into a more convenient and straightforward form:

scipy_ode/
  tests/
  __init__.py
  other source files
README.md
.gitignore

It will then be a self-contained repository without the awkward legacy inherited from scipy.integrate. Once it's finished, it will be very easy to include it back into scipy.integrate.

Allowing n-dimensional state

As we discussed in the original thread: what should we do about the shape of y:

  1. Require that y be a 1-dimensional array.
    • Advantages: This simplifies the internal code. I am pretty sure this is by far the most common case, though I am not sure how common scalar y is.
    • Disadvantages: Almost everything in NumPy and SciPy that can broadcast does, and there is a pretty natural extension of broadcasting to ODE states, which this option forgoes.
  2. Allow y to be an n-dimensional array in the convenience functions but not in the solver classes. The convenience functions would reshape their inputs and outputs appropriately to make this transparent to the user.
    • Advantages: Keeps from complicating the hard code (the solvers), while still providing the expected interface to the casual user.
    • Disadvantages: Inconsistencies in broadcast handling. Communication between solvers and integrators requires constant reshaping. This shouldn't affect performance, but it is kind of annoying.
  3. Allow y to be an n-dimensional array in all contexts
    • Advantages: Maximally general. Maximally consistent.
    • Disadvantages: Likely a pain to code, though I have not tried. Might affect performance at the compiled level, but I don't know how important a known array dimension is to things like Cython.
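Option 2 can be sketched as a thin wrapper. Here `solve_flat` is a hypothetical 1-d-only solver returning (t, y) with one flattened state per row; the wrapper reshapes on the way in and out:

```python
import numpy as np

def solve_reshaped(fun, y0, solve_flat):
    y0 = np.asarray(y0)
    shape = y0.shape
    def flat_fun(t, y_flat):
        # Restore the n-d shape for the user's RHS, flatten its output.
        return np.asarray(fun(t, y_flat.reshape(shape))).ravel()
    t, y_flat = solve_flat(flat_fun, y0.ravel())
    # One flattened state per row; restore the original shape per time point.
    return t, y_flat.reshape(y_flat.shape[:1] + shape)

# Stand-in solver: a single Euler step of size 1, for demonstration only.
def euler_once(fun, y0_flat):
    y1 = y0_flat + fun(0.0, y0_flat)
    return np.array([0.0, 1.0]), np.stack([y0_flat, y1])

t, y = solve_reshaped(lambda t, y: -y, np.ones((2, 2)), euler_once)
```

The reshape calls are cheap views, which supports the claim above that this shouldn't affect performance.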

Order of RK methods

I used an incorrect definition of the order in the RK methods; it should be 1 less. In step selection we should then use order + 1.

To clarify: the code is correct, but we are better off following the theoretical conventions.

Order of parameters in signature

I am thinking about what the signature of solve_ivp should be:

  • solve_ivp((t0, tF), y0, fun, ...): This was how @nmayorov had it initially. It is also how Matlab does it.
  • solve_ivp(t0, tF, y0, fun, ...): This is simply the signature above, but with the tuple unpacked. This is how it is right now.
  • solve_ivp(fun, y0, t0, tF, ...): This is most similar to the signature of similar methods in scipy
    • minimize(fun, x0, ...)
    • solve_bvp(fun, bc, x, y, ...)
    • quadrature(fun, a, b, ...)
  • solve_ivp(fun, y0, (t0, tF), ...): Like the one above but with a tuple

I am inclined to use the signature most like the other functions in scipy. At first I thought it was a little backwards (probably because of my experience with Matlab), but now I am starting to like it. Whatever is done, I think the signatures of the solvers should be changed to match.

The advantage of a tuple is that it allows the signature to also accept a scalar, which would be interpreted as tF with a t0=0. I don't think this is worth the complication of the signature. Even though almost everyone will have t0=0, it's not that much typing.

Discussion of the base class and the design in general

Do you really believe that the base class (as it is now) makes writing new solvers easier (including user solvers)? Objectively, the current base class does nothing except introduce some abstractions (I'm not saying that abstraction is never useful).

I have the impression that implementing a new solver is always a special problem, and that you are better off not being restricted by abstractions, interfaces to follow, etc. What really matters is the final result (basically arrays of numbers / solution functions to call). Also consider that we don't know how to fit multistep methods into such a design, or whether it will even be possible.

My point is that maybe an OOP hierarchy is not useful for writing a set of different solvers. Instead, maybe it's better to write the solvers separately (in whatever way is most convenient) but ensure that they all return consistent outputs. This is basically the current design of solve_ivp, but we could use a class for more flexibility (including single stepping).

Generally, I think we are focusing on the wrong things now and not making progress. I feel it is more fruitful to work on specific solvers in isolation, rather than trying to create some universal abstractions which might turn out to be useful. As for the idea of allowing users to write their own solvers, I suggest abandoning it, mainly because I strongly doubt it will give any benefit compared to just writing the code from scratch.

Handle size-0 states

All three solvers crash when given a size-0 y0. It is not clear what the right thing to do here is. The technically correct behavior doesn't depend on the type of the solver at all: immediately return a dummy solution that provides an empty state at all times.

I am not sure if we should implement this or just throw an error on empty states.
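The dummy-solution idea is tiny to implement; a sketch (names illustrative):

```python
import numpy as np

def empty_solution(t0, tF):
    """Return a 'solution' for a size-0 state: empty at every time.
    t0 and tF are accepted only to mirror a real solver's interface."""
    def sol(t):
        t = np.asarray(t)
        # Shape (..., 0): an empty state vector for every queried time.
        return np.empty(t.shape + (0,))
    return sol

sol = empty_solution(0.0, 10.0)
```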

Support for existing C/FORTRAN solver codes

There exist well-tested ODE solvers in the form of C and FORTRAN codes, including VODE, LSODA, DOPRI5, DOP853, and CVODE. Many of these underlie scipy's existing ODE solvers, so there are already wrappers allowing access from Python. While the new Python implementations here have the potential to be efficient, several of the above are well-established solvers that are much better tested in exotic corner cases. It would therefore be good to ensure that it is possible to wrap them up in Solver objects like the ones we have here and use them as alternatives to our own implementations.

These codes generally provide the same basic functionality as our Solver objects: initialize, advance an adaptively-sized step (stopping at t_crit if necessary), and compute solution values (and maybe derivatives) within the last step. So wrapping them up in a Solver object should in principle be possible. But there are important considerations:

  • Most FORTRAN code cannot be made thread-safe, so wrapper classes need to handle locking.
  • Some solvers may store global state and thus require stronger uniqueness enforcement.
  • Most solvers provide a dense-output function call rather than direct access to the dense-output polynomial.
  • Some solvers do not permit the saving of dense-output state; once the solver has advanced past a step there may not be a way to evaluate the dense output there again.

Most of this can be worked around by code in a Solver-class wrapper, but preserving state for use in dense output is a concern. Worst-case, I suppose, if we know the degree of the polynomial the dense output is using, we could evaluate it (and maybe derivatives) at well-chosen points (Legendre roots, perhaps, or many derivatives at the middle) and construct a scipy polynomial object.
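The worst-case reconstruction can be sketched as follows. Here `dense_eval` is a stand-in for the wrapped solver's dense-output routine, and equally spaced nodes are used for simplicity (at low degree this is fine; for high degree, Chebyshev or Legendre nodes would be the safer choice, as noted above):

```python
import numpy as np
from scipy.interpolate import KroghInterpolator

def capture_dense_output(dense_eval, t_old, t_new, degree):
    """Sample the solver's dense output at degree + 1 points inside the
    step, which uniquely pins down a polynomial of that degree, and
    rebuild an equivalent scipy interpolator object."""
    ts = np.linspace(t_old, t_new, degree + 1)
    ys = np.array([dense_eval(t) for t in ts])
    return KroghInterpolator(ts, ys)

# Example with a known cubic standing in for the solver's interpolant:
poly = capture_dense_output(lambda t: t**3 - t, 0.0, 1.0, 3)
```

Since the sampled function really is a polynomial of the stated degree, the reconstruction is exact up to rounding, and the resulting object can be stored after the wrapped solver has discarded its internal state.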

Handle multistep methods

I realized one possible limitation of the current design of the step solvers: they don't make it clear how to write a multistep method. Right now, solve_ivp assumes that the only state needed to evaluate the solution at a time t is the state immediately before t and the state immediately after t. Those are the only states that the integrator keeps and sends to the interpolator in order to do event detection.

Possible solutions:

  1. Keep things the way they are. Each state of a multistep method should keep all the past state that it needs. It may seem that this is wasteful in terms of memory, but it is not actually that bad because the past state will only be pointers to the same shared arrays.
  2. Each state keeps a pointer to the immediately previous state object, like a linked list. Multistep methods can easily go back in time to get the state that they need and no changes are needed to the integrators. However, this makes it impossible to keep only a part of a solution. Memory would no longer be freed by dropping the first 90% of states because the remaining states would always keep a pointer all the way back to the beginning.
  3. Add a field called step_memory or similar to the solver. This would be 1 for one-step methods and so forth. It would be the responsibility of the integrator to read this field and send enough extra state to satisfy the first state that is needed for the requested time.
  4. Add the step_memory field to the state, because the number of past steps required might vary from step to step. There is little to be gained by this method over the previous method because step_memory on the solver could be set to the maximum number ever needed by that method, and taking a few extra states is harmless.
  5. Remove interpolator and have step return an interpolator for the previous step. The solver needs to keep enough past state anyway in order to evaluate the next point. When doing event handling, these small interpolators all need to be built anyway at each step. The integrator would be responsible for evaluating the correct time window. This method has the additional benefit of simplifying the interface by allowing the OdeState class to be removed.
  6. Change interpolator to take no arguments and return an interpolator only for the previous step. This is exactly the same as the previous method except that it allows the integrator to avoid the cost of building the interpolator if it is not needed (this would happen in a solve_ivp_discrete method). This will require that the solver store the previous state as well as the current state, but I don't perceive that to be a problem. This also allows OdeState to be removed.
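Option 6 can be sketched with a toy solver; forward Euler and a linear interpolant stand in for a real method, and the names are illustrative rather than the actual scipy_ode API:

```python
class StepSolver:
    """Toy solver whose interpolator() covers only the step just taken."""

    def __init__(self, fun, t0, y0):
        self.fun, self.t, self.y = fun, t0, y0
        self.t_prev, self.y_prev = t0, y0

    def step(self, h):
        # Forward Euler stands in for a real method; the solver keeps the
        # previous state so interpolator() can cover the last step.
        self.t_prev, self.y_prev = self.t, self.y
        self.y = self.y + h * self.fun(self.t, self.y)
        self.t = self.t + h

    def interpolator(self):
        t0, t1, y0, y1 = self.t_prev, self.t, self.y_prev, self.y
        # Linear interpolant over the previous step only.
        return lambda t: y0 + (t - t0) * (y1 - y0) / (t1 - t0)

solver = StepSolver(lambda t, y: -y, 0.0, 1.0)
solver.step(0.1)
interp = solver.interpolator()
```

The integrator would call interpolator() only when it actually needs dense output for event detection or user-requested times.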

Clarify role of ODEState object

Each ODE solver class is accompanied by an ODE state object. What is this object for? I can see two functions it could serve:

  1. It stores all the state the ODE solver needs to continue integration from this point onwards, or
  2. It stores all the state necessary for dense evaluation within this step.

If the answer is 1, what is stored in the solver object? Isn't that the natural place for the solver state? What is gained by having an external state object? It allows one to resume solving at an earlier point, I suppose.

If the answer is 2, then state objects can be used for dense evaluation even in solvers where the dense evaluation is not just evaluating a polynomial (if any exist): solver objects would have an evaluate function that takes a state object. If we're content with polynomials, then it's not clear what an external state object is for.

Incorrect order in the initial step selection

Related to #20.

In select_initial_step I again used a non-conventional order definition (treating it as the local error order), so it also needs to be changed to order + 1. After it is fixed, the call in the Radau methods needs to be changed (5 to 4).

Add discontinuities handler to integrator

As discussed in #2, most ODE solvers are built under some assumptions of the continuity of the RHS and its derivatives. Frequently, there are problems with discontinuities in the RHS that we still want to solve. In an ideal world, the integrator would be able to handle an arbitrary RHS without the user having to list out the discontinuities. However, I know of no method to detect these discontinuities in general. The solution is to solve the system up to the discontinuity and then restart the solver past the discontinuity, but ideally this would be transparent to the user.

Using the step solvers, users can implement such an integrator on their own. Indeed, I will do it locally if the general consensus is that this does not belong in scipy. However, I think this might be a common enough problem to warrant its inclusion. I have written a discontinuities handler for Matlab as part of my biological modeling toolbox.

I propose adding a keyword discontinuities to solve_ivp. It would have a default value of (). The integrator would follow this procedure:

  1. Initialize the solver with
    • t0=t0
    • y0=y0
    • t_crit=discontinuities[0] - EPS
  2. Step the solver until finished (just a bit before the discontinuity)
  3. Initialize a fresh solver with
    • t0=discontinuities[0] + EPS (just a bit after the discontinuity)
    • y0=y(t_crit)
    • t_crit=discontinuities[1] - EPS
  4. Step the solver until finished
  5. Continue until there are no more discontinuities or tF is reached.
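The procedure above can be sketched with a hypothetical `solve_segment(t0, t1, y0)` standing in for one full solver run, and `eps` playing the role of EPS:

```python
from types import SimpleNamespace

def solve_with_discontinuities(solve_segment, t0, tF, y0,
                               discontinuities=(), eps=1e-12):
    segments = []
    t_start, y_start = t0, y0
    for t_disc in sorted(t for t in discontinuities if t0 < t < tF):
        # Integrate up to just before the discontinuity...
        seg = solve_segment(t_start, t_disc - eps, y_start)
        segments.append(seg)
        # ...then restart a fresh solver just after it.
        t_start, y_start = t_disc + eps, seg.y_end
    segments.append(solve_segment(t_start, tF, y_start))
    return segments

# Stand-in segment solver: records endpoints; y grows linearly with t.
def fake_segment(t0, t1, y0):
    return SimpleNamespace(t0=t0, t1=t1, y_end=y0 + (t1 - t0))

segs = solve_with_discontinuities(fake_segment, 0.0, 2.0, 0.0,
                                  discontinuities=(1.0,))
```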

The OdeSolution object will store a list of interpolators, one for each continuous segment. When the solution is queried, it looks up which interpolator segment has the answer and passes the query on to it. If multiple times are queried, it transparently stitches the solution together and returns a single array.

The advantage of this solution is that we can integrate a discontinuous RHS. The only change to the interface is the discontinuities keyword. Everything else happens transparently for the user.

Questions

  • Should discontinuities be allowed to be any iterable?
    • Advantages: I would probably use a for loop anyway. Some people may want to use range. If we ever want this feature in solve_ivp_lazy, it will have to be an iterable.
    • Disadvantages: We can't preallocate the list of interpolator segments
  • If someone evaluates the solution in the tiny gap at a discontinuity, how should that be handled?
    • Return the last state of the previous segment
    • Turn extrapolation on for every interpolator and extrapolate out an EPS with the previous segment
    • Save the state at each discontinuity as a separate field and return it when requested
    • Note that the only difference in these possibilities is elegance. All give the same answer (well, within an EPS).

Indexing support in OdeSolution produces VisibleDeprecationWarnings

The test suite produces many instances of:

/home/archibald/projects/scipy_ode/scipy_ode/scipy_ode/ivp.py:222: VisibleDeprecationWarning: an index can only have a single Ellipsis (`...`); replace all but one with slices (`:`).
  return result[..., iy]

I don't think there's any need for OdeSolution to index the solution object at all; simply use return result and users can do any indexing they feel they need.

scipy.interpolate.PPoly becomes numerically unstable for high order

The Bulirsch-Stoer integrator can easily produce dense-output polynomials of order higher than 20. This is known to be a problem if you represent them in the power basis, as scipy.interpolate.PPoly does. scipy.interpolate.KroghInterpolator uses an algorithm that is considerably more stable numerically. So forcing solvers to output a PPoly "spline" is a numerical problem. Perhaps it would be easier if the solver simply provided a callable for each step, which OdeSolution could then forward calls to?
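The callable-per-step idea can be sketched as follows; the class name and interface are illustrative only:

```python
import bisect

class CallableSolution:
    """Hold one opaque per-step callable per step, plus the step
    boundaries, and forward each query to the callable covering it."""

    def __init__(self, ts, interpolants):
        # ts has len(interpolants) + 1 entries: the step boundaries.
        self.ts = list(ts)
        self.interpolants = list(interpolants)

    def __call__(self, t):
        i = bisect.bisect_right(self.ts, t) - 1
        i = min(max(i, 0), len(self.interpolants) - 1)  # clamp at the ends
        return self.interpolants[i](t)

# Two toy steps on [0, 1] and [1, 2], each with its own callable:
sol = CallableSolution([0.0, 1.0, 2.0],
                       [lambda t: t, lambda t: 2.0 * t - 1.0])
```

Since OdeSolution never looks inside the callables, each solver is free to use PPoly, KroghInterpolator, or anything else with a suitable __call__.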

What is the need for t_final in solver objects?

When a solver object is constructed, the user must provide a number t_final. This is used to define the solver's direction, but users, particularly those working with bare solver objects, may not know the final value of the interval; they may be planning to stop when an event occurs or when the solution covers all the yet-to-be-provided data. I suppose users can supply np.inf or -np.inf, but why not simply allow them to specify a solver direction directly?
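The alternative could look like the following hypothetical helper, which accepts either a final time or a bare direction (none of these names exist in the current code):

```python
def solver_direction(t0, t_final=None, direction=None):
    """Return +1 or -1. An explicit direction wins; otherwise the
    direction is inferred from t_final relative to t0; otherwise the
    default is forward integration."""
    if direction is not None:
        return 1 if direction > 0 else -1
    if t_final is None:
        return 1  # default: integrate forward
    return 1 if t_final >= t0 else -1
```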
