drhagen / scipy_ode
Collaboration repository for redesigning scipy ODE solvers
License: BSD 3-Clause "New" or "Revised" License
A complex-valued ODE can always be treated as a real-valued ODE of dimension 2n, but on the one hand user code can be much simpler if it doesn't have to do the conversion in both the RHS and the solution. On the other hand, solvers with built-in complex support can be more efficient. More to the point, scipy supports the complex-valued ODE solver ZVODE because a complex-valued solver was requested by users. So we should at least think about how one would fit complex output values into the design here.
Putting it here so we don't forget.
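For reference, the 2n-dimensional conversion that user code currently has to write by hand might look like this sketch (the wrapper name `real_from_complex` is hypothetical):

```python
import numpy as np

def real_from_complex(fun):
    """Wrap a complex-valued RHS fun(t, z), z of length n, as a
    real-valued RHS of dimension 2n laid out as [Re(z), Im(z)].
    Hypothetical helper, only illustrating the conversion."""
    def real_fun(t, y):
        n = y.shape[0] // 2
        z = y[:n] + 1j * y[n:]
        dz = np.asarray(fun(t, z))
        return np.concatenate([dz.real, dz.imag])
    return real_fun

# Example: dz/dt = i*z, whose exact solution is z(t) = z0 * exp(i*t)
f = real_from_complex(lambda t, z: 1j * z)
```

A solver with built-in complex support would skip this round trip entirely, which is part of the efficiency argument above.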
In this line we multiply the theoretical factor by `SAFETY`, but the idea of `SAFETY` is to perform milder corrections than the asymptotic error behavior dictates. So in this situation `SAFETY` must be used the opposite way, as `1 / SAFETY`.

We should also check that the step is still increasing / decreasing after it is multiplied / divided by `SAFETY`.

There are several places to check.
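For context, a sketch of the conventional error-based step-size update (the names `next_step_size`, `MAX_FACTOR`, and `MIN_FACTOR` are hypothetical, not the repository's code; whether `SAFETY` should multiply or divide when the step shrinks is exactly the point under discussion):

```python
SAFETY = 0.9
MAX_FACTOR = 10.0   # largest allowed step growth per step
MIN_FACTOR = 0.2    # largest allowed step reduction per step

def next_step_size(h, error_norm, order):
    # error_norm is the scaled local error estimate; the exponent uses
    # the local error order, i.e. order + 1 for a method of global
    # order `order`.
    if error_norm == 0:
        factor = MAX_FACTOR
    else:
        factor = min(MAX_FACTOR,
                     max(MIN_FACTOR,
                         SAFETY * error_norm ** (-1.0 / (order + 1))))
    return h * factor
```

With this form, a large error shrinks the step and a tiny error grows it, with `SAFETY` damping the theoretical factor in both directions.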
It's not very important, but I suggest putting this repository in a more convenient and straightforward form:

```
scipy_ode/
    tests/
    __init__.py
    (other source files)
README.md
.gitignore
```

It would be a self-contained repository without awkward legacy from scipy.integrate. Once it's finished, it will be very easy to merge it back into scipy.integrate.
As we discussed in the original thread: what should we do about the shape of `y`?

1. Require `y` to be a 1-dimensional array.
2. Allow `y` to be an n-dimensional array in the convenience functions but not in the solver classes. The convenience functions would call the appropriate reshape functions on their inputs and outputs to make this transparent to the user.
3. Allow `y` to be an n-dimensional array in all contexts.
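Option 2 could be implemented with a small reshaping wrapper in the convenience layer, along these lines (the `flatten_ode` helper is hypothetical):

```python
import numpy as np

def flatten_ode(fun, y0):
    """Sketch of option 2: accept an n-dimensional y0 and a RHS written
    in terms of that shape, and present a 1-d view to the solver
    classes. Hypothetical helper."""
    shape = np.shape(y0)

    def flat_fun(t, y_flat):
        # Reshape on the way in, flatten on the way out.
        dy = np.asarray(fun(t, y_flat.reshape(shape)))
        return dy.ravel()

    return flat_fun, np.ravel(y0)

# RHS written for a 2x2 matrix state: dY/dt = -Y
fun2d = lambda t, Y: -Y
flat_fun, y0_flat = flatten_ode(fun2d, np.eye(2))
```

The solution arrays would be reshaped back the same way on output, so the user never sees the flat representation.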
I used an incorrect definition of the order in RK methods; it should be 1 less. In step selection we should use `order + 1`.

To clarify: the code is correct, but we are better off following the theoretical conventions.
I am thinking about what the signature of `solve_ivp` should be:

1. `solve_ivp((t0, tF), y0, fun, ...)`: This was how @nmayorov had it initially. It is also how Matlab does it.
2. `solve_ivp(t0, tF, y0, fun, ...)`: This is simply the signature above, but with the tuple unpacked. This is how it is right now.
3. `solve_ivp(fun, y0, t0, tF, ...)`: This is most similar to the signatures of similar methods in scipy:
   - `minimize(fun, x0, ...)`
   - `solve_bvp(fun, bc, x, y, ...)`
   - `quadrature(fun, a, b, ...)`
4. `solve_ivp(fun, y0, (t0, tF), ...)`: Like the one above, but with a tuple.

I am inclined to use the signature most like the other functions in scipy. At first I thought it was a little backwards (probably because of my experience with Matlab), but now I am starting to like it. Whatever is done, I think the signatures of the solvers should be changed to match.

The advantage of a tuple is that it allows the signature to also accept a scalar, which would be interpreted as `tF` with `t0 = 0`. I don't think this is worth the complication of the signature. Even though almost everyone will have `t0 = 0`, it's not that much typing.
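If the scalar-as-`tF` convention were adopted, the dispatch could be a tiny normalization helper (hypothetical, just to illustrate the complication being weighed):

```python
def normalize_t_span(t_span, default_t0=0.0):
    """Sketch of the scalar-or-tuple convention under discussion:
    a bare scalar is interpreted as tF with t0 defaulting to 0.
    Hypothetical helper, not part of the repository."""
    try:
        t0, tF = t_span           # a (t0, tF) pair
    except TypeError:
        t0, tF = default_t0, t_span  # a bare scalar means tF
    return float(t0), float(tF)
```

This keeps the public signature to a single `t_span` argument while accepting both spellings.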
Do you really believe that the base class (as it is now) makes writing new solvers easier (including user solvers)? Objectively, the current base class does nothing except introduce some abstractions (I'm not saying that abstraction is never useful).

My impression is that implementing a new solver is always a special problem, and you are better off not being restricted by abstractions, interfaces to follow, etc. What really matters is the final result (basically arrays of numbers / solution functions to call). Also consider that we don't know how to fit multistep methods into such a design, or whether it will even be possible.

My point is that maybe an OOP hierarchy is not useful for writing a set of different solvers. Instead, maybe it's better to write the solvers separately (in whatever way is most convenient) but ensure that they all return consistent outputs. Basically this is the current design of `solve_ivp`, but we can use a class for more flexibility (including single stepping).

Generally, I think we are focusing on the wrong things now and not making progress. I feel it is more fruitful to focus on specific solvers in isolation, rather than trying to create universal abstractions that might be useful. As for the idea of allowing users to write their own solvers: I suggest abandoning it, mainly because I strongly doubt it will give any benefit compared to just writing the code from scratch.
All three solvers crash when given a size-0 `y0`. It is not clear what the right thing to do here is. The technically correct behavior doesn't depend on the type of the solver at all: immediately return a dummy solution that is empty at all times.

I am not sure if we should implement this or just throw an error on empty states.
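The dummy-solution idea could be as small as the following sketch (the `empty_solution` helper is hypothetical; `t0` and `tF` are kept only for interface symmetry):

```python
import numpy as np

def empty_solution(t0, tF):
    """Sketch: a callable solution for a size-0 state, returning an
    empty array (or an array of empty columns) at any requested time.
    Hypothetical helper, not part of the repository."""
    def sol(t):
        t = np.asarray(t)
        # Scalar query -> shape (0,); vector query of m times -> (m, 0).
        return np.empty(t.shape + (0,))
    return sol

sol = empty_solution(0.0, 10.0)
```

The alternative, raising `ValueError` on an empty state in each solver's constructor, would be even less code.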
The syntax of using a bare `*` is invalid in Python 2.7, so `solve_ivp` must have a different signature.
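The usual Python 2.7-compatible workaround emulates keyword-only arguments with `**kwargs` (a sketch; `solve_ivp_py2` and the particular options are hypothetical):

```python
def solve_ivp_py2(fun, y0, t_span, **options):
    # Pop each recognized keyword-only option, then reject the rest,
    # mimicking the error a bare * signature would raise.
    rtol = options.pop('rtol', 1e-3)
    atol = options.pop('atol', 1e-6)
    if options:
        raise TypeError('unexpected keyword arguments: %s'
                        % ', '.join(sorted(options)))
    return rtol, atol  # stand-in for the real solver call
```

This runs unchanged on both Python 2.7 and 3.x, at the cost of losing the options from the visible signature.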
There exist well-tested ODE solvers in the form of C and Fortran codes, including VODE, LSODA, DOPRI5, DOP853, and CVODE. Many of these underlie scipy's existing ODE solvers, so there are already wrappers allowing access from Python. While the new Python implementations here have the potential to be efficient, several of the above are well-established solvers that are much better tested in exotic corner cases. It would therefore be good to ensure that it is possible to wrap them in Solver objects like we have here and use them as alternatives to our own implementations.
These codes generally provide the same basic functionality as our Solver objects: initialize, advance an adaptively-sized step (stopping at t_crit if necessary), and compute solution values (and maybe derivatives) within the last step. So wrapping them up in a Solver object should in principle be possible. But there are important considerations:
Most of this can be worked around by code in a Solver-class wrapper, but preserving state for use in dense output is a concern. Worst-case, I suppose, if we know the degree of the polynomial the dense output is using, we could evaluate it (and maybe derivatives) at well-chosen points (Legendre roots, perhaps, or many derivatives at the middle) and construct a scipy polynomial object.
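The worst-case idea above (sample the legacy code's dense output at well-chosen points and rebuild a scipy polynomial object) could be sketched as follows; `legacy_eval` is a hypothetical stand-in for whatever evaluation routine the wrapped code exposes over its last step:

```python
import numpy as np
from scipy.interpolate import KroghInterpolator

def rebuild_dense_output(legacy_eval, t_old, t_new, degree):
    """Sample a legacy solver's dense output at Chebyshev-Lobatto
    points on [t_old, t_new] and rebuild an equivalent interpolant.
    Hypothetical helper; `legacy_eval` is assumed, not real."""
    # Chebyshev points cluster near the interval ends, which keeps the
    # reconstruction stable for moderate degrees.
    k = np.arange(degree + 1)
    nodes = (0.5 * (t_old + t_new)
             + 0.5 * (t_new - t_old) * np.cos(np.pi * k / degree))
    values = np.array([legacy_eval(t) for t in nodes])
    return KroghInterpolator(nodes, values)

# Pretend the legacy dense output is the cubic t**3 - t on [0, 1];
# sampling at 4 points must reproduce it exactly.
interp = rebuild_dense_output(lambda t: t**3 - t, 0.0, 1.0, degree=3)
```

Derivative information at the midpoint could be folded in the same way, since `KroghInterpolator` accepts repeated nodes as derivative data.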
I realized one possible limitation of the current design of the step solvers: they don't make it clear how to write a multistep method. Right now, `solve_ivp` assumes that the only state needed to evaluate the solution at a time `t` is the state immediately before `t` and the state immediately after `t`. Those are the only states that the integrator keeps and sends to `interpolator` in order to do event detection.
Possible solutions:

1. Add a field `step_memory` or similar to the solver. This would be `1` for one-step methods, and so forth. It would be the responsibility of the integrator to read this field and send enough extra state to cover the earliest state needed for the requested time.
2. Move the `step_memory` field to the state, because the number of past steps required might vary from step to step. There is little to be gained by this over the previous method, because `step_memory` on the solver could be set to the maximum number ever needed by that method, and carrying a few extra states is harmless.
3. Remove `interpolator` and have `step` return an interpolator for the previous step. The solver needs to keep enough past state anyway in order to evaluate the next point, and when doing event handling these small interpolators all need to be built at each step anyway. The integrator would be responsible for evaluating the correct time window. This method has the additional benefit of simplifying the interface by allowing the `OdeState` class to be removed.
4. Change `interpolator` to take no arguments and return an interpolator only for the previous step. This is exactly the same as the previous method, except that it allows the integrator to avoid the cost of building the interpolator when it is not needed (this will happen in a `solve_ivp_discrete` method). This requires that the solver store the previous state as well as the current state, but I don't perceive that to be a problem. This also allows `OdeState` to be removed.

Each ODE solver class is accompanied by an ODE state object. What is this object for? I can see two functions it could serve:

1. Holding the internal state of the solver.
2. Supporting dense evaluation within a step.
If the answer is 1, what is stored in the solver object? Isn't that the natural place for the solver state? What is gained by having an external state object? It allows one to resume solving at an earlier point, I suppose.
If the answer is 2, then state objects can be used for dense evaluation even in solvers where the dense evaluation is not just evaluating a polynomial (if any exist): solver objects would have an evaluate function that takes a state object. If we're content with polynomials, then it's not clear what an external state object is for.
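Options 3 and 4 from the multistep discussion above (have `step` advance and hand back an interpolator valid on the step just taken) might look roughly like this toy sketch; the class, its API, and the use of a forward-Euler step are all hypothetical:

```python
import numpy as np
from scipy.interpolate import KroghInterpolator

class EulerSolver:
    """Toy one-step solver: `step` advances and returns an interpolator
    for the step just taken, so the integrator needs no separate state
    object. Hypothetical API, not the repository's."""

    def __init__(self, fun, t0, y0, h):
        self.fun, self.t, self.y, self.h = fun, t0, np.asarray(y0, float), h

    def step(self):
        t_old, y_old = self.t, self.y
        f_old = np.asarray(self.fun(t_old, y_old))
        self.t = t_old + self.h
        self.y = y_old + self.h * f_old
        # Hermite data (value and slope at t_old, value at t_new) gives
        # a local interpolant for event detection on [t_old, self.t].
        xi = np.array([t_old, t_old, self.t])
        yi = np.vstack([y_old, f_old, self.y])
        return KroghInterpolator(xi, yi)

solver = EulerSolver(lambda t, y: -y, 0.0, [1.0], 0.1)
local = solver.step()  # callable on [0.0, 0.1]
```

The integrator would collect these per-step callables and query whichever one covers the time window it is examining.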
Related to #20.
In `select_initial_step` I again used a non-conventional order definition (treating it as the local error order), so it also needs to be changed to `order + 1`. After it is fixed, the call in the Radau methods needs to be changed (5 to 4).
As discussed in #2, most ODE solvers are built under some assumptions of the continuity of the RHS and its derivatives. Frequently, there are problems with discontinuities in the RHS that we still want to solve. In an ideal world, the integrator would be able to handle an arbitrary RHS without the user having to list out the discontinuities. However, I know of no method to detect these discontinuities in general. The solution is to solve the system up to the discontinuity and then restart the solver past the discontinuity, but ideally this would be transparent to the user.
Using the step solvers, the user can implement such an integrator on their own. Indeed, I will do it locally if the general consensus is that this does not belong in scipy. However, I think this might be a common enough problem to warrant its inclusion; I have written a discontinuity handler for Matlab as part of my biological modeling toolbox.
I propose adding a keyword `discontinuities` to `solve_ivp`, with a default value of `()`. The integrator would follow this procedure:

1. Start with `t0 = t0`, `y0 = y0`, and `t_crit = discontinuities[0] - EPS`.
2. Restart with `t0 = discontinuities[0] + EPS` (just a bit after the discontinuity), `y0 = y(t_crit)`, and `t_crit = discontinuities[1] - EPS`.
3. Repeat until `tF` is reached.

The `OdeSolution` object will store a list of interpolators, one for each continuous segment. When the solution is queried, it looks up which interpolator segment has the answer and passes the query on to it. If multiple times were queried, it transparently stitches the solution together and returns a single array.
The advantage of this solution is that we can integrate a discontinuous RHS. The only change to the interface is the `discontinuities` keyword; everything else happens transparently for the user.
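The restart procedure could be sketched as a simple loop; `solve_segment(t0, tF, y0)` below is a hypothetical stand-in for a single solver run returning `(t_end, y_end, sol)`:

```python
import numpy as np

def solve_piecewise(solve_segment, t0, tF, y0, discontinuities, eps=1e-10):
    """Sketch of the proposed procedure: integrate up to just before
    each discontinuity, restart just after it, and collect one solution
    segment per continuous piece. Hypothetical helper."""
    breaks = [t for t in discontinuities if t0 < t < tF]
    segments = []
    t, y = t0, np.asarray(y0, float)
    for tb in sorted(breaks):
        t_end, y_end, sol = solve_segment(t, tb - eps, y)
        segments.append(sol)
        t, y = tb + eps, y_end    # restart just past the discontinuity
    segments.append(solve_segment(t, tF, y)[2])
    return segments

# Trivial stand-in solver that records its interval and keeps y fixed.
fake = lambda a, b, y: (b, y, (a, b))
segs = solve_piecewise(fake, 0.0, 10.0, [1.0], [3.0, 7.0])
```

The `OdeSolution` described above would then hold `segments` and dispatch each query to the segment containing the requested time.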
Questions:

- Should `discontinuities` be allowed to be any iterable (e.g. `range`)? If we ever want this feature in `solve_ivp_lazy`, it will have to be an iterable.

The test suite produces many instances of:
```
/home/archibald/projects/scipy_ode/scipy_ode/scipy_ode/ivp.py:222: VisibleDeprecationWarning: an index can only have a single Ellipsis (`...`); replace all but one with slices (`:`).
  return result[..., iy]
```
I don't think there's any need for `OdeSolution` to index the solution object at all; simply use `return result`, and users can do any indexing they feel they need.
The Bulirsch-Stoer integrator can easily produce dense-output polynomials of order higher than 20. This is known to be a problem if you represent them in the power basis, as `scipy.interpolate.PPoly` does. `scipy.interpolate.KroghInterpolator` uses an algorithm that is considerably more stable numerically. So forcing solvers to output a `PPoly` "spline" is a numerical problem. Perhaps it would be easier if the solver simply provided a callable for each step, which `OdeSolution` could then forward calls to?
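The per-step-callable idea could be sketched like this (the `CallableSolution` class is hypothetical; the point is that the container never converts the solver's representation into a power-basis `PPoly`):

```python
import numpy as np

class CallableSolution:
    """Sketch: keep one opaque callable per step plus the breakpoints,
    and forward each query to the segment containing it. Hypothetical
    class, not the repository's OdeSolution."""

    def __init__(self, ts, interpolants):
        # ts has len(interpolants) + 1 breakpoints, sorted ascending.
        self.ts = np.asarray(ts)
        self.interpolants = interpolants

    def __call__(self, t):
        # Find the segment whose [ts[i], ts[i+1]) interval contains t,
        # clamping queries at the boundaries to the nearest segment.
        i = np.searchsorted(self.ts, t, side='right') - 1
        i = min(max(i, 0), len(self.interpolants) - 1)
        return self.interpolants[i](t)

# Two segments: y = t on [0, 1] and y = 2 - t on [1, 2].
sol = CallableSolution([0.0, 1.0, 2.0],
                       [lambda t: t, lambda t: 2.0 - t])
```

Each per-step callable could be a `KroghInterpolator`, a `PPoly` piece, or anything else the individual solver finds most stable.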
When a solver object is constructed, the user must provide a number `t_final`. This is used to define the solver's direction, but users (particularly users of bare solver objects) may not know the final value of the interval; they may be planning to stop when an event occurs or when the solution covers all the yet-to-be-provided data. I suppose users can supply `np.inf` or `-np.inf`, but why not simply allow them to specify the solver direction directly?