Comments (4)
Hi @udemirezen
Yes, and you can do more than that (make sure you are using the latest version of neurodiffeq)! See the following example.
First, instantiate a solver like you always do. The solver will default to an `Adam` optimizer:
```python
import numpy as np
import torch
from neurodiffeq import diff
from neurodiffeq.solvers import Solver1D
from neurodiffeq.generators import Generator1D
from neurodiffeq.conditions import IVP

solver = Solver1D(
    ode_system=lambda u, t: [diff(u, t, order=2) + u],  # define ODE: u'' + u = 0
    conditions=[IVP(0, 1, 0)],  # define initial conditions u(0) = 1 and u'(0) = 0
    t_min=0.0,                  # optional if setting both train_generator and valid_generator
    t_max=2*np.pi,              # optional if setting both train_generator and valid_generator
    train_generator=Generator1D(1000, 0.0, 2*np.pi),
    valid_generator=Generator1D(1000, 0.0, 2*np.pi),
)
```
Then, set a callback that will be called at global epoch 1000:
```python
from neurodiffeq.callbacks import SetOptimizer, ClosedIntervalGlobal

# instantiate a callback that sets the optimizer (you can pass in optional optimizer kwargs)
cb = SetOptimizer(torch.optim.LBFGS, optimizer_kwargs={'lr': 1e-3})
# the callback will be called only once, at global epoch No. 1000
cb_with_condition = cb.conditioned_on(ClosedIntervalGlobal(min=1000, max=1000))

solver.fit(max_epochs=5000, callbacks=[cb_with_condition])
```
You can dynamically change the loss function (a.k.a. `criterion`) as well:

```python
from neurodiffeq.callbacks import SetCriterion
```
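For example, here is a minimal sketch of swapping in a custom loss. Caveat: this assumes `SetCriterion` takes the new loss function as its first argument, and that the loss function receives `(residuals, funcs, coords)` as in recent neurodiffeq versions; check the docs for your version.

```python
from neurodiffeq.callbacks import SetCriterion, ClosedIntervalGlobal

# hypothetical custom criterion: L1 norm of the equation residuals
def l1_residual_loss(residuals, funcs, coords):
    return residuals.abs().mean()

cb = SetCriterion(l1_residual_loss)
# switch to the L1 criterion once, at global epoch No. 2000
cb_with_condition = cb.conditioned_on(ClosedIntervalGlobal(min=2000, max=2000))
solver.fit(max_epochs=5000, callbacks=[cb_with_condition])
```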
You can also customize when to change the optimizer/criterion, e.g., change the optimizer only when your training loss converges (delta < 1e-5) for 20 consecutive epochs:
```python
from neurodiffeq.callbacks import RepeatedMetricConverge

cb = SetOptimizer(torch.optim.LBFGS, optimizer_kwargs={'lr': 1e-3})
cb_with_condition = cb.conditioned_on(
    RepeatedMetricConverge(epsilon=1e-5, repetition=20, metric='loss', use_train=True)
)

solver.fit(max_epochs=5000, callbacks=[cb_with_condition])
```
You can even use `&`, `|`, and `~` to chain these conditions, e.g.,
```python
cb = SetOptimizer(torch.optim.LBFGS, optimizer_kwargs={'lr': 1e-3})
cond1 = RepeatedMetricConverge(epsilon=1e-5, repetition=20, metric='loss')
cond2 = ClosedIntervalGlobal(min=1000, max=None)
cb_with_condition = cb.conditioned_on(cond1 & cond2)

solver.fit(max_epochs=5000, callbacks=[cb_with_condition])
```
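For reference, `|` should fire when either condition holds and `~` should negate a condition. A sketch reusing `cond1` and `cond2` from above (this only illustrates the operator syntax; whether repeatedly re-setting the optimizer makes sense depends on your use case):

```python
# fire when the loss has converged OR we are at/past global epoch 1000
cb_either = SetOptimizer(
    torch.optim.LBFGS, optimizer_kwargs={'lr': 1e-3}
).conditioned_on(cond1 | cond2)

# fire on every epoch where the loss has NOT converged yet
cb_unless = SetOptimizer(
    torch.optim.LBFGS, optimizer_kwargs={'lr': 1e-3}
).conditioned_on(~cond1)

solver.fit(max_epochs=5000, callbacks=[cb_either])
```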
Hi, you don't have to pass a `closure` to L-BFGS; neurodiffeq handles that automatically. See #93 for more details.

There's a little caveat: with L-BFGS, you cannot set `n_batches_train` larger than 1 in your `Solver`. If you do (for example, set `n_batches_train=4`), it will be identical to training with 4x more epochs.

This is unlike other optimizers (e.g., `Adam`). If you set `n_batches_train=4` and use `Adam`, it will be like training on a 4x larger batch, which saves 3/4 of the GPU memory but takes 4x longer to run.
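In other words, when switching to L-BFGS it is safest to keep a single training batch per epoch. A sketch reusing the solver setup from above (assuming `n_batches_train` is accepted by the `Solver1D` constructor, as the comment above suggests):

```python
solver = Solver1D(
    ode_system=lambda u, t: [diff(u, t, order=2) + u],
    conditions=[IVP(0, 1, 0)],
    train_generator=Generator1D(1000, 0.0, 2*np.pi),
    valid_generator=Generator1D(1000, 0.0, 2*np.pi),
    n_batches_train=1,  # keep one batch per epoch when training with L-BFGS
)
```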
Wow, thank you for these good explanations! :)
It is very informative, thanks.
Hi, according to the `torch.optim.LBFGS` documentation (https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html), the optimizer requires a closure function:

`optimizer.step(closure)`
Some optimization algorithms such as Conjugate Gradient and LBFGS need to reevaluate the function multiple times, so you have to pass in a closure that allows them to recompute your model. The closure should clear the gradients, compute the loss, and return it.
Example:
```python
for input, target in dataset:
    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        return loss
    optimizer.step(closure)
```
Do I have to provide a closure function with your framework if I want to use the LBFGS optimizer? If yes, what is the best way of doing it?
Thank you again.