Comments (9)
Hi,
From what I understood, what fails is your simulation (= objective function), not the optimization procedure itself. If that's the case and you want to identify parameters for which it fails, you can program your objective function so that it returns 1 when the simulation works and 0 when it fails. Or, even better, make your objective function return some measure of how singular your matrix is (maybe the determinant?).
from blackbox.
Hello.
I think if I knew how the optimization code works in parallel, I might be able to find when the simulation is failing. Since the optimization runs in parallel, I am not sure how the minimization function (the objective function passed to bb.search()) is being called. In a serial setting I could just define a global variable (say par = []) and append the parameters to it each time the optimization calls the objective function with a new set, as follows:
import numpy as np

par = []

def fn_obj(param):
    par.append(param)
    out_estimated = foo(param)
    return np.sum((out_estimated - out_match)**2)

bb.search(f=fn_obj, ...)
The above code will store all the parameters if the optimization calls the objective function serially, and I would get every parameter set in par that the optimization has tried, even if the overall optimization crashes at some point because of a simulation error in foo(param_i) for some param_i. However, I am not sure how this will work in the case of parallel optimization with blackbox. Can you help me do this with blackbox? Thanks.
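One caveat with the global-list approach: if blackbox evaluates the objective in worker processes, appends made in a worker never reach the parent's list. A sketch of an alternative that survives both parallelism and a crash is to log each parameter set to a file from inside the objective. Here foo, out_match, and the log file name are stand-ins, not part of blackbox:

```python
import numpy as np

out_match = np.array([1.0, 2.0, 3.0])  # hypothetical target data

def foo(param):
    # Stand-in for the user's simulation; returns something of out_match's shape.
    return np.asarray(param)[:3]

def fn_obj(param):
    # Log the parameter set from inside the objective, so it is on disk
    # even if a later evaluation crashes the whole optimization.
    # One short append per call works across worker processes because
    # each write goes straight to the shared file.
    with open("params_log.txt", "a") as f:
        f.write(",".join(str(p) for p in param) + "\n")
    out_estimated = foo(param)
    return float(np.sum((out_estimated - out_match) ** 2))
```

After a crash, params_log.txt contains every set that was attempted, and the last lines narrow down the failing region.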
So, the important question is what happens when your simulation fails. If you can control that failure inside your objective function (say, the simulation returns some kind of error code that you can capture), then you can make your function look like this:
def fun(par):
    ...
    if errorcode == 'Simulation fails':  # or whatever it is
        return 0.
    else:
        return abs(det)  # determinant or whatever tells how poor your matrices are
I hope you understand that the failure of your simulation is a side effect that has nothing to do with the optimization itself. So you need to control that side effect somehow. The objective function needs to return some value, no matter whether the simulation works or fails.
If failure of the simulation causes issues you cannot control (a memory or system crash, etc.), then you need to go into the guts of blackbox.py and save the intermediate results into a file or display them. The array points is what you need to look at, though it might not be easy to analyze because a few scaling tricks are applied to that array at several spots.
I strongly recommend going the first way if possible: making your objective function immune to the failure of the simulations.
As for what you propose (saving the current set of parameters into a global list), that's a good idea, but I'm not sure how it will interfere with the parallelism used in blackbox.py. You can try, but I cannot guarantee it will work correctly. At least try it on some simple (known) function first.
The simulation is not easy to track because I am running nested simulations (two different types) with a large number of variables, and the optimization parameters affect the failing simulation only indirectly: the first simulation produces output that feeds into the input of the second simulation.
Simple question: How would you store all the parameters that the blackbox optimization is using in parallel?
You can try your idea (with the global list). If parallelism is not crucial, you can set batch=1 when you call the optimization procedure, which means only one function evaluation is performed at a time (= no parallelism). Also, you can make your objective function simply print the current set of parameters to the screen; I think this should work even with parallelism. Finally, as I mentioned, you can go inside blackbox.py and output the current values of the array points. But again, you would need to spend some time understanding how the code works (that array is modified at a few spots).
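When several workers print at once, tagging each line with the operating-system process id makes it possible to attribute a crash to a specific evaluation; the last line printed by the worker that died names the offending set. A minimal sketch (the objective body here is a stand-in):

```python
import os

def fn_obj(param):
    # Tag each evaluation with the worker's pid and flush immediately,
    # so the line is visible even if this evaluation crashes the process.
    print(f"pid={os.getpid()} params={list(param)}", flush=True)
    return sum(p ** 2 for p in param)  # stand-in objective
```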
I have already tried what you are suggesting, but it doesn't help. For instance, I print the parameters on screen, but I can't tell which set of parameters is causing the error, because each core is running the code in parallel and I have no idea which (and how many) sets of parameters are grouped together for each core. That is important for narrowing down to the set that's causing the problem.
Latest update: I did save the parameters that the optimization uses for all scenarios by an ad hoc, inefficient method, and I didn't seem to get any error when I ran the simulation on those sets of parameters individually, outside the optimization process. Therefore, it's possible there is some problem with the optimization code itself...
Dear Paul,
How can I include constraints in bb.search()? I looked at the code and it seems you're calling scipy.optimize.minimize(), so I am assuming constraints can be passed to bb.search() just as they would be passed to scipy.optimize.minimize()?
Thanks.
The code doesn't support constraints.
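A common workaround when a solver has no native constraint support is to fold the constraint into the objective as a penalty term, so infeasible points simply score badly. This is not a blackbox feature, just a sketch with a hypothetical constraint par[0] + par[1] <= 1 and an arbitrary penalty weight:

```python
def constrained_obj(par):
    # Stand-in objective with minimum near (0.2, 0.3).
    base = (par[0] - 0.2) ** 2 + (par[1] - 0.3) ** 2
    # Penalize violation of the hypothetical constraint par[0] + par[1] <= 1.
    violation = max(0.0, par[0] + par[1] - 1.0)
    return base + 1e6 * violation ** 2  # infeasible points score very badly
```

A quadratic penalty keeps the objective smooth across the constraint boundary, which tends to suit surrogate-based methods better than a hard step.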