
optimparallel-python's People

Contributors

florafauna, lewisblake, nikosavola


optimparallel-python's Issues

Having difficulties properly installing the module on anaconda-python

Hello,

I installed this module with pip and executed the example .py script:

from optimparallel import minimize_parallel
from scipy.optimize import minimize
import numpy as np
import time

## objective function
def f(x, sleep_secs=.5):
    print('fn')
    time.sleep(sleep_secs)
    return sum((x-14)**2)

## start value
x0 = np.array([10,20])

## minimize with parallel evaluation of 'fun' and
## its approximate gradient.
o1 = minimize_parallel(fun=f, x0=x0, args=.5)
print(o1)

## test against scipy.optimize.minimize()
o2 = minimize(fun=f, x0=x0, args=.5, method='L-BFGS-B')
print(all(np.isclose(o1.x, o2.x, atol=1e-10)),
      np.isclose(o1.fun, o2.fun, atol=1e-10),
      all(np.isclose(o1.jac, o2.jac, atol=1e-10)))

But I get this error in _base.py:

line 389, in __get_result
    raise self._exception

BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

Does anyone have advice on how to install it properly?

Regards,

Roman
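For context, this BrokenProcessPool error often appears when the spawned worker processes re-execute the script's top-level code or cannot import the objective. Below is a minimal sketch of the usual guard, assuming the example above is saved as a standalone .py file; this is a generic multiprocessing pattern, not an official fix from the package authors.

from optimparallel import minimize_parallel
import numpy as np
import time

def f(x, sleep_secs=.5):
    # objective defined at module level so worker processes can import and pickle it
    time.sleep(sleep_secs)
    return sum((x - 14)**2)

def main():
    x0 = np.array([10, 20])
    o1 = minimize_parallel(fun=f, x0=x0, args=.5)
    print(o1)

if __name__ == "__main__":
    # prevents the spawned workers from re-running the optimization on import
    main()

On Windows and macOS, where the default start method is spawn, omitting this guard is a common cause of abruptly terminated worker processes.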

Broken process pool when running in JupyterLab on Mac and Windows

When running your notebooks or trying to use my own function in JupyterLab on Mac or Windows, I get a similar error (included below). I'm not sure exactly what the cause is, so I would appreciate any advice. Sorry about the formatting; it was a real pain trying to get the error message into a code or quote block.

Process SpawnProcess-5:
Process SpawnProcess-6:
Process SpawnProcess-4:
Traceback (most recent call last):
File "/Users/wardbm1/anaconda3/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
self.run()
Traceback (most recent call last):
File "/Users/wardbm1/anaconda3/lib/python3.11/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/Users/wardbm1/anaconda3/lib/python3.11/concurrent/futures/process.py", line 244, in _process_worker
call_item = call_queue.get(block=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/wardbm1/anaconda3/lib/python3.11/multiprocessing/queues.py", line 122, in get
return _ForkingPickler.loads(res)
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: Can't get attribute 'f' on <module '__main__' (built-in)>
Traceback (most recent call last):
File "/Users/wardbm1/anaconda3/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/Users/wardbm1/anaconda3/lib/python3.11/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/Users/wardbm1/anaconda3/lib/python3.11/concurrent/futures/process.py", line 244, in _process_worker
call_item = call_queue.get(block=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/wardbm1/anaconda3/lib/python3.11/multiprocessing/queues.py", line 122, in get
return _ForkingPickler.loads(res)
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: Can't get attribute 'f' on <module '__main__' (built-in)>
File "/Users/wardbm1/anaconda3/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/Users/wardbm1/anaconda3/lib/python3.11/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/Users/wardbm1/anaconda3/lib/python3.11/concurrent/futures/process.py", line 244, in _process_worker
call_item = call_queue.get(block=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/wardbm1/anaconda3/lib/python3.11/multiprocessing/queues.py", line 122, in get
return _ForkingPickler.loads(res)
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: Can't get attribute 'f' on <module '__main__' (built-in)>

BrokenProcessPool Traceback (most recent call last)

Cell In[2], line 48
40 print(
41 "Time parallel {:2.2}\nTime standard {:2.2} ".format(
42 o1_end - o1_start, o2_end - o2_start
43 )
44 )
47 if __name__ == "__main__":
---> 48 main()

Cell In[2], line 21, in main()
17 x0 = np.array([10, 20])
19 # minimize with parallel evaluation of 'fun'
20 # and its approximate gradient
---> 21 o1 = minimize_parallel(fun=f, x0=x0, args=0.5, parallel={"loginfo": True})
22 print(o1)
24 # test against scipy.optimize.minimize(method='L-BFGS-B')

File ~/anaconda3/lib/python3.11/site-packages/optimparallel.py:410, in minimize_parallel(fun, x0, args, jac, bounds, tol, options, callback, parallel)
398 with parallel_used.get("executor") as executor:
399 fun_jac = EvalParallel(
400 fun=fun,
401 jac=jac,
(...)
408 n=n,
409 )
--> 410 out = minimize(
411 fun=fun_jac.fun,
412 x0=x0,
413 jac=fun_jac.jac,
414 method="L-BFGS-B",
415 bounds=bounds,
416 callback=callback,
417 options=options_used,
418 )
420 if parallel_used.get("loginfo"):
421 out.loginfo = {
422 k: (
423 lambda x: np.array(x)
(...)
427 for k, v in fun_jac.info.items()
428 }

File ~/anaconda3/lib/python3.11/site-packages/scipy/optimize/_minimize.py:710, in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options)
707 res = _minimize_newtoncg(fun, x0, args, jac, hess, hessp, callback,
708 **options)
709 elif meth == 'l-bfgs-b':
--> 710 res = _minimize_lbfgsb(fun, x0, args, jac, bounds,
711 callback=callback, **options)
712 elif meth == 'tnc':
713 res = _minimize_tnc(fun, x0, args, jac, bounds, callback=callback,
714 **options)

File ~/anaconda3/lib/python3.11/site-packages/scipy/optimize/_lbfgsb_py.py:307, in _minimize_lbfgsb(fun, x0, args, jac, bounds, disp, maxcor, ftol, gtol, eps, maxfun, maxiter, iprint, callback, maxls, finite_diff_rel_step, **unknown_options)
304 else:
305 iprint = disp
--> 307 sf = _prepare_scalar_function(fun, x0, jac=jac, args=args, epsilon=eps,
308 bounds=new_bounds,
309 finite_diff_rel_step=finite_diff_rel_step)
311 func_and_grad = sf.fun_and_grad
313 fortran_int = _lbfgsb.types.intvar.dtype

File ~/anaconda3/lib/python3.11/site-packages/scipy/optimize/_optimize.py:383, in _prepare_scalar_function(fun, x0, jac, args, bounds, epsilon, finite_diff_rel_step, hess)
379 bounds = (-np.inf, np.inf)
381 # ScalarFunction caches. Reuse of fun(x) during grad
382 # calculation reduces overall function evaluations.
--> 383 sf = ScalarFunction(fun, x0, args, grad, hess,
384 finite_diff_rel_step, bounds, epsilon=epsilon)
386 return sf

File ~/anaconda3/lib/python3.11/site-packages/scipy/optimize/_differentiable_functions.py:158, in ScalarFunction.__init__(self, fun, x0, args, grad, hess, finite_diff_rel_step, finite_diff_bounds, epsilon)
155 self.f = fun_wrapped(self.x)
157 self._update_fun_impl = update_fun
--> 158 self._update_fun()
160 # Gradient evaluation
161 if callable(grad):

File ~/anaconda3/lib/python3.11/site-packages/scipy/optimize/_differentiable_functions.py:251, in ScalarFunction._update_fun(self)
249 def _update_fun(self):
250 if not self.f_updated:
--> 251 self._update_fun_impl()
252 self.f_updated = True

File ~/anaconda3/lib/python3.11/site-packages/scipy/optimize/_differentiable_functions.py:155, in ScalarFunction.__init__.<locals>.update_fun()
154 def update_fun():
--> 155 self.f = fun_wrapped(self.x)

File ~/anaconda3/lib/python3.11/site-packages/scipy/optimize/_differentiable_functions.py:137, in ScalarFunction.__init__.<locals>.fun_wrapped(x)
133 self.nfev += 1
134 # Send a copy because the user may overwrite it.
135 # Overwriting results in undefined behaviour because
136 # fun(self.x) will change self.x, with the two no longer linked.
--> 137 fx = fun(np.copy(x), *args)
138 # Make sure the function returns a true scalar
139 if not np.isscalar(fx):

File ~/anaconda3/lib/python3.11/site-packages/optimparallel.py:191, in EvalParallel.fun(self, x)
190 def fun(self, x: ArrayLike):
--> 191 self.eval_parallel(x=x)
192 if self.verbose:
193 print("fun(" + str(x) + ") = " + str(self.fun_val))

File ~/anaconda3/lib/python3.11/site-packages/optimparallel.py:151, in EvalParallel.eval_parallel(self, x)
142 ftmp = self._eval_approx
144 ret = self.executor.map(
145 ftmp,
146 eps_at,
(...)
149 itertools.repeat(self.eps),
150 )
--> 151 ret = np.array(list(ret))
152 self.fun_val = ret[0]
153 if self.forward:

File ~/anaconda3/lib/python3.11/concurrent/futures/process.py:606, in _chain_from_iterable_of_lists(iterable)
600 def _chain_from_iterable_of_lists(iterable):
601 """
602 Specialized implementation of itertools.chain.from_iterable.
603 Each item in iterable should be a list. This function is
604 careful not to keep references to yielded objects.
605 """
--> 606 for element in iterable:
607 element.reverse()
608 while element:

File ~/anaconda3/lib/python3.11/concurrent/futures/_base.py:619, in Executor.map.<locals>.result_iterator()
616 while fs:
617 # Careful not to keep a reference to the popped future
618 if timeout is None:
--> 619 yield _result_or_cancel(fs.pop())
620 else:
621 yield _result_or_cancel(fs.pop(), end_time - time.monotonic())

File ~/anaconda3/lib/python3.11/concurrent/futures/_base.py:317, in _result_or_cancel(failed resolving arguments)
315 try:
316 try:
--> 317 return fut.result(timeout)
318 finally:
319 fut.cancel()

File ~/anaconda3/lib/python3.11/concurrent/futures/_base.py:456, in Future.result(self, timeout)
454 raise CancelledError()
455 elif self._state == FINISHED:
--> 456 return self.__get_result()
457 else:
458 raise TimeoutError()

File ~/anaconda3/lib/python3.11/concurrent/futures/_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None

BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
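The AttributeError ("Can't get attribute 'f' on <module '__main__'>") suggests the worker processes cannot import the objective: a function defined directly in a notebook cell only exists in the notebook's __main__ and is not importable by spawned workers. A hedged sketch of a common workaround, assuming a hypothetical helper file objective_module.py saved next to the notebook:

# objective_module.py (hypothetical helper file next to the notebook)
import time
import numpy as np

def f(x, sleep_secs=0.5):
    time.sleep(sleep_secs)
    return float(np.sum((x - 14)**2))

# in a notebook cell
import numpy as np
from objective_module import f          # workers can now re-import f by name
from optimparallel import minimize_parallel

o1 = minimize_parallel(fun=f, x0=np.array([10, 20]), args=0.5)
print(o1)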

Importing a ctypes DLL breaks execution of minimize_parallel

I am running a minimization of a cost function with a lot of variables.
To reduce the calculation time I first used your package without any issue.
It takes a Python function as the cost function and minimizes it with four threads.
To further reduce the calculation time I wrote the cost function in C.
To run the C code from Python I use ctypes.
I load the shared C library with the ctypes.CDLL() function and wrote a Python wrapper function for the minimizer.
But as soon as I call the ctypes.CDLL() function, minimize_parallel stops working as intended. The debugger shows that four threads are running, but there is no output whatsoever and the CPU load is idle.
I am not sure why just importing the library with ctypes.CDLL breaks the minimize function.
Thanks in advance for your help.

System:
Linux Ubuntu 18.04
Python 3.7.15
optimParallel 0.1.2
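One possible explanation, offered only as an assumption, is that the library handle created by ctypes.CDLL() in the parent process does not behave well once it is shared with the worker processes. A minimal sketch of a workaround is to load the shared library lazily inside each worker on first use instead of at import time; the library path, function name, and C signature below are placeholders.

import ctypes
import numpy as np
from optimparallel import minimize_parallel

_lib = None  # per-process cache: each worker loads its own handle on first call

def cost(x):
    global _lib
    if _lib is None:
        _lib = ctypes.CDLL("./libcost.so")       # placeholder path
        _lib.cost.restype = ctypes.c_double      # placeholder C signature: double cost(double*, int)
    arr = np.ascontiguousarray(x, dtype=np.float64)
    ptr = arr.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
    return _lib.cost(ptr, ctypes.c_int(arr.size))

if __name__ == "__main__":
    res = minimize_parallel(cost, x0=np.zeros(10))
    print(res.x)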

PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed

Hello,

I have tried to use the package with Python 2.7.18 on macOS Monterey 12.0.1 to run the following code, which minimizes a function and keeps track of the computing time:

# Import modules
import numpy as np
import timeit
from scipy.optimize import minimize
from optimparallel import minimize_parallel

# Define the function
def objective(x):
    return x[0]**2.0 + x[1]**2.0

# Define the range for input
r_min, r_max = -5.0, 5.0

# Define the starting point as a random sample from the domain
pt = r_min + np.random.rand(2)*(r_max - r_min)

# Minimize the function
start = timeit.default_timer()
resultpar = minimize_parallel(fun=objective, x0=pt)
finish = timeit.default_timer()
print('Finished in', round(finish-start, 2), 'second(s)')

However, Python issues this error when I try to execute this script:

Traceback (most recent call last):
  File "/Users/montesinos/opt/anaconda3/envs/gambit/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
    send(obj)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
Traceback (most recent call last):
  File "/Users/montesinos/opt/anaconda3/envs/gambit/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
    send(obj)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
Traceback (most recent call last):
  File "/Users/montesinos/opt/anaconda3/envs/gambit/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
    send(obj)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed

According to what I read, it seems that Python cannot pass the function to the worker processes. Does anyone know what could be happening?
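This error usually means the interpreter cannot pickle the objective by name, which is common for functions defined interactively or in __main__; the package is also written for Python 3, so Python 2.7 is likely unsupported. A small hedged check, purely illustrative and not part of the package, is to verify whether the objective can be pickled at all before handing it to minimize_parallel:

import pickle

def objective(x):
    return x[0]**2.0 + x[1]**2.0

try:
    pickle.dumps(objective)   # same kind of top-level objective as in the script above
    print("objective is picklable and can be sent to worker processes")
except (pickle.PicklingError, TypeError) as err:
    # if this fails, move 'objective' into an importable module file and import it
    print("objective cannot be pickled:", err)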

Early Stopping based on objective rather than gradient: Issue for the optimParallel package in R

Hello! As stated above, this is really an issue for your R package at
https://git.math.uzh.ch/florian.gerber/optimParallel/-/tree/master

(But I'm not affiliated with uzh so I don't think I can post an issue there)

I love the package!

I would like to institute an early-stopping condition for L-BFGS-B that is based on my objective rather than on the gradient.

Once my objective crosses a threshold, I don't need the gradient to improve anymore; the result is already good enough. Despite a lot of looking, I haven't been able to find a parameter which allows me to do this (basically, abstol support in L-BFGS-B).

I could implement this by adding my own gradient, and then setting the value to 0 when it reaches a certain threshold, but then I need to re-implement your excellent search!

Basically I want to do:

# Gradient function in FGgenerator
g <- function(par) {
    # if abstol is set and the stopping condition is met, return a 0 gradient
    if (!is.null(abstol) && stopping_condition(par, abstol)) return(0)
    evalFG(par)
    i_g <<- i_g + 1
    return(grad)
}

Is this possible? Am I overlooking an easier way to accomplish this?

Thank you for your time!
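Translating the idea into the Python/scipy setting of this repository, here is a hedged sketch of the same trick; it is not a feature of optimparallel or optimParallel. Wrapping the gradient so that it is reported as zero once the objective crosses a chosen threshold makes L-BFGS-B's gradient-based convergence test fire early.

import numpy as np
from scipy.optimize import minimize

abstol = 1e-3  # illustrative early-stopping threshold on the objective

def f(x):
    return float(np.sum((x - 3.0)**2))

def grad(x):
    g = 2.0 * (x - 3.0)
    # early stopping: report convergence once the objective is good enough
    if f(x) <= abstol:
        return np.zeros_like(g)
    return g

res = minimize(f, x0=np.array([10.0, 20.0]), jac=grad, method="L-BFGS-B")
print(res.fun, res.nit)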

[help] Poor performance when running MPI at each evaluation

Hello,
First of all, thank you for this implementation of minimization.
The optimization scheme I am trying to run is roughly as follows:

from optimparallel import minimize_parallel
import os

def costFun(p):
    # each evaluation launches a 5-rank MPI job and reads back its result
    os.system("mpirun -np 5 somejob {}".format(p))
    return job_result()

x0 = (1, 2, 3, 4, 5)
optim = minimize_parallel(costFun, x0)

As you may understand, the idea is to run 6 different evaluations at the same time, but each of those evaluations should itself run on 5 cores with OpenMPI (it is actually a finite element analysis). I really don't know why, but this scheme performs very poorly: each MPI job seems to run about twice as slowly as expected.

Do you have any hint?

Thank you in advance.
Regards.
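One plausible explanation, stated here only as an assumption, is CPU oversubscription: with a length-5 x0, minimize_parallel evaluates the objective and its finite-difference gradient with up to 6 workers at once, each of which then launches its own 5-rank MPI job, so up to 30 MPI ranks compete for the available cores and every job slows down. A minimal sketch of one mitigation is to cap the number of concurrent evaluations so that workers times MPI ranks stays at or below the physical core count; the solver name and result file below are placeholders.

from optimparallel import minimize_parallel
import subprocess

def job_result():
    # placeholder: read the scalar cost written by the MPI job
    with open("cost.txt") as fh:
        return float(fh.read())

def costFun(p):
    # 'somejob' is a placeholder for the finite element solver
    subprocess.run("mpirun -np 5 somejob {}".format(p), shell=True, check=True)
    return job_result()

if __name__ == "__main__":
    x0 = (1, 2, 3, 4, 5)
    # e.g. on a 20-core machine: 4 workers x 5 MPI ranks = 20 ranks in flight
    optim = minimize_parallel(costFun, x0, parallel={"max_workers": 4})
    print(optim.x)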

Python 3.11/Multiple initial guesses

Hi,

  1. Do you support Python 3.11?

  2. My objective is not so nice (i.e., non-convex), so I want to try a handful of x0’s. Would you consider “extending” the parallelization to multiple initial guesses (my solver is not thread-safe)?

Thanks,
Jake
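Regarding the second question: until such a feature exists in the package, a hedged do-it-yourself sketch using only the standard library is to run independent scipy optimizations from several starting points in separate processes and keep the best result. The objective and starting points here are purely illustrative.

import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.optimize import minimize

def objective(x):
    # illustrative non-convex objective
    return float(np.sum(x**2) + 3.0 * np.sin(5.0 * x).sum())

def run_one(x0):
    # each process runs an independent local optimization
    return minimize(objective, x0=x0, method="L-BFGS-B")

if __name__ == "__main__":
    starts = [np.random.uniform(-5, 5, size=2) for _ in range(8)]
    with ProcessPoolExecutor(max_workers=4) as ex:
        results = list(ex.map(run_one, starts))
    best = min(results, key=lambda r: r.fun)
    print(best.x, best.fun)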

hess_inv returned as array and not object

I am using minimize_parallel for an optimization of negative log-likelihoods.

For the continuation of my program I need hess_inv in the same form that scipy.optimize.minimize returns it, but minimize_parallel returns it as a plain multidimensional array.

Is it possible to change this return value in a future release?

Kind regards
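For comparison, scipy's L-BFGS-B implementation returns hess_inv as a LbfgsInvHessProduct, i.e. a LinearOperator with a todense() method. Until the return value changes, a hedged workaround sketch, assuming o1 is the result object returned by minimize_parallel, is to wrap the dense array in a LinearOperator so downstream code that only needs matrix-vector products keeps working:

import numpy as np
from scipy.sparse.linalg import aslinearoperator

# 'o1' is assumed to be a result returned by minimize_parallel(...)
hess_inv_dense = np.asarray(o1.hess_inv)        # plain ndarray as returned by minimize_parallel
hess_inv_op = aslinearoperator(hess_inv_dense)  # exposes matvec/rmatvec like scipy's LbfgsInvHessProduct

v = np.ones(hess_inv_dense.shape[0])
print(hess_inv_op.matvec(v))                    # same product as hess_inv_dense @ v

Conversely, scipy's operator can be densified with .todense() if an array is what downstream code needs.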

allow additional arguments to ProcessPoolExecutor

I am running an optimization that uses some unpicklable objects. Because spawned or forked worker processes require pickling the environment, I would like to supply ProcessPoolExecutor with an initializer function that sets up the unpicklable objects independently in each process.

To implement this with optimparallel, I've branched the repo and made the following modifications to optimparallel.py

parallel_used = {'max_workers': None, 'forward': True, 'verbose': False, 'loginfo': False, 'time': False, 'initializer': None, 'initargs': (), 'mp_context': None}

and

with concurrent.futures.ProcessPoolExecutor(max_workers=parallel_used.get('max_workers'),
                                             initializer=parallel_used.get('initializer'),
                                             mp_context=parallel_used.get('mp_context'),
                                             initargs=parallel_used.get('initargs')) as executor:

Would love to see functionality like this added in a future release!
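For context, a hedged sketch of the standard-library pattern this proposal would expose, independent of optimparallel: ProcessPoolExecutor's initializer and initargs run once in each worker process and can build an unpicklable resource as a module-level global that the task function then uses.

from concurrent.futures import ProcessPoolExecutor

_resource = None  # per-process handle created by the initializer, never pickled

def init_worker(path):
    global _resource
    # e.g. open a file handle, ctypes library, or database connection per worker
    _resource = open(path)

def task(i):
    # uses the per-process resource without ever pickling it
    return "{}: {}".format(i, _resource.name)

if __name__ == "__main__":
    with open("config.txt", "w") as fh:
        fh.write("demo")
    with ProcessPoolExecutor(max_workers=2,
                             initializer=init_worker,
                             initargs=("config.txt",)) as ex:
        print(list(ex.map(task, range(4))))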

Method Powell

Hi. I'm trying to speed up the minimization of a function for which it is not possible to compute a derivative, so I'm using method='Powell', which gives me relatively good results (the best of all the scipy.optimize.minimize methods). I've sped the code up using jit, but I'd like to speed it up even more. Would it be possible to rewrite your code to use the Powell method?
