pyxu-org / pyxu
Modular and scalable computational imaging in Python with GPU/out-of-core computing.
Home Page: https://pyxu-org.github.io/
License: MIT License
The problem I'm trying to solve is stated as follows, from Cury et al.:
Here y is a 1d vector of targets, one for each t in T, X has shape (T, W, L), and alpha has shape (W, L). The function q computes, for each t, the sum of the element-wise product between X[t] and alpha, i.e. q(X, alpha)_t = sum over (w, l) of X[t, w, l] * alpha[w, l], which in numpy would be np.sum(X * alpha, (1, 2)).
To vectorize the problem over T, the way I would be tempted to do it in e.g. PyTorch would be to set alpha as requiring gradients and either rely on broadcasting or convolve X with alpha as the kernel.
I am entirely new to pyxu, but my guess is that to implement this I would need something like
SquaredL2Norm(dim=y.size).asloss(y.ravel()) * [X convolved with alpha]
and optimize this (plus the phi function, which is just the sum of an L21Norm and an L1Norm) with respect to alpha.
I'm not quite sure how to implement this here, or whether it would be supported at all. Could anyone point me in the right direction?
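Since q is linear in alpha, the forward model can also be written as a plain matrix-vector product after flattening; a minimal NumPy sketch with made-up shapes (not pyxu API):

import numpy as np

T, W, L = 5, 3, 4
X = np.random.rand(T, W, L)
alpha = np.random.rand(W, L)

# q as stated: element-wise product with alpha, summed over the (W, L) axes.
q_direct = np.sum(X * alpha, (1, 2))

# Equivalent linear-operator view: flatten X to (T, W*L), alpha to (W*L,).
A = X.reshape(T, -1)
q_linear = A @ alpha.ravel()

assert np.allclose(q_direct, q_linear)

In other words, the data-fidelity term acts on a linear operator applied to alpha.ravel(), which matches the SquaredL2Norm-composed-with-an-operator guess above.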
Within pycsou.core.solver, the return type should probably be float for the following abstract method of GenericIterativeAlgorithm:
@abstractmethod
def stopping_metric(self):
    r"""
    Stopping metric.
    """
    pass
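For reference, the annotated version suggested here would read:

@abstractmethod
def stopping_metric(self) -> float:
    r"""
    Stopping metric.
    """
    pass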
The two update_iterand functions from the file proxalgs.py start with a test on the iteration number:
def update_iterand(self) -> dict:
    if self.iter == 0:
        x, z = self.init_iterand.values()
    else:
        x, z = self.iterand.values()
I think this test is not required, as the information we are looking for is already stored in the dictionary old_iterand. The code could then be replaced by:
x, z = self.old_iterand.values()
However, in case the variables x, z get modified, it might be required to create a deepcopy of the variables (not sure); see the sketch below.
The same goes for
x, x_old, t_old = self.old_iterand.values()
for APGD.
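For illustration, a deepcopy-guarded variant (a sketch; it assumes old_iterand preserves the key order used above):

from copy import deepcopy

x, z = deepcopy(self.old_iterand).values()             # PDS
x, x_old, t_old = deepcopy(self.old_iterand).values()  # APGD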
Unless I have missed it, I think there is no description of the actual shape of the iterand in any iterative algorithm. It would be helpful to have such a description in the documentation, as PDS and APGD from pycsou.core.proxalgs have different iterand shapes, as illustrated below.
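Judging from the unpacking in the previous issue, the two iterands appear to be dictionaries with different keys (an inference from the code above, not verified against the source):

# PDS (see update_iterand above):
iterand = {'x': x, 'z': z}
# APGD:
iterand = {'x': x, 'x_old': x_old, 't_old': t_old}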
Proposed features:
- A TorchFromMap wrapper for transforming a Map object into a PyTorch Function (as described here).
- Composition of TorchedMaps with existing PyTorch Functionals, computing gradients automatically using pytorch.autograd.
- __cupy_array_interface__.
- A MapFromTorch wrapper for transforming a PyTorch Function into a Pycsou Map object for use in optimisation algorithms.
The iterate() method of the GenericIterativeAlgorithm class from the file solver.py always sets
self.converged = True
which might not be accurate, for instance if the algorithm stops after reaching the maximum number of iterations. An easy fix would be
self.converged = self.stopping_metric() <= self.accuracy_threshold
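A minimal sketch of where the fix would sit, assuming iterate() runs a loop bounded by a max_iter attribute (the loop body here is schematic, not the actual solver.py code):

def iterate(self):
    while self.iter < self.max_iter:
        self.iterand = self.update_iterand()
        if self.stopping_metric() <= self.accuracy_threshold:
            break
        self.iter += 1
    # Report convergence honestly instead of hard-coding True:
    self.converged = self.stopping_metric() <= self.accuracy_threshold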
Add a decorator @infer_array_module to all numpy-backed functions of the package to extend their scope to Cupy, JAX and Dask arrays, and hence provide support for GPU/TPU and out-of-core/distributed computations. The decorator should infer the array module (numpy, cupy, jax.numpy or dask.array) from the input duck array fed to the function (similarly to cp.get_array_module()). The code should be made module-agnostic by replacing all np.function calls by _xp.function, where _xp is dynamically inferred at run-time by the decorator.
Example:
import numpy as np
import cupy as cp
import dask.array as da
import jax.numpy as jnp
import types
from typing import Callable, Optional


def infer_array_module(decorated_object_type='method'):
    def make_decorator(call_fun: Callable):
        def wrapper(*args, **kwargs):
            if decorated_object_type == 'method':
                arr = args[1]  # First argument is self
            else:
                arr = args[0]
            if isinstance(arr, cp.ndarray):
                xp = cp
            elif isinstance(arr, da.core.Array):
                xp = da
            elif isinstance(arr, jnp.ndarray):
                xp = jnp
            else:
                xp = np  # Fall back to Numpy backend if unknown array type
            kwargs['_xp'] = xp
            return call_fun(*args, **kwargs)
        return wrapper
    return make_decorator


class FFTOp(object):
    def __init__(self):
        pass

    @infer_array_module()
    def __call__(self, x, _xp: Optional[types.ModuleType] = None):
        return _xp.fft.fft(x)
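A quick usage check of the decorated operator (shapes are arbitrary):

op = FFTOp()
x_np = np.random.rand(8)
y_cpu = op(x_np)      # _xp resolved to numpy
x_cp = cp.asarray(x_np)
y_gpu = op(x_cp)      # _xp resolved to cupy, runs on the GPU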
Further considerations:
- Allow control over the dtypes of Map, LinearOperator and Functional objects, as well as of the algorithms (float64 computations on GPUs are notoriously slower than float32 computations).
- Imports of the cupy module should be conditional on this (optional) dependency being installed.
- Call persist() on Dask arrays within the algorithms (equivalent to compute() on a single machine, but keeps the computation result distributed when working with multiple nodes). This is necessary since the stopping criterion of most algorithms cannot be computed lazily. Moreover, computing the Dask arrays avoids overly long Dask computation graphs, which come with much overhead. A sketch follows this list.
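A small illustration of the persist()-then-evaluate pattern on a Dask array (standalone, not pycsou code):

import dask.array as da

x = da.random.random((1_000_000,), chunks=100_000)
x = (2 * x - 1).persist()          # materialize chunks, keep them distributed
metric = float(da.linalg.norm(x))  # stopping criteria need a concrete number

Without persist(), each iteration would keep extending the task graph of x, making later evaluations increasingly expensive.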
When trying to compute the singular values with spls.svds, Python raises an error when one of the dimensions of the operator from which it tries to compute the SVD has size one. Indeed, this function needs the number of singular values to return to be strictly lower than the smallest dimension of the operator. In the case of a vector, calling the function should, I guess, return the Euclidean norm of the vector.
This problem happens when computing the Lipschitz constant of an operator for which one of the dimensions has size one (basically a vector instead of a matrix).
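A minimal reproduction with SciPy (array contents are arbitrary):

import numpy as np
import scipy.sparse.linalg as spls

A = np.random.rand(1, 5)   # operator with one dimension of size one
try:
    spls.svds(A, k=1)      # fails: svds requires k < min(A.shape) = 1
except ValueError as err:
    print(err)
print(np.linalg.norm(A))   # the single singular value: the Euclidean norm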
On Linux, I cannot use pip for the quick install. When I install using the developer installation method, I get an error indicating that there is a problem with setup.py. Could you explain how to install the three packages pycsou, pycsphere and pycgsp using the Quick Install? Thank you very much for your answer!
Add support for automatic differentiation via JAX. JAX can automatically differentiate native Python and NumPy functions. It can differentiate through loops, branches, recursion, and closures, and it can take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation) via grad as well as forward-mode differentiation, and the two can be composed arbitrarily to any order.
Example:
import numpy as np
import cupy as cp
import dask.array as da
import jax.numpy as jnp
import types
from typing import Callable, Optional
from jax import jacfwd
import jax.dlpack as jxdl
from warnings import warn


def infer_array_module(decorated_object_type='method'):
    def make_decorator(call_fun: Callable):
        def wrapper(*args, **kwargs):
            if decorated_object_type == 'method':
                arr = args[1]  # First argument is self
            else:
                arr = args[0]
            if isinstance(arr, cp.ndarray):
                xp = cp
            elif isinstance(arr, da.core.Array):
                xp = da
            elif isinstance(arr, jnp.ndarray):
                xp = jnp
            else:
                xp = np  # Fall back to Numpy backend if unknown array type
            kwargs['_xp'] = xp
            return call_fun(*args, **kwargs)
        return wrapper
    return make_decorator


class FFTOp(object):
    @infer_array_module(decorated_object_type='method')
    def __call__(self, x, _xp: Optional[types.ModuleType] = None):
        return _xp.fft.fft(x)

    @infer_array_module(decorated_object_type='method')
    def jacobian(self, x, _xp: Optional[types.ModuleType] = None):
        if _xp == cp:
            # Zero-copy conversion from Cupy to JAX arrays only works with float32 dtypes.
            arr = jxdl.from_dlpack(x.astype(_xp.float32).toDlpack())
            warn('Automatic differentiation with Cupy arrays only works with float32 precision.')
        elif _xp == da:
            raise NotImplementedError('Automatic differentiation is not supported with lazy Dask arrays.')
        else:
            arr = jnp.asarray(x)
        jacobian_eval = jacfwd(self.__call__)(arr)
        return _xp.asarray(jacobian_eval)
The full Jacobian can be computed row-by-row (jax.jacrev) or column-by-column (jax.jacfwd). One can also use jax.jvp or jax.vjp for evaluations of Jacobian-vector products without forming the Jacobian; see the sketch below.
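A tiny standalone example of the matrix-free products (toy function, nothing pycsou-specific):

import jax
import jax.numpy as jnp

f = lambda x: jnp.sin(x) * x          # toy map from R^n to R^n
x = jnp.arange(4.0)
v = jnp.ones(4)

y, jvp_out = jax.jvp(f, (x,), (v,))   # forward mode: J(x) @ v
y2, vjp_fun = jax.vjp(f, x)
(vjp_out,) = vjp_fun(v)               # reverse mode: v @ J(x)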
When running the developer-install command
$ python3 -m pip install -e ".[dev]"
pip was searching for over an hour trying to determine ipython compatibility.
A fix that worked was suggested by Joan: replace the command
conda create -n pycsou --channel=conda-forge --file=conda/requirements.txt
with
conda create -n pycsou --file=conda/requirements.txt -c conda-forge python=3.9