pyxu-org / pyxu

Modular and scalable computational imaging in Python with GPU/out-of-core computing.

Home Page: https://pyxu-org.github.io/

License: MIT License

Python 100.00%
array-api array-computing artificial-intelligence bayesian-methods computational-imaging computer-vision foundation-models image-enhancement image-processing image-reconstruction image-restoration inverse-problem machine-learning matrix-free modular python-hpc python-library scalable sensing sparse-learning


pyxu's Issues

PGD optimization for a convolution kernel

The problem I'm trying to solve is stated as follows, from Cury et al.:
[Pasted image 20240327144556: problem formulation]

With y being a 1-D vector of targets (one for each t in T), X shaped (T, W, L), and alpha shaped (W, L). The function q is the sum of the elements of the element-wise product between X and alpha, which in NumPy would be np.sum(X * alpha, (1, 2)).

To vectorize the problem over T, the way I would be tempted to do it if it were e.g. PyTorch would be to set alpha as requiring gradients and either rely on broadcasting or convolve X with alpha as the kernel.

I am entirely new to pyxu, but my guess is that to implement this I would need
SquaredL2Norm(dim=y.size).asloss(y.ravel()) * [X convolved with alpha]
and optimize this (plus the phi function, which is just the sum of an L21Norm and an L1Norm) for alpha.
I'm not quite sure how to implement this here, or if this would be supported at all. Could anyone point me in the right direction?
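As a minimal NumPy sketch of the vectorized objective (shapes chosen arbitrarily for illustration), q is just a matrix-vector product acting on the flattened alpha, which is the linear-operator form a PGD-style solver would typically expect:

import numpy as np

# Arbitrary illustrative shapes.
T, W, L = 50, 8, 16
rng = np.random.default_rng(0)
X = rng.standard_normal((T, W, L))
alpha = rng.standard_normal((W, L))

# q(alpha) as described above ...
q_loop = np.sum(X * alpha, axis=(1, 2))      # shape (T,)

# ... equals a single matrix-vector product, i.e. a linear operator
# acting on the flattened alpha.
A = X.reshape(T, W * L)
q_matvec = A @ alpha.ravel()

assert np.allclose(q_loop, q_matvec)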

Minor remark: data type missing

Within pycsou.core.solver, the return type should probably be float for the following abstract method of GenericIterativeAlgorithm.

    @abstractmethod
    def stopping_metric(self):
        r"""
        Stopping metric.
        """
        pass
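i.e., with the annotation the snippet would read:

    @abstractmethod
    def stopping_metric(self) -> float:
        r"""
        Stopping metric.
        """
        pass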

Suggestion for code simplification

Both update_iterand functions in proxalgs.py start with a test on the iteration number:

    def update_iterand(self) -> dict:
        if self.iter == 0:
            x, z = self.init_iterand.values()
        else:
            x, z = self.iterand.values()

I think this test is not required as the information we are looking for is already stored in the dictionary old_iterand. The code could then be replaced by:

x, z = self.old_iterand.values()

However, if the variables x and z are later modified in place, it might be necessary to create a deep copy of them first (not sure); see the sketch below.

Same goes for

x, x_old, t_old = self.old_iterand.values()

for APGD.
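A sketch of what the simplified method could look like (deep-copying only if the variables are subsequently modified in place; attribute names follow the snippets above):

import copy

def update_iterand(self) -> dict:
    # old_iterand already holds init_iterand at iteration 0, so the
    # `if self.iter == 0` branch is no longer needed.
    x, z = self.old_iterand.values()
    x, z = copy.deepcopy(x), copy.deepcopy(z)  # only if modified in place afterwards
    ...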

Suggestion for documentation improvement

Unless I have missed it, there is no description of the actual shape of the iterand used by any iterative algorithm. It would be helpful to add such a description to the documentation, as PDS and APGD from pycsou.core.proxalgs use iterands of different shapes.

Interoperability with PyTorch

  • Make a TorchFromMap wrapper for transforming a Map object into a PyTorch Function (as described here); a rough sketch is given after this list.
  • Allow composition of TorchedMaps with existing PyTorch Functionals and compute gradient automatically using pytorch.autograd.
  • Ensure no copy between CuPy and PyTorch using __cuda_array_interface__.
  • Make a MapFromTorch wrapper for transforming a PyTorch Function into a Pycsou Map object for use in optimisation algorithms.
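A rough sketch (not the library's actual API) of what the TorchFromMap wrapper could look like, assuming hypothetically that the wrapped map exposes __call__ for forward evaluation and a jacobianT(x) method returning an operator that applies the adjoint of its Jacobian:

import numpy as np
import torch

class TorchFromMap(torch.autograd.Function):
    """Sketch: expose a differentiable Pycsou-style map as a PyTorch autograd Function."""

    @staticmethod
    def forward(ctx, x: torch.Tensor, op) -> torch.Tensor:
        ctx.op = op
        ctx.save_for_backward(x)
        y = op(x.detach().cpu().numpy())  # forward evaluation of the map
        return torch.as_tensor(np.asarray(y), dtype=x.dtype)

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        (x,) = ctx.saved_tensors
        # Chain rule: apply the adjoint of the Jacobian at x to the incoming gradient.
        g = ctx.op.jacobianT(x.detach().cpu().numpy())(grad_output.detach().cpu().numpy())
        return torch.as_tensor(np.asarray(g), dtype=grad_output.dtype), None

It would then be used as y = TorchFromMap.apply(x, my_map), with my_map a differentiable map instance.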

GenericIterativeAlgorithms always "converges"

The iterate() method of the GenericIterativeAlgorithm class in solver.py always sets

self.converged = True

which might not be accurate, for instance if it stops after reaching the maximum number of iterations.

An easy fix would be

self.converged = self.stopping_metric() <= self.accuracy_threshold
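For context, a sketch of how iterate() could set the flag (attribute names other than those quoted above, such as max_iter, are assumed here):

def iterate(self) -> None:
    while self.iter < self.max_iter:
        self.update_iterand()
        self.iter += 1
        if self.stopping_metric() <= self.accuracy_threshold:
            break
    # Converged only if the tolerance was actually reached, not merely
    # because the iteration budget ran out.
    self.converged = self.stopping_metric() <= self.accuracy_threshold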

Add Support for Cupy, Dask and JAX arrays

Extend scope of functions/methods to Cupy, Dask and JAX arrays (Duck arrays)

  • Add decorator @infer_array_module to all numpy-backed functions of the package to extend their scope to Cupy, JAX and Dask arrays and hence provide support for GPU/TPU and out-of-core/distributed computations.
    The decorator should infer the array module to be used (numpy, cupy, dask or jax.numpy) from the input duck array fed to the function (similarly to cp.get_array_module()). The code should be made module-agnostic by replacing all np.function calls with _xp.function calls, where _xp is dynamically inferred at run-time by the decorator.

Example:

import numpy as np
import cupy as cp
import dask.array as da
import jax.numpy as jnp
import types
from typing import Callable, Optional

def infer_array_module(decorated_object_type='method'):
    def make_decorator(call_fun: Callable):
        def wrapper(*args, **kwargs):
            if decorated_object_type == 'method':
                arr = args[1]  # First argument is self
            else:
                arr = args[0]

            if isinstance(arr, cp.ndarray):
                xp = cp
            elif isinstance(arr, da.core.Array):
                xp = da
            elif isinstance(arr, jnp.ndarray):
                xp = jnp
            else:
                xp = np  # Fall back to the NumPy backend for unknown array types
            kwargs['_xp'] = xp
            return call_fun(*args, **kwargs)
        return wrapper
    return make_decorator


class FFTOp(object):

    def __init__(self):
        pass

    @infer_array_module()
    def __call__(self, x, _xp: Optional[types.ModuleType] = None):
        return _xp.fft.fft(x)
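A usage sketch of the example above, showing how the same call dispatches to the backend of the input array (the CuPy lines assume a GPU is available):

op = FFTOp()

x_np = np.random.randn(8)
y_np = op(x_np)                                   # evaluated with numpy.fft

x_da = da.from_array(np.random.randn(8), chunks=-1)
y_da = op(x_da)                                   # lazy Dask array; y_da.compute() to evaluate

# With CuPy installed and a GPU available:
# x_cp = cp.random.randn(8)
# y_cp = op(x_cp)                                 # evaluated on the GPU with cupy.fft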

Cupy specifics:

  • Better control the dtypes of Map, LinearOperator and Functional objects, as well as of the algorithms (float64 computations on GPUs are notoriously slower than float32 computations).
  • Cupy should remain an optional dependency. Any call to the cupy module should hence be conditional on this dependency being installed.
  • Benchmark cupy vs numpy computations.

Dask specifics:

  • Outputs of callables are by default lazy Dask arrays. In iterative algorithms, lazy arrays should be computed via the method persist() (equivalent to compute() on a single machine, but it keeps the computation result distributed when working with multiple nodes). This is necessary since the stopping criterion of most algorithms cannot be computed lazily. Moreover, computing the Dask arrays avoids overly long Dask computation graphs, which come with significant overhead.
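A toy illustration of the pattern (the update step is hypothetical):

import numpy as np
import dask.array as da

x = da.from_array(np.random.randn(10_000), chunks=1_000)

for _ in range(5):
    # Hypothetical update step; persist() keeps the result distributed but
    # materialised, so the task graph does not grow across iterations.
    x = (x - 0.1 * x).persist()

# The stopping criterion needs an eager value.
residual = float(da.linalg.norm(x).compute())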

JAX specifics:

  • JAX arrays are immutable, so in-place computations in the code must be updated (potentially cumbersome).
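For reference, NumPy-style in-place updates need to be rewritten with JAX's functional update syntax, e.g.:

import jax.numpy as jnp

x = jnp.zeros(5)
# x[0] = 1.0               # raises: JAX arrays are immutable
x = x.at[0].set(1.0)       # functional (out-of-place) update instead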

Treat exception with svds when k=1

When trying to compute the singular values with spls.svds, Python raises an error when one dimension of the operator from which it tries to compute the SVD is of size one. Indeed, this function requires the number of singular values to return to be strictly lower than the smallest dimension of the operator. In the case of a vector, calling the function should, I guess, return the Euclidean norm of the vector.
This problem happens when computing the Lipschitz constant of an operator for which one of the dimensions has size one (basically a vector instead of a matrix).
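A possible workaround (sketch; assumes A is a dense array or can be densified) following the suggestion above:

import numpy as np
import scipy.sparse.linalg as spls

def largest_singular_value(A) -> float:
    A = np.asarray(A)
    # svds requires k < min(A.shape); for a row or column vector the largest
    # singular value is simply its Euclidean norm.
    if min(A.shape) == 1:
        return float(np.linalg.norm(A.ravel()))
    return float(spls.svds(A, k=1, return_singular_vectors=False)[0])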

There was a problem installing Pycsou and its subpackages

On Linux, I cannot use pip for the quick install. When I install using the developer installation method, I get an error indicating that there is a problem with setup.py. I would like to ask how to install the three packages pycsou, pycsphere and pycgsp using the Quick Install. Thank you very much for your answer!

Add Support for Automatic Differentiation via JAX

Add support for automatic differentiation via JAX. JAX can automatically differentiate native Python and NumPy functions. It can differentiate through loops, branches, recursion, and closures, and it can take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation) via grad as well as forward-mode differentiation, and the two can be composed arbitrarily to any order.

Example:

import numpy as np
import cupy as cp
import dask.array as da
import jax.numpy as jnp
import types
from typing import Callable, Optional
from jax import jacfwd
import jax.dlpack as jxdl
from warnings import warn

def infer_array_module(decorated_object_type='method'):
    def make_decorator(call_fun: Callable):
        def wrapper(*args, **kwargs):
            if decorated_object_type == 'method':
                arr = args[1]  # First argument is self
            else:
                arr = args[0]

            if isinstance(arr, cp.ndarray):
                xp = cp
            elif isinstance(arr, da.core.Array):
                xp = da
            elif isinstance(arr, jnp.ndarray):
                xp = jnp
            else:
                xp = np  # Fall back to the NumPy backend for unknown array types
            kwargs['_xp'] = xp
            return call_fun(*args, **kwargs)
        return wrapper
    return make_decorator


class FFTOp(object):

    @infer_array_module(decorated_object_type='method')
    def __call__(self, x, _xp: Optional[types.ModuleType] = None):
        return _xp.fft.fft(x)

    @infer_array_module(decorated_object_type='method')
    def jacobian(self, x, _xp: Optional[types.ModuleType] = None):
        if _xp == cp:
            # Zero-copy conversion from CuPy to JAX arrays only works with float32 dtypes.
            arr = jxdl.from_dlpack(x.astype(_xp.float32).toDlpack())
            warn('Automatic differentiation with CuPy arrays only works with float32 precision.')
        elif _xp == da:
            raise NotImplementedError('Automatic differentiation is not supported with lazy Dask arrays.')
        else:
            arr = jnp.asarray(x)
        jacobian_eval = jacfwd(self.__call__)(arr)
        return _xp.asarray(jacobian_eval)

Notes:

  1. Automatic differentiation is an eager transformation and therefore won't work on lazy Dask arrays.
  2. Zero-copy conversion from Cupy to JAX arrays seems to only work with float32 dtypes (to be investigated).
  3. The size of the nonlinear map should be checked to determine whether the Jacobian matrix should be computed row-by-row (jax.jacrev) or column-by-column (jax.jacfwd).
  4. Investigate jax.jvp or jax.vjp for evaluations of Jacobian-vector products without forming the Jacobian (see the sketch after these notes).
  5. JAX arrays are created by default on the first GPU available. When copying from Numpy arrays stored on the CPU, one should control explicitly where the JAX array is stored to avoid unnecessary data transfers between CPU and GPU.
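A minimal illustration of note 4, applying the Jacobian without materialising it:

import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * x

x = jnp.arange(3.0)
v = jnp.ones(3)

# Forward mode: Jacobian-vector product J(x) @ v.
y, jvp_out = jax.jvp(f, (x,), (v,))

# Reverse mode: vector-Jacobian product w^T J(x).
y, vjp_fun = jax.vjp(f, x)
(vjp_out,) = vjp_fun(jnp.ones_like(y))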

Developer Install - ".[dev]"

$ python3 -m pip install -e ".[dev]"
On this command of the developer install, pip spent over an hour trying to resolve ipython compatibility.

A fix that worked was suggested by Joan: replace the command conda create -n pycsou --channel=conda-forge --file=conda/requirements.txt
with conda create -n pycsou --file=conda/requirements.txt -c conda-forge python=3.9
