lanl / scico

Scientific Computational Imaging COde

License: BSD 3-Clause "New" or "Revised" License

Shell 1.09% Python 98.91%
computational-imaging inverse-problems optimization admm fista total-variation plug-and-play-priors jax convex-optimization proximal-algorithms

scico's Introduction


Scientific Computational Imaging Code (SCICO)

SCICO is a Python package for solving the inverse problems that arise in scientific imaging applications. Its primary focus is providing methods for solving ill-posed inverse problems by using an appropriate prior model of the reconstruction space. SCICO includes a growing suite of operators, cost functionals, regularizers, and optimization routines that may be combined to solve a wide range of problems, and is designed so that it is easy to add new building blocks. SCICO is built on top of JAX, which provides features such as automatic gradient calculation and GPU acceleration.

Documentation is available online. If you use this software for published work, please cite the corresponding JOSS Paper (see bibtex entry balke-2022-scico in docs/source/references.bib).

Installation

See the online documentation for installation instructions.

Usage Examples

Usage examples are available as Python scripts and Jupyter Notebooks. Example scripts are located in examples/scripts. The corresponding Jupyter Notebooks are provided in the scico-data submodule and symlinked into examples/notebooks. They are also viewable on GitHub or nbviewer, or can be run online via binder.
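
For a flavor of the building-block style, here is a condensed sketch adapted from the ADMM reproduction script that appears in the "Scale problem in ADMM / CG" issue further down this page; the API names (loss.SquaredL2Loss, linop.MatrixOperator, scico.admm.ADMM) are taken from that snippet and may differ in later releases.

import numpy as np
import jax

from scico import functional, linop, loss
from scico.admm import ADMM, LinearSubproblemSolver

np.random.seed(0)
A = linop.MatrixOperator(np.random.randn(100, 10))   # forward operator
x_true = jax.device_put(np.random.randn(10))
y = A @ x_true                                       # synthetic measurements

f = loss.SquaredL2Loss(y=y, A=A)                     # data fidelity term
x_hat = ADMM(
    f=f,
    g_list=[functional.ZeroFunctional()],            # no regularization, for illustration
    C_list=[linop.Identity(x_true.shape)],
    rho_list=[1.0],
    subproblem_solver=LinearSubproblemSolver(cg_function="scico"),
).solve()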

License

SCICO is distributed as open-source software under a BSD 3-Clause License (see the LICENSE file for details).

LANL open source approval reference C20091.

(c) 2020-2024. Triad National Security, LLC. All rights reserved. This program was produced under U.S. Government contract 89233218CNA000001 for Los Alamos National Laboratory (LANL), which is operated by Triad National Security, LLC for the U.S. Department of Energy/National Nuclear Security Administration. All rights in the program are reserved by Triad National Security, LLC, and the U.S. Department of Energy/National Nuclear Security Administration. The Government has granted for itself and others acting on its behalf a nonexclusive, paid-up, irrevocable worldwide license in this material to reproduce, prepare derivative works, distribute copies to the public, perform publicly and display publicly, and to permit others to do so.

scico's People

Contributors

bwohlberg, crstngc, danaroth83, fernandodavis, lukepfister, michael-t-mccann, sauravmaheshkar, shnaqvi, sibgatulin, smajee, tbalke, wjgancn


scico's Issues

Fix Nelder-Mead / gradient warning during pytest

Fix warning Method Nelder-Mead does not use gradient information appearing during pytest.

Check out line 264 in solver.py: it needs to check whether the chosen solver requires gradient information and modify the call to minimize accordingly. A sketch of one possible approach follows.
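
A minimal sketch of the kind of guard the fix could use; the helper name and the set of gradient-free methods are assumptions, not the actual solver.py code.

from scipy import optimize

# Methods that ignore gradient information (illustrative list).
GRAD_FREE_METHODS = {"Nelder-Mead", "Powell", "COBYLA"}

def call_minimize(func, x0, method="L-BFGS-B", grad=None, **kwargs):
    """Only pass `jac` to scipy.optimize.minimize when the method can use it."""
    if grad is not None and method not in GRAD_FREE_METHODS:
        return optimize.minimize(func, x0, method=method, jac=grad, **kwargs)
    return optimize.minimize(func, x0, method=method, **kwargs)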

Output types of operators

This question was raised in #88 (comment), since the svmbir and BM3D routines return ndarrays, which may be a problematic type to have appear inside the solvers.

In general this raises the following questions (one possible approach to the second is sketched below):

  • What types do our routines return?
  • Do we want consistency (e.g. type_in == type_out, or type_out always a DeviceArray-like type)?
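
One possible approach to the consistency question, sketched here as a generic wrapper rather than actual SCICO code: convert any ndarray output of a wrapped routine to a JAX array before it reaches the solvers.

import numpy as np
import jax

def as_jax_output(fn):
    """Wrap `fn` so that numpy ndarray return values are converted to JAX arrays."""
    def wrapped(*args, **kwargs):
        out = fn(*args, **kwargs)
        return jax.device_put(out) if isinstance(out, np.ndarray) else out
    return wrapped

# e.g. denoise = as_jax_output(some_bm3d_routine)   # hypothetical usage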

Functionals tests are rather opaque

The high-level structure of scico.tests.test_functional.py is quite difficult to follow, in part due to the large number of helper functions, which make it difficult to identify the overall structure and the main entry points of the tests. It would benefit greatly from some additional comments explaining the structure and identifying the main components.

Address TODO comments in code

There are a number of TODO comments still to be addressed in the code:

  • scico/blockarray.py:1322
  • scico/__init__.py:8
  • scico/numpy/__init__.py:81
  • scico/loss.py:73
  • scico/typing.py:29
  • scico/scipy/special.py:50

Github actions: black version mismatch

I just had a PR fail the lint test, apparently due to a mismatch between the black versions being used locally (19.10b0) and on GitHub (presumably 21.9b0, the latest stable release). The local version of black remained at 19.10b0 after a conda update, despite 21.9b0 being available via conda, presumably due to some dependency conflicts. It would be useful to figure out a strategy for avoiding this situation in the future.

UserWarning in tests

Some of the tests (e.g. in scico/test/test_admm.py) generate warnings such as

scico/scico/loss.py:122: UserWarning: Argument 1 of 1 is an np.ndarray. Will cast it to DeviceArray. To suppress this warning cast all np.ndarrays to DeviceArray first.
    y = ensure_on_device(y)
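
The warning can be silenced at the call sites by casting explicitly; a trivial sketch (the array shape is arbitrary):

import numpy as np
import jax

y_np = np.random.randn(16)   # numpy input of the kind the tests pass in
y = jax.device_put(y_np)     # explicit cast, so ensure_on_device has nothing to convert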

Jax and jaxlib version restrictions in scico/__init__.py

Care needs to be taken to ensure that the jax and jaxlib version restrictions in scico/__init__.py are synchronized with those in requirements.txt and setup.py. Perhaps add a test to setup.py to issue a warning if they differ?
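
A rough sketch of such a check (the file layout and regular expression are assumptions; the comparison against the constants in scico/__init__.py is left out):

import re

def read_jax_pins(path):
    """Return {package: version spec} for jax/jaxlib lines in a requirements-style file."""
    pins = {}
    with open(path) as f:
        for line in f:
            m = re.match(r"(jaxlib|jax)\s*([<>=!~].*)", line.strip())
            if m:
                pins[m.group(1)] = m.group(2).replace(" ", "")
    return pins

# setup.py could compare read_jax_pins("requirements.txt") against the
# version bounds hard-coded in scico/__init__.py and warn on mismatch.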

Replace objax with flax

Since objax seems to no longer be under development, it would be best to replace it with an active project such as flax.

Document the _wrap_mul_div_scalar and _wrap_add_sub methods

Currently _wrap_mul_div_scalar and _wrap_add_sub are both private methods within the generic operator classes. Comments describing both methods already exist in the code, but no docstrings are attached.

TODO: Document both wrap methods.

Do we want to make them visible in the Operators section of the Read the Docs documentation? If so, we also need to enable that and provide an example.
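
For illustration only, the kind of docstring being asked for might look like the following; the wording and signature are assumptions, not taken from the SCICO source.

def _wrap_add_sub(func, op):
    """Wrapper that implements `__add__`/`__sub__` for operator classes.

    Checks that the shapes of the two operands are compatible and returns a
    new operator whose evaluation applies `op` to the two evaluations.
    """
    ...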

Make tests run on the latest version of jax

Thanks to #73, SCICO depends on a specific jax version which won't always be the latest one. In order to spot compatibility changes coming down the pike and to help us keep up with jax, we could make the CI run tests with both the pinned jax version and the current latest jax version.

Links from Luke:

`jit`ing the `DnCNN` prox on GPU

When I run pytest scico/test/test_functional.py -k DnCNN with a GPU install of SCICO, I get a failure in TestDnCNN.test_prox, which compares the jitted and unjitted versions of the DnCNN prox. The results are disturbingly different:

E       Mismatched elements: 1020 / 1024 (99.6%)
E       Max absolute difference: 0.70825666
E       Max relative difference: 83.99731
E        x: array([[-1.311581e+00, -4.477247e-01, -2.993065e-01, ...,  1.231516e-01,
E                2.924881e-01,  1.054722e-01],
E              [ 2.548240e-01,  7.504759e-01, -3.427111e-01, ...,  1.717497e-01,...
E        y: array([[-1.290338, -0.550369, -0.359681, ...,  0.004449,  0.33103 ,
E                0.088125],
E              [ 0.211299,  0.899408, -0.37546 , ...,  0.460677, -0.817596,...

Might this be a JAX bug? One could try bumping the jax version and see if it is fixed.
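
For reference, the failing comparison is of this general form; a generic placeholder function stands in for the DnCNN prox, and the tolerance is illustrative.

import jax
import jax.numpy as jnp
import numpy as np

def f(x):                       # stand-in for the DnCNN prox
    return 0.5 * jnp.tanh(x) + x

x = jnp.ones((16, 16))
np.testing.assert_allclose(f(x), jax.jit(f)(x), rtol=1e-5)  # jitted vs. unjitted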

Additional tests for svmbir interface

Additional tests are needed to confirm that the svmbir prox solver gives the same results as a CG solve of the prox problem involving the svmbir forward and adjoint operators.

Todo in docs Style Guide

Todo note (with reference to coding conventions) removed from the Overview subsection of the Style Guide section of the docs in branch brendt/docs-edits:

Briefly explain which components are taken from each convention (see above) to avoid ambiguity in cases in which they differ.

Positivity of svmbir prox

Currently, when positivity=True in our SVMBIRWeightedSquaredL2Loss, the prox is a proper prox, but not of the SVMBIRWeightedSquaredL2Loss itself; rather, it is the prox of the sum of the SVMBIRWeightedSquaredL2Loss and a non-negativity indicator.

  • First this should be properly documented in the code. (Addressed in 079ac16)

  • Secondly, since the positivity flag may be useful, we should keep it but adjust the loss eval so that it is correct.

  • Lastly, we should create some tests to check whether the corrected loss yields the same prox as the internal svmbir prox.

Scale problem in ADMM / CG

@bwohlberg @lukepfister @Michael-T-McCann

Scaling the f-loss appears to scale the reconstruction by the reciprocal of the scale factor (see below). I initially brought this up when designing the svmbir PPP example, but have since found that it is not limited to that case. The problem seems to lie somewhere in the CG solve, but I am not sure.

Lastly, using subproblem_solver=None produces nan as a result, which may or may not have the same origin.

import jax
import numpy as np
from scico import functional, linop, loss
from scico.admm import ADMM, LinearSubproblemSolver

np.random.seed(123)
N = 10
M = 100

x = jax.device_put(np.random.randn(N))
A = linop.MatrixOperator(np.random.randn(M,N))
y = A @ x

f1 = loss.SquaredL2Loss(y=y, A=A, scale=1)
f42 = loss.SquaredL2Loss(y=y, A=A, scale=42)

x1 = ADMM(
    f=f1,
    g_list=[functional.ZeroFunctional()],
    C_list=[linop.Identity(x.shape)],
    rho_list=[1],
    subproblem_solver=LinearSubproblemSolver(cg_function='scico'), #same with 'jax'
).solve()

x42 = ADMM(
    f=f42,
    g_list=[functional.ZeroFunctional()],
    C_list=[linop.Identity(x.shape)],
    rho_list=[1],
    subproblem_solver=LinearSubproblemSolver(cg_function='scico'), #same with 'jax'
).solve()

print(np.mean(x1/x42))

# 41.99956

API docs for scico.solver

Some issues related to the API docs for the scico.solver module:

  • There is still a .. todo: in the module header docstring.
  • There are two instances of TODO comments in the code (see #96).
  • Docstrings for scipy.optimize.minimize and scipy.optimize.minimize_scalar are explicitly included. Is there a good reason for doing this instead of adopting the strategy followed for scico.numpy (i.e. automatically attaching the docstrings when running sphinx)?
  • The split and join functions should be made private.

Installation Issue in shared machine

When trying to install dependencies on a shared machine with "./conda_env.sh -g -p 3.8 -c 11", I get the following error.

EnvironmentNotWritableError: The current user does not have write permissions to the target environment.
environment location: /packages/anaconda/Anaconda3-2019.10

Improve consistency of notation for proxes

In various places in the code and docs, prox functions are written with an input of x and minimization over v, e.g.,

def prox(self, x: Union[JaxArray, BlockArray], lam: float = 1) -> Union[JaxArray, BlockArray]:
In other places, it is the opposite (e.g., https://scico.readthedocs.io/en/latest/functional.html).

Unifying these notations would be cleaner and less confusing. Let's use the convention of input v and minimization over x, which matches Wikipedia, Boyd's "Proximal Algorithms", etc.
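
Under the proposed convention, the signature and docstring would read along these lines (a sketch with simplified types, not final code):

def prox(self, v, lam: float = 1.0):
    """Evaluate prox_{lam f}(v) = argmin_x  lam * f(x) + (1/2) ||x - v||^2."""
    ...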

Errors while installing Scico manually

When I tried to install scico I ran into the following message using the command pip install -e .

ERROR: Package 'scico' requires a different Python: 3.6.9 not in '>3.8'

If I changed the required version in setup.py to >= 3.6.9, the installation completed; otherwise there was no progress.

Additionally, running pip install -r requirements.txt threw an error about not recognizing JAX versions at or above 0.2.19, and halted installation there too. I updated pip and pip3 and ran them again, but made no progress. I am not sure whether this was a mistake on my end.

Module scico.linop.radon_astra tests not running in GitHub workflows

The tests for scico.linop.radon_astra are not running as part of the CI configured in GitHub workflows. This appears to be because astra-toolbox is not listed as a dependency in requirements.txt, so it isn't installed and the corresponding tests are skipped. Listing it as a dependency doesn't seem to be an option, since the astra-toolbox package on PyPI has not been updated for five years. The best solution is probably to configure the pytest workflow to install astra-toolbox via conda.

Loss is_smooth attribute

Currently PGM requires losses to be smooth (f.is_smooth = True) in order to run, even though some of our losses, for example PoissonLoss, are not smooth.

It is still often possible to run PGM (see the Poisson examples and the TV example) even though the losses are not smooth everywhere, by forcing the is_smooth flag to be true.

I suggest re-thinking the use of is_smooth in conjunction with PGM.

Originally posted by @tbalke in #78 (comment)

Address ToDo markers in docs

ToDo markers in the docs need to be addressed and removed.

To find ToDo sections:

  • in the main docs: find docs -name "*.rst" | xargs grep 'todo::'
  • in the API docs: find scico -name "*.py" | xargs grep 'todo::'

Tests for scico.data fail when run on installed package

The tests for scico.data fail if run on an installed version of scico

pytest --pyargs scico

because the test skip conditional assumes the test is being run from the distribution root directory, and fails when no data directory is found.

The curious case of the ASTRA adjoint on GPU

The ASTRA Toolbox GPU back projector is known to not be a highly accurate adjoint to the forward operator. In the old days, we handled this with a generous tolerance on our adjoint test. After some improvements to that adjoint test, the ASTRA back projector fails very hard (relative error 0.78).

The question: is this a problem with the test, our interface to ASTRA, or ASTRA itself? If it's the latter, let's just skip these tests on GPU.

First step: get a GPU install of SCICO and run pytest scico/test/linop/test_radon_astra.py to see if the failures occur for you.

CG solver in admm.LinearSubproblemSolver should have default relative tolerance

The default CG solver options for admm.LinearSubproblemSolver include a maximum number of CG iterations, but don't override the default CG solver relative tolerance (e.g. 1e-5 for scico.solver.cg):

def __init__(self, cg_kwargs: dict = {"maxiter": 100}, cg_function: str = "scico"):

and this is repeated in all of the example scripts, e.g.

subproblem_solver=LinearSubproblemSolver(cg_kwargs={"maxiter": num_inner_iter}),

It would be much better if the admm.LinearSubproblemSolver initializer also included a default relative tolerance for the CG solver (e.g. 1e-3 is often sufficient for ADMM subproblems), and if the primary mechanism for controlling the number of CG iterations in the example scripts were the relative tolerance rather than the maximum number of iterations.
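
A sketch of what the proposed initializer might look like; the 1e-3 value and the merging logic are suggestions rather than existing code (this also avoids the mutable default argument):

def __init__(self, cg_kwargs: dict = None, cg_function: str = "scico"):
    default_cg_kwargs = {"tol": 1e-3, "maxiter": 100}        # relative tolerance added
    self.cg_kwargs = {**default_cg_kwargs, **(cg_kwargs or {})}
    self.cg_function = cg_function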

Add helpful messages in test_data.py

The test_data.py tests fail if the scico-data submodule isn't present; this probably means the user didn't clone recursively or didn't run git submodule init && git submodule update after cloning.

We should tweak the error messages in test_data.py to point the user in this direction.
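
A minimal sketch of the kind of module-level skip message being suggested; the data path and wording are assumptions.

import os
import pytest

DATA_DIR = os.path.join(os.path.dirname(__file__), "..", "data")  # hypothetical location

if not os.path.isdir(DATA_DIR):
    pytest.skip(
        "scico-data submodule not found; clone with --recurse-submodules or run "
        "'git submodule init && git submodule update'",
        allow_module_level=True,
    )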

Update Contributing docs

At the time of public release, section Install a Development Version of the Contributing docs page should be updated to a forking tutorial (see the jax example).

Consistent notation for Hermitian adjoint

Both A^* and A^H are currently used in different parts of the documentation (primarily API docstrings). A choice should be made and consistency imposed.

I advocate for A^H as the less ambiguous choice.

Circular import ParallelBeamProjector, SVMBIRWeightedSquaredL2Loss

I ran into a circular import issue when importing from radon_svmbir

In [1]: from scico.linop.radon_svmbir import ParallelBeamProjector, SVMBIRWeightedSquaredL2Loss
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
~/pythonModules/scico/scico/linop/radon_svmbir.py in <module>
     19 try:
---> 20     import svmbir
     21 except ImportError:

~/pythonModules/scico/examples/scripts/svmbir.py in <module>
     11 
---> 12 from scico.linop.radon_svmbir import ParallelBeamProjector, SVMBIRWeightedSquaredL2Loss
     13 

ImportError: cannot import name 'ParallelBeamProjector' from partially initialized module 'scico.linop.radon_svmbir' (most likely due to a circular import) (/Users/thilobalke/pythonModules/scico/scico/linop/radon_svmbir.py)

During handling of the above exception, another exception occurred:

ImportError                               Traceback (most recent call last)
<ipython-input-1-6f8b52ba3359> in <module>
----> 1 from scico.linop.radon_svmbir import ParallelBeamProjector, SVMBIRWeightedSquaredL2Loss

~/pythonModules/scico/scico/linop/radon_svmbir.py in <module>
     20     import svmbir
     21 except ImportError:
---> 22     raise ImportError(
     23         "Could not import svmbir, please refer to INSTALL.rst "
     24         "for instructions on how to install the SVMBIR."

ImportError: Could not import svmbir, please refer to INSTALL.rst for instructions on how to install the SVMBIR.

Pytest runs okay for me so it is kind of strange ...

@Michael-T-McCann @bwohlberg @lukepfister

Consistent naming for Radon transforms

We currently have linop.radon (astra) and linop.radon_svmbir (svmbir). For consistency, consider renaming linop.radon to linop.radon_astra, or perhaps switching to linop.radon.astra and linop.radon.svmbir?

Header docs for Poisson loss example scripts

The header docstrings for the examples sparsecode_poisson_blkarr_pgm.py and sparsecode_poisson_pgm.py are almost identical, with only one cryptic comment ("composed as a sum of the application of individual dictionaries") to distinguish between them. The distinction should be explained in more detail, and they should have different titles so that they don't appear to be a replication of the same example in the example index.

All tests failing in github CI

All tests are currently failing with an error ValueError: 'shape' is not in list. This appears to be related to the line

shape_ind = list(inspect.signature(fun).parameters.keys()).index("shape")

in scico.random. Since most tests pass when pytest is run locally, this is presumably due to a recent update to the version of some dependency package in the GitHub CI environment.

DnCNN broken for RGB input

When passed a 32 x 32 x 3 input array, scico.functional.DnCNN gives a long stack trace with error

ScopeParamShapeError: Inconsistent shapes between value and initializer for parameter "kernel" in
"/conv_start": (3, 3, 1, 64), (3, 3, 3, 64).
(https://flax.readthedocs.io/en/latest/flax.errors.html#flax.errors.ScopeParamShapeError)

Also, while there are tests for the code in scico.flax, there don't appear to be any for scico.functional.DnCNN.

SquaredL2Loss should inherit from WeightedSquaredL2Loss

SquaredL2Loss is equivalent to WeightedSquaredL2Loss with W=None; however, internally W still becomes an Identity operator.

Hence, should we have the following inheritance?

class SquaredL2Loss(WeightedSquaredL2Loss):

Pros and cons:

  • (+) less code and more code reuse, since SquaredL2Loss is trivially small
  • (-) may create some performance overhead (?)

Someone with more understanding of the inner workings of scico (@lukepfister) can perhaps comment?

Otherwise the performance concerns can also easily be tested using

f_unw = SquaredL2Loss(y, A=A)
f_wei = WeightedSquaredL2Loss(y, A=A, W=None)

while benchmarking the performance of the eval and prox methods.

Examples categorization

Issue was raised about how to best categorize the examples.

Currently the file names are mostly categorized by optimization algorithm, whereas the documentation is categorized by application.

Suggestions that came up were:
(1) Categorize by "What is this example mainly trying to illustrate?"
(2) Categorize by application only (including the file names).
(3) Use tags rather than categories, so that users can browse the examples in the docs by "Optimization Algorithm", "Denoiser", "Application", etc. separately.

Add handling of 2D inputs to SVMBIR

SVMBIR currently works only on 3D volumes. While it's possible to unsqueeze 2D images to get them to work, let's simplify/hide this from users; one possible approach is sketched below.
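
A minimal sketch of the hide-the-unsqueeze idea; the function names and axis layout are placeholders, not the actual SVMBIR or SCICO interface.

import numpy as np

def project_2d(project_3d, image_2d, angles):
    """Apply a 3D projector to a 2D image by adding, then removing, a singleton axis."""
    volume = image_2d[np.newaxis, ...]      # promote (Nx, Ny) to (1, Nx, Ny)
    sino = project_3d(volume, angles)       # 3D projection of the single-slice volume
    return np.squeeze(sino)                 # drop the singleton slice dimension again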

Test for scico.util.url_get failing

Test test_url_get in scico/test/test_util.py recently started failing with the error ssl.SSLCertVerificationError. It's not clear whether this is due to a change in the status of the test URL or in some dependency library.
