lanl / scico
Scientific Computational Imaging COde
License: BSD 3-Clause "New" or "Revised" License
Some of the tests (e.g. in scico/test/test_admm.py) generate warnings such as:
scico/scico/loss.py:122: UserWarning: Argument 1 of 1 is an np.ndarray. Will cast it to DeviceArray. To suppress this warning cast all np.ndarrays to DeviceArray first.
y = ensure_on_device(y)
SVMBIR provides a method for the proximal map associated with f(x) = || Ax - y ||^2_W (https://svmbir.readthedocs.io/en/latest/theory.html). We want to provide a way to use it inside ADMM.
SVMBIR currently works only on 3D volumes. While it's possible to unsqueeze 2D images to get them to work, let's simplify/hide this from users.
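One way to hide the 3D requirement from users is a thin wrapper that promotes 2D inputs to single-slice volumes and squeezes the result back. The sketch below uses a stand-in function in place of the actual svmbir routines; names and shapes are illustrative assumptions, not the scico API.

```python
import numpy as np

def backend_project_3d(vol):
    # Stand-in for a 3D-only backend routine such as an svmbir projector.
    assert vol.ndim == 3
    return vol * 2.0

def project(x):
    # Transparently promote a 2D image to a one-slice 3D volume,
    # call the 3D-only backend, and squeeze the slice axis back out.
    if x.ndim == 2:
        return backend_project_3d(x[np.newaxis, ...])[0]
    return backend_project_3d(x)

img2d = np.ones((4, 4))
vol3d = np.ones((2, 4, 4))
assert project(img2d).shape == (4, 4)
assert project(vol3d).shape == (2, 4, 4)
```

With this pattern the unsqueeze/squeeze bookkeeping lives in one place rather than in every user script.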
The header docstrings for examples sparsecode_poisson_blkarr_pgm.py and sparsecode_poisson_pgm.py are almost identical, with only one cryptic comment (composed as a sum of the application of individual dictionaries) to distinguish between them. The distinction should be explained in more detail, and they should have different titles so that they don't appear to be a replication of the same example in the example index.
LinearOperator adjoint validity is currently tested using functions adjoint_AtA_test and adjoint_AAt_test in scico/test/linop/test_linop.py. It would be both more efficient and more reliable to replace these functions with a single function that takes the approach of sporco.linalg.valid_adjoint.
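The valid_adjoint approach can be sketched as a single dot-product test: for random u and v, check that ⟨A u, v⟩ ≈ ⟨u, Aᵀ v⟩ to within a relative tolerance. The function below is a minimal illustration of that idea (not the sporco or scico implementation):

```python
import numpy as np

def valid_adjoint(A, AT, shape_in, shape_out, eps=1e-7, rng=None):
    """Check <A u, v> == <u, AT v> for random u, v (single adjoint test)."""
    rng = np.random.default_rng(rng)
    u = rng.standard_normal(shape_in)
    v = rng.standard_normal(shape_out)
    lhs = np.vdot(A(u), v)
    rhs = np.vdot(u, AT(v))
    # Relative discrepancy between the two inner products
    return abs(lhs - rhs) / max(abs(lhs), abs(rhs), 1e-30) < eps

M = np.arange(12, dtype=float).reshape(3, 4)
# Correct adjoint passes; a deliberately wrong adjoint (factor of 2) fails.
assert valid_adjoint(lambda x: M @ x, lambda y: M.T @ y, (4,), (3,), rng=0)
assert not valid_adjoint(lambda x: M @ x, lambda y: 2 * M.T @ y, (4,), (3,), rng=0)
```

A single such function covers both the AtA and AAt cases, since it only needs the forward and adjoint maps and their shapes.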
Fix the warning Method Nelder-Mead does not use gradient information appearing during pytest. Check line 264 in solver.py: it needs to check whether the solver requires gradient information and modify the call to minimize accordingly.
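One possible shape for the fix is to only pass a gradient to solvers that can use it. The sketch below wraps scipy.optimize.minimize; the set of gradient-free methods listed is illustrative, not exhaustive, and the wrapper name is hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative (incomplete) set of scipy methods that ignore gradients.
GRAD_FREE = {"nelder-mead", "powell", "cobyla"}

def run_minimize(fun, x0, method="Nelder-Mead", jac=None):
    # Drop the gradient argument for gradient-free solvers, avoiding the
    # "Method ... does not use gradient information" warning.
    if method.lower() in GRAD_FREE:
        jac = None
    return minimize(fun, x0, method=method, jac=jac)

f = lambda x: float(np.sum((x - 1.0) ** 2))
grad = lambda x: 2 * (x - 1.0)
res = run_minimize(f, np.zeros(3), method="Nelder-Mead", jac=grad)
assert np.allclose(res.x, 1.0, atol=1e-2)
```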
At the time of public release, section Install a Development Version of the Contributing docs page should be updated to a forking tutorial (see the jax example).
The tests for scico.linop.radon_astra are not running as part of the CI configured in GitHub workflows. This appears to be because astra-toolbox is not listed as a dependency in requirements.txt, so it isn't installed and the corresponding tests are skipped. Listing it as a dependency doesn't seem to be an option since the astra-toolbox package on PyPI has not been updated for five years. The best solution is probably to configure the pytest workflow to install astra-toolbox via conda.
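A possible shape for the workflow change, assuming a conda-enabled runner (the step name and channel are assumptions to be checked against the actual workflow file):

```yaml
# Hypothetical GitHub Actions step; the astra-toolbox channel name
# should be verified against the ASTRA project's install instructions.
- name: Install astra-toolbox via conda
  run: conda install -y -c astra-toolbox astra-toolbox
```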
ToDo markers in the docs need to be addressed and removed. To find ToDo sections:
find docs -name "*.rst" | xargs grep 'todo::'
find scico -name "*.py" | xargs grep 'todo::'
The Edit on GitHub links in the online docs (e.g. ct_astra_pcg) are broken because they refer to lanl/scico rather than lanl/scico-data.
The question was raised in #88 (comment) since the svmbir and BM3D routines returned ndarrays, which may be a problematic type to pop up inside the solvers. In general this poses the question: should we require type_in == type_out, or only that type_out be DeviceArray-like?
Some issues related to the API docs for the scico.solver module:
(1) A .. todo: in the module header docstring.
(2) TODO comments in the code (see #96).
(3) The docstrings of scipy.optimize.minimize and scipy.optimize.minimize_scalar are explicitly included. Is there a good reason for doing this instead of adopting the strategy followed for scico.numpy (i.e. automatically attach the docstrings when running sphinx)?
(4) A brief summary of available optimization algorithms should be added as a subsection of Important Classes.
Thanks to #73, SCICO depends on a specific jax version which won't always be the latest one. In order to spot compatibility changes coming down the pike and to help us keep up with jax, we could make the CI run tests with both the pinned jax version and the current latest jax version.
Links from Luke:
Test test_url_get in scico/test/test_util.py recently started failing with error ssl.SSLCertVerificationError. It's not clear whether this is due to a change of status of the test URL or to a change in some dependency library.
Since objax seems to no longer be under development, it would be best to replace it with an active project such as flax.
When trying to install dependencies on a shared machine with "./conda_env.sh -g -p 3.8 -c 11", I get the following error.
EnvironmentNotWritableError: The current user does not have write permissions to the target environment.
environment location: /packages/anaconda/Anaconda3-2019.10
Typo in this notebook title.
On my machine compiling the notebooks does not work properly, otherwise I would fix it myself.
Both A^* and A^H are currently used in different parts of the documentation (primarily API docstrings). A choice should be made and consistency imposed. I advocate for A^H as the less ambiguous choice.
A Todo note (with reference to coding conventions) was removed from the Overview subsection of the Style Guide section of the docs in branch brendt/docs-edits:
Briefly explain which components are taken from each convention (see above) to avoid ambiguity in cases in which they differ.
Additional tests needed to confirm that the svmbir prox solver gives the same results as a CG solve of the prox problem involving the svmbir forward and adjoint operators.
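Such a test could solve the normal equations of the prox problem with CG and compare against the prox routine's output. The sketch below uses plain NumPy/SciPy stand-ins rather than the svmbir operators, and assumes the scaling convention prox_{λf}(v) = argmin_x (1/2)||Ax - y||²_W + (1/(2λ))||x - v||², whose solution satisfies (AᵀWA + I/λ)x = AᵀWy + v/λ:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
M, N, lam = 30, 20, 0.5
A = rng.standard_normal((M, N))
W = np.diag(rng.uniform(0.5, 2.0, M))  # positive diagonal weights
y = rng.standard_normal(M)
v = rng.standard_normal(N)             # prox input point

# Normal equations for the prox under the assumed scaling convention.
H = A.T @ W @ A + np.eye(N) / lam
b = A.T @ W @ y + v / lam

x_cg, info = cg(LinearOperator((N, N), matvec=lambda z: H @ z), b)
x_direct = np.linalg.solve(H, b)  # stand-in for the backend prox result

assert info == 0 and np.allclose(x_cg, x_direct, atol=1e-3)
```

In the real test x_direct would be replaced by the svmbir prox output, and the operators by the svmbir forward and adjoint projectors.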
Issue was raised about how to best categorize the examples.
Currently the file names are mostly categorized by optimization algorithm, which does not match the documentation, which is categorized by application.
Suggestions that came up were:
(1) Categorize by "What is this example mainly trying to illustrate?"
(2) Categorize by application only (also the file names).
(3) Tags rather than categories. So that users can browse the examples in the docs by "Optimization Algorithm", "Denoiser", "Application" etc. separately.
Currently PGM requires losses to be smooth (f.is_smooth = True) in order to run, although some of our losses, for example PoissonLoss, are not smooth.
It is still often possible to run PGM (see the Poisson examples and the TV example) even though the losses are not smooth everywhere, by forcing the is_smooth flag to be true.
I suggest re-thinking the use of is_smooth in conjunction with PGM.
Originally posted by @tbalke in #78 (comment)
When I run pytest scico/test/test_functional.py -k DnCNN with a GPU install of SCICO, I get a failure in TestDnCNN.test_prox, which compares the jitted and unjitted versions of the DnCNN prox. These results are disturbingly different:
E Mismatched elements: 1020 / 1024 (99.6%)
E Max absolute difference: 0.70825666
E Max relative difference: 83.99731
E x: array([[-1.311581e+00, -4.477247e-01, -2.993065e-01, ..., 1.231516e-01,
E 2.924881e-01, 1.054722e-01],
E [ 2.548240e-01, 7.504759e-01, -3.427111e-01, ..., 1.717497e-01,...
E y: array([[-1.290338, -0.550369, -0.359681, ..., 0.004449, 0.33103 ,
E 0.088125],
E [ 0.211299, 0.899408, -0.37546 , ..., 0.460677, -0.817596,...
Might this be a JAX bug? One could try bumping the jax version and see if it is fixed.
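As a first sanity check (not the DnCNN model itself), one can confirm that jitted and eager execution agree for a simple pure function; a mismatch of the magnitude reported above points at something deeper than ordinary floating-point drift:

```python
import jax
import jax.numpy as jnp

def f(x):
    # Simple pure function: jit and eager execution should agree closely.
    return jnp.tanh(x @ x.T).sum()

x = 0.1 * jnp.ones((8, 8))
assert jnp.allclose(f(x), jax.jit(f)(x), rtol=1e-5)
```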
Currently _wrap_mul_div_scalar and _wrap_add_sub are both private methods within the generic Operator class. We have comments describing both methods in the code, but no docstrings are attached.
TODO: Document both wrap methods.
Do we want to make them visible in the Operators section of the ReadTheDocs? If so, we need to also enable that and provide an example.
The GitHub workflow lint tests should be extended (e.g. perhaps an additional action for flake8) to include checks for issues such as missing docstrings.
All tests are currently failing with an error ValueError: 'shape' is not in list. This appears to be related to the line
shape_ind = list(inspect.signature(fun).parameters.keys()).index("shape")
in scico.random. Since most tests pass when pytest is run locally, this is presumably due to a recent update of the version of some dependency package in the GitHub CI run environment.
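The failure mode is easy to reproduce in isolation: list.index raises exactly this ValueError when the named parameter is absent from a function's signature, e.g. if an upstream update renamed or removed the "shape" argument. The new_api signature below is hypothetical:

```python
import inspect

def new_api(size=None, dtype=float):
    # Hypothetical updated upstream signature with "shape" renamed to "size".
    pass

params = list(inspect.signature(new_api).parameters.keys())
try:
    params.index("shape")
    raised = False
except ValueError as err:
    raised = True
    msg = str(err)  # "'shape' is not in list"

assert raised and "shape" not in params
```

This suggests the lookup in scico.random should either guard with `"shape" in params` or pin the dependency whose signature changed.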
The default CG solver options for admm.LinearSubproblemSolver include a maximum number of CG iterations, but don't override the default CG solver relative tolerance (e.g. 1e-5 for scico.solver.cg):
Line 144 in 0ae09a9
and this is repeated in all of the example scripts. It would be much better if the admm.LinearSubproblemSolver initializer also included a default relative tolerance for the CG solver (e.g. 1e-3 is often sufficient for ADMM subproblems), and if the primary mechanism for controlling the number of CG iterations in the example scripts were via relative tolerance rather than maximum iterations.
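The distinction between the two stopping mechanisms can be sketched with a bare-bones CG loop that terminates on relative residual rather than a fixed iteration count (the names here are illustrative, not the scico API):

```python
import numpy as np

def cg(A, b, x0=None, rtol=1e-3, maxiter=100):
    """Conjugate gradient that stops when ||r|| <= rtol * ||b||."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    bnorm = np.linalg.norm(b)
    for _ in range(maxiter):
        if np.linalg.norm(r) <= rtol * bnorm:
            break  # relative tolerance reached: stop early
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(0)
B = rng.standard_normal((20, 20))
A = B @ B.T + 20 * np.eye(20)  # well-conditioned SPD system
b = rng.standard_normal(20)
x = cg(A, b, rtol=1e-6)
assert np.linalg.norm(A @ x - b) <= 1e-5 * np.linalg.norm(b)
```

With an rtol-based stop, maxiter becomes a safety cap rather than the primary accuracy control, which is the behavior proposed above for the ADMM subproblem solver.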
There are a number of TODO comments still to be addressed in the code:
scico/blockarray.py:1322
scico/__init__.py:8
scico/numpy/__init__.py:81
scico/loss.py:73
scico/typing.py:29
scico/scipy/special.py:50
I ran into a circular import issue when importing from radon_svmbir:
In [1]: from scico.linop.radon_svmbir import ParallelBeamProjector, SVMBIRWeightedSquaredL2Loss
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~/pythonModules/scico/scico/linop/radon_svmbir.py in <module>
19 try:
---> 20 import svmbir
21 except ImportError:
~/pythonModules/scico/examples/scripts/svmbir.py in <module>
11
---> 12 from scico.linop.radon_svmbir import ParallelBeamProjector, SVMBIRWeightedSquaredL2Loss
13
ImportError: cannot import name 'ParallelBeamProjector' from partially initialized module 'scico.linop.radon_svmbir' (most likely due to a circular import) (/Users/thilobalke/pythonModules/scico/scico/linop/radon_svmbir.py)
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
<ipython-input-1-6f8b52ba3359> in <module>
----> 1 from scico.linop.radon_svmbir import ParallelBeamProjector, SVMBIRWeightedSquaredL2Loss
~/pythonModules/scico/scico/linop/radon_svmbir.py in <module>
20 import svmbir
21 except ImportError:
---> 22 raise ImportError(
23 "Could not import svmbir, please refer to INSTALL.rst "
24 "for instructions on how to install the SVMBIR."
ImportError: Could not import svmbir, please refer to INSTALL.rst for instructions on how to install the SVMBIR.
Pytest runs okay for me, so this is somewhat strange. (Note that the traceback shows import svmbir resolving to ~/pythonModules/scico/examples/scripts/svmbir.py, so a local script named svmbir.py appears to be shadowing the installed package.)
Suggestion: PnP with BM3D.
The tests for scico.data fail if run on an installed version of scico
pytest --pyargs scico
because the test skip conditional assumes the test is being run from the distribution root directory, and fails when no data directory is found.
When passed a 32 x 32 x 3 input array, scico.functional.DnCNN gives a long stack trace with error
ScopeParamShapeError: Inconsistent shapes between value and initializer for parameter "kernel" in "/conv_start": (3, 3, 1, 64), (3, 3, 3, 64).
(https://flax.readthedocs.io/en/latest/flax.errors.html#flax.errors.ScopeParamShapeError)
The error indicates that the first convolution kernel was initialized for single-channel input, so a three-channel array does not match.
Also, while there are tests for the code in scico.flax, there don't appear to be any for scico.functional.DnCNN.
It looks to me like the CI just grabs the latest jax: jax>=0.2.19. This is bad, right? Given that the jax API is changing, I would suggest we pin SCICO to a specific version(s) of jax.
Luke says:
Yes, definitely pin it.
It wouldn't be a bad idea to set up a test matrix and run the tests on jax==whatever we pin to and jax upstream.
SquaredL2Loss is equivalent to WeightedSquaredL2Loss with W=None; however, internally W will still become an Identity operator.
Hence, should we have the following inheritance?
class SquaredL2Loss(WeightedSquaredL2Loss):
Pros and cons: the SquaredL2Loss implementation becomes trivially small, but there may be performance implications. Someone with more understanding of the inner workings of scico (@lukepfister) can perhaps comment?
Otherwise the performance concerns can also easily be tested using
f_unw = SquaredL2Loss(y, A=A)
f_wei = WeightedSquaredL2Loss(y, A=A, W=None)
while benchmarking the performance of the eval and prox methods.
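A self-contained version of that benchmark, using plain NumPy stand-ins rather than the scico loss classes, might look like the following; it first confirms the two forms agree, then times them:

```python
import timeit
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 400))
x = rng.standard_normal(400)
y = A @ x + 0.01 * rng.standard_normal(500)
W = np.eye(500)  # explicit identity weight

def unweighted():
    # ||A x - y||^2 evaluated directly
    return float(np.sum((A @ x - y) ** 2))

def weighted():
    # Same quantity routed through an explicit identity weight matrix
    r = A @ x - y
    return float(r @ W @ r)

assert np.isclose(unweighted(), weighted())
t_u = timeit.timeit(unweighted, number=200)
t_w = timeit.timeit(weighted, number=200)
print(f"unweighted: {t_u:.4f}s, identity-weighted: {t_w:.4f}s")
```

The same pattern, with f_unw.eval/f_wei.eval and the prox methods substituted in, would answer the performance question directly.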
Currently, when positivity=True in our SVMBIRWeightedSquaredL2Loss, the prox is a proper prox, but not of the SVMBIRWeightedSquaredL2Loss itself; rather, it is the prox of the sum of the SVMBIRWeightedSquaredL2Loss and a non-zero indicator.
First, this should be properly documented in the code. (Addressed in 079ac16)
Secondly, since the positivity flag may be useful, we should keep it, but adjust the loss eval so that it is correct.
Lastly, we should create some tests to check whether the corrected loss yields the same prox as the internal svmbir prox.
The most appropriate solver should be automatically selected if subproblem_solver is None.
In various places in the code and docs, prox functions are written with an input of x and minimization over v, e.g.,
scico/scico/functional/_functional.py
Line 78 in d01c90c
Unifying these notations would be cleaner and less confusing. Let's use the input v, minimization over x convention, which matches Wikipedia, Boyd's "Proximal Algorithms", etc.
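Under that convention (input v, minimization over x), the proximal operator would be written uniformly as:

```latex
\mathrm{prox}_f(\mathbf{v}) = \operatorname*{arg\,min}_{\mathbf{x}} \;
  f(\mathbf{x}) + \frac{1}{2} \left\| \mathbf{x} - \mathbf{v} \right\|_2^2
```

This matches the notation in Boyd's "Proximal Algorithms" and the Wikipedia article on proximal operators.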
I just had a PR fail the lint test, apparently due to a mismatch between black versions being used locally (19.10b0) and on github (presumably 21.9b0, the latest stable release). The local version of black remained at 19.10b0 after a conda update, despite 21.9b0 being available via conda, presumably due to some dependency conflicts. It would be useful to try to figure out a strategy for avoiding such a situation in the future.
The merge of PR #78 seems to have broken the example script ct_svmbir_ppp_bm3d_admm_cg.py.
Since our test suite takes quite a while to run, it would be convenient to be able to run the tests in parallel using either pytest-xdist or pytest-parallel. Unfortunately neither of these pytest plugins works at the moment.
The test_data.py tests fail if the scico data submodule isn't present; this probably means the user didn't clone recursively or run git submodule init && git submodule update after cloning.
We should tweak the error messages in test_data.py to point the user in this direction.
We currently have linop.radon (astra) and linop.radon_svmbir (svmbir). For consistency, consider renaming linop.radon to linop.radon_astra, or perhaps switching to linop.radon.astra and linop.radon.svmbir?
The ASTRA Toolbox GPU back projector is known to not be a highly accurate adjoint to the forward operator. In the old days, we handled this with a generous tolerance on our adjoint test. After some improvements to that adjoint test, the ASTRA back projector fails very hard (relative error 0.78).
The question: is this a problem with the test, our interface to ASTRA, or ASTRA itself? If it's the latter, let's just skip these tests on GPU.
First step: get a GPU install of SCICO and run pytest scico/test/linop/test_radon_astra.py to see if the failures occur for you.
Care needs to be taken to ensure that the jax and jaxlib version restrictions in scico/__init__.py are synchronized with those in requirements.txt and setup.py. Perhaps add a test to setup.py to issue a warning if they differ?
The high-level structure of scico.tests.test_functional.py is quite difficult to follow, in part due to the large number of helper functions, which make it difficult to identify the overall structure and the main entry points for the tests. It would benefit greatly from some additional comments explaining the structure and identifying the main components.
Theme customization and/or custom CSS needs attention in order to improve section and itemized list spacing.
svmbir is among the requirements; the automatic installation script takes care of it.
@bwohlberg @lukepfister @Michael-T-McCann
Scaling the f-loss seems to scale the reciprocal of the reconstruction (see below). Initially I brought this up when designing the svmbir PPP example, but I have since found that it is not limited to that. It seems like the problem is somewhere in the CG solve, but I'm not sure.
Lastly, using subproblem_solver=None produces nan as a result, which may or may not have the same origin.
import jax
import numpy as np
from scico import functional, linop, loss
from scico.admm import ADMM, LinearSubproblemSolver
np.random.seed(123)
N = 10
M = 100
x = jax.device_put(np.random.randn(N))
A = linop.MatrixOperator(np.random.randn(M,N))
y = A @ x
f1 = loss.SquaredL2Loss(y=y, A=A, scale=1)
f42 = loss.SquaredL2Loss(y=y, A=A, scale=42)
x1 = ADMM(
f=f1,
g_list=[functional.ZeroFunctional()],
C_list=[linop.Identity(x.shape)],
rho_list=[1],
subproblem_solver=LinearSubproblemSolver(cg_function='scico'), #same with 'jax'
).solve()
x42 = ADMM(
f=f42,
g_list=[functional.ZeroFunctional()],
C_list=[linop.Identity(x.shape)],
rho_list=[1],
subproblem_solver=LinearSubproblemSolver(cg_function='scico'), #same with 'jax'
).solve()
print(np.mean(x1/x42))
# 41.99956
When I tried to install scico, I ran into the following message using the command pip install -e .
ERROR: Package 'scico' requires a different Python: 3.6.9 not in '>3.8'
If I changed the version in setup.py to >= 3.6.9 it finished the installation; otherwise, there was no progress.
Additionally, running pip install -r requirements.txt threw an error about not recognizing jax>=0.2.19 and halted install progress there as well. I updated my pip and pip3 and ran them again, but no progress. Not sure if it was a mistake on my end.