brandondube / prysm

physical optics: integrated modeling, phase retrieval, segmented systems, polynomials and fitting, sequential raytracing...

Home Page: https://prysm.readthedocs.io/en/stable/

License: MIT License

Python 99.51% TeX 0.49%
optics modeling mtf psf wavefront phase-retrieval wavefront-sensing zernike zygo trioptics

prysm's Introduction

Prysm


Prysm is a Python 3.6+ library for numerical optics. Its features are a superset of those in both POPPY and PROPER, including but not limited to physical optics, thin lens, thin film, and detector modeling. There is also a submodule that can replace the software that comes with an interferometer for data analysis.

Prysm is believed to be, by a significant margin, the fastest package in the world at what it does. On CPU, end-to-end calculations are more than 100x as fast as the packages above for like-for-like work. On GPU, prysm is more than 1,000x faster than its competition. The lowfssim model can run at over 2 kHz in real time and is all prysm under the hood.

Prysm can be used for everything from forward modeling of optical systems, from camera lenses to coronagraphs, to reverse modeling and phase retrieval. Due to its composable structure, it plays well with others and can be substituted into or out of other code easily. Of special note is prysm's interchangeable backend system, which allows the user to freely exchange numpy for cupy, enabling use of a GPU for all computations, or make other similar exchanges, such as pytorch for algorithmic differentiation.
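
The backend interchange described above boils down to routing array calls through a swappable module object. As a rough sketch of that pattern (illustrative only, not prysm's actual implementation; the names BackendShim and set_backend are invented here):

```python
import numpy


class BackendShim:
    """Proxy that forwards attribute lookups to a swappable array library."""

    def __init__(self, src):
        self._srcmodule = src

    def __getattr__(self, key):
        # any attribute not found on the shim itself is looked up on the backend
        return getattr(self._srcmodule, key)


np = BackendShim(numpy)


def set_backend(module):
    """Point the shim at another numpy-compatible module, e.g. cupy."""
    np._srcmodule = module
```

Library code then imports np from this shim module instead of numpy directly, so swapping in cupy (or another compatible library) changes the array engine everywhere at once.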

Installation

prysm is on pypi:

pip install prysm

prysm requires only numpy and scipy.

Optional Dependencies

Prysm uses numpy, or any compatible library, for array operations. To use GPUs, you may install cupy and select it as the backend at runtime. Plotting uses matplotlib. Images are read and written with imageio. Some MTF utilities use pandas and seaborn. Reading Zygo datx files requires h5py.

Features

Propagation

  • Pupil-to-Focus
  • Focus-to-Pupil
  • Free space ("plane to plane" or "angular spectrum")
  • FFTs, Matrix DFTs, Chirp Z Transforms
  • Thin Lens Phase Screens
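
The pupil-to-focus case in the list above is, at its core, a zero-padded FFT of the complex pupil function. A minimal numpy sketch of the underlying math (not prysm's API; the grid sizes and the Q=2 padding factor are illustrative):

```python
import numpy as np


def pupil_to_psf(pupil, Q=2):
    """Propagate a complex pupil function to the focal plane with an FFT.

    Q controls oversampling: the pupil is zero-padded by (Q-1)x its size,
    giving Q samples per lambda/D in the resulting PSF.
    """
    n = pupil.shape[0]
    padded = np.pad(pupil, (Q - 1) * n // 2)  # centered zero-padding
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(padded)))
    return np.abs(field) ** 2  # intensity PSF


# circular aperture -> Airy-like PSF
x = np.linspace(-1, 1, 128)
xx, yy = np.meshgrid(x, x)
aperture = (xx**2 + yy**2 <= 1).astype(complex)
psf = pupil_to_psf(aperture, Q=2)
```

In prysm itself this functionality lives in the propagation module; the above is only the underlying idea.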

Polynomials

  • Zernike
  • Legendre
  • Chebyshev (1st, 2nd, 3rd, 4th kind)
  • Jacobi
  • 2D-Q, Qbfs, Qcon
  • Hopkins
  • Hermite (Probabilist's and Physicist's)
  • Dickson
  • fitting
  • projection

All of these polynomials provide highly optimized GPU-compatible implementations, as well as derivatives.
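
As an illustration of what one of these bases looks like numerically, here is the Zernike radial polynomial via the explicit factorial sum (a naive reference implementation, not prysm's; as the text notes, the optimized versions use recurrence relations instead):

```python
import numpy as np
from math import factorial


def zernike_radial(n, m, r):
    """Zernike radial polynomial R_n^m(r) by the explicit factorial sum.

    Production code would use recurrences, which are faster and more
    numerically stable at high order.
    """
    m = abs(m)
    out = np.zeros_like(r, dtype=float)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k)
                * factorial((n + m) // 2 - k)
                * factorial((n - m) // 2 - k)))
        out += c * r ** (n - 2 * k)
    return out


r = np.linspace(0, 1, 5)
# R_2^0(r) = 2r^2 - 1 (defocus, before RMS normalization)
defocus = zernike_radial(2, 0, r)
```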

Pupil Masks

  • circles, binary and anti-aliased
  • ellipses
  • rectangles
  • N-sided regular convex polygons
  • N-vaned spiders

Segmented systems

  • parametrized pupil mask generation
  • per-segment errors based on any polynomial basis expansion

Image Simulation

  • Convolution
  • Smear
  • Jitter
  • in-the-box targets
    • Siemens' Star
    • Slanted Edge
    • BMW Target (crossed edges)
    • Pinhole
    • Slit
    • Tilted Square

Metrics

  • Strehl
  • Encircled Energy
  • RMS, PV, Sa, Std, Var
  • Centroid
  • FWHM, 1/e, 1/e^2
  • MTF / PTF / OTF
  • PSD (and parametric fit, synthesis from parameters)
  • slope / gradient
  • Total integrated scatter
  • Bandlimited RMS
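
Several of these metrics are one-liners over the phase or PSF data. A hedged numpy sketch of a few of them (illustrative, not prysm's implementations; strehl_marechal uses the Maréchal approximation, valid only for small aberrations):

```python
import numpy as np


def pv(wfe):
    """Peak-to-valley of a wavefront error map (NaN-aware)."""
    return np.nanmax(wfe) - np.nanmin(wfe)


def rms(wfe):
    """Root-mean-square of a wavefront error map (NaN-aware)."""
    return np.sqrt(np.nanmean(wfe ** 2))


def strehl_marechal(rms_waves):
    """Marechal approximation: S ~ exp(-(2 pi sigma)^2), sigma in waves RMS."""
    return np.exp(-(2 * np.pi * rms_waves) ** 2)
```

For example, the classic "lambda/14 RMS gives Strehl ~0.8" rule of thumb falls out of strehl_marechal directly.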

Detectors

  • fully integrated noise model (shot, read, PRNU, etc.)
  • arbitrary pixel apertures (square, oblong, purely numerical)
  • optical low pass filters
  • Bayer compositing, demosaicing

Thin Films

  • r, t parameters, even over a spatially varying extent, with high performance
  • Brewster's angle
  • Critical Angle
  • Snell's law
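
The angle relations in this list all follow directly from n1 sin(theta1) = n2 sin(theta2). A small stdlib-only sketch (illustrative function names, not prysm's API; angles in radians):

```python
import math


def snell(n1, n2, theta1):
    """Refraction angle from Snell's law: n1 sin(theta1) = n2 sin(theta2)."""
    return math.asin(n1 * math.sin(theta1) / n2)


def brewster(n1, n2):
    """Brewster's angle: incidence angle at which reflected p-pol vanishes."""
    return math.atan2(n2, n1)


def critical_angle(n1, n2):
    """Onset of total internal reflection; requires n1 > n2."""
    return math.asin(n2 / n1)
```

A quick sanity check: at Brewster's angle the refracted and reflected rays are perpendicular, so the incidence and refraction angles sum to 90 degrees.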

Refractive Index

  • Cauchy's equation
  • Sellmeier's equation
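
Both dispersion equations are short closed forms. A sketch with illustrative function names (not prysm's API); the Sellmeier coefficients below are the widely published Schott values for N-BK7 and are included here as an assumption for demonstration, with wavelengths in microns:

```python
def cauchy(wvl, A, B=0.0, C=0.0):
    """Cauchy's equation: n = A + B/wvl^2 + C/wvl^4, wvl in microns."""
    return A + B / wvl**2 + C / wvl**4


def sellmeier(wvl, Bs, Cs):
    """Sellmeier's equation: n^2 = 1 + sum(B_i wvl^2 / (wvl^2 - C_i))."""
    w2 = wvl ** 2
    return (1.0 + sum(b * w2 / (w2 - c) for b, c in zip(Bs, Cs))) ** 0.5


# Schott N-BK7 coefficients (assumed here for illustration)
bk7_B = (1.03961212, 0.231792344, 1.01046945)
bk7_C = (0.00600069867, 0.0200179144, 103.560653)
```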

Thin Lenses

  • Defocus to delta z at the image and reverse
  • object/image distance relation
  • image/object distances and magnification
  • image/object distances and NA/F#
  • magnification and working F/#
  • two lens BFL, EFL (thick lenses)
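
The thin-lens relations in this list follow from the Gaussian imaging equation 1/f = 1/s + 1/s'. A sketch of a few of them (illustrative names; distances taken positive on both sides of the lens, which is an assumed sign convention here):

```python
def image_distance(efl, s_obj):
    """Gaussian imaging: 1/f = 1/s + 1/s' solved for the image distance."""
    return 1.0 / (1.0 / efl - 1.0 / s_obj)


def magnification(s_obj, s_img):
    """Transverse magnification; negative indicates an inverted image."""
    return -s_img / s_obj


def two_lens_efl(f1, f2, d):
    """EFL of two thin lenses separated by d: 1/f = 1/f1 + 1/f2 - d/(f1 f2)."""
    return 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2))


def two_lens_bfl(f1, f2, d):
    """Back focal length of two thin lenses: f2 (f1 - d) / (f1 + f2 - d)."""
    return f2 * (f1 - d) / (f1 + f2 - d)
```

For d = 0 the two-lens BFL reduces to the EFL, a quick consistency check on the formulas.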

Tilted Planes and other surfaces

  • forward or reverse projection of surfaces

Deformable Mirrors

  • surface synthesis in or out of beam normal based on arbitrary influence function with arbitrary sampling
  • DM surface misalignment / registration errors

Interferometry

  • PSD
  • Low/High/Bandpass/Bandreject filtering
  • spike clipping
  • polynomial fitting and projection
  • statistical evaluation (PV, RMS, PVr, Sa, bandlimited RMS...)
  • total integrated scatter
  • synthetic fringe maps with extra tilt fringes
  • synthesize map from PSD spec

Tutorials, How-Tos

See the documentation for each.

Contributing

If you find an issue with prysm, please open an issue or pull request. Prysm makes some use of f-strings, so any contributed code is only expected to work on Python 3.6+, and is licensed under the MIT license.

Issue tracking, roadmaps, and project planning are done on Zenhub. Contact Brandon for an invite if you would like to participate; all are welcome.

Heritage

  • prysm was used to perform the phase retrieval used to focus Nav and Hazcam, the enhanced engineering cameras used to operate the Mars 2020 Perseverance rover.

  • prysm is used to build the official model of LOWFS, the Low Order Wavefront Sensing and Control system for the Roman coronagraph instrument. In this application, it has been used to validate the dynamics of a hardware testbed to 35 picometers, or 0.08% of the injected dynamics. The model runs at over 2 kHz, faster than the real-time control system, at the same fidelity used to achieve the 35 pm model agreement in hardware experiments.

  • prysm is used by several FFRDCs in the US, as well as their equivalent organizations abroad

  • prysm is used by multiple ultra precision optics manufacturers as part of their metrology data processing workflows

  • prysm is used by multiple interferometer vendors to cross validate their own software offerings

  • prysm is used at multiple universities to model optics both in a generic capacity and laboratory systems

  • your name here(?)

prysm's People

Contributors

brandondube, erik-bu, jashcraf, mpetroff, u-yuta


prysm's Issues

Welch Overlapping Sub-Aperture (WOSA) PSD

The WOSA method can be implemented for PSD to reduce variance in the PSD estimate. The subapertures can also be shifted away from the center an extra amount to enable robust PSD calculation for data with annular support.

Interferogram.mask can be used for inspiration on how to embed a (shifted) circular mask. The data should be cropped before feeding through the PSD function. Each subaperture is guaranteed to have the same sample spacing. Roundoff should be carefully controlled to ensure each sub window has the same shape, ensuring the DFT samples are aligned and they can be easily averaged. An in-place add, then divide by K approach to the average is likely faster (and is much more memory efficient) than allocating an NxMxK matrix and using mean across one of the axes.
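
The accumulate-in-place averaging suggested above might be sketched like this (numpy, illustrative only; the windowing and mask handling that a real WOSA implementation needs are omitted):

```python
import numpy as np


def averaged_psd(windows):
    """Average the periodogram over K subapertures without an NxMxK allocation.

    `windows` is an iterable of equally-shaped 2D arrays (subaperture data);
    each is transformed and accumulated in place, then divided by K.
    """
    accum = None
    k = 0
    for win in windows:
        psd = np.abs(np.fft.fftshift(np.fft.fft2(win))) ** 2
        if accum is None:
            accum = psd
        else:
            accum += psd  # in-place add: constant memory regardless of K
        k += 1
    return accum / k
```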

More MHT (Trioptics) MTF vs Field sample files

Areas of prysm.mtf_utils are not tested because there is only one MTF vs Field sample file - three more are needed, from a set of 4 measurements of the same objective, to complete some of the unit testing.

Object Oriented API: scope

This issue is to define the scope of an object-oriented front-end to prysm. This can include comparison of prysm models to e.g., lentil or HCIpy or POPPY or PROPER and discussion with users to see what parts of the non-OO API are stumbling blocks.

With this information we can describe a scope that can be used to make a design

OTF

Prysm should have a class to calculate the OTF. Perhaps the data backing it should be stored in .mtf and .ptf attributes.

The implementation would be easy -- just copy prysm.MTF.from_psf but do not take the modulus, or also take the phase (np.angle) and unwrap it (skimage.restoration.unwrap_phase).
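
That suggested implementation might be sketched in plain numpy as follows (illustrative, not prysm code; phase unwrapping is left to the caller as the issue suggests):

```python
import numpy as np


def otf_from_psf(psf):
    """OTF as the DC-normalized Fourier transform of the PSF."""
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    # after fftshift, the DC bin sits at the array center
    return otf / otf[otf.shape[0] // 2, otf.shape[1] // 2]


def mtf_and_ptf(psf):
    """MTF is the modulus of the OTF; PTF is its (wrapped) phase."""
    otf = otf_from_psf(psf)
    return np.abs(otf), np.angle(otf)
```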

Return of the Object-Oriented API

The changes from prysm v0.19 to v0.20 basically deleted most of the object-orientedness of the library. This has made it easier to write prysm, but the API is pretty radically different from its contemporaries (PROPER/POPPY/HCIpy/FALCO). This can be intimidating for some users, who may not be familiar with it or want to learn it.

We should consider building out an object oriented API more similar to other projects. The core propagation and interferometry modules are already really object-oriented. Maybe this is a concept of OpticalSystem (from POPPY) that is missing.

remove usage of retry

User report - retry causes crashes due to a naming conflict with their own retry package. prysm's usage was just for some convenience in recursive classes, so it should be removed.

thinfilm: allow matrix index and thickness inputs

At present, users have to write clumsy for i, j, v in enumerate(enumerate(...)) loops to compute 2D arrays of complex reflection/transmission coefficients. It should be possible with a tensordot syntax to perform the appropriate matrix products. This would likely speed up thin film computations by a factor of almost 1,000x for complicated/large structures.

This has use in e.g. coronagraph design.
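
The loop-free product the issue asks for can be written with einsum (or the matmul operator, which broadcasts over leading axes). A numpy sketch with arbitrary stand-in matrices:

```python
import numpy as np

# stacks of per-pixel 2x2 characteristic matrices for two layers: (H, W, 2, 2)
rng = np.random.default_rng(0)
m1 = rng.standard_normal((4, 4, 2, 2))
m2 = rng.standard_normal((4, 4, 2, 2))

# batched matrix product over the trailing 2x2 axes, replacing a python loop
prod = np.einsum('...ij,...jk->...ik', m1, m2)

# loop-style reference for a single pixel
ref = m1[0, 0] @ m2[0, 0]
```

Note that `m1 @ m2` computes the same batched product; einsum is shown because it generalizes to the other contractions a thin film stack needs.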

Angular spectrum propagation

At this time, prysm can only do pupil-to-PSF-plane propagations via Fresnel transforms. It would be a boon if prysm could support angular spectrum propagations, ideally the modified shifted angular spectrum. To do this it may become desirable to implement a triple matrix product DFT or chirp Z transform in order to finesse sampling.

This feature would live in the prysm.propagation module and may require additional properties on some of the classes (e.g. the Wavefront class). The goal would be to enable propagation of optical fields through arbitrary systems, but a reduced scope would be OK for a first cut.

Zernikes?

Hi Brandon,

Thanks for sharing this nice code. I have an issue when transforming some random phase wavefront data into Zernike polynomials:

x, y, phase = np.linspace(-1,1,128), np.linspace(-1,1,128), 10*np.random.rand(128,128)
fz = FringeZernike(x=x, y=y, phase=phase, z_unit='um')
fz.magnitudes

The outputs are all 0. I'm expecting some positive numbers for this random phase - what am I doing wrong?

interferogram spatial and phase units raise error

Reported by a user via email:

Accessing Interferogram.spatial_unit will raise an attribute error.

MWE

from prysm import Interferogram, sample_files
i = Interferogram.from_zygo_dat(sample_files('dat'))
i.spatial_unit


---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-3-272dc136b67c> in <module>
----> 1 i.spatial_unit

~/miniconda3/envs/prysmdev/lib/python3.7/site-packages/prysm/_phase.py in spatial_unit(self)
     62         """Unit used to describe the spatial phase."""
     63         warnings.warn('spatial_unit has been folded into self.units.<x/y> and will be removed in prysm v0.18')
---> 64         return str(self.units.x)
     65 
     66     @spatial_unit.setter

AttributeError: 'Interferogram' object has no attribute 'units'

Will be fixed in 0.17.2

FWHM estimates for PSFs

FWHM is a common metric for analyzing signals like PSFs. An FWHM method should be implemented for the PSF class which calculates this metric. Several algorithms might be implemented, e.g. Moffat's fit for nearly diffraction-limited cases, or the naive method.

polynomials: aid user in dtype stabilizing sum_of_2d_modes

The implementation of sum_of_2d_modes is a one-line wrapper around tensordot by intent. However, it is common for the modes to be generated by prysm (and so respect the dtype convention) and the weights to be generated by the user on the fly (and so not respect the dtype convention / always be f64). The weights should be cast to the precision of the modes.
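
A sketch of the proposed cast, mirroring the one-line tensordot wrapper described above (illustrative, not prysm's actual code):

```python
import numpy as np


def sum_of_2d_modes(modes, weights):
    """Weighted sum of a (K, N, M) stack of 2D modes.

    The weights are cast to the modes' dtype, so a float32 model is not
    silently promoted to float64 by user-supplied f64 weights.
    """
    modes = np.asarray(modes)
    weights = np.asarray(weights).astype(modes.dtype)
    return np.tensordot(modes, weights, axes=(0, 0))
```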

Better slicing

At the moment, the interface for slices of an object is not very ergonomic. The kwargs x, y, and azavg are strewn throughout the library wherever those slices are enabled. An object that holds the slices should be made, and a new interface designed.

The current interface:

# i is an interferogram
# this is pretty bad, a new arg for each slice
i.psd_slices(x=True, y=True, azavg=True, azmin=False, azmax=False) 
# a whole line could be removed from this
i.plot_psd_slices(
    x=True, y=True, azavg=True, azmin=False, azmax=False,
    a=None, b=None, c=None, mode='freq', alpha=1,
    legend=True, lw=3, zorder=3, xlim=None, ylim=None,
    fig=None, ax=None)

A reasonable solution might be to pass an iterable of strings with the slices, and have the "Slice" object, not to be confused with the slice builtin, have a static field returning all possible choices. So the .psd_slices() method would look like:

i.psd_slices(slices=config.default_slices)
print(config.default_slices)
>>> ['x', 'y', 'azavg']

Consider using Tensorflow or PyTorch as a gpu backend?

I am doing some phase retrieval work for systems with large fields of view, and typically I end up wanting to recover the phase for multiple local regions. For use in astronomy and photography, tensors as opposed to 2D arrays could be very useful.

Tensorflow now includes conventional optimizers such as CG and LBFGS (tensorflow-probability). There are some inherent advantages to using the auto differentiation features offered by the library as well.

improve radiometric PSF normalization

when using PSF.from_pupil(norm='radiometric') this should be normalized better, so that the units are, e.g., W/cm^2 and tied correctly to the pupil area.

How to create a psf from a pupil with the new (v0.20) style?

Hello,

This is an incredibly promising package, bravo for all the work!
I've installed the current development branch (0.19.2.dev129) which I later on discovered is quite different from the stable version: https://prysm.readthedocs.io/en/latest/releases/v0.20.html

This explains why I've got lots of errors when trying to run the notebook Defocus and Contrast Inversion.ipynb in the docs/source/examples folder found in the development branch.

Since I'm new to prysm I decided to go on and stick to the new syntax and changes (which hopefully are here to stay!) and dig into the documentation to try to adapt said example notebook by myself.

Well I've not managed very far actually, besides importing the correct modules and dealing with the siemensstar object, I'm hitting a wall with this part:

    pu = NollZernike(Z4=defocus_values[idx])
    ps = PSF.from_pupil(pu, 4, norm='radiometric')  # pu defaults to diameter of 1, this makes F/4
    mt = MTF.from_psf(ps)

For the pupil, I used zernike.zernike_nm and zernike.noll_to_nm from the prysm.polynomials module to convert from Z4 (defocus) to n, m, as follows (parameters for the grid yet to be chosen properly, but this works):

x, y = make_xy_grid(64, diameter=1)
r, t = cart_to_polar(x, y)
znoll = noll_to_nm(4) #NollZernike(Z4=defocus_values[idx])
pu = zernike_nm(*znoll,r,t)

I can plot the pupil all right.
But the first question arises:
how can different amounts of defocus be taken into account as in the loop in the example notebook?

Moving on to PSF.from_pupil, the doc says the following about the PSF module:

The PSF module has changed from being a core part of propagation usage to a module purely for computing criteria of PSFs, such as fwhm, centroid, etc.
PSF has been removed

So now that prysm.psf is no more about propagation but dimensional specifications of the psf, how can I create a PSF from the pupil just created with zernike_nm?

Also,

MTF was removed, use mtf_from_psf()

And the documentation about mtf_from_psf() explains:

mtf_from_psf(psf, dx=None)
Compute the MTF from a given PSF.
Parameters
psf (prysm.RichData or numpy.ndarray) – object with data property having 2D data containing the psf, or the array itself

I suppose that if I had obtained the psf this part with the new mtf_from_psf would be easy to deal with.

So it seems my issues boil down to my first 2 questions:

  • how to create a pupil taking into account various amount of defocus?
  • how to create a psf object (RichData or np.ndarray) from my pupil?

Thanks a lot in advance for any hints in this regard!

Ludo

Standard Models

Some private codes (e.g., hcim) provide standard models for common systems, for example a five plane coronagraph (DM1, DM2, FPM, Lyot, image). prysm does not provide any of these, and should.

Ideally, these standard models have some amount of code sharing, since certain things (polynomial caches, e.g.) will be common among them.

They likely should not be under the main prysm repo (testing will be problematic), but they should probably all live together in one repo.

These are naturally pretty OO (see: lowfssim's designdata pragma). Reflection on that design can probably identify ways to improve.

Reading Datx files can result in improperly scaled phase

Some datx files are saved differently from others and require multiplication of the phase by wavelength / 2 after import. Investigation will need to be done with something like HDFView to see what flags are present in the files to denote this; prysm can then use those flags to import these files properly.

Zernike RMS normalization

Hello,

I discovered an apparent issue in the Zernike RMS normalization for the m = 0 terms.

The source material (http://wp.optics.arizona.edu/jcwyant/wp-content/uploads/sites/13/2016/08/ZernikePolynomialsForTheWeb.pdf) has an RMS normalization table on page 28 showing the correct factors.

The normalization formula used in prysm.zernike.zernike_norm() is correct (sqrt(2*(n + 1)/(1 + kronecker(m, 0)))), but there's an obscure difference between Wyant's (n, m) indices and those of other systems: for the m = 0 (purely radial) terms, his radial indices n are half those of other systems. This leads to the normalization factors generated in the code being off for the spherical aberration terms. This can be checked by generating some m = 0 values and seeing that they disagree with Wyant's table.

See pg. 17 here: https://www.iap.uni-jena.de/iapmedia/de/Lecture/Imaging+and+aberration+theory1554069600/IAT+18_Imaging+and+Aberration+Theory+Lecture+12+Zernike+polynomials.pdf

Raytracing

It is conceivable that one day we might want prysm to have a raytracing engine. Addition of such a feature will require considerable planning and discussion, followed by an extended period of hard work.

Wrong calculation of self.center_y, self.center_x for Convolvable class

Current calculation of center_x and center_y of Convolvable class is:
self.center_y, self.center_x = int(m.ceil(self.samples_y / 2)), int(m.ceil(self.samples_x / 2))
This gives the wrong values for odd dimensions (e.g. 11 -> 6 instead of 5), as these values are used as indices later, for example in plot_slice_xy().
In other classes (e.g. MTF), this is calculated as:
self.center_x = self.samples_x // 2
self.center_y = self.samples_y // 2
Which gives the expected results.

I'm using prysm version 0.14.0 with python 3.6.7 (installed via pip).

Thank you.

Improved filtering on Interferogram objects

The current approach to filtering uses scipy.signal to filter once in each dimension. These filters do a good job of shaping the spectrum along the Cartesian axes in k-space, but do not touch the corners. Filtering should be re-implemented based on a soft-edged annulus or circle in k-space. This soft edge could be made, for example, by convolving a brickwall filter with a gaussian of appropriate width, though the rolloff of this is probably too slow.
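
A soft-edged radial filter of the kind proposed might be sketched as follows (numpy only; a raised-cosine edge stands in for the gaussian-convolved brickwall, and the cutoff and width in cycles/sample are illustrative):

```python
import numpy as np


def soft_lowpass(shape, cutoff, width):
    """Radially symmetric k-space lowpass with a raised-cosine edge.

    Unlike separable 1D filters, this acts on the corners of k-space too.
    Returns 1 inside the cutoff, 0 well outside, smooth in between.
    """
    fy = np.fft.fftshift(np.fft.fftfreq(shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(shape[1]))
    fr = np.hypot(*np.meshgrid(fx, fy))
    t = np.clip((fr - cutoff) / width, 0.0, 1.0)
    return 0.5 * (1 + np.cos(np.pi * t))


def filter_map(data, cutoff=0.2, width=0.05):
    """Apply the soft lowpass to a 2D map in the frequency domain."""
    mask = soft_lowpass(data.shape, cutoff, width)
    spectrum = mask * np.fft.fftshift(np.fft.fft2(data))
    return np.fft.ifft2(np.fft.ifftshift(spectrum)).real
```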

Wrong sign in odd angular part

When using the Zernike module, I noticed an error in the implemented algorithm. In the angular part of odd Zernike terms (the sine function), the sign of the argument is wrong. The argument of both sine and cosine depends only on abs(m); sign(m) is only used to determine whether the sine or the cosine should be used (see https://en.wikipedia.org/wiki/Zernike_polynomials#Zernike_polynomials, and compare pairs of +m and -m).

azterm = e.sin(m * p)  # current: uses signed m; should be e.sin(abs(m) * p)

ret = func(m * p)  # likewise: should be func(abs(m) * p)

Circular references in RichData, Labels, and Units

On the dev branch, I am working on a unified slicing and plotting implementation. This introduces the new Slices, Units, and Labels classes. A sample instance of RichData can be populated with

# imports
import numpy as np
from astropy import units as u

from prysm import RichData, Labels, Units

# the wavelength unit
wvl_HeNe = u.Unit('λ', 632.8 * u.nm)

# the newness
rdu = Units(x=u.mm, y=u.mm, z=u.nm, wavelength=wvl_HeNe)
rdlab = Labels(xybase='Pupil',
               z='OPD',
               xy_additions=['ξ', 'η'],
               xy_addition_side='left')


# standard xyz data
x = np.linspace(-1, 1, 128)
y = np.linspace(-1, 1, 128)
z = np.random.rand(128, 128)

# create the instance
rd = RichData(x=x, y=y, z=z, units=rdu, labels=rdlab)

Additional functionality w.r.t. plotting and slices is then used as

rd.plot2d()
rd.slices().plot(['x', 'y', 'z', 'azavg'])
s = rd.slices()
s.x
s.y
s.azavg

all labels, units, etc are known and displayed.

Internally, there is some difficulty in implementing this. The labels in the above case are intended to produce:

>>> rd.labels.x
     Pupil ξ [mm]
>>> rd.labels.z
     OPD [λ]

and thus the implementation of plot2d naturally wants to look like:

def plot2d(self, ...):
    ...
    labs = self.labels
    ax.set(xlabel=labs.x, ylabel=labs.y)
    cb = colorbar(label=labs.z)
    ...

However, in order to produce this, labels needs to know something about units. One way to accomplish this is to pass a parent breadcrumb down to labels in the RichData constructor:

class RichData:
    def __init__(self, ..., labels):
        labels.parent = self
        self.labels = labels

However, this double reference is a data structure I am not very fond of, and we would have to copy the labels and units instances on each RichData initialization to avoid shared-state woes.

Another mechanism is to make labels.x produce a joinable sequence that we can insert the label text into:

>>> rd.labels.x
    ['Pupil ξ [', ']']

and then do something like:

class RichData:
    ....
    @property
    def xlabel(self):
        pieces = self.labels.x
        pieces.insert(-1, str(self.units.x))
        return ''.join(pieces)

This avoids bad shared state (units) and means we can actually share a labels instance on all Pupil (etc) instances pretty safely. However, we define the labels in two separate places. On the other hand, this allows a graceful adoption of the labels and units objects, since we can check if self.labels is None and use the old behavior in that case.

See _basicdata.py and conf.py on https://github.com/brandondube/prysm/tree/f7d8b43133118199f6ed1ed9745bba9895797c8f for the current implementation. Reaching into the labels and units on RichData is not implemented yet, since I haven't figured out an elegant API for it.

PTF

Just as in #16 - but some users may only care about the PTF, so we should provide this in a class as well. With this, the .mtf and .ptf proposed attributes on the OTF class could be MTF and PTF objects, though this doubles the memory needed to store the coordinates.

fttools: cropcenter (adjoint of pad2d)

In the detector module, we have both the bindown and tile functions. They are each other's adjoints. In fttools, a cropcenter function is needed which is the adjoint of pad2d. Likely, as well, pad2d needs an addition to its function signature to take output size, such that cropcenter(pad2d(a, output_size), a.size) is a no-op.
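
A sketch of the pair, with pad2d taking an output size as proposed so the round trip is a no-op (illustrative, with floor-division centering assumed on both sides):

```python
import numpy as np


def pad2d(a, out_shape):
    """Zero-pad a 2D array to out_shape, keeping it centered."""
    out = np.zeros(out_shape, dtype=a.dtype)
    r0 = (out_shape[0] - a.shape[0]) // 2
    c0 = (out_shape[1] - a.shape[1]) // 2
    out[r0:r0 + a.shape[0], c0:c0 + a.shape[1]] = a
    return out


def cropcenter(a, out_shape):
    """Adjoint of pad2d: crop the centered out_shape window out of a."""
    r0 = (a.shape[0] - out_shape[0]) // 2
    c0 = (a.shape[1] - out_shape[1]) // 2
    return a[r0:r0 + out_shape[0], c0:c0 + out_shape[1]]
```

Because both functions use the same floor-division center, cropcenter(pad2d(a, s), a.shape) recovers a exactly.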

Interferogram from fringe-pattern - FFT(Takeda) and Phase Shifting

I would like to implement the FFT (Takeda) algorithm (https://www.osapublishing.org/josa/abstract.cfm?uri=josa-72-1-156) and PSI into prysm to generate Interferogram objects directly from fringe patterns obtained by a camera or via a video. For the FFT I already have some code, but speed of the FFT is an issue there for bigger fringe patterns. I already had a look into propagation.py for this. Unwrapping is also an issue, but I saw that prysm uses skimage/restoration here. Before I start to implement it in my fork, I wanted to ask whether this would be a useful tool for prysm and in which class it would best be implemented. I thought interferometry would be good, but maybe I do not see the whole picture.
Jakob

Sample file and unit tests for Datx files

The prysm.io submodule should have full unit test coverage. To do this, tests for the datx reader are needed. Tests require a sample file -- someone will need to provide one via donation or license to prysm for this to happen. The file could be synthesized with prysm, saved as asc, then converted with Mx.

AttributeError: 'Pupil' object has no attribute 'clip'

Notebook https://github.com/brandondube/prysm/blob/master/Examples/Pupils.ipynb raises an error against the current pypi package:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-24-5187cd26f916> in <module>()
      3 xx, yy = np.meshgrid(x,y)
      4 p.phase = np.cos(xx*1.5) * np.sin(yy*1.5)
----> 5 p.clip();  # if you modify the phase, you should call .clip to re-enforce the mask
      6 
      7 # pupils can be plotted in 2D

AttributeError: 'Pupil' object has no attribute 'clip'

Interface for class changes

There is sometimes a desire to hop between prysm classes. For example, convert a Wavefront to a Convolvable instance. An elegant way to implement this, in terms of API, is to have a single to or from method that takes a class to convert to, and returns that class:

from prysm import Pupil, Wavefront, Interferogram, PSF

# any of these will work for this, since they all inherit from basicdata, 
# most of the other classes would work too.
pu = Pupil()
wvfrnt = Wavefront()
interf = Interferogram()

# interface 1 -- _to
pu_i = pu.to(Interferogram) # pu_i is an interferogram instance with the data from pu

# interface 2 -- from
pu_i = Interferogram.from(pu) # the same

These interfaces have a similar level of verbosity, and would use some reflection magic to implement, but it is not that hard. The .from is more like the typical prysm syntax -- PSF.from_pupil, MTF.from_psf, etc. So on the one hand, it is more consistent. On the other, from_pupil and from_psf both do work to make this transformation, while the generic class changers above wouldn't. (Note, too, that from is a reserved word in Python, so the method would need another spelling, e.g. from_.)

There is also option 3 of just refactoring as much as possible into BasicData, but I kind of feel that some things, e.g. encircled energy or PVr, belong explicitly to one class.

Perhaps there is another possible interface design?

Planar and Nonplanar warping, SFE=>RWE/TWE

Today, every piece of prysm assumes the optical system is a series of parallel planes. There are no features for dealing with tilted planes of any sort, or more arbitrary tilted geometries (for example, propagating to and reflecting off of an OAP). Two features are then needed:

  • surface figure error to reflected/transmitted wavefront error (which may or may not be spatially varying)
  • geometric warping or distortion caused by a nonplanar view of a surface

This is relevant to propagating through OAPs and deformable mirrors, e.g.

There may be an interaction or codependency between a spatially varying cosine obliquity and warping. The design should be careful about that.

Since these funcs will perturb x,y,data, likely they should be written with data and dense coordinate arrays as inputs.

interferogram: interpolated fills

At present, the only fill option prysm has is the kludgy constant-value fill. This creates discontinuities in data that corrupt spectral estimates. It would be preferable to allow the user to fill zeros with interpolated values. It's not certain whether actual interpolation (scipy.interpolate.interp2d) or something like a polynomial fit would be better; the latter is easier to implement and may be faster.

geometry: mechanism for chamfered corners

At the moment, all rasterized shapes have sharp corners. For nonconvex shapes (and even for convex ones), this is not very manufacturable, and prysm has no mechanism for rasterizing more manufacturable shapes per se. Bezier curves are one way to accomplish this, which has the benefit of being compatible with the vertex-based approach used by the convex polygon shader. Others would include figuring out how to do the bitwise masking on offset circles, but there is a bit of trickiness there.

migrate scipy to mathops

Replace imports of scipy with a dispatcher like Engine from mathops. This would allow, e.g., cupyx to be used, improving the portion of prysm that runs on GPU.

Fluent API for arbitrary grid motion with interferograms

The new interface for interferograms requires the user to explicitly pass the mask, and provides the x,y,r,t variables in the class with the positional state. This makes it significantly easier to use offset masks. The geometry module includes shifted versions of most shapes, but it may be worthwhile to add an interf.shift((dx, dy)) or interf.shift(dx, dy=None) method to move the grid.

It would be most efficient not to shift the dense grid. Benchmarking would need to be done to see whether optimize_xy_separable, shifting the vectors, and regridding is faster than shifting the grid. If the grid has not yet been computed, then the initialization could be modified, which will be cheapest overall.

controling the amount of defocus with sum_of_2d_modes (vs old syntax)

Hi Brandon,

This is somehow a follow-up to issue #38 but I thought it'd be better to have a dedicated write-up for clarity and conciseness (and future reference).

Considering the following as an illustrative example:

nms = [noll_to_nm(j) for j in range(2,25)]
idefocus = nms.index((2,0)) # index for defocus (Noll j=4), element 2 in the list above
basis_set = list(zernike_nm_sequence(nms, r, t)) # (r,t having been adequately defined) 
coefs = np.zeros(len(nms)) 
phs = sum_of_2d_modes(basis_set, coefs)  # no defocus, no phase change
coefs[idefocus] = 1e-6 # defocus is now weighted, is it 1um of Z4 defocus?
phs_defc = sum_of_2d_modes(basis_set, coefs)
# go on and e.g., compute a pupil and derive a psf

  • how to relate the amount of defocus defined the old way (NollZernike(Z4=defocus_values[idx]), as in the defocus notebook) to that defined with the weight in sum_of_2d_modes? In other words, how much defocus is introduced in the old syntax from one index idx of defocus_values to the next, and how does that translate into weights for sum_of_2d_modes?
  • are the weights in sum_of_2d_modes in meters, so that from the example above, if I do coefs[idefocus] = 1e-6, then I have 1 um of defocus? I ask because in the doc you make reference to other packages using zernike_compose as an equivalent to sum_of_2d_modes, and in one of them (lentil) the coefficient (weight) can directly be set to a "distance". A weight isn't supposed to have a unit, so the answer would be "no", but then:
  • how do I set an xμm amount of defocus?

Thanks, Ludo

MTF result from FringeZernike pupil is diffraction limited?

I'm attempting to calculate the aberrated MTF from a pupil defined with FringeZernike. In this particular case it has only first-order defocus, so I expect the result to closely match Hopkins' solution (or Shannon's approximation). Plotting the pupil function and printing the Pupil object shows the expected amount of defocus, but the plotted MTF is the classic diffraction-limited MTF. A prysm-specific snippet that plots using the variables I've separately calculated for Shannon's approximation is below.

If I'm missing something in the implementation, I apologize, but after several reviews the inputs look consistent both with what I want and with what prysm expects.

Any assistance is appreciated.

import matplotlib.pyplot as plt
from prysm import FringeZernike, MTF

prysmWavelength = system_wavelength * 1.0e9  # convert wavelength to nm
prysmDiameter = system_aperture_diameter * 1000.0  # convert diameter to mm
prysmDefocus = prysmWavelength * system_W_pp  # convert to nm
# set the defocus coefficient to 1/2 of the peak-to-peak value
prysmPupil = FringeZernike(wavelength=prysmWavelength, dia=prysmDiameter, samples=5000, Z4=prysmDefocus / 2.0)
print(prysmPupil)
prysmPupil.plot2d()
prysmMTF = MTF.from_pupil(prysmPupil, prysmDiameter * system_f_number, Q=2)

fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10, 4))
prysmMTF.slices().plot('x', fig=fig, ax=ax1)
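As a standalone sanity check (no prysm dependency) of the Z4 = W_pp / 2 scaling used in the snippet: the Fringe Z4 radial profile 2*rho**2 - 1 spans a peak-to-valley of 2 over the unit pupil, so a defocus wavefront with peak-to-valley W_pp corresponds to a Z4 coefficient of W_pp / 2:

```python
import numpy as np

rho = np.linspace(0.0, 1.0, 101)
W_pp = 500.0                          # nm peak-to-valley defocus (illustrative value)
wavefront = (W_pp / 2.0) * (2 * rho**2 - 1)   # coefficient = half the PV
pv = wavefront.max() - wavefront.min()
assert np.isclose(pv, W_pp)           # the full peak-to-valley is recovered
```

The halving only accounts for the [-1, 1] span of the mode; it says nothing about the wavelength or length units the coefficient is interpreted in, which must match what the constructor expects.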

Use Recurrent Zernike generation

Work on prysm/qpoly.py (dev branch) has led to the realization that for Forbes polynomials, the recursive generation scheme can be O(n), versus O(n^2) for a naïve implementation.

When compared to the Zernike polynomials, we see that the time constant for numba-jit'd Zernikes is ~6.5x larger than that of the recurrent Q polynomials. It follows that we can achieve a ~5x speedup in Zernike generation by using a recursive scheme. In the numba-free case the gains will be even larger (on the order of 20-30x), and we can perhaps reduce the size of the Zernike codebase. We could also remove the optional numba dependency, improving import times for users who have numba installed.

The necessary work is to:

  1. Write a recursive Zernike generation function, taking inspiration from the new qpoly codebase. The two should probably be refactored into a _recursivepoly.py that both import from, since both appear to use Jacobi polynomials and are almost the same.
  2. Write an ANSIZernike class that uses (n, m) indexing.
  3. Write maps from Fringe => (n, m), and the same for Noll.
  4. Write a name generator based on the radial and azimuthal orders.
  5. Adapt NollZernike and FringeZernike to use the recurrent set.

At the same time, we should investigate whether storing the basis vectors in the cache as a single large (m, n, N, N) array is feasible (vis-à-vis memory footprint and performance), using one multiply-and-add to do all of the surface generation. In the past this was less performant than an on-the-fly addition scheme. I suspect the single multiply-and-add is better on GPU, while the on-the-fly add is better on CPU.
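The Jacobi-based recurrence this issue refers to can be sketched as follows (a minimal illustration, not prysm's implementation). Zernike radial terms satisfy R_n^m(rho) = (-1)^s * rho^m * P_s^(m,0)(1 - 2*rho^2) with s = (n - m)/2, so the standard three-term Jacobi recurrence gives O(n) evaluation instead of summing the explicit factorial series:

```python
import numpy as np

def jacobi(n, a, b, x):
    """Evaluate the Jacobi polynomial P_n^(a,b)(x) via the three-term recurrence (O(n))."""
    if n == 0:
        return np.ones_like(x)
    Pnm1 = np.ones_like(x)                    # P_0
    Pn = (a + 1) + (a + b + 2) * (x - 1) / 2  # P_1
    for k in range(2, n + 1):
        c1 = 2 * k * (k + a + b) * (2 * k + a + b - 2)
        c2 = (2 * k + a + b - 1) * (a * a - b * b)
        c3 = (2 * k + a + b - 1) * (2 * k + a + b) * (2 * k + a + b - 2)
        c4 = 2 * (k + a - 1) * (k + b - 1) * (2 * k + a + b)
        Pnm1, Pn = Pn, ((c2 + c3 * x) * Pn - c4 * Pnm1) / c1
    return Pn

def zernike_radial(n, m, rho):
    """Zernike radial term R_n^m via its Jacobi representation."""
    m = abs(m)
    s = (n - m) // 2
    return (-1) ** s * rho ** m * jacobi(s, m, 0, 1 - 2 * rho ** 2)
```

For example, zernike_radial(2, 0, rho) reproduces 2*rho**2 - 1 (defocus) and zernike_radial(4, 0, rho) reproduces 6*rho**4 - 6*rho**2 + 1 (spherical), without any factorials.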
