
pysingfel's People

Contributors

antoinedujardin, bkuchhal, chuckie82, elliottslaughter, fredericpoitevin, haoyuanli93, irischang020, junceee, marcgri, monarin, rdeeban


pysingfel's Issues

Calibration constants for Epix10k Detectors

For the Epix10k detector, the calibration files (e.g. pedestal, pixel_status, etc.) seem to have seven values per pixel:

detector.pedestals.shape: (7,16,352,384)
shot.shape: (16,352,384) # random data frame taken from exp=xcsx39618

Do these seven values correspond to different camera configurations? There appear to be only five configurations (two auto-ranging, three fixed), however, so I'm not sure which set of calibration constants is appropriate in a given situation, e.g. for adding pedestal-subtracted dark noise.
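
A minimal numpy sketch of the kind of use I have in mind, assuming the leading axis of the calibration array indexes the gain mode/configuration (the zero arrays below just stand in for the real data):

import numpy as np

# Shapes taken from the issue; zeros stand in for the real calibration data.
pedestals = np.zeros((7, 16, 352, 384))   # detector.pedestals
shot = np.zeros((16, 352, 384))           # one raw frame from exp=xcsx39618

# Assumption: the leading axis indexes the gain mode/configuration, so one
# would select the set matching the configuration the frame was taken in.
gain_index = 0                             # which of the 7 to pick is the question
pedestal_subtracted = shot - pedestals[gain_index]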

example/scripts

Due to active development (API changes etc), example scripts quickly become outdated and don't work anymore.

It would be great if we could run the example scripts under pytest.
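
A minimal sketch of how that could look, assuming the scripts live under examples/scripts and can run top to bottom without arguments (both assumptions):

import pathlib
import runpy

import pytest

# Hypothetical path; adjust for wherever this test file lives in the repo.
EXAMPLE_DIR = pathlib.Path(__file__).resolve().parents[1] / "examples" / "scripts"
SCRIPTS = sorted(EXAMPLE_DIR.glob("*.py"))


@pytest.mark.parametrize("script", SCRIPTS, ids=lambda p: p.name)
def test_example_script_runs(script):
    """Each example script should run to completion without raising."""
    runpy.run_path(str(script), run_name="__main__")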

Short path for geometry file likely to break

I noted the following recent change (commits 4da07da and b29311b), which affects the geometry file added in commit 95724f3:

     det = ps.PnccdDetector(
-        geom=ex_dir_+'/lcls/amo86615/'
-             'PNCCD::CalibV1/Camp.0:pnCCD.1/geometry/0-end.data',
+        geom=ex_dir_+'/input/geometry/0-end.data',
         beam=beam)

In Python 2 / psana 1, the PNCCD::CalibV1 part might be needed to get the pedestals.
The doc says, in an error message:

Sorry, at present, the package is not very smart. Please specify the path of the detector as deep as the geometry profile.
And example would be like: /reg/d/psdm/amo/experiment_name/calib/group/source/geometry/0-end.data where the '/calib/group/source/geometry/0-end.data' part is essential. The address before that part is not essential and can be replaced with your absolute address or relative address.

In Python 3 / psana 2, the amo86615 part is needed to get the pedestals (through the call to MongoDB).

pysingfel in PyPI

The version of pysingfel on PyPI has not been updated in 3 years. Should we remove the PyPI project or try to update it?

Clean up Beam object

The update method only works the first time it is called.
A clean-up is required to make it work properly, and possibly to integrate it with SASE later on.

tox

Add a tox.ini to cover the supported Python versions.

Clean up paths

There are hard-coded and relative paths that should be removed.

Make Particle constructor able to read PDB file.

Right now, we create an empty particle and then we populate it by reading a PDB file.
We should probably be able to provide the PDB file name to the constructor instead, as already done with HDF5 files.
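
A minimal sketch of the dispatch the constructor could do (read_h5 is a placeholder for whatever the existing HDF5 path is called; only read_pdb is taken from the current API):

class Particle:
    def __init__(self, filename=None, ff='WK', **kwargs):
        # ... existing empty-particle initialization goes here ...
        if filename is None:
            return
        if str(filename).lower().endswith('.pdb'):
            self.read_pdb(filename, ff=ff)      # existing method
        else:
            # Fall back to the existing HDF5 behaviour, e.g. the
            # Particle(input_name, datasetname) form used in radiationDamage.py.
            self.read_h5(filename, **kwargs)    # placeholder name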

Try detector oversampling

Try oversampling the detector up to electron sampling, then bin.
That would allow us to stop assuming photons land at the center of a pixel and start doing sub-pixel resolution work.
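
A minimal numpy sketch of the bin-down step, assuming a square oversampling factor and an intensity map already evaluated on the finer grid (the random array below just stands in for that calculation):

import numpy as np

def bin_down(oversampled, factor):
    """Sum factor x factor blocks of a 2D oversampled image into detector pixels."""
    ny, nx = oversampled.shape
    assert ny % factor == 0 and nx % factor == 0
    return (oversampled
            .reshape(ny // factor, factor, nx // factor, factor)
            .sum(axis=(1, 3)))

# Example: oversample a 352 x 384 panel by 4x in each direction, evaluate the
# intensity at sub-pixel positions (placeholder here), then bin back down.
factor = 4
fine = np.random.random((352 * factor, 384 * factor))
panel = bin_down(fine, factor)   # shape (352, 384)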

Photon integration in the previous pysingfel radiationDamage.py script

Hi @chuckie82, this radiationDamage.py script, created a long time ago, was used in SimEx to generate diffraction patterns from the PMI output of XMDYN. When I checked the code, I found an inconsistency between what we expected and what the code actually does. The formulas were derived from the supplementary material of Chuck's 2016 Sci. Rep. paper.
(Screenshots of the relevant formulas from the supplementary material are attached to the original issue.)

The code excerpt from pysingfel/radiationDamage.py:

while not done:
    # set time slice to calculate diffraction pattern
    if time_slice + slice_interval >= num_slices:
        slice_interval = num_slices - time_slice
        done = True
    time_slice += slice_interval
    # load particle information
    datasetname = '/data/snp_' + '{0:07}'.format(time_slice)
    particle = Particle(input_name, datasetname)
    rotate_particle(quaternion, particle)
    if not given_fluence:
        # sum up the photon fluence inside a slice_interval
        set_fluence_from_file(input_name, time_slice, slice_interval, beam)
    total_phot += beam.get_photons_per_pulse()
    # Coherent contribution
    f_hkl_sq = calculate_molecular_form_factor_square(particle=particle,
                                                      q_space=det.pixel_distance_reciprocal,
                                                      q_position=det.pixel_position_reciprocal)
    # Incoherent contribution
    if consider_compton:
        compton = calculate_compton(particle, det)
    else:
        compton = np.zeros((py, px))
    photon_field = f_hkl_sq + compton
    detector_intensity += photon_field
detector_intensity *= (det.solid_angle_per_pixel *
                       det.polarization_correction *
                       beam.get_photons_per_pulse_per_area())
detector_counts = np.random.poisson(detector_intensity)
pu.save_as_diffr_outfile(output_name, input_name, counter,
                         detector_counts, detector_intensity,
                         quaternion, det, beam)

If my understanding is correct, these lines
detector_intensity += photon_field
detector_intensity *= (det.solid_angle_per_pixel *
det.polarization_correction *
beam.get_photons_per_pulse_per_area())
should be modified to

    detector_intensity += photon_field * beam.get_photons_per_pulse_per_area()
detector_intensity *= (det.solid_angle_per_pixel *
                       det.polarization_correction)

to match the formula shown in the picture.

Do you have any comments on that? Is there a script in this new version of pySingFEL that performs a similar computation?

By the way, total_phot is accumulated (line 160 of the script) but never used anywhere in the code.

occupancy

Keep all atoms regardless of occupancy. Currently, atoms with occupancy below 50% are thrown away.
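
For reference, a minimal sketch of what keeping every atom looks like when parsing PDB ATOM/HETATM records (column slices follow the PDB format spec; the existing filter is only presumed to look like the commented check):

def parse_atoms(pdb_path):
    """Read atom positions and occupancies, keeping every atom."""
    atoms = []
    with open(pdb_path) as handle:
        for line in handle:
            if not line.startswith(('ATOM  ', 'HETATM')):
                continue
            x = float(line[30:38])
            y = float(line[38:46])
            z = float(line[46:54])
            occupancy = float(line[54:60])
            # Presumably the current code skips the atom here with something
            # like `if occupancy < 0.5: continue`; keeping it regardless is
            # the requested behaviour.
            atoms.append((x, y, z, occupancy))
    return atoms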

Repair/test photons -> ADU

The photon -> ADU transformation is unclear and requires a precomputed database.

I had a discussion with Haoyuan. In his model, each photon produces electrons in a 5-by-5 pixel region around its hit point. The number of electrons is taken to be 130, and they are distributed according to a (truncated) Gaussian within that 5-by-5 window.

Since sampling 130 electrons per photon was extremely slow, he exported one million pre-sampled distributions to a numpy dump file (the precomputed database) and now draws one of these 5-by-5 spreading kernels for each photon.
In practice, I think we could use

np.random.multinomial(130, the_5_by_5_distribution_weights,
                      (number_of_photons_in_the_image,))

to generate our kernels.

The sigma value is unclear. The sampling also assumes that the photon is centered on one of the pixels.
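
A minimal numpy sketch of that multinomial approach, assuming a 5-by-5 Gaussian weight kernel centered on the hit pixel (the sigma below is a placeholder, since the real value is unclear):

import numpy as np

ELECTRONS_PER_PHOTON = 130
SIGMA = 1.0  # placeholder; the actual charge-spread sigma is unclear

# 5x5 Gaussian weights centered on the hit pixel, normalized to sum to 1.
y, x = np.mgrid[-2:3, -2:3]
weights = np.exp(-(x**2 + y**2) / (2 * SIGMA**2))
weights /= weights.sum()

# One 5x5 spreading kernel per photon; multinomial needs flat probabilities.
n_photons = 10000
kernels = np.random.multinomial(
    ELECTRONS_PER_PHOTON, weights.ravel(), size=n_photons
).reshape(n_photons, 5, 5)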

To interpret the data, we can use psana's ADU -> photon functionality presented in:
https://lcls-psana.github.io/Detector/index.html#module-AreaDetector

Feature request: Interactive diffraction display

One feature that would be nice to have: two displays side by side, with the particle on the left and its diffraction pattern on the right, letting the user rotate the particle and see the corresponding diffraction pattern update in near real time.

test_generate.py failing

@AntoineDujardin this seems to fail for py2 and py3

(ps-3.1.12) [yoon82@psanagpu101 pysingfel]$ pytest
============================= test session starts ==============================
platform linux -- Python 3.7.6, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
PyQt5 5.12.3 -- Qt runtime 5.12.5 -- Qt compiled 5.12.5
rootdir: /reg/neh/home4/yoon82/Software/pysingfel
plugins: qt-3.3.0, asyncio-0.12.0
collected 96 items / 3 skipped / 93 selected

pysingfel/beam/tests/test_beam_base.py ................. [ 17%]
pysingfel/detector/tests/test_user_defined_detector.py ...... [ 23%]
pysingfel/geometry/tests/test_convert.py ............................... [ 56%]
[ 56%]
pysingfel/geometry/tests/test_generate.py ..FFF [ 61%]
pysingfel/geometry/tests/test_geometry.py ......... [ 70%]
pysingfel/geometry/tests/test_mapping.py ... [ 73%]
pysingfel/geometry/tests/test_slice_.py ........ [ 82%]
pysingfel/solvent_form_factor/tests/form_factor_test.py . [ 83%]
pysingfel/solvent_form_factor/tests/form_factor_variants_test.py .... [ 87%]
pysingfel/solvent_form_factor/tests/libsaxs_compare_test.py ..... [ 92%]
pysingfel/solvent_form_factor/tests/radial_distribution_test.py .... [ 96%]
pysingfel/tests/test_particle.py ... [100%]

=================================== FAILURES ===================================
_____________ TestPointsOn2Sphere.test_points_on_2sphere_moment_1 ______________

self = <test_generate.TestPointsOn2Sphere object at 0x7fe88435de90>

def test_points_on_2sphere_moment_1(self):
    """Test points_on_2sphere for moments of order 1."""
    thresh = 2.3e-3  # Lower limit for original implementation (1000)
    points = self.points.copy()
    self._test_points_on_2sphere_moment_1(points, thresh)
    with pytest.raises(AssertionError):
      self._test_points_on_2sphere_moment_1(points, thresh*.95)

E Failed: DID NOT RAISE <class 'AssertionError'>

pysingfel/geometry/tests/test_generate.py:48: Failed
---------------------------- Captured stdout setup -----------------------------
Deprecation warning: The function points_on_2sphere actually generates points on a 3-sphere, in 4D. Please call points_on_3sphere instead.
_____________ TestPointsOn2Sphere.test_points_on_2sphere_moment_2 ______________

self = <test_generate.TestPointsOn2Sphere object at 0x7fe884332c50>

def test_points_on_2sphere_moment_2(self):
    """Test points_on_2sphere for moments of order 2."""
    thresh_xx = 0.34  # Lower limit for original implementation (1000)
    thresh_xy = 1.16  # Lower limit for original implementation (1000)
    points = self.points.copy()
  self._test_points_on_2sphere_moment_2(points, thresh_xx, thresh_xy)

pysingfel/geometry/tests/test_generate.py:70:


self = <test_generate.TestPointsOn2Sphere object at 0x7fe884332c50>
points = array([[ 0.99115132, 0.07138709, 0.04162661, 0.10387578],
[ 0.99115132, 0.07138709, -0.10184068, -0.0463834... [-0.94078156, -0.12973254, 0.22756036, -0.21521108],
[-0.94078156, -0.3132062 , 0.06721999, 0.1109658 ]])
thresh_xx = 0.34, thresh_xy = 1.16

def _test_points_on_2sphere_moment_2(self, points, thresh_xx, thresh_xy):
    """Utilisty to test points_on_2sphere for moments of order 2."""
    # No need to normalize for double-cover because for even order
    moments_xx = []
    moments_xy = []
    for (i, j) in itertools.product(*(range(4),)*2):
        M2 = np.dot(points[:,i], points[:,j])
        if i == j:
            moments_xx.append(M2)
        else:
            moments_xy.append(M2)
    moments_xx = np.array(moments_xx)
  assert np.all(np.abs(moments_xx-moments_xx.mean()) < thresh_xx)

E AssertionError: assert False
E + where False = <function all at 0x7fe8997c0050>(array([ 4.44651051, 3.06173139, 3.65832287, 11.16656477]) < 0.34)
E + where <function all at 0x7fe8997c0050> = np.all
E + and array([ 4.44651051, 3.06173139, 3.65832287, 11.16656477]) = <ufunc 'absolute'>((array([245.55348949, 246.93826861, 246.34167713, 261.16656477]) - 250.0000000000001))
E + where <ufunc 'absolute'> = np.abs
E + and 250.0000000000001 = <built-in method mean of numpy.ndarray object at 0x7fe8815743f0>()
E + where <built-in method mean of numpy.ndarray object at 0x7fe8815743f0> = array([245.55348949, 246.93826861, 246.34167713, 261.16656477]).mean

pysingfel/geometry/tests/test_generate.py:62: AssertionError
_______________ TestPointsOn2Sphere.test_points_on_2sphere_angle _______________

self = <test_generate.TestPointsOn2Sphere object at 0x7fe8842ac6d0>

def test_points_on_2sphere_angle(self):
    """Test points_on_2sphere for angle between elements."""
    thresh = 0.105  # Lower limit for original implementation (1000)
    points = self.points.copy()
    self._test_points_on_2sphere_angle(points, thresh)
    with pytest.raises(AssertionError):
      self._test_points_on_2sphere_angle(points, thresh*.95)

E Failed: DID NOT RAISE <class 'AssertionError'>

pysingfel/geometry/tests/test_generate.py:90: Failed
=========================== short test summary info ============================
FAILED pysingfel/geometry/tests/test_generate.py::TestPointsOn2Sphere::test_points_on_2sphere_moment_1
FAILED pysingfel/geometry/tests/test_generate.py::TestPointsOn2Sphere::test_points_on_2sphere_moment_2
FAILED pysingfel/geometry/tests/test_generate.py::TestPointsOn2Sphere::test_points_on_2sphere_angle
=================== 3 failed, 93 passed, 3 skipped in 19.95s ===================

RecursionError in generate_new_sample_state()

I will look into it soon, but I'm putting this here in case someone else runs into it. It seems that generate_new_sample_state() in the FXSExperiment class sometimes results in a RecursionError. An example is shown below, with pysingfel installed under ps-4.0.9 and using the LCLS-II py3 kernel in Jupyter.

The code:

input_dir='../input'
pdbfile=input_dir+'/pdb/2cex.pdb'
beamfile=input_dir+'/beam/amo86615.beam'
geom=input_dir+'/lcls/amo86615/PNCCD::CalibV1/Camp.0:pnCCD.1/geometry/0-end.data'

beam = ps.Beam(beamfile)
increase_factor = 1e2
beam.set_photons_per_pulse(increase_factor * beam.get_photons_per_pulse())

det = ps.PnccdDetector(geom=geom, beam=beam)
increase_factor = -0.5
det.distance = increase_factor * det.distance

particle = ps.Particle()
particle.read_pdb(pdbfile, ff='WK')

experiment = ps.FXSExperiment(det, beam, [particle], 100)

image = experiment.generate_image()

The error:

/cds/home/f/fpoitevi/Toolkit/pysingfel/pysingfel/particlePlacement.py:60: RuntimeWarning: invalid value encountered in true_divide
  direction = direction/np.linalg.norm(direction)
---------------------------------------------------------------------------
RecursionError                            Traceback (most recent call last)
<ipython-input-6-5bf0731ec4f4> in <module>
----> 1 image = experiment.generate_image()
      2 fig = plt.figure(figsize=(4,3), dpi=180)
      3 viz = ps.Visualizer(experiment, diffraction_rings="auto", log_scale=True)
      4 viz.imshow(image)

~/Toolkit/pysingfel/pysingfel/experiment/base.py in generate_image(self, return_orientation)
     36             return self.det.assemble_image_stack(img_stack), orientation
     37         else:
---> 38             img_stack = self.generate_image_stack()
     39             return self.det.assemble_image_stack(img_stack)
     40 

~/Toolkit/pysingfel/pysingfel/experiment/base.py in generate_image_stack(self, return_photons, return_intensities, return_orientation, always_tuple)
     62         beam_spectrum = self.beam.generate_new_state()
     63 
---> 64         sample_state = self.generate_new_sample_state()
     65 
     66         intensities_stack = 0.

~/Toolkit/pysingfel/pysingfel/experiment/fxs.py in generate_new_sample_state(self)
     37         particle_dict = {self.particles[i]: n_particles for i, n_particles in enumerate(particle_distribution)}
     38         print("particle_dict =", particle_dict)
---> 39         part_states, part_positions = distribute_particles(particle_dict, self.beam.get_focus()[0]/2, jet_radius=1e-4, gamma=0.5)
     40         part_states = np.array(part_states)
     41         for i in range(self.n_particle_kinds):

~/Toolkit/pysingfel/pysingfel/particlePlacement.py in distribute_particles(particles, beam_focus_radius, jet_radius, gamma)
     65     checkList = [overlap[i][j] for i in range(N) for j in range(N) if j > i]
     66     if any(item == True for item in checkList):
---> 67         distribute_particles(particles, beam_focus_radius, jet_radius, gamma)
     68     return state, coords
     69 

... last 1 frames repeated, from the frame below ...

~/Toolkit/pysingfel/pysingfel/particlePlacement.py in distribute_particles(particles, beam_focus_radius, jet_radius, gamma)
     65     checkList = [overlap[i][j] for i in range(N) for j in range(N) if j > i]
     66     if any(item == True for item in checkList):
---> 67         distribute_particles(particles, beam_focus_radius, jet_radius, gamma)
     68     return state, coords
     69 

RecursionError: maximum recursion depth exceeded while calling a Python object
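
Not a fix, but a minimal sketch of how the recursion in distribute_particles could become an iterative retry loop instead; note the current recursive call also discards its return value, so the overlapping state/coords are what actually gets returned. place_once and has_overlap below are hypothetical helpers standing in for the existing placement and overlap-check logic:

def distribute_particles(particles, beam_focus_radius, jet_radius, gamma,
                         max_attempts=1000):
    """Retry particle placement until there is no overlap, without recursing."""
    for _ in range(max_attempts):
        state, coords = place_once(particles, beam_focus_radius,
                                   jet_radius, gamma)   # hypothetical helper
        if not has_overlap(coords):                     # hypothetical helper
            return state, coords
    raise RuntimeError(
        "Could not place particles without overlap after %d attempts"
        % max_attempts)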

Sign convention for detector distance

Has the convention of what’s considered a positive versus negative detector distance changed? A PnCCD detector initialized from the current example .geom file (amo86615/PNCCD::CalibV1/Camp.0:pnCCD.1/geometry/0-end.data) has a negative distance:

detector = ps.PnccdDetector(geom = geom_file, beam = beam)
print(f"Detector distance is: {detector.distance:.2f} m")
Detector distance is: -0.58 m

Looking through example scripts, it looks like this detector distance was used previously. However, when computing the intensity field using detector.get_intensity_field, it seems that a positive detector distance yields the correct diffraction pattern (centrosymmetric, with intensity decreasing at high q) while the negative distance does not. @fredericpoitevin mentioned that orientation conventions in psana might have changed recently (thanks for the tip!). I find similar results for the PnCCDDetector class and a non-psana style SimpleSquareDetector, but I'm not sure if psana could affect conventions elsewhere in the code.

[Image attached to the original issue: intensity field computed with positive vs. negative detector distance]

On the other hand, it seems like this is reversed for the calculated Compton scattering (calculate_compton in diffraction.py). In this case, a negative detector distance results in decreasing intensity as a function of q:

[Image attached to the original issue: Compton scattering as a function of q]

In addition, it looks like computing the intensity field using either calculate_molecular_form_factor_square in diffraction.py or detector.get_intensity_field (which calls the gpu code) yields different results, as shown below. In this case the patterns were calculated with a positive detector distance.

[Image attached to the original issue: calculate_molecular_form_factor_square vs. detector.get_intensity_field, positive detector distance]

For completeness, here are the results using a negative detector distance:

[Image attached to the original issue: the same comparison with a negative detector distance]

Allow moving the detector

Create a function that changes the PnCCD detector distance.
Update pixel_position accordingly and propagate the change (initialize_pixels_with_beam).
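
A minimal sketch of the kind of helper this could be, assuming pixel_position stores the coordinate along the beam in its last component and that initialize_pixels_with_beam recomputes the derived quantities from it (both assumptions about the current internals, as is the keyword signature used below):

def move_detector(detector, beam, new_distance):
    """Shift the detector along the beam axis and propagate the change."""
    # Assumption: pixel_position[..., 2] is the coordinate along the beam.
    detector.pixel_position[..., 2] += new_distance - detector.distance
    detector.distance = new_distance
    # Recompute reciprocal-space positions, solid angles, polarization, etc.
    detector.initialize_pixels_with_beam(beam=beam)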

experiment.generate_image versus detector.get_photon

For the same beam, detector, and particle in the same orientation, I expected experiment.generate_image and detector.get_photons to yield the same result up to Poisson noise/quantization error. However, they appear to be transposes of each other, as shown in the attached image. I think experiment.generate_image rotates the particle while detector.get_photons rotates the diffraction volume, but I don't understand why this would swap the axes of the resulting image.
[Image attached to the original issue comparing the two approaches]

Remove hdf5 file

There is a 41 MB file at examples/scripts/imStack.hdf5, which should be removed and purged from the repository history.

Verify atom types

In the form factor tables, the variant O2p is used as the key for both O and OH.
Since the lookup is currently a chain of conditional statements, only the first match is ever used, which might hide a bug.
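
A minimal sketch of one way to make such duplicates visible: declare the mapping as data (in whichever direction the real tables use it) with a check for repeated keys, instead of a chain of conditionals where only the first match runs. The entries below are illustrative only:

# Illustrative pairs; in the current code this knowledge lives in if/elif
# branches, where a repeated key silently uses only the first branch.
_VARIANT_PAIRS = [
    ("O2p", "O"),
    # ("O2p", "OH"),  # the duplicate flagged here would be rejected by the
    #                 # check below instead of being silently ignored
]

VARIANT_MAP = {}
for key, value in _VARIANT_PAIRS:
    if key in VARIANT_MAP:
        raise ValueError("Duplicate form-factor variant key: %r" % key)
    VARIANT_MAP[key] = value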

Do we have a source for these coefficients?

Missing geometry file?

It seems that unit tests in test_diffraction.py and for the detector class are failing due to the missing geometry file: ‘PNCCD::CalibV1/Camp.0:pnCCD.1/geometry/0-end.data'. Tests that rely on a user-defined detector pass.
