
deepskies / deeplenstronomy

26 stars · 9 watchers · 7 forks · 42.23 MB

A pipeline for versatile strong lens sample simulations

License: MIT License

Makefile 0.05% Jupyter Notebook 48.39% Python 2.30% TeX 0.16% Batchfile 0.04% HTML 49.05%
simulation strong-lensing gravitational-lensing

deeplenstronomy's People

Contributors

aerabelais, bnord, jasonpoh, joaocaldeira, joshualin24, michael7198, musoke, rmorgan10, sibirrer, voetberg


deeplenstronomy's Issues

Check for existence of metadata

As a precaution, it would be good to have a check in place that verifies everything the user asked for is present in the final metadata.
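A sketch of what such a check could look like, comparing the set of requested parameter names against the columns of the final metadata (all names here are illustrative, not the actual deeplenstronomy keys):

```python
# Hypothetical sets standing in for the user's requested parameters and the
# columns that actually made it into the final metadata.
requested = {"PLANE_1-OBJECT_1-REDSHIFT-g", "PLANE_2-OBJECT_1-REDSHIFT-g"}
metadata_columns = {"PLANE_1-OBJECT_1-REDSHIFT-g"}

# Anything requested but absent from the metadata triggers a warning.
missing = requested - metadata_columns
if missing:
    print(f"WARNING: requested parameters missing from metadata: {sorted(missing)}")
```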

missing update_param_dist

@bnord @AleksCipri Moving the issue here

I did install everything successfully but...:
AttributeError: 'Dataset' object has no attribute 'update_param_dist'

Can you run dir(dataset) and check the attributes?

Remove Poisson Resampling of Image Background

As pointed out by Erik Zaborowski and Alex Drlica-Wagner, applying a Poisson resampling to image backgrounds for dataset diversity is not appropriate if the background images have already been background-subtracted.

To remedy this behavior, Erik suggests removing the Poisson resampling. To ensure dataset diversity, we should also warn the user if the distribution of background images utilized could prove troublesome.

Consider a function placed in deeplenstronomy.py:

import numpy as np

def check_background_indices(idx_list: list):
    """Issue a warning if an element occurs too frequently in the list.

    Count how often each background index appears, and find the indices whose
    counts deviate from the mean count by more than one standard deviation.
    If more than 1/3 of the unique indices deviate, print a warning.

    Args:
        idx_list (list): List of background image indices to use.
    """
    values, counts = np.unique(idx_list, return_counts=True)
    mean_count, std_count = np.mean(counts), np.std(counts)
    num_deviating = np.sum(np.abs(counts - mean_count) > std_count)
    if num_deviating > len(values) / 3:
        print("WARNING: Non-uniform distribution of background images detected, check map.txt file.")

If this function is called on the output of utils.organize_image_backgrounds, then we should be safe to remove the Poisson resampling.

Updated documentation is needed

@bnord @sibirrer I've started writing documentation for this implementation of deeplenstronomy and I'd like to revamp the docs directory. Is that cool or are there certain things you want preserved?

fits image output

It would be neat to have FITS as an output image format option, specifically matching the DES Strong Lensing Cutout format.

Improving documentation (JOSS review)

These are some minor suggestions on improving the documentation in the README. openjournals/joss-reviews#2854

  • (optional) It'd be great if the README could point directly to the docs page, where the user can read the per-module docstrings. The "Generating Datasets" section has the parameter description for make_dataset() in the markdown, but I wouldn't have known to look for it there.
  • "Getting Started" notebook: link for "Creating deeplenstronomy Configuration Files" is broken.
  • Currently, all the tests are in test_all.py and it's a bit hard to tell exactly what functionality of what module is being tested. A better organization would be to break it down into multiple files named after the modules being tested. Also, I strongly recommend setting up CI via GitHub Actions!

HTML Notebooks

Some of the notebooks take a while to load on GitHub. It would be good to use jupyter to convert the notebooks to HTML, use those HTML pages in the documentation, and then link to the full notebooks.
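The conversion step can be done with nbconvert's Python API. A minimal sketch, assuming nbformat and nbconvert are installed; the in-memory notebook below stands in for reading a real notebook file from the repository:

```python
import nbformat
from nbconvert import HTMLExporter

# Build a tiny notebook in memory; in practice this would be something like
# nbformat.read("DeepLenstronomyDemo.ipynb", as_version=4) (path illustrative).
nb = nbformat.v4.new_notebook()
nb.cells.append(nbformat.v4.new_markdown_cell("# Demo"))

# Convert the notebook to a standalone HTML page that the docs can serve.
body, _resources = HTMLExporter().from_notebook_node(nb)
```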

specific lenstronomy version as requirement

I would make a specific lenstronomy version a requirement of the installation (e.g. the latest), or keep the current requirement pinned to the latest version. That way the deployment is stable, and people are not approaching us with error messages caused by outdated lenstronomy versions.
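For example, a pinned requirements file (the exact version number here is illustrative, not a recommendation):

```
# requirements.txt -- pin to the lenstronomy release deeplenstronomy is tested against
lenstronomy==1.10.4
```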

Cadence Flexibility

To better match astronomical data, it would be nice to have

  • ability to specify multiple cadences to be utilized without requiring an additional configuration
  • different nites for each band

Possible bug in the make_dataset() Function

When I ran view_image(dataset.CONFIGURATION_1_images[2][1]) as instructed in DeepLenstronomyDemo.ipynb, it output one object instead of the four objects specified in the demo.yaml file. I've tried different image indices and bands, but the problem persists:

[Screenshot, 2021-10-20: output image showing a single object]

as opposed to

[Screenshot, 2021-10-20: expected image showing four objects]

Possible solutions:

  • Debug make_dataset() function in deeplenstronomy.py
  • Use earlier versions of deeplenstronomy

BYO Functionalities

To make deeplenstronomy lightweight, the user will be allowed to provide their own input files to specify

  • probability distributions (of arbitrary dimension)
  • time-series SEDs
  • photometric bandpasses
  • images to use as backgrounds

Each of these will need a default file format and options in the main configuration file to specify them.
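As a sketch of the probability-distribution case (the file format and names here are hypothetical, not a final specification), a user-supplied file could simply contain empirical samples, and drawing from the distribution reduces to resampling rows:

```python
import numpy as np

# Hypothetical user-supplied empirical distribution: one row per sample,
# one column per parameter (stand-in for np.loadtxt("my_distribution.txt")).
samples = np.array([[0.5, 21.3],
                    [0.7, 20.9],
                    [1.1, 22.0]])

# Sampling rows with replacement yields joint draws of arbitrary dimension.
rng = np.random.default_rng(0)
draws = samples[rng.integers(0, len(samples), size=5)]
```

This keeps deeplenstronomy lightweight: the package never needs to know the functional form of the distribution, only how to resample the user's file.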

New profiles in lenstronomy

Lenstronomy has some new profiles, so they need to be added to the check.py functionality.

Even better, check.py should be updated to find all lenstronomy modules programmatically.
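The programmatic discovery could be a one-liner with pkgutil. The sketch below demonstrates the technique on a stdlib package that is certainly installed; for check.py, the target would be the relevant lenstronomy subpackage instead:

```python
import pkgutil
import email  # stand-in package; substitute the lenstronomy subpackage to scan

# Enumerate every module in the package without hard-coding names, so newly
# added profiles are picked up automatically.
module_names = [info.name for info in pkgutil.iter_modules(email.__path__)]
```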

What is the full list of parameters we need associated with a lensing system?

    # Lens properties
    Velocity_Dispersion = 250.

    # Light Model
    Einstein_Radius = 1.
    Apparent_Magnitude_Lens = [21., 21., 21., 21., 21.]
    Absolute_Magnitude_Lens = [21., 21., 21., 21., 21.]
    Apparent_Magnitude_Source = [21., 21., 21., 21., 21.]
    Absolute_Magnitude_Source = [21., 21., 21., 21., 21.]
    Position_angle_Lens = 0.0
    Position_angle_Source = 0.0
    Halflight_Radius_Lens = 1.0
    Halflight_Radius_Source = 1.0
    magnification = 2.0

    Source_position_x = 0.0
    Source_position_y = 0.0
    Source_position_ra = 0.0
    Source_position_dec = 0.0
    Lens_position_x = 0.0
    Lens_position_y = 0.0
    Lens_position_ra = 0.0
    Lens_position_dec = 0.0

    # Mass Model
    Halo_mass_Lens = 10.
    Halo_mass_Lens_slope = 1.0
    Halo_mass_Lens_core = 1.0 # e.g., NFW
    Stellar_mass_Lens = 10.
    Shear_External = 1.

    # Distances
    Redshift_Lens = 0.5
    Redshift_Source = 1.0

    Signal_to_Noise_model = 5.0

    Cross_section = 1.0
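One way to pin this list down as it stabilizes would be a typed container. The grouping and types below are a sketch based on the draft above (subset shown), not a final API:

```python
from dataclasses import dataclass, field

@dataclass
class LensSystemParams:
    """Draft container for per-system lensing parameters (subset shown)."""
    velocity_dispersion: float = 250.0   # km/s
    einstein_radius: float = 1.0         # arcsec
    apparent_magnitude_lens: list = field(default_factory=lambda: [21.0] * 5)
    redshift_lens: float = 0.5
    redshift_source: float = 1.0
    signal_to_noise: float = 5.0
```

A dataclass gives defaults, per-band lists, and a single place to document units as the list grows.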

Different redshift values for different dictionary elements in the same plane

These two dictionary elements give me different redshifts:

zs_plane = dataset.CONFIGURATION_1_metadata['PLANE_2-REDSHIFT-g']
zs_object = dataset.CONFIGURATION_1_metadata['PLANE_2-OBJECT_1-REDSHIFT-g']

They have nearly the same distributions (histograms), but there is no obvious element-by-element correlation.

I couldn't find specific documentation on "planes" and how they connect to object attributes. Could you share some guidance here?
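The observation can be quantified directly; the arrays below are illustrative stand-ins for the two metadata columns, showing values that agree as distributions but not element by element:

```python
import numpy as np

# Stand-ins for the PLANE-level and OBJECT-level redshift columns above.
zs_plane = np.array([0.51, 0.72, 0.64, 0.90])
zs_object = np.array([0.72, 0.51, 0.90, 0.64])

# Same values overall (matching histograms)...
same_distribution = np.allclose(np.sort(zs_plane), np.sort(zs_object))
# ...but no row-by-row correspondence.
elementwise_match = np.allclose(zs_plane, zs_object)
```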

LSST survey broken?

The LSST survey seems to be broken in version 0.0.1.3

Steps to reproduce:

dataset = dl.make_dataset('data/demo.yaml', survey='lsst', store_in_memory=False, store_sample=True, verbose=True)

This results in an error,

SURVEY.PARAMETERS.num_exposures.DISTRIBUTION.PARAMETERS must be a dictionary or None

Fatal error(s) detected in config file. Please edit and rerun.

---------------------------------------------------------------------------
ConfigFileError                           Traceback (most recent call last)
~/anaconda3/envs/deeplens/lib/python3.7/site-packages/deeplenstronomy/check.py in _run_checks(full_dict, config_dict)
    959     try:
--> 960         check_runner = AllChecks(full_dict, config_dict)
    961     except ConfigFileError:

~/anaconda3/envs/deeplens/lib/python3.7/site-packages/deeplenstronomy/check.py in __init__(self, full_dict, config_dict)
     83             _kind_output(total_errs)
---> 84             raise ConfigFileError
     85 

ConfigFileError: 

During handling of the above exception, another exception occurred:

ConfigFileError                           Traceback (most recent call last)
<ipython-input-6-2a8fd3d7f5db> in <module>
----> 1 dataset = dl.make_dataset('data/demo.yaml', survey='lsst', store_in_memory=False, store_sample=True, verbose=True)

~/anaconda3/envs/deeplens/lib/python3.7/site-packages/deeplenstronomy/deeplenstronomy.py in make_dataset(config, dataset, save_to_disk, store_in_memory, verbose, store_sample, image_file_format, survey, return_planes, skip_image_generation, solve_lens_equation)
    313         if not _check_survey(survey):
    314             raise RuntimeError("survey={0} is not a valid survey.".format(survey))
--> 315         parser = Parser(config, survey=survey)
    316         dataset.config_dict = parser.config_dict
    317 

~/anaconda3/envs/deeplens/lib/python3.7/site-packages/deeplenstronomy/input_reader.py in __init__(self, config, survey)
     49 
     50         # Check for user errors in inputs
---> 51         self.check()
     52 
     53         return

~/anaconda3/envs/deeplens/lib/python3.7/site-packages/deeplenstronomy/input_reader.py in check(self)
    173         Check configuration file for possible user errors.
    174         """
--> 175         big_check._run_checks(self.full_dict, self.config_dict)
    176 
    177         return

~/anaconda3/envs/deeplens/lib/python3.7/site-packages/deeplenstronomy/check.py in _run_checks(full_dict, config_dict)
    961     except ConfigFileError:
    962         print("\nFatal error(s) detected in config file. Please edit and rerun.")
--> 963         raise ConfigFileError
    964 
    965     return

ConfigFileError: 

Backgrounds with timeseries

A good feature to have would be BACKGROUNDS functionality with time series of images. In principle, one could use a single background image and realize the noise as a time series, but it would be cool to have the ability to load single-epoch images into deeplenstronomy.

Group and Shuffle

Often, multiple deeplenstronomy CONFIGURATIONs might get mapped to the same class in an ML problem. Perhaps a feature like this:

    GROUP_1: 
        NAME: InsertNameHere
        MEMBERS: ['CONFIGURATION_1', 'CONFIGURATION_2']
        SHUFFLE: True
    GROUP_2: 
        NAME: InsertOtherNameHere
        MEMBERS: ['CONFIGURATION_3', 'CONFIGURATION_4']
        SHUFFLE: True

at the main level of the configuration file could trigger automatic grouping and shuffling of the specified configurations after the simulation finishes.
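The post-simulation behavior could look roughly like this (the dictionary contents and the helper function are illustrative, not deeplenstronomy API):

```python
import random

# Stand-ins for per-configuration simulation outputs.
outputs = {"CONFIGURATION_1": ["img_a", "img_b"],
           "CONFIGURATION_2": ["img_c", "img_d"]}

def group_and_shuffle(outputs, members, seed=0):
    """Concatenate the member configurations' outputs and shuffle them."""
    group = [item for m in members for item in outputs[m]]
    random.Random(seed).shuffle(group)
    return group

group = group_and_shuffle(outputs, ["CONFIGURATION_1", "CONFIGURATION_2"])
```

A fixed seed (or an option for one) would keep the shuffle reproducible across reruns.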

Coverage of the test suite (JOSS Review)

The test suite has two major issues:

  1. There's no logical separation to check which module is being tested.
  2. The test suite seems very small; I am curious whether it provides enough coverage to check everything. This is not a necessary requirement for the JOSS review, but it would be great to understand how much of the code is tested.

openjournals/joss-reviews#2854
