
sbi_lens's People

Contributors

aboucaud, dlanzieri, eiffl, justinezgh


Forkers

yomori

sbi_lens's Issues

Add Unit Tests

Since @Justinezgh added the code, I think it would be a good opportunity for you, @dlanzieri, to practice writing test cases for the function classes.

  1. Start getting familiar with pytest (if you are not already) and draft a few test cases in a PR. We can guide you along the way.
  2. Set up a GitHub workflow to automate running the tests on any new PR.
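As a minimal sketch of what such a test file could look like (the tested function here is a stand-in, not an actual sbi_lens API), pytest discovers any `test_*` function and runs its assertions:

```python
# test_shift.py -- a minimal pytest sketch; shift_spectrum is a hypothetical
# placeholder for the kind of function the package exposes.
import numpy as np


def shift_spectrum(cl, shift):
    # placeholder implementation: log-transform of a spectrum
    return np.log1p(cl / shift**2)


def test_shift_spectrum_is_monotonic():
    # a monotonic input spectrum should stay monotonic after the transform
    cl = np.linspace(0.0, 1.0, 10)
    out = shift_spectrum(cl, shift=1.0)
    assert np.all(np.diff(out) > 0)


def test_shift_spectrum_zero_input():
    # a zero spectrum maps to zero
    assert shift_spectrum(np.array([0.0]), shift=2.0)[0] == 0.0
```

Running `pytest` in the repository root would then collect and execute these cases; the GitHub workflow in step 2 only needs to call the same command.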

Add pre-sampled reference posteriors

@dlanzieri we have functions to compute reference posteriors, but running MCMCs can take very long. Do we want to pre-run the MCMCs and save the chains in the data folder for specific cases, like the ones corresponding to LSST Year 1 or Year 10, for instance? They can then be accessed through the functions if we use the following names:

"posterior_power_spectrum__{}N_{}ms_{}gpa_{}se.npy".format(N, map_size, gals_per_arcmin2, sigma_e)
"m_data__{}N_{}ms_{}gpa_{}se.npy".format(N, map_size, gals_per_arcmin2, sigma_e)
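For concreteness, here is what the naming scheme produces for one set of illustrative parameter values (LSST Year 10-like; the actual choices are up for discussion):

```python
# Illustrative parameter values only -- not a decided convention
N = 256                # pixels per side
map_size = 10          # degrees
gals_per_arcmin2 = 27  # number density
sigma_e = 0.27         # shape noise

fname = "posterior_power_spectrum__{}N_{}ms_{}gpa_{}se.npy".format(
    N, map_size, gals_per_arcmin2, sigma_e)
print(fname)  # posterior_power_spectrum__256N_10ms_27gpa_0.27se.npy
```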

MCMC Full Field

Hi @EiffL, I tried to run the full-field MCMC as you suggested (sequentially) for 35 hours on Azure, but it did not converge. Here are the results:
[image: MCMC results]

This is the setup I used to run the mcmc:

import jax
from sbi_lens.simulator.utils import get_reference_sample_posterior_full_field

samples_ff_store = get_reference_sample_posterior_full_field(
    run_mcmc=True,
    model=model,      # defined earlier in my setup
    m_data=m_data,    # observed map, defined earlier in my setup
    num_results=1000,
    num_warmup=200,
    nb_loop=10,
    num_chains=1,
    chain_method='parallel',
    max_tree_depth=6,
    step_size=1e-2,
    key=jax.random.PRNGKey(0)
)

What do you think? Should I wait a bit longer or is there a way to speed it up?
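One way to quantify "did not converge" before deciding whether to run longer is a split-R̂ check on the saved chain. This is a standard diagnostic, not an sbi_lens function; a minimal NumPy sketch (chain shape is an assumption):

```python
import numpy as np


def split_rhat(chain):
    """Split-Rhat for a single 1-D chain of shape (n_samples,).

    Splits the chain in half and compares within-half and between-half
    variance; values well above ~1.01 indicate non-convergence.
    """
    n = len(chain) // 2
    halves = np.stack([chain[:n], chain[n:2 * n]])      # shape (2, n)
    within = halves.var(axis=1, ddof=1).mean()          # W
    between = n * halves.mean(axis=1).var(ddof=1)       # B
    var_est = (n - 1) / n * within + between / n
    return np.sqrt(var_est / within)


# A well-mixed chain gives Rhat close to 1; a drifting one gives Rhat >> 1
rng = np.random.default_rng(0)
print(split_rhat(rng.normal(size=4000)))      # near 1
print(split_rhat(np.linspace(0.0, 10.0, 4000)))  # well above 1
```

If the split-R̂ of each parameter's chain is still far from 1, waiting longer with the same settings may not help; increasing `max_tree_depth` or tuning `step_size` might.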

Choose a good name for the project

There are a few kinds of GitHub projects: those meant to be imported as a library into other codes (numpy, tensorflow, jax) and standalone ones that serve a single purpose (produce data, reproduce an analysis, run a benchmark).
Depending on the category, the name of the project matters more or less.

Then there are a few considerations, Python-wise:

  • short names should be preferred
  • the name should not already exist as a package on PyPI

You can also have a project name that differs from the Python package name (e.g. "scikit-learn" vs. sklearn).

In your case, you might want the name to convey the purpose of the package, or give it a name of your choosing.
Either way, we should start discussing it here.

Fix current tests

Now that the CI is working and sbi_lens is pip-installable, we need to fix the existing tests.

Start adding features

Hello @Justinezgh !
We can finally start adding pieces of code to the repo.
As discussed, we should each work on our own branch and add features via PRs.
As a good starting point we could add:

  • The Python scripts defining the physical models of the simulator (e.g. simulator)
  • All the Python scripts required to generate/load the dataset (e.g. gen_dataset)
  • The code defining the NFs used (e.g. normflow)
  • All the mathematical functions needed for other tasks (e.g. bijectors)

We should also make sure we always document the Python code! :)
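Putting the modules listed above together, a possible package layout could look like this (the directory structure is a suggestion, only the module names come from the list):

```text
sbi_lens/
├── simulator/     # physical models of the simulator
├── gen_dataset/   # scripts to generate/load the dataset
├── normflow/      # normalizing-flow definitions
├── bijectors/     # mathematical helper functions
└── tests/         # pytest test cases
```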

Package Structure

Hello @EiffL @Justinezgh @aboucaud ! This first issue is to keep track of development needed to build the SBI package.
In this PR, we can find the first prototype for the general structure. There's a lot more to do!

Task

If you think of something else we should do, please add it to the TODO list in this issue! :-)

Shift-Powermap/Shift-Cls : differences in final angular power spectrum

This issue aims to discuss the possible reasons why the theoretical angular power spectrum does not match the one computed from the log-normal simulated maps when we sample correlated maps.
After conducting several tests, I am starting to think that this is not related to how we sample the correlated maps, but rather to how we generate the log-normal power spectrum in this new implementation.

To illustrate this, I have computed the power spectrum for a single redshift bin (the 5th in the following plot) using two different procedures.
[image: power spectra for the 5th redshift bin computed with the two procedures]
In the first case, I generated the power map from the theoretical angular power spectrum first and then shifted the map (map 1). In the second case (map 2), I first shifted the angular power spectrum and then generated the power map.

Below are a few lines of code that can help to understand the differences.

import jax
import jax.numpy as jnp
import jax_cosmo as jc

def make_power_map(pk_fn, N, map_size, zero_freq_val=0.0, model_type=None):
    k = 2 * jnp.pi * jnp.fft.fftfreq(N, d=map_size / N)
    kcoords = jnp.meshgrid(k, k)
    k = jnp.sqrt(kcoords[0]**2 + kcoords[1]**2)
    ps_map = pk_fn(k)
    ps_map = ps_map.at[0, 0].set(zero_freq_val)
    if model_type == 'no_correlation':
        power_map = ps_map * (N / map_size)**2
    else:
        power_map = ps_map * (N / map_size)
    return power_map

def make_lognormal_power_map(power_map, shift, zero_freq_val=0.0):
    power_spectrum_for_lognorm = jnp.fft.ifft2(power_map).real
    power_spectrum_for_lognorm = jnp.log(1 +
                                         power_spectrum_for_lognorm / shift**2)
    power_spectrum_for_lognorm = jnp.abs(
        jnp.fft.fft2(power_spectrum_for_lognorm))
    power_spectrum_for_lognorm = power_spectrum_for_lognorm.at[0, 0].set(0.)
    return power_spectrum_for_lognorm

def make_lognormal_cl(cl, shift):
    xsi = jnp.fft.ifft(cl)
    xsi_shift = jnp.log(1 + xsi / shift**2)
    cl_shifted = jnp.fft.fft(xsi_shift).real
    return cl_shifted

N = 128
map_size = 5
pix_area = (map_size * 60 / N)**2     # arcmin^2
map_size = map_size / 180 * jnp.pi    # radians
omega_c = 0.3
sigma_8 = 0.8
cosmo = jc.Planck15(Omega_c=omega_c, sigma8=sigma_8)
ell_tab = 2 * jnp.pi * abs(jnp.fft.fftfreq(N, d=map_size / N))
tracer = jc.probes.WeakLensing([nz_shear[-1]])  # nz_shear and shift_fn are defined elsewhere in my setup
shift = shift_fn(cosmo.Omega_m, sigma_8)
cell_tab = jc.angular_cl.angular_cl(cosmo, ell_tab, [tracer])[0]
cell_tab_shifted = make_lognormal_cl(cell_tab, shift)

# map 1: build the power map from the theoretical Cl, then shift the map
P_1 = lambda k: jc.scipy.interpolate.interp(k.flatten(), ell_tab, cell_tab).reshape(k.shape)
power_map_1 = make_power_map(P_1, N, map_size, model_type='no_correlation')
power_map_1 = make_lognormal_power_map(power_map_1, shift)

# map 2: shift the Cl first, then build the power map
P_2 = lambda k: jc.scipy.interpolate.interp(k.flatten(), ell_tab, cell_tab_shifted).reshape(k.shape)
power_map_2 = make_power_map(P_2, N, map_size, model_type='no_correlation')

# z is an (N, N) white-noise Gaussian field
z = jax.random.normal(jax.random.PRNGKey(0), (N, N))
field_1 = jnp.fft.ifft2(jnp.fft.fft2(z) * jnp.sqrt(power_map_1)).real
field_1 = shift * (jnp.exp(field_1 - jnp.var(field_1) / 2) - 1)
field_2 = jnp.fft.ifft2(jnp.fft.fft2(z) * jnp.sqrt(power_map_2)).real
field_2 = shift * (jnp.exp(field_2 - jnp.var(field_2) / 2) - 1)

Below are the results computed from the two maps, averaged over 20 realizations:
[image: results]

If you remember, @EiffL, when we compared the two power maps from the two methods, we noted some differences. I am starting to think these differences are what causes the bias in the results.
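To make the ordering question concrete outside the full pipeline, here is a toy NumPy analogue of the two paths (all numbers are made up; it only shows that the two orderings are not mathematically equivalent):

```python
import numpy as np

N, shift = 64, 1.0
k1 = np.abs(np.fft.fftfreq(N))
cl = np.exp(-(k1 / 0.05) ** 2)  # toy 1-D spectrum
order = np.argsort(k1)

kx, ky = np.meshgrid(k1, k1)
k2 = np.hypot(kx, ky)  # radial |k| on the 2-D grid

# path A (like map 1): interpolate Cl onto the 2-D grid, then log-transform
# in map space, as make_lognormal_power_map does
pmap = np.interp(k2.ravel(), k1[order], cl[order]).reshape(N, N)
xi2d = np.fft.ifft2(pmap).real
pmap_A = np.abs(np.fft.fft2(np.log1p(xi2d / shift**2)))

# path B (like map 2): log-transform the 1-D Cl first, as make_lognormal_cl
# does, then interpolate onto the 2-D grid
xi1d = np.fft.ifft(cl).real
cl_shift = np.fft.fft(np.log1p(xi1d / shift**2)).real
pmap_B = np.interp(k2.ravel(), k1[order], cl_shift[order]).reshape(N, N)

# the two power maps differ: the 2-D ifft mixes modes differently
# from the 1-D ifft, so the log-transform does not commute with the
# radial interpolation
print(np.max(np.abs(pmap_A - pmap_B)))
```

This does not tell us which normalization is the right one, but it supports the idea that the discrepancy can come from where the log-transform is applied rather than from the correlated sampling itself.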

LSST year 10 setting

Redshift

  • 5 redshift bins
  • equal number of galaxies
  • redshift distribution:
    [image: redshift distribution formula]
    with (alpha, z0) = (0.11, 0.68)

Number density

  • 27 / arcmin^2

Shape Noise

  • 0.27

Map size

  • 10 degrees

Number of pixels

  • 256
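For reference, the settings above collected in one place (the dictionary and its keys are a convenience sketch, not an sbi_lens API):

```python
# LSST Year 10 settings from the list above; the dict itself is hypothetical
lsst_y10 = {
    "n_bins": 5,             # redshift bins, equal number of galaxies each
    "alpha": 0.11,           # redshift distribution parameters
    "z0": 0.68,
    "gals_per_arcmin2": 27,  # number density
    "sigma_e": 0.27,         # shape noise
    "map_size": 10,          # degrees
    "N": 256,                # pixels
}

# e.g. total number of galaxies over the full map
n_gal = lsst_y10["gals_per_arcmin2"] * lsst_y10["map_size"]**2 * 3600
print(n_gal)
```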
