differentiableuniverseinitiative / sbi_lens
A Python package for Weak Lensing Implicit Inference with Differentiable Simulator
License: MIT License
Hi @EiffL, I tried to run the full-field MCMC as you suggested (sequentially) for 35 hours on Azure, but it did not converge. Here are the results:
This is the setup I used to run the MCMC:
```python
import jax
from sbi_lens.simulator.utils import get_reference_sample_posterior_full_field

# model and m_data are defined earlier in the notebook
samples_ff_store = get_reference_sample_posterior_full_field(
    run_mcmc=True,
    model=model,
    m_data=m_data,
    num_results=1000,
    num_warmup=200,
    nb_loop=10,
    num_chains=1,
    chain_method='parallel',
    max_tree_depth=6,
    step_size=1e-2,
    key=jax.random.PRNGKey(0),
)
```
What do you think? Should I wait a bit longer or is there a way to speed it up?
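Before deciding to wait longer, it may help to quantify mixing on the saved chains. A minimal Gelman-Rubin (R-hat) check in plain NumPy (a sketch, not part of sbi_lens) could look like:

```python
import numpy as np

def gelman_rubin(chains):
    """R-hat for chains of shape (n_chains, n_samples), single parameter."""
    m, n = chains.shape
    within = chains.var(axis=1, ddof=1).mean()     # W: mean within-chain variance
    between = n * chains.mean(axis=1).var(ddof=1)  # B: between-chain variance
    var_hat = (n - 1) / n * within + between / n   # pooled variance estimate
    return np.sqrt(var_hat / within)

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 1000))         # well-mixed chains: R-hat close to 1
stuck = mixed + 3 * np.arange(4)[:, None]  # chains stuck at different modes: R-hat >> 1
```

Values much above ~1.01 usually mean the sampler has not converged and needs more samples (or different settings), not just more patience.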
Hello @Justinezgh!
We can finally start adding pieces of code to the repo.
As discussed, we should each work on our own branch and add features via PRs.
As a good starting point we could add:
We should also make sure we always document the Python code! :)
@dlanzieri @Justinezgh it is time to start drafting a README page.
There are a few kinds of GitHub projects: the ones that aim at being imported as a library in other codes (numpy, tensorflow, jax) and the standalone ones that have a single purpose (produce data, reproduce an analysis, run a benchmark).
Depending on the category, the name of the project is more or less important.
Then there are a few considerations, Python-wise.
You can also have a project name that is different from the Python package name (e.g. "scikit-learn" vs. sklearn).
In your case, you might want the name to convey the purpose of the package, or give it the name of your choosing.
But at least we should start discussing it here.
@dlanzieri we have functions to compute reference posteriors, but running MCMCs can be very long. Do we want to pre-run the MCMCs and save the chains in the data folder for specific cases, like the ones corresponding to LSST Year 1 or Year 10 for instance? They can be accessed through the functions if we use the following names:
`"posterior_power_spectrum__{}N_{}ms_{}gpa_{}se.npy".format(N, map_size, gals_per_arcmin2, sigma_e)`
`"m_data__{}N_{}ms_{}gpa_{}se.npy".format(N, map_size, gals_per_arcmin2, sigma_e)`
Basic dependencies like numpy or jaxlib shouldn't be pinned to exact versions in the install requirements, because that easily breaks entire environments. Do we really need this?
Trying to install this package just downgraded the numpy version on my system...
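A common compromise (sketch only; the actual packaging files in the repo may look different, and the version floor here is illustrative) is to declare a minimum version rather than an exact one, e.g. in a setup.cfg:

```ini
[options]
install_requires =
    numpy>=1.20
    jaxlib
```

This lets pip resolve against what's already in the environment instead of forcing a downgrade.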
Hello @EiffL @Justinezgh @aboucaud! This first issue is to keep track of the development needed to build the SBI package.
In this PR, we can find the first prototype for the general structure. There's a lot more to do!
If you think of something else we should do, please add it to the TODO list in this issue! :-)
Since @Justinezgh added the code, I think it could be a good way for you @dlanzieri to practice writing test cases for the function classes.
Now that the CI is working and sbi_lens is pip-installable, we need to fix the existing tests.
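For reference, tests in this style are usually plain pytest functions; a minimal sketch (the simulator call here is a hypothetical stand-in, not the real sbi_lens API) would be:

```python
import numpy as np

def simulate_convergence_map(N, seed=0):
    """Hypothetical stand-in for a sbi_lens simulator call."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(N, N))

def test_map_is_finite_and_well_shaped():
    # basic sanity checks: output shape and no NaN/inf values
    m = simulate_convergence_map(64)
    assert m.shape == (64, 64)
    assert np.all(np.isfinite(m))
```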
This issue aims to discuss the possible reasons why the theoretical angular power spectrum does not match the one computed from the log-normal simulated maps when we sample from correlated maps.
After running several tests, I am starting to think that this is not related to how we sample from correlated maps, but rather to how we generate the log-normal power spectrum in this new implementation.
To illustrate this, I computed the power spectrum for a single redshift bin (the 5th in the plot below) using two different procedures.
In the first case (map 1), I generated the power map from the theoretical angular power spectrum first and then shifted the map. In the second case (map 2), I first shifted the angular power spectrum and then generated the power map.
Below are a few lines of code that help illustrate the differences.
```python
def make_power_map(pk_fn, N, map_size, zero_freq_val=0.0, model_type=None):
    k = 2 * jnp.pi * jnp.fft.fftfreq(N, d=map_size / N)
    kcoords = jnp.meshgrid(k, k)
    k = jnp.sqrt(kcoords[0]**2 + kcoords[1]**2)
    ps_map = pk_fn(k)
    ps_map = ps_map.at[0, 0].set(zero_freq_val)
    if model_type == 'no_correlation':
        power_map = ps_map * (N / map_size)**2
    else:
        power_map = ps_map * (N / map_size)
    return power_map
```
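One way to sanity-check the `(N / map_size)**2` normalization in the `'no_correlation'` branch is to draw a Gaussian field from a flat spectrum and compare its variance to the expected value; a NumPy sketch (flat P(k) for simplicity, not the jax-cosmo spectrum used above):

```python
import numpy as np

N, map_size = 128, 0.1  # pixels, map size in radians
P0 = 2.0                # flat spectrum amplitude, P(k) = P0

# 'no_correlation' normalization, as in make_power_map above
power_map = np.full((N, N), P0) * (N / map_size)**2

rng = np.random.default_rng(1)
z = rng.normal(size=(N, N))  # unit-variance white noise
field = np.fft.ifft2(np.fft.fft2(z) * np.sqrt(power_map)).real

# the field variance should match Int P(k) d^2k / (2 pi)^2 over the sampled
# modes, which for a flat spectrum is P0 * (N / map_size)**2
expected = P0 * (N / map_size)**2
```

With the `(N / map_size)` (un-squared) branch the variance comes out wrong by a factor `N / map_size`, which is one way to see which normalization is consistent.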
```python
def make_lognormal_power_map(power_map, shift, zero_freq_val=0.0):
    power_spectrum_for_lognorm = jnp.fft.ifft2(power_map).real
    power_spectrum_for_lognorm = jnp.log(1 + power_spectrum_for_lognorm / shift**2)
    power_spectrum_for_lognorm = jnp.abs(jnp.fft.fft2(power_spectrum_for_lognorm))
    power_spectrum_for_lognorm = power_spectrum_for_lognorm.at[0, 0].set(0.)
    return power_spectrum_for_lognorm
```
```python
def make_lognormal_cl(cl, shift):
    xsi = jnp.fft.ifft(cl)
    xsi_shift = jnp.log(1 + xsi / shift**2)
    cl_shifted = jnp.fft.fft(xsi_shift).real
    return cl_shifted
```
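A quick consistency check on this transform (NumPy mirror of the same algebra, with a toy spectrum): in the small-amplitude limit, log(1 + x) ≈ x, so the shifted spectrum should reduce to `cl / shift**2`:

```python
import numpy as np

def make_lognormal_cl_np(cl, shift):
    # NumPy mirror of make_lognormal_cl above
    xsi = np.fft.ifft(cl)
    xsi_shift = np.log(1 + xsi / shift**2)
    return np.fft.fft(xsi_shift).real

cl = 1e-6 * np.exp(-np.linspace(0.0, 5.0, 64))  # toy spectrum, small amplitude
shift = 0.05
cl_shifted = make_lognormal_cl_np(cl, shift)
# since xsi / shift**2 is tiny here, cl_shifted should be close to cl / shift**2
```

Any large deviation in this regime would point to a bug in the transform itself rather than in the correlated sampling.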
```python
N = 128
map_size = 5
pix_area = (map_size * 60 / N)**2   # arcmin^2
map_size = map_size / 180 * jnp.pi  # radians

omega_c = 0.3
sigma_8 = 0.8
cosmo = jc.Planck15(Omega_c=omega_c, sigma8=sigma_8)

ell_tab = 2 * jnp.pi * abs(jnp.fft.fftfreq(N, d=map_size / N))
tracer = jc.probes.WeakLensing([nz_shear[-1]])  # nz_shear defined earlier
shift = shift_fn(cosmo.Omega_m, sigma_8)
cell_tab = jc.angular_cl.angular_cl(cosmo, ell_tab, [tracer])[0]

# map 1: build the power map from the theoretical C_ell, then shift the map
P_1 = lambda k: jc.scipy.interpolate.interp(k.flatten(), ell_tab, cell_tab).reshape(k.shape)
power_map_1 = make_power_map(P_1, N, map_size, model_type='no_correlation')
power_map_1 = make_lognormal_power_map(power_map_1, shift)

# map 2: shift the C_ell first, then build the power map
cell_tab_shifted = make_lognormal_cl(cell_tab, shift)
P_2 = lambda k: jc.scipy.interpolate.interp(k.flatten(), ell_tab, cell_tab_shifted).reshape(k.shape)
power_map_2 = make_power_map(P_2, N, map_size, model_type='no_correlation')

# z is the same white-noise realization for both maps
field_1 = jnp.fft.ifft2(jnp.fft.fft2(z) * jnp.sqrt(power_map_1)).real
field_1 = shift * (jnp.exp(field_1 - jnp.var(field_1) / 2) - 1)
field_2 = jnp.fft.ifft2(jnp.fft.fft2(z) * jnp.sqrt(power_map_2)).real
field_2 = shift * (jnp.exp(field_2 - jnp.var(field_2) / 2) - 1)
```
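To compare the two maps quantitatively, the spectra can be measured with an azimuthally averaged estimator. A NumPy sketch (not the code used for the plot below; the binning choices are illustrative, and it is validated here on a field with a known flat spectrum):

```python
import numpy as np

def measure_power_spectrum(field, map_size, n_bins=16):
    """Azimuthally averaged power spectrum of a square map (map_size in radians)."""
    N = field.shape[0]
    fourier = np.fft.fft2(field)
    # |F(k)|^2 * map_size^2 / N^4 is an unbiased estimator of the input P(k)
    power = (np.abs(fourier)**2 * map_size**2 / N**4).ravel()
    k = 2 * np.pi * np.fft.fftfreq(N, d=map_size / N)
    kx, ky = np.meshgrid(k, k)
    kk = np.sqrt(kx**2 + ky**2).ravel()
    bins = np.linspace(0.0, kk.max(), n_bins + 1)
    counts, _ = np.histogram(kk, bins)
    sums, _ = np.histogram(kk, bins, weights=power)
    centers = 0.5 * (bins[1:] + bins[:-1])
    return centers, sums / np.maximum(counts, 1)

# sanity check on a field with a known flat spectrum P(k) = P0
N, map_size, P0 = 128, 5 / 180 * np.pi, 2.0
rng = np.random.default_rng(0)
z = rng.normal(size=(N, N))
field = np.fft.ifft2(np.fft.fft2(z) * np.sqrt(P0) * (N / map_size)).real
ell, pk = measure_power_spectrum(field, map_size)  # pk should scatter around P0
```

Running the same estimator on `field_1` and `field_2` against the theoretical C_ell should localize where the two procedures diverge in ell.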
Below are the results computed from the two maps, averaged over 20 realizations:
If you remember, @EiffL, when we compared the two power maps from the two methods, we noted some differences. I am starting to think that these differences are what causes the bias in the results.
This will improve consistency across the repository.
I can do that PR myself if you prefer. Let me know.
Originally posted by @aboucaud in #2 (comment)
Hi @dlanzieri I think that we need to add / update tests ^^
Hi @EiffL and @dlanzieri, I checked the simulator (by looking at the power spectrum) for cosmological parameters different from the fiducial ones, and I got weird results:
Omega_c: 0.24082552
Omega_b: 0.03865506
Sigma_8: 0.6434185
h_0: 0.70308214
n_s: 0.88341194
w_0: -0.561326
You can have a look at the notebook here.
What do you think?