pynbody / genetic

The GM initial conditions generator

License: GNU General Public License v3.0

CMake 0.03% Makefile 0.03% C++ 94.02% Mathematica 0.18% Python 0.60% Shell 0.01% PostScript 0.24% Jupyter Notebook 4.90% Dockerfile 0.01%

genetic's Introduction

pynbody


Pynbody is an analysis framework for N-body and hydrodynamic astrophysical simulations supporting PKDGRAV/Gasoline, Gadget, Gadget4/Arepo, N-Chilada and RAMSES AMR outputs. It supports Python 3 only (versions prior to 1.0 are still available on PyPI for Python 2). Minor version support adheres roughly to SPEC0.

Written in Python, the core tools are accompanied by a library of publication-level analysis routines. For a quick tour of some of the features, have a look at this IPython notebook.

Getting started

If Python and the standard pip package manager are installed and properly configured, you can simply do:

$ pip install pynbody

If this fails, you may need some more detailed installation instructions. Once you have the package installed, try the introductory tutorials. The full documentation can be found here.
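
As a quick sanity check once installed, a minimal session might look like this (the snapshot path is a placeholder, and any of the supported formats will do):

import pynbody

# Load a snapshot in any supported format (Gadget, RAMSES, tipsy, ...).
s = pynbody.load("path/to/snapshot")
s.physical_units()   # convert all arrays to physical units in place

print(s.families())  # e.g. dm, gas, star
print(s['pos'][:5])  # positions of the first five particles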

Contributing

Help us make pynbody better! As you develop analysis for your science with pynbody, consider making your code available for everyone else to use. You can do this by creating a tutorial or cookbook or by adding your code to the relevant sub-module and submitting a pull request (make a fork first -- see https://help.github.com/articles/using-pull-requests).

Acknowledging the code

When using pynbody, please acknowledge it by citing the Astrophysics Source Code Library entry. Optionally you can also cite the Zenodo DOI for the specific version of pynbody that you are using, which may be found here.

Support and Contact

If you have trouble with pynbody or you have feature requests/suggestions, you can submit an issue and/or send us an email on the Usergroup mailing list.

genetic's People

Contributors

apontzen, cphyc, martin-rey, nroth0815, rc-softdev-admin, svstopyra


genetic's Issues

Which commands should be executed when setting up an input mapper?

Hi all,

I have been setting up initial conditions with respect to previously defined grid structures, using the set_input_mapper command.

This operation used to be very fast, as the DummyICGenerator would skip all costly commands and just figure out the grid and ID structure.

However, my reference parameter file is fairly complex, calling calculate, apply_modifications and clear_modifications. These functions actually get executed, triggering the draw of the random field and applying modifications to it. This makes the execution of set_input_mapper almost as long as generating the first ICs.

These commands are not overridden by the dummy IC generator, which triggers this behaviour:
https://github.com/ucl-cosmoparticles/genetIC/blob/53f63ba47f3b6c3044a508db99f01c8168b7f08b/genetIC/src/dummyic.hpp#L60-L93

Is there a use-case in which it is useful to execute calculate, apply_modifications, or even initialise the random draw, when using the dummy IC generator? Has this behaviour changed in recent iterations of the code, or has it just never been noticed?

It feels to me like these operations should be overridden as empty functions in the dummy IC generator, but I want to make sure I am not missing something.

Martin

Documentation

  • Write some user documentation
  • Write a short paper on the novel methods

Clean the code of TODOs

The source files are scattered with TODO entries. Some are documentation questions, others proper problems. It is hard to know which ones are hard/useful to implement.

Tests are not running by default and fail when enabled

@Martin-Rey I uncovered a pretty serious bug today in the tests! In commit 7baec09 you changed compare.py to ask whether a tipsy file actually exists. However, the code is broken and always returns False (you can't use wildcards in os.path.exists). As a result, the particle data has not been compared since that commit.

When I re-enabled this, I get a failure on test 02g. Could you investigate and issue a PR to repair compare.py and fix the failing unit test?
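
A minimal sketch of the kind of repair needed (illustrative only; the function name is mine, not from compare.py): expand the wildcard with glob before testing for existence.

import glob

def tipsy_output_exists(pattern):
    # os.path.exists("output.tipsy*") is always False because it does not
    # expand wildcards; glob does, so test whether any file matches instead.
    return len(glob.glob(pattern)) > 0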

Assertion error when not using `-DZELDOVICH_GRADIENT_FOURIER_SPACE`

When deactivating the -DZELDOVICH_GRADIENT_FOURIER_SPACE flag with zoom grids, an assertion error is raised.

Steps to reproduce

  1. Compile without -DZELDOVICH_GRADIENT_FOURIER_SPACE
  2. Execute genetIC genetIC/tests/test_01c_zoom_grid
  3. An assertion error is raised (see traceback below)

Traceback

$ git rev-parse HEAD
af8559ae76a1a5d421825efb417a4507fc759e01
$ (cd genetIC/tests/test_01c_zoom_grid; OMP_NUM_THREADS=1 ../../../genetIC/genetIC  paramfile.txt)
genetIC v1.1.1, compiled: Apr  8 2021, 11:34:51
              runtime:  Apr 08 2021, 11:47:18

git HEAD:af8559a
At the time of compilation, the following files had been modified:
  Makefile
  
 2021-04-08 11:47:18   Note: 1 FFTW Threads (determined by OpenMP) were initialised
 2021-04-08 11:47:18   WARNING: basegrid is a deprecated command and has been replaced by base_grid
 2021-04-08 11:47:18   Initialized the base grid:
 2021-04-08 11:47:18     Box length            = 50 Mpc/h
 2021-04-08 11:47:18     n                     = 64
 2021-04-08 11:47:18     dx                    = 0.78125
 2021-04-08 11:47:18   WARNING: IDfile is a deprecated command and has been replaced by id_file
 2021-04-08 11:47:18   Centre of region is 25.0097565911 24.996747803 25.0126474329
 2021-04-08 11:47:18   WARNING: dump_IDfile is a deprecated command and has been replaced by dump_id_file
 2021-04-08 11:47:18   WARNING: zoomgrid is a deprecated command and has been replaced by zoom_grid
 2021-04-08 11:47:18   Initialized a zoom region:
 2021-04-08 11:47:18     Subbox length         = 16.6666666667 Mpc/h
 2021-04-08 11:47:18     n                     = 64
 2021-04-08 11:47:18     dx                    = 0.260416666667
 2021-04-08 11:47:18     Zoom factor           = 3
 2021-04-08 11:47:18     Num particles         = 1081
 2021-04-08 11:47:18     Low-left corner in parent grid = (21, 21, 21)
 2021-04-08 11:47:18     Low-left corner (h**-1 Mpc)    = 16.40625, 16.40625, 16.40625
 2021-04-08 11:47:18     Total particles       = 290250
 2021-04-08 11:47:18   Drawing random numbers (base grid)
 2021-04-08 11:47:18   Drawing random numbers (zoom level 1)
 2021-04-08 11:47:18   Calculating particles from overdensity fields...
genetIC/genetIC/src/simulation/grid/grid.hpp:603: size_t grids::Grid<T>::getIndexFromCoordinateNoWrap(int, int, int) const [with T = double; size_t = long unsigned int]: Assertion `this->containsCell(index)' failed.

Additional info
The assertion failure seems to be due to the following line when it is called with the first zoomed grid:

ind_m1 = grid2.getIndexFromIndexAndStep(index, negDirectionVector);

Ordering of virtual grids in grafIC hierarchies

Hi all,

It turns out that submitting a fix to issue #69 only partially helped, reducing the number of errors by half :)

I noticed that the virtual grids, introduced to pad the hierarchy for grafIC outputs, are now shuffled in their ordering. Following is my typical output, in which the 4096 grid comes before the 2048 grid which comes before the 1024 grid in the hierarchy.

Zeldovich approximation on successive levels...
Interpolating low-frequency information into zoom regions...
done.
Adding offset to the output
Write, ndm=1377828864, ngas=0
Adding virtual grid with effective resolution 256
Adding real grid with resolution 512
Adding virtual grid with effective resolution 4096
Adding virtual grid with effective resolution 2048
Adding virtual grid with effective resolution 1024
Adding real grid with resolution 8192

I think I am having issues because this shuffle of grids then interacts with the propagation of masks throughout the hierarchy. Creating the masks for virtual grids assumes that the hierarchy is ordered with increasing resolution, in order to propagate from fine zoom grids to coarser levels.

The problem likely lies in the following loop when generating the virtual grids
https://github.com/ucl-cosmoparticles/genetIC/blob/5f0982c62b9433e0b2c096395c47dee2266b5dd4/genetIC/src/simulation/multilevelcontext/multilevelcontext.hpp#L361-L365
, which was last modified in 3f0dd5a during the big clean-up of the code. The changes in this commit seem to be cosmetic, so the problem could have been introduced earlier (but it has to be after e4039d6, which was used to generate the EDGE ICs without problems).

I would have assumed such a problem would be caught by the mask tests (test_08 and so on), but it turns out that all grafIC tests have at most one intermediate virtual grid; the case of multiple intermediate grids is never tested. I can reproduce the problematic behaviour by asking for two or more virtual grids in a small test.

I'll see if I can figure out a fix.

Martin

Recreating a MUSIC zoom within genetIC

Hi all,

I am posting this here, as this is likely a question which will come back several times and this issue can serve as documentation.

I would like to reconstruct a zoom simulation that was first generated with MUSIC, and start genetically modifying it.

I believe this is now potentially possible by importing the white noise fields produced by MUSIC at the different levels using the import_level_field command, with a syntax resembling

import_level_field 0 wnoise_00009.npy
import_level_field 1 wnoise_00010.npy
import_level_field 2 wnoise_00011.npy

where 00009 would correspond to the basegrid. The following files are the zoom levels, which need to be opened manually at the right spatial position and with zoom factors of 2 to match the inner workings of MUSIC. Would you agree with that?

Another problem is that MUSIC produces the white noise files in binary format, whereas genetIC requires 3D numpy (.npy) input. Do you know of an existing tool to convert between these two formats? If yes, we could include it as part of the distribution.
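
Such a converter could be quite small. A sketch of its general shape follows; the layout assumptions (header size, endianness, ordering) are placeholders that would need checking against the MUSIC source before use:

import numpy as np

def music_wnoise_to_npy(path_in, path_out, n, dtype="<f4", header_bytes=0):
    # ASSUMPTIONS: the file stores an n^3 cube of little-endian floats after
    # header_bytes of header, in C order. Fortran-ordered or record-delimited
    # files would need the appropriate offsets and reshaping instead.
    data = np.fromfile(path_in, dtype=dtype, offset=header_bytes, count=n**3)
    np.save(path_out, data.reshape((n, n, n)))

# e.g. music_wnoise_to_npy("wnoise_00009.dat", "wnoise_00009.npy", n=256)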

Thanks in advance,
Martin

Baryon growth for RAMSES

Hi Andrew,

I might have found the source of the disparity between baryon mass growth and DMO mass growth for EDGE:
https://github.com/ucl-cosmoparticles/genetIC/blob/9f2d5fd1ebaf59f45f4512ecf3b57e4d660eb63c/genetIC/src/io/grafic.hpp#L133-L134

I swear I remember this line being changed to actually use the CDM density (via an evaluator), but somehow this change never made it to the master branch...

I am going to do further checks to verify that this is indeed the problem. But it might require rerunning the baryon runs if the problem came from the ICs...

Martin

Inconsistent centering for GRAFIC ICs

Hi all,

I was running into issues when producing RAMSES initial conditions today. I am generating initial conditions for a halo quite close to the edge of the box (center is 2.19, 41.45, 46.1 Mpc/h in a box of 50 Mpc/h) and using the center_output option.

Adding real grid with resolution 512
Loading ./halo635_3rvir.txt
  -> total number of particles is 16657
Centre of region is 2.19617514877 41.4562858567 46.141012764
Initialized a zoom region:
  Subbox length         = 6.25 Mpc/h
  n                     = 1024
  dx                    = 0.00610352
  Zoom factor           = 8
  Num particles         = 16657
  Low-left corner in parent grid = (504, 389, 440)
  Low-left corner (h**-1 Mpc)    = 49.2188, 37.9883, 42.9688

Given that my zoom grid is 6.25 Mpc/h, I would expect my centered grafic grids to be written to disk with a lower-left corner offset of 25 - 6.25/2 = 21.875 Mpc/h. After failing to load the ICs into RAMSES, I opened the grafic files and found:

In [26]: zoom = pynbody.load("./halo635.grafic_1024")
In [29]: nx, ny, nz, dx, lx, ly, lz, a, omegam, omegal, H0 = zoom._header

In [32]: lx * H0  /100
Out[32]: 71.9726513671875

In [33]: ly * H0  /100
Out[33]: 21.58203125

In [34]: lz * H0  /100
Out[34]: 21.875

The y and z lower-left corners have been centered successfully, but the x corner is off by what looks like an extra factor of the box length.

I don't know whether this bug is fundamental to the pre-existing implementation and was never noticed, or was added in #64 when revising the centering implementation. Any insights would be appreciated.

Martin

Consistent computation of velocities

Currently, genetIC uses by default a fourth-order finite-difference method to compute velocities (for velocity and angular momentum modifications), but it uses a Fourier-based method to compute the displacements and velocities when generating the output.

For the sake of consistency, it would arguably be better to use finite differences everywhere for gradient estimations.

Note that after #80 is merged, the compilation flag ZELDOVICH_GRADIENT_FOURIER_SPACE can be deactivated to switch off the gradient computation in Fourier space. We would, however, need to update all the answer tests.
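
For reference, a minimal sketch (my own illustration, not the genetIC implementation) contrasting the two gradient estimates on a periodic 1D grid: the fourth-order centred stencil versus multiplication by ik in Fourier space.

import numpy as np

def gradient_fd4(f, dx):
    # Fourth-order centred finite difference on a periodic grid:
    # f'(x) ~ [-f(x+2h) + 8 f(x+h) - 8 f(x-h) + f(x-2h)] / (12 h)
    return (-np.roll(f, -2) + 8 * np.roll(f, -1)
            - 8 * np.roll(f, 1) + np.roll(f, 2)) / (12 * dx)

def gradient_fourier(f, dx):
    # Spectral gradient: multiply by ik in Fourier space.
    k = 2 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.fft.ifft(1j * k * np.fft.fft(f)).real

# The two agree for well-resolved modes but differ near the Nyquist frequency,
# which is why mixing them can introduce small inconsistencies.
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
f = np.sin(3 * x)
print(np.max(np.abs(gradient_fd4(f, x[1]) - gradient_fourier(f, x[1]))))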

Additional cerr print statements

Creating zoom IC with gas and custom particle types

Hi,

I'm trying to create hydro zoom initial conditions where the dark matter particles at every level have a different GADGET particle type so that I can check for contamination. My goal is to set the DM in the coarsest level to particle type 5 and in the zoomed region have the DM set to particle type 1 and the gas to particle type 0.

For a two-level grid, I believe this should be possible with the following parameter file:

# Cosmology
Om     0.3111
TCMB   2.7255
Ol     0.6889
Ob     0.0489
hubble 0.6766
s8     0.8102
ns     0.9665 
z      127

# Transfer functions
camb	./test_ic_zoom/test_ics_zoom00_tk.dat

# Seed
random_seed 192837465

# Specify the base-level grid. Basegrid 75 Mpc/h, 128^3
base_grid 75.0 128
gadget_particle_type 5

# Zoom region #1
centre 37.5 37.5 37.5
select_cube 22.5
zoom_grid 2 128
gadget_particle_type 1

centre_output

done

However, the resulting ICs default to particle type 1 for the DM particles in both the coarse and the zoomed region. Only by disabling the gas and creating DM-only ICs, i.e. commenting out Ob in the parameter file, can I obtain the DM particle types I want.

I see that the AddGasMapper() definition in gasmapper.hpp requires default particle types for DM; can that be changed after the initial gas particle assignment in the zoomed region?

Import random field is incompatible with zooms

Hi,

I am encountering a new error when using the new import_level command and trying to open a zoom after it.

Since import_level initialises the random draw, opening a zoom then fails with the following error:
Error "Trying to initialize a grid after the random field was already drawn" on line 25 ("zoomgrid 2 64")

This error should be reproducible by adding the following to the parameter file of test 11:

import_level 0 ../test_11_import/reference_grid/grid-0.npy
centre 12.5 12.8 34.5
select_sphere 2
zoomgrid 2 64

I am not sure how to fix this in a self-consistent way.
Martin

Zoom ICs with gas particles and custom particles types with autopad

Hi,

This issue is related to #105. Would it be possible to keep the assigned GADGET custom particle types when using the autopad feature? It seems that turning that feature on resets the particle type assignment.

The following parameter file:


# Output parameters
outdir	 .
outformat gadget3
outname B75_N64_CDM_genetIC_ICS-parent

# Cosmology, based on Planck 2018 (CMB + lensing + BAO)
Om     0.3111
TCMB   2.7255
Ol     0.6889
Ob     0.0489
hubble 0.6766
s8     0.8102
ns     0.9665 
z      127

# Import transfer function data.
# If you change the cosmological parameters above be sure to update the transfer
# function accordingly. Some help is given in the folder tools/transfer_function/
# within the genetIC distribution
camb	./LCDM_planck18+_genetIC_tk.dat

# Seed the field using the default algorithm (parallel Fourier space):
random_seed 192837465

# Specify the base-level grid. Basegrid 75 Mpc/h, 64^3
base_grid 75.0 64
gadget_particle_type 5

# Zoom region #1
centre 37.5 37.5 37.5
select_cube 22.5
zoom_grid 2 256
gadget_particle_type 1

autopad 8

centre_output

done

outputs only type 0 and type 1 particles, regardless of their resolution.

I would like the padding particles at intermediate resolutions (i.e. between the basegrid and the finest level) to be assigned the same type as the low-resolution particles in the coarsest level, so that they are considered part of the contamination check... or, even better, to be able to assign them a different particle type, if that is not too hard to implement.

GSL interpolation error when using a WDM transfer function

Hi,

I'm getting the following error when I run genetIC with a WDM transfer function:

gsl: interp.c:37: ERROR: insufficient number of points for interpolation type
Default GSL error handler invoked.
[1]    55914 IOT instruction (core dumped)  ./genetIC 

The genetIC parameter file I used is the following:

# Output parameters
outdir	 .
outformat gadget3
outname B75_N128_WDM_genetIC_ICS-parent

# Cosmology, based on Planck 2018 (CMB + lensing + BAO)
Om     0.3111
TCMB   2.7255
Ol     0.6889
#Ob     0.0489
hubble 0.6766
s8     0.8102
ns     0.9665 
z      127


# Import transfer function data.
camb	./LWDM_planck18+_genetIC_tk.dat

# Seed the field using the default algorithm (parallel Fourier space):
random_seed 192837465

# Specify the base-level grid. Basegrid 75 Mpc/h, 128^3
base_grid 75.0 128
gadget_particle_type 1

centre_output

done

The WDM transfer function I'm using was generated by taking a CDM transfer function from CLASS (using the CAMB output setting, so it should have the correct normalization) and then multiplying it by the Viel et al. 05 (Eq.6 here) WDM cutoff parametrization for a certain WDM particle mass. It has 7 columns, where only the first two, corresponding to the wavenumber and the DM transfer function, are non-zero.
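A minimal sketch of that construction (my own illustration, not from genetIC; alpha is a placeholder that depends on the WDM particle mass, and nu = 1.12 follows Viel et al. 2005, Eq. 6):

import numpy as np

def wdm_cutoff(k, alpha, nu=1.12):
    # Viel et al. (2005), Eq. 6: T_WDM(k) = T_CDM(k) * (1 + (alpha k)^(2 nu))^(-5/nu)
    return (1.0 + (alpha * k) ** (2 * nu)) ** (-5.0 / nu)

# Hypothetical usage on a 7-column CLASS transfer-function table
# (file names and alpha are placeholders):
tk = np.loadtxt("cdm_tk.dat")
tk[:, 1] *= wdm_cutoff(tk[:, 0], alpha=0.05)  # column 1 = DM transfer function
np.savetxt("wdm_tk.dat", tk)
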

I initially thought that my WDM transfer function had too few points for the interpolation (140 wavenumbers), but I tried a CDM one with the same number of values, and that setup does work. Here are some values from the CDM transfer function:


#    1:k (h/Mpc)        2:-T_cdm/k2        3:-T_b/k2           4:-T_g/k2          5:-T_ur/k2         6:-T_ncdm/k2       7:-T_tot/k2
1.043719779506e-05  1.983212541770e+07  1.983212496349e+07  2.642898281863e+07  2.642868383264e+07  0.000000000000e+00  1.982339280717e+07
1.055805484953e-05  1.983212369680e+07  1.983212323202e+07  2.642865796929e+07  2.642835202515e+07  0.000000000000e+00  1.982339108082e+07
...
5.053402767380e+02  1.105144958511e+00  5.747035482245e-01 -6.041156819334e-14 -6.143209445162e-14  0.000000000000e+00  1.021467887905e+00

And from the WDM one:

#    1:k (h/Mpc)        2:-T_cdm/k2        3:-T_b/k2           4:-T_g/k2          5:-T_ur/k2         6:-T_ncdm/k2       7:-T_tot/k2
1.043719779506e-05 1.983212541749e+07 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00
1.055805484953e-05 1.983212369658e+07 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00
...
5.053402767380e+02 1.205880041635e-25 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00

Error "Particle ID out of range" but particles should be in range

I'm trying to create a "superzoom" IC from existing zoomed ICs. I'm using ICs that Michael Tremmel made of a zoomed MW+ galaxy, and I can use genetIC to reproduce his process. I then ran these ICs to z=10 and selected a cube that encompasses ~90% of the zoom region, which I wish to resample further. I output the iords into a file and have verified that they are in the proper cube and are all dark matter. Corentin has assisted me in creating the correct genetIC commands for my needs, but I keep getting: Error "Particle ID out of range"

I'll attach my param file here, all other files are quite large... but I'm glad to share more if it will help.
corentin_param.txt

I would appreciate any help!

Segmentation fault with g++ and optimization flags

The test test_01c_zoom_grid fails with a segfault.

Configuration

  • Linux (archlinux, kernel: 5.10.11-arch1-1 #1 SMP PREEMPT)
  • g++ (GCC) 10.2.0
  • Compilation flags: -g -O2

Note that with the same configuration and -O1 or -O0, the tests all seem to work.

Steps to reproduce

Compile the code and run the parameter file tests/test_01c_zoom_grid/paramfile.txt.

Debugging

When debugging, it seems the issue comes from the variable valsForInterpolation being unallocated before being used. The variable is created here:

DataType valsForInterpolation[4][4][4];

The line where the segfault is reported is somewhere around here (the location is not exact because of the optimization flags):
auto interp = getTricubicInterpolatorCached(x_p_0, y_p_0, z_p_0);
return interp(dx, dy, dz);

Adding a 2LPT implementation

Is the gain in precision over the Zel'dovich approximation worth the time? This feature seems to be present in most recent IC codes.

Assertion error when generating double/triple zooms near grid boundaries

Hi all,

I am hitting an assertion error when generating a zoom on a specific edge case (zooming on other objects in the box proceeds as usual). My setup is quite involved, with three nested zooms as the "volume" and I am trying to generate the fourth, final zoom level on my halo of interest.

genetIC v1.1.1, compiled: Apr 29 2021, 13:50:27
              runtime:  Apr 30 2021, 09:16:24

git HEAD:07dbc11
At the time of compilation, the following files had been modified:
  Makefile
  
 2021-04-30 09:16:24   
 2021-04-30 09:16:24   Limiting number of FFTW Threads to 8, because FFTW on Mac OS seems to become slow beyond this point.
 2021-04-30 09:16:24   To disable this behaviour, recompile with -DIGNORE_APPLE_FFTW_THREAD_LIMIT
 2021-04-30 09:16:24   OpenMP parts of the code will still run with 12 threads.
 2021-04-30 09:16:24   
 2021-04-30 09:16:24   Using post 2015 CAMB transfer function
 2021-04-30 09:16:24   Initialized the base grid:
 2021-04-30 09:16:24     Box length            = 67.32 Mpc/h
 2021-04-30 09:16:24     n                     = 256
 2021-04-30 09:16:24     dx                    = 0.262969
 2021-04-30 09:16:24   Adding real grid with resolution 256
 2021-04-30 09:16:24   Loading ../void/void.iords.txt
 2021-04-30 09:16:25     -> total number of particle IDs loaded is 119826
 2021-04-30 09:16:25        total number of accessible cells flagged is 119826
 2021-04-30 09:16:25   Centre of region is 17.8711588248 58.6582382854 25.0155118907
 2021-04-30 09:16:25   Initialized a zoom region:
 2021-04-30 09:16:25     Subbox length         = 33.66 Mpc/h
 2021-04-30 09:16:25     n                     = 256
 2021-04-30 09:16:25     dx                    = 0.131484375
 2021-04-30 09:16:25     Zoom factor           = 2
 2021-04-30 09:16:25     Num particles         = 119826
 2021-04-30 09:16:25     Low-left corner in parent grid = (3, 161, 35)
 2021-04-30 09:16:25     Low-left corner (h**-1 Mpc)    = 0.78890625, 42.33796875, 9.20390625
 2021-04-30 09:16:25   Adding real grid with resolution 256
 2021-04-30 09:16:25   Adding real grid with resolution 512
 2021-04-30 09:16:25     Total particles       = 33554432
 2021-04-30 09:16:35   Initialized a zoom region:
 2021-04-30 09:16:35     Subbox length         = 16.83 Mpc/h
 2021-04-30 09:16:35     n                     = 256
 2021-04-30 09:16:35     dx                    = 0.0657421875
 2021-04-30 09:16:35     Zoom factor           = 2
 2021-04-30 09:16:35     Num particles         = 1481544
 2021-04-30 09:16:35     Low-left corner in parent grid = (65, 59, 55)
 2021-04-30 09:16:35     Low-left corner (h**-1 Mpc)    = 9.335390625, 50.095546875, 16.435546875
 2021-04-30 09:16:35   Adding real grid with resolution 256
 2021-04-30 09:16:35   Adding real grid with resolution 512
 2021-04-30 09:16:35   Adding real grid with resolution 1024
 2021-04-30 09:16:35     Total particles       = 50331648
 2021-04-30 09:16:58   Initialized a zoom region:
 2021-04-30 09:16:58     Subbox length         = 8.415 Mpc/h
 2021-04-30 09:16:58     n                     = 512
 2021-04-30 09:16:58     dx                    = 0.016435546875
 2021-04-30 09:16:58     Zoom factor           = 2
 2021-04-30 09:16:58     Num particles         = 1670998
 2021-04-30 09:16:58     Low-left corner in parent grid = (65, 66, 66)
 2021-04-30 09:16:58     Low-left corner (h**-1 Mpc)    = 13.6086328125, 54.43453125, 20.77453125
 2021-04-30 09:16:58   Adding real grid with resolution 256
 2021-04-30 09:16:58   Adding real grid with resolution 512
 2021-04-30 09:16:58   Adding real grid with resolution 1024
 2021-04-30 09:16:58   Adding virtual grid with effective resolution 2048
 2021-04-30 09:16:58   Adding real grid with resolution 4096
 2021-04-30 09:16:58     Total particles       = 201326592
 2021-04-30 09:18:44   Loading ./halo318_5rvir.txt
 2021-04-30 09:18:44     -> total number of particle IDs loaded is 19767
 2021-04-30 09:18:44        total number of accessible cells flagged is 23329
 2021-04-30 09:18:44   Centre of region is 20.5155946842 55.0619393569 25.4002363884
Assertion failed: (this->containsPoint(result)), function getCentroidFromCoordinate, file /Users/martinrey/Documents/genetIC/genetIC/src/simulation/grid/grid.hpp, line 646.

The assertion failure arises from getCentroidFromCoordinate in this line:

genetIC/genetIC/src/ic.hpp

Lines 440 to 449 in 07dbc11

auto lci = zoomWindow.getLowerCornerInclusive();
// The zoom grid is opened on top of the previous actual grid, not the output grid,
// so we now need to convert. We set the domain size to be the grid above, so that we
// can ensure the new zoom window is fully contained within that (unless the grid above
// is the base level, in which case the wrapping is already done and no clamping should
// be applied).
Window<int> windowOnActualGrid(actualGridAbove.size,
                               actualGridAbove.getCoordinateFromPoint(
                                   outputGridAbove.getCentroidFromCoordinate(lci)),
                               lci + n_user);

when opening the final zoom, but the problem originates further up, here:

genetIC/genetIC/src/ic.hpp

Lines 425 to 426 in 07dbc11

if (n_required < n_user)
  zoomWindow.expandSymmetricallyToSize(n_user);

The bug is similar in logic to #76 -- the window's symmetric expansion always assumes that the zoom window can be wrapped around the base grid, which is true for first-level zooms but not for double or triple zoom setups. Specifically, in my case, my object's fourth grid is near the edge of the third zoom, and the lower-left corner gets wrapped during the symmetric expansion by these lines:

void expandSymmetricallyToSize(T newSize) {
  assert(getMaximumDimension() <= newSize);
  auto centre = getCurrentCentre();
  auto oldLCI = lowerCornerInclusive;
  auto oldUCE = upperCornerExclusive;
  lowerCornerInclusive = wrap(centre - newSize / 2);
  if (newSize % 2 == 0)
    upperCornerExclusive = wrap(centre + newSize / 2 - 1) + 1;
  else
    upperCornerExclusive = wrap(centre + newSize / 2) + 1;

leading to the assertion error later down the line.

I am unsure how to solve this, and I am not super familiar with the code introduced in #76. It feels like the window needs two mechanics: (i) to expand symmetrically as it currently does, and (ii) to expand within the bounding box of the parent grid. But both of these modes have to wrap correctly in the overall simulation domain, even if they cannot wrap within the parent grid. Any input, @apontzen?

Cheers,
Martin

Tests with baryons fail

Hi,
When running the test suite on the current version of master (4726988), all tests pass except the following two:

  • test_12c_modification_with_baryons
  • test_12d_modification_with_baryons_multilevel

The failure comes from a disagreement with the reference answers. I attach below the outputs of the failing tests.

$ ./run_tests.sh test_12c_modification_with_baryons
Running test on test_12c_modification_with_baryons   # Baryons, single level, with modification
Testing output
Traceback (most recent call last):
  File "compare.py", line 156, in <module>
    default_comparisons()
  File "compare.py", line 151, in default_comparisons
    compare(pynbody.load(output_file[0]),pynbody.load(sys.argv[1]+"/reference_output"))
  File "compare.py", line 26, in compare
    npt.assert_almost_equal(f1['vel'],f2['vel'],decimal=compare_decimal)
  File "/home/ccc/anaconda3/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 584, in assert_almost_equal
    return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File "/home/ccc/anaconda3/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 1029, in assert_array_almost_equal
    precision=decimal)
  File "/home/ccc/anaconda3/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 841, in assert_array_compare
    raise AssertionError(msg)
AssertionError: 
Arrays are not almost equal to 4 decimals

Mismatch: 0.0797%
Max absolute difference: 0.00048828
Max relative difference: 0.01034109
 x: SimArray([[  13.8939,  -58.7497,   12.8005],
          [  17.2224,  -72.6177,   23.3705],
          [  20.9602,  -81.5143,  -14.5985],...
 y: SimArray([[  13.8939,  -58.7497,   12.8005],
          [  17.2224,  -72.6177,   23.3705],
          [  20.9602,  -81.5143,  -14.5985],...

--> TEST FAILED

The IC generator output follows

# GM ICs code, compiled Oct  9 2019 15:04:31
# git HEAD:4726988
# Runtime: 2019-10-09.15:11:49

Note: 4 FFTW Threads (determined by OpenMP) were initialised
Using post 2015 CAMB transfer function
Drawing random numbers...done
overdensity: calculated value = 0.00161546
BEFORE modifications chi^2 = 263093
Delta chi^2 from linear modifications = 11442.2
AFTER  modifications chi^2 = 274535
         Total delta chi^2 = 11442.2
Propagating modification to field 1
. BEFORE modifications chi^2 = 263093
AFTER  modifications chi^2 = 274535
         Total delta chi^2 = 11442.2

Operations applied to all fields. Writing to disk...

Write, ndm=262144, ngas=262144
+ AddGasMapper, DM first:
| + MassScaledGrid of side 64 address 0x55c0177f84f0 referencing Grid of side 64 address 0x55c0177b3a50; 1 cells marked
+ AddGasMapper, gas second:
| + MassScaledGrid of side 64 address 0x55c0177f85f0 referencing Grid of side 64 address 0x55c0177b3a50; 1 cells marked
+ AddGasMapper ends
Gadget output preliminary scan...
Particle type 0: 262144 particles
Min and max particle mass : 1.3484 1.3484
Particle type 1: 262144 particles
Min and max particle mass : 6.0595 6.0595

overdensity: calculated value = 1
Clearing modification list

$ ./run_tests.sh test_12d_modification_with_baryons_multilevel 
Running test on test_12d_modification_with_baryons_multilevel   # Baryons, single level, with modification
Testing output
Traceback (most recent call last):
  File "compare.py", line 156, in <module>
    default_comparisons()
  File "compare.py", line 151, in default_comparisons
    compare(pynbody.load(output_file[0]),pynbody.load(sys.argv[1]+"/reference_output"))
  File "compare.py", line 26, in compare
    npt.assert_almost_equal(f1['vel'],f2['vel'],decimal=compare_decimal)
  File "/home/ccc/anaconda3/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 584, in assert_almost_equal
    return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File "/home/ccc/anaconda3/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 1029, in assert_array_almost_equal
    precision=decimal)
  File "/home/ccc/anaconda3/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 841, in assert_array_compare
    raise AssertionError(msg)
AssertionError: 
Arrays are not almost equal to 4 decimals

Mismatch: 0.244%
Max absolute difference: 0.00024414
Max relative difference: 0.01499623
 x: SimArray([[  466.4706,   194.4837,    38.7908],
          [  468.8281,   194.5704,    12.4436],
          [  470.2868,   194.9073,   -18.8642],...
 y: SimArray([[  466.4706,   194.4837,    38.7908],
          [  468.8281,   194.5704,    12.4436],
          [  470.2868,   194.9073,   -18.8642],...

--> TEST FAILED

The IC generator output follows

# GM ICs code, compiled Oct  9 2019 15:04:31
# git HEAD:4726988
# Runtime: 2019-10-09.15:13:19

Note: 4 FFTW Threads (determined by OpenMP) were initialised
Using post 2015 CAMB transfer function
WARNING: Opening a zoom where flagged particles are within 3 pixels of the edge. This is prone to numerical errors.
Initialized a zoom region:
  Subbox length         = 21.3333 Mpc/h
  n                     = 64
  dx                    = 0.333333
  Zoom factor           = 3
  Num particles         = 2969
  Low-left corner in parent grid = (22, 22, 22)
  Low-left corner (h**-1 Mpc)    = 22, 22, 22
  Total particles = 419501
WARNING: maximum k in CAMB input file is insufficient
         extrapolating using naive Meszaros solution
WARNING: maximum k in CAMB input file is insufficient
         extrapolating using naive Meszaros solution
Initialized a zoom region:
  Subbox length         = 7.11111 Mpc/h
  n                     = 64
  dx                    = 0.111111
  Zoom factor           = 3
  Num particles         = 1791
  Low-left corner in parent grid = (21, 21, 21)
  Low-left corner (h**-1 Mpc)    = 29, 29, 29
  Total particles = 434261
Drawing random numbers...done
Drawing random numbers...done
Drawing random numbers...done
overdensity: calculated value = 0.00369876
BEFORE modifications chi^2 = 326922
Delta chi^2 from linear modifications = 6442.46
AFTER  modifications chi^2 = 333005
         Total delta chi^2 = 6083.32
Propagating modification to field 1
. BEFORE modifications chi^2 = 326922
AFTER  modifications chi^2 = 333005
         Total delta chi^2 = 6083.32

Operations applied to all fields. Writing to disk...Zeldovich approximation on successive levels...



Interpolating low-frequency information into zoom regions...
done.
Zeldovich approximation on successive levels...



Interpolating low-frequency information into zoom regions...
done.
Write, ndm=385904, ngas=48357
+ AddGasMapper, DM first:
| + TwoLevelParticleMapper, n_hr_per_lr=27, firstLevel2Particle=337547
| +                       , zoom.size=1791, zoomed.size=48357
| | + TwoLevelParticleMapper, n_hr_per_lr=27, firstLevel2Particle=259175
| | +                       , zoom.size=2969, zoomed.size=80163
| | | + Grid of side 64 address 0x56139986e660; 1 cells marked
| | + TwoLevelParticleMapper continues with high-res particles:
| | | + Grid of side 64 address 0x56139986f640; 1 cells marked
| | + TwoLevelParticleMapper ends
| + TwoLevelParticleMapper continues with high-res particles:
| | + MassScaledGrid of side 64 address 0x56139986f3e0 referencing Grid of side 64 address 0x561399cb8fd0; 27 cells marked
| + TwoLevelParticleMapper ends
+ AddGasMapper, gas second:
| + TwoLevelParticleMapper, n_hr_per_lr=27, firstLevel2Particle=0
| +                       , zoom.size=1791, zoomed.size=48357
| + low-res part will be skipped but notionally is:
| | + TwoLevelParticleMapper, n_hr_per_lr=27, firstLevel2Particle=259175
| | +                       , zoom.size=2969, zoomed.size=80163
| | | + Grid of side 64 address 0x56139986e660; 1 cells marked
| | + TwoLevelParticleMapper continues with high-res particles:
| | | + Grid of side 64 address 0x56139986f640; 1 cells marked
| | + TwoLevelParticleMapper ends
| + TwoLevelParticleMapper continues with high-res particles:
| | + MassScaledGrid of side 64 address 0x561399cb6df0 referencing Grid of side 64 address 0x561399cb8fd0; 27 cells marked
| + TwoLevelParticleMapper ends
+ AddGasMapper ends
Gadget output preliminary scan...
Particle type 0: 48357 particles
Min and max particle mass : 0.00184966 0.00184966
Particle type 1: 385904 particles
Min and max particle mass : 0.00831207 7.4079

Using variable-mass format
overdensity: calculated value = 0.903737
Clearing modification list
Writing to .//grid-0.txt.//grid-0.txt
Writing to .//grid-1.txt.//grid-1.txt
Writing to .//grid-2.txt.//grid-2.txt

Reversed initial conditions leads to large errors in Fourier space

I seem to have come across a quite significant problem with the reverse function.

When comparing the Fourier space overdensity fields in the reversed and non-reversed cases, the fields should add to zero in Fourier space (and real space), but currently it appears that they do not. The errors don't seem to be small either, and they appear only on certain slices of the Fourier-space grid (with most slices adding to exactly zero as expected).

The problem seems to occur with the attached parameter file for example (comparing with and without the "reverse" line).

paramfile_unreversed.txt

The attached image shows the sum of the density fields for two different slices (slice 0 and slice 255 if you import them directly into numpy). Most are like the top slice (0): exactly zero everywhere, as expected. Some look more like the bottom slice (255).

[Attached image: fourier_slices]

There doesn't seem to be anything in the reverse code (multilevelfield.hpp, line 229) that would cause this, as it simply flips the sign. I suspect it could be an error related to performing the Fourier transform (because the code goes back and forth between real and Fourier space in a few places).

In real space, the fields don't appear to be reversals of each other at all, as the differences lead to large fluctuations, and this seems to affect the resulting particle offsets in the Zeldovich approximation quite significantly.
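
A minimal sketch of the kind of check described above (file names are placeholders; it assumes the two overdensity grids were dumped as .npy):

import numpy as np

# Placeholder file names: grids dumped from the unreversed and reversed runs.
delta = np.load("grid-0_unreversed.npy")
delta_rev = np.load("grid-0_reversed.npy")

total = np.fft.fftn(delta) + np.fft.fftn(delta_rev)  # should vanish mode by mode

# Report the worst slice along the first axis; most slices sum to exactly zero.
residual = np.abs(total).max(axis=(1, 2))
print("worst slice:", residual.argmax(), "max |residual|:", residual.max())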

Document the IC class

This class is basically the user interface. It would be good to have an explanation of the user parameters that each method takes, etc. I have started doing this, but it is unfinished.

Photogenic file for tipsy output

Hi Andrew,

I have encountered a question from @mtremmel relating to how genetIC produces the photogenic file for tipsy.

Currently, the particles flagged and output into the photogenic iords are chosen as the ones with the lowest mass in the zoom simulation:
https://github.com/ucl-cosmoparticles/genetIC/blob/cb399aca2b15780274acc82fc1a2a5764c5277f2/genetIC/src/io/tipsy.hpp#L130-L136

In the case of baryon runs, zoomed gas particles have slightly lower mass than zoomed DM, such that only gas ends up in the photogenic file. Michael would have expected only DM iords as the normal behaviour.

Is the current behaviour what was anticipated when coding this functionality? Has tracking the gas rather than DM led to problems in your previous use of genetIC+photogenic+gasoline?

Martin

FFTW3 file referenced in a FFTW2 compilation

Hi,

I want to compile genetIC with FFTW2 support. Following the documentation, I commented out the FFTW3 flags and instead turned on the FFTW2 ones:

FFTW    = -DFFTW_TYPE_PREFIX=1
FFTWLIB = -ldfftw -ldrfftw

but at compilation time I get

In file included from src/ic.hpp:16:0,
                 from src/main.cpp:9:
src/tools/numerics/fourier.hpp:6:10: fatal error: fftw3.h: No such file or directory
 #include <fftw3.h>
          ^~~~~~~~~
compilation terminated.
make: *** [src/main.o] Error 1

It seems that fourier.hpp, which is included in main.cpp through ic.hpp, includes fftw3.h even with the FFTW2 flags on. Is there something else I am not configuring properly?

adapt_mask ineffective if combined with subsampling

Hello,

I've been having a little trouble getting the adapt_mask option to work as intended when combined with subsampling.
As an example, I might have a zoom geometry like:

subsample 8

# Specify the base-level grid:
base_grid 100 192

# Zoom level 1:
centre 50 50 50
select_cube 10
zoom_grid 4 48

# Zoom level 2:
centre 50 50 50
select_sphere 3
zoom_grid 2 96

adapt_mask 5
autopad 5

In my mind, this describes a central high-resolution sphere, enclosed within a central base-resolution cube, on a base that is subsampled by a factor of 8. The adapt_mask and autopad options should then adapt the particles in zoom level 1 so that they form a sphere wrapped around the particles in zoom level 2, and every autopad level should also be rounded.

However, what I find is that the resolution levels are only spherical out to the borders of the zoom level 1 particles, beyond which they become cube-shaped.

Perhaps I am misunderstanding how the options interact, or what they are supposed to do.

All the best

Multiple output files number estimation for GADGET format

Hi,

Is there a way of estimating the number of output files "gadget_num_files" so that the resulting files don't get corrupted?
I'm getting the following message

One of the output Fortran fields is too large for the file format.
Writing will proceed, but the resulting file may appear corrupt
Try using gadget_num_files <n> to split your output into multiple files or try using a larger number of files <n>.

for a hydro zoom simulation with

# Specify the base-level grid. Basegrid 75 Mpc/h, 1024^3
base_grid 75.0 1024
gadget_particle_type 5

# Zoom region #1 37.5 Mpc/h grid enclosing 30 Mpc/h worth of high-res particles, 2048^3
select_cube 30
zoom_grid 2 2048
gadget_particle_type 1

and I'm currently using gadget_num_files 16. Each file is around 20 GB.

I see that the message is tied to the size of a Fortran integer, i.e. 4 bytes. If I understand correctly, this parameter file creates a zoom region that uses 2048^3 cells in a cube of 37.5 Mpc/h side length. Still, only the basegrid particles within a cube of 30 Mpc/h side length in the middle of the 75 Mpc/h box are actually going to be treated as high-resolution particles, with an effective resolution of 4096^3 because of the size ratio between the zoom region and the box.

What's the number of high-resolution particles in the end? I assume I should use 2048^3 for the memory estimation, thus 4 bytes * 2048^3 ~ 34 GB per "field". Is this estimation correct? What file size should I aim for to avoid this message in the future?
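
A minimal sketch of the estimate implied by the message (my own illustration, not genetIC code): the 2^31-byte ceiling comes from the 4-byte Fortran record marker, and the per-particle block sizes below are assumptions for a single-precision file, with positions (3 floats) as the largest block.

import math

def min_gadget_num_files(n_particles, bytes_per_particle=12):
    # Smallest file count keeping each Fortran-record block under 2^31 bytes.
    # ASSUMPTION: the largest block is positions at 3 * 4 bytes per particle;
    # double precision or larger blocks would scale these numbers accordingly.
    largest_block = n_particles * bytes_per_particle
    return math.ceil(largest_block / (2**31 - 1))

# Zoom region above: 2048^3 high-resolution particles (ignoring coarse levels).
print(min_gadget_num_files(2048**3))  # -> 49, so 16 files would be far too few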

Resolution of genetIC ICs compared to MUSIC ICs

Hi,

I am trying to debug a zoom hydrodynamical run, and for that I want to test the genetIC ICs I generated against some resolution-equivalent MUSIC ICs. For simplicity, I only consider one zoom region and ignore the differences from white-noise field seeds, as I only care about the location and the resolution of the runs.

  • In genetIC, the resolution of the zoom region is set by the size ratio of the base grid to the zoom grid and by the number of cells in that region. However, only the previously flagged particles in the base grid are promoted to high-resolution particles. Let's consider the following setup:
# Specify the base-level grid. Basegrid 100 Mpc/h, 128^3
base_grid 100.0 128
gadget_particle_type 2

# Specify the zoom level. 
# First flag the base grid particles within a cube of 40 Mpc/h side length and centred in the middle of the box.
centre 50 50 50
select_cube 40
# Accomodate those particles within a zoom grid half the base grid size (100 Mpc/h / 2 = 50 Mpc/h) with 256^3 grid.
# The "effective" spatial resolution is the same as using 512^3 particles in a 100 Mpc/h box. 
zoom_grid 2 256
gadget_particle_type 1

autopad 8

centre_output
  • In MUSIC, the resolution of the zoom region is set by the refinement level of the zoom region and its extent in units of the base grid, such that (extent_zoom_region * 2^refinement_zoom_region)^3 should be the number of particles in the zoom region. I tried generating the previous setup with the following parameters:
boxlength          = 100 #in Mpc/h
levelmin            = 7   #2^levelmin sets the coarse effective resolution, i.e. 2^7 = 128 cells/dimension
levelmin_TF      = 7
levelmax           = 9   #2^levelmax sets the fine effective resolution
padding             = 8
overlap              = 4
ref_center         = 0.5, 0.5, 0.5
ref_extent         = 0.4, 0.4, 0.4
force_equal_extent = yes
align_top          = no
baryons            = yes
use_2LPT        = yes
use_LLA          = yes
periodic_TF     = yes

While the spatial resolution matches...

genetIC:

Initialized a zoom region:
Subbox length         = 50 Mpc/h
n                     = 256
dx                    = 0.195312
Zoom factor           = 2
Num particles         = 140608
Low-left corner in parent grid = (31, 31, 31)
Low-left corner (h**-1 Mpc)    = 24.2188, 24.2188, 24.2188
Total particles       = 19954368

MUSIC:

 - Finest level :
                   extent =  40.2344 x 40.2344 x 40.2344 h-3 Mpc**3
                 mtotgrid =  5.62323e+15 h-1 M_o
            particle mass =  5.42147e+08 h-1 M_o
         baryon mass/cell =  1.0111e+08 h-1 M_o
                       dx =  0.195312 h-1 Mpc

...the location of the zoom region and the number of particles do not:

genetIC:

 Writing output; number dm particles=12172224, number gas particles=8998912
 Particles by gadget type:
 Particle type 0: 8998912 particles
 Particle type 1: 8998912 particles
 Particle type 2: 3173312 particles

MUSIC:

 - Gadget2 : writing 20000057 particles to file...
 -       type   0 :      8741816 [m=0.010111]
 -       type   1 :      8741816 [m=0.0542147]
 -       type   2 :      2516425 [m=0]

I think I am missing something fundamental in my setup, but I cannot figure out how to get similar results. Would you mind elaborating on how genetIC sets up the zoom region, and how I could recreate the same setup as closely as possible in MUSIC, if you happen to know?
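
For what it's worth, the genetIC counts above look internally consistent under one reading (my own arithmetic, not from the thread): select_cube 40 flags 52 base cells per side, and each flagged cell is refined by the factor-4 change in dx, so MUSIC's separate padding and extent rounding would naturally give different totals.

# Illustrative arithmetic based on the log above (a reading, not genetIC code).
flagged_per_side = 52          # 40/100 * 128 = 51.2 base cells, rounded up to 52
print(flagged_per_side ** 3)   # 140608 -> matches "Num particles" in the log

refinement = round(0.78125 / 0.195312)        # base dx / zoom dx = 4
print((flagged_per_side * refinement) ** 3)   # 8998912 -> matches the type-0/1 counts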

Chi square calculation

Following our discussion, I have been looking at the chi square calculation.

The method historically used for chi^2 calculation looked like this:
https://github.com/ucl-cosmoparticles/genetIC/blob/a04d1188450880974ef48a63e02532240bd94b61/genetIC/src/simulation/field/multilevelfield.hpp#L323-L341

I refactored it into the ability to transform a vector into a covector (by dividing by the covariance matrix in Fourier space), so it now looks like this:
https://github.com/ucl-cosmoparticles/genetIC/blob/3b337c37075ca50b91d76519e1d4ba9f5b017162/genetIC/src/simulation/field/multilevelfield.hpp#L325-L334

The two methods differ for the following reasons:

  • The norm factor in the old method. Why is the chi^2 divided by the size of the field? (This is the reason for the huge difference in order of magnitude.)
  • Handling of the zeroth mode. How many elements of spectrum[] would you expect to be zero? I find that the first two are always zero.

Even though the recent method is neater in mathematical terms, it creates an unneeded copy of the field, so it might be worth going back to the old version.
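
For reference, the quantity both methods aim to compute is, as I read the discussion, the standard Gaussian chi^2 of the field against its covariance:

\chi^2 = \delta^\dagger C^{-1} \delta = \sum_{\vec{k}} \frac{|\delta_{\vec{k}}|^2}{C(k)}

where C(k) is the Fourier-space covariance (the power spectrum); any extra 1/N factor would then be a convention of the Fourier transform normalisation rather than part of the definition.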

Flagging and grafic mapper

When flagging cells from a file of IDs, the grafic mapper only flags them and does not propagate the flags through the hierarchy of levels, contrary to the usual SPH mappers.

The question is how this should be implemented to restore consistency and allow, for example, flagging coarse cells from a purely zoomed output.

Since any cell at any level can be flagged from a file with grafic, there is a need to propagate cells "up" (from zoom to coarse) AND "down" (from coarse to zoom). Do you agree with this, or do you think that only the "up" route is relevant for most applications? Are there use cases where you would want to flag coarse cells but not the corresponding zoomed cells?
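
To make the two directions concrete, here is a toy sketch of flag propagation between two levels (my own illustration of the idea, not the genetIC mapper; it assumes the fine grid exactly tiles the coarse one with a ratio^3 block per coarse cell):

import numpy as np

def propagate_up(fine_flags, ratio):
    # Flag a coarse cell if ANY of its ratio^3 fine children is flagged.
    n = fine_flags.shape[0] // ratio
    blocks = fine_flags.reshape(n, ratio, n, ratio, n, ratio)
    return blocks.any(axis=(1, 3, 5))

def propagate_down(coarse_flags, ratio):
    # Flag every fine child of a flagged coarse cell.
    return np.repeat(np.repeat(np.repeat(coarse_flags, ratio, 0), ratio, 1), ratio, 2)

fine = np.zeros((8, 8, 8), dtype=bool)
fine[3, 3, 3] = True
print(propagate_up(fine, 2)[1, 1, 1])                   # True: parent of the flagged cell
print(propagate_down(propagate_up(fine, 2), 2).sum())   # 8: the whole sibling block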
