danieljprice / phantom
Phantom Smoothed Particle Hydrodynamics and Magnetohydrodynamics code
Home Page: https://phantomsph.github.io
License: Other
I do not know if other people have the same issue, but the test suite does not pass with the latest version of gfortran, 10.2.0.
I get this error:
`--> testing sink particle creation (sin)
s t r e t c h m a p p i n g <<<<<<
stretching to match density profile in r direction
density at r = 0.0000000000000000 is 1.0000000000000000
total mass = 6.9604063530626938E-004
Program received signal SIGBUS: Access to an undefined portion of a memory object.
Backtrace for this error:
/bin/bash: line 1: 90091 Segmentation fault: 11 ../bin/phantomtest
make[1]: *** [test] Error 139
make: *** [test] Error 2`
The interactive setup, when selecting 1-fluid only and 11 grain sizes, produces a .setup file that includes a non-zero np_dust and 12 grain sizes! However, it correctly has dust_method = 1.
The interactive setup, when asking for the 1-fluid method with 11 (small) grain sizes and 1 million particles, produces a .setup file with
np = 1000000 ! number of gas particles
np_dust = 16666 ! number of large dust particles
(why this value, 1/60th of 1 million? A mystery...) and
ndusttypesinp = 12 ! number of grain sizes
I edited this file to reset ndusttypesinp to 11 and reran phantomsetup. It deleted the line with np_dust, I guess because the method is 1-fluid, but then why was it put there in the first place? More surprisingly, it created 2 discs! First
Setting up disc mixture containing 90909 gas/empty particles
confirmed later in the disc parameters by
n = 90909 ! number of particles in the disc
and second
Setting up disc mixture containing 909090 gas/empty particles
followed by
n = 909090 ! number of particles in the disc
So, one disc with 90909 particles, i.e. 1/11th of 1 million, and another with 909090 particles, i.e. the remaining 10/11ths of 1 million, for a total of 999999 particles, as confirmed by the showheader command:
nparttot 999999
...
ndustlarge 0
ndustsmall 11
This can be fixed by recompiling with MAXDUSTSMALL=11 MAXDUSTLARGE=0. The interactive setup produces the same .setup file as before. After resetting ndusttypesinp to 11 and rerunning phantomsetup, this time it works as expected:
Setting up disc mixture containing 1000000 gas/empty particles
and
n = 1000000 ! number of particles in the disc
confirmed by showheader on the tmp dump file:
nparttot 1000000
...
ndustlarge 0
ndustsmall 11
This is not the desired behaviour: one should not have to reset ndusttypesinp in the .setup file.
There is a small issue in the way that artificial conductivity is implemented, namely that the conductivity acts on del^2 u rather than del^2 T, with the assumption that T = cv1*u, where cv1 is a constant. For some equations of state this assumption does not hold (i.e. cv is variable), and therefore the conductivity should be slightly modified, essentially so that the cv term is brought outside the del^2 term.
See discussion on entropy with artificial conductivity in appendix B of: http://adsabs.harvard.edu/abs/2004MNRAS.348..123P
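Schematically, the point is the following (a sketch in standard SPH notation, not the exact Phantom source expressions):

```latex
% Current form: the conductivity term differences the internal energy,
% giving a contribution \propto \nabla^2 u,
\left(\frac{\mathrm{d}u_a}{\mathrm{d}t}\right)_{\rm cond}
  \propto \sum_b \frac{m_b}{\bar{\rho}_{ab}}\,\alpha_u v_{\rm sig}\,
  (u_a - u_b)\,\hat{\mathbf{r}}_{ab}\cdot\nabla_a W_{ab} .
% This equals conduction \propto \nabla^2 T only when T = u/c_v with c_v
% constant. For variable c_v the pairwise difference should instead be
% taken in temperature, with the c_v factor carried outside, schematically
(u_a - u_b) \;\longrightarrow\; \bar{c}_{v,ab}\,(T_a - T_b) .
```

Here $\bar{c}_{v,ab}$ denotes some suitable pairwise average of the specific heat; the exact averaging is an implementation choice, not something fixed by the source above.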
The store_temperature option is now redundant (the temperature is always stored) and can be removed from the code.
Since the equation of state also now returns the temperature for all choices of ieos, we should also remove all superfluous "get_temperature" routines.
Suggest that temperature should only be written to the dump file if a non-ideal equation of state is used, to avoid wasting disk space
Originally posted by @danieljprice in #86 (comment)
icooling is 1 by default with make SETUP=sgdisc.
Phantom fails to build on macOS with HDF5=yes. The failure is in building the binary in the linking stage.
My machine is macOS 10.15.6. I have gfortran (version 10.2.0) and HDF5 (version 1.12.0_1) installed via Homebrew.
I am using Phantom from the master branch plus the changes implemented in #58. I.e. @dliptai's fork.
To reproduce, assuming you have the latest gfortran and HDF5 installed from Homebrew:
$ GIT_LFS_SKIP_SMUDGE=1 git clone git@github.com:dliptai/phantom
$ cd phantom
$ make SETUP=disc SYSTEM=gfortran HDF5=yes HDF5ROOT=/usr/local/opt/hdf5
The output, at the stage where it fails:
gfortran -O3 -Wall -Wno-unused-dummy-argument -frecord-marker=4 -gdwarf-2 -finline-functions-called-once -finline-limit=1500 -funroll-loops -ftree-vectorize -std=f2008 -fall-intrinsics -fPIC -fopenmp -fdefault-real-8 -fdefault-double-8 -I/usr/local/opt/hdf5/include -o ../bin/phantom physcon.o config.o kernel_cubic.o io.o units.o boundary.o mpi_utils.o dtype_kdtree.o utils_omp.o utils_cpuinfo.o utils_allocate.o icosahedron.o utils_mathfunc.o part.o mpi_domain.o utils_timing.o mpi_balance.o setup_params.o timestep.o utils_dumpfiles.o utils_indtimesteps.o utils_infiles.o utils_sort.o utils_supertimestep.o utils_tables.o utils_gravwave.o utils_sphNG.o utils_vectors.o utils_datafiles.o datafiles.o gitinfo.o random.o eos_mesa_microphysics.o eos_mesa.o eos_shen.o eos_helmholtz.o eos_idealplusrad.o eos.o cullendehnen.o nicil.o nicil_supplement.o inverse4x4.o metric_minkowski.o metric_tools.o utils_gr.o cons2primsolver.o checkoptions.o viscosity.o options.o radiation_utils.o cons2prim.o centreofmass.o extern_corotate.o extern_binary.o extern_spiral.o extern_lensethirring.o extern_gnewton.o lumin_nsdisc.o extern_prdrag.o extern_Bfield.o extern_densprofile.o extern_staticsine.o extern_gwinspiral.o externalforces.o damping.o checkconserved.o partinject.o utils_inject.o utils_filenames.o utils_summary.o fs_data.o mol_data.o utils_spline.o h2cooling.o h2chem.o cooling.o dust.o growth.o dust_formation.o ptmass_radiation.o mpi_dens.o mpi_force.o stack.o mpi_derivs.o kdtree.o linklist_kdtree.o memory.o utils_hdf5.o utils_dumpfiles_hdf5.o readwrite_dumps_common.o readwrite_dumps_fortran.o readwrite_dumps_hdf5.o readwrite_dumps.o quitdump.o ptmass.o readwrite_infile.o dens.o force.o utils_deriv.o deriv.o energies.o sort_particles.o evwrite.o step_leapfrog.o writeheader.o step_supertimestep.o mf_write.o evolve.o checksetup.o initial.o phantom.o -L/usr/local/opt/hdf5/lib -lhdf5 -lhdf5_fortran
Undefined symbols for architecture x86_64:
"_GOMP_loop_maybe_nonmonotonic_runtime_next", referenced from:
___timestep_sts_MOD_sts_init_step._omp_fn.0 in utils_supertimestep.o
___densityforce_MOD_densityiterate._omp_fn.0 in dens.o
___forces_MOD_force._omp_fn.0 in force.o
"_GOMP_loop_maybe_nonmonotonic_runtime_start", referenced from:
___timestep_sts_MOD_sts_init_step._omp_fn.0 in utils_supertimestep.o
___densityforce_MOD_densityiterate._omp_fn.0 in dens.o
___forces_MOD_force._omp_fn.0 in force.o
"__gfortran_os_error_at", referenced from:
___dump_utils_MOD_read_array_real8arr in utils_dumpfiles.o
___dump_utils_MOD_read_array_real8 in utils_dumpfiles.o
___table_utils_MOD_diff in utils_tables.o
___table_utils_MOD_flip_array in utils_tables.o
___mesa_microphysics_MOD_get_eos_constants_mesa in eos_mesa_microphysics.o
___mesa_microphysics_MOD_read_opacity_mesa in eos_mesa_microphysics.o
___mesa_microphysics_MOD_get_opacity_constants_mesa in eos_mesa_microphysics.o
...
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status
make[1]: *** [phantom] Error 1
make: *** [phantom] Error 2
This comes from the stage where the phantom binary is built, in the following lines of the Makefile:
phantom: checksystem checkparams $(OBJECTS) phantom.o
$(FC) $(FFLAGS) -o $(BINDIR)/$@ $(OBJECTS) phantom.o $(LDFLAGS)
I do not have this problem on my linux machine with gfortran (version 9.3.0) and HDF5 (version 1.10.5).
Particle waking was disabled for MPI in 331970b. No explanation was provided, but presumably it could not be successfully implemented. This causes results to differ between MPI and non-MPI runs.
Behaviour should not differ when MPI is disabled. Particle waking should be re-enabled and remaining issues fixed.
MPI benchmarking depends on this.
In SETUP=star with phantomsetup, we want a reasonable stellar profile out of setup_star even if relax_star is set to false, but currently we use a random-but-symmetric particle distribution by default.
This affects e.g. the Evrard collapse calculation, where there is no real need to "relax" the star, but with the random distribution the initial density profile is very noisy.
Error message:
FATAL ERROR! step_extern: dt <= 0 in sink-gas substepping : dt = 0.000E+00
Steps to reproduce:
Run phantomsetup (SETUP=star) on the attached .setup file in a directory containing the stellar profiles mysoftenedstar2.dat and profileCHeB2.data
Sink particles vanish in the simulation when running across two MPI nodes on OzSTAR. For those with access to oz101, the relevant files are in /fred/oz101/mlau/500k_12/artcond_0_1_MPI
Originally posted to the mailing list; the basic issue is that particle IDs are not currently tracked in MPI. This needs implementing:
I'm trying to plot the trajectory of some gas particles in my phantom simulation. If I understand the code correctly, the indices of the particles serve as the particle IDs, so I could use the index to select a certain particle at different timesteps (I think splash works the same way). This works fine if I only use OpenMP parallelization. However, if I use hybrid MPI+OpenMP parallelization, the trajectory looks quite strange (see the attached plot). I suspect the indices might be changed while the processors communicate via MPI. I wonder whether this is a bug in Phantom, or a feature to accelerate the simulations with MPI enabled.
If the indices have to change with MPI enabled, then I have to switch back to OpenMP, but it is known that OpenMP parallelization is not very scalable in general. For my problem, I am studying the effects of stellar bars/spirals on the galactic gaseous disc, and the bar usually drives a lot of gas into the galactic centre, making a high-density ring there, which causes the code to run quite slowly (it takes > 1 month using ~5 million particles with ISOTHERMAL, OpenMP and INDEPENDENT TIMESTEPS; CPU: 32 x Intel® Xeon® Gold 6136; compiler: ifort 18). I am not sure whether there is a better way to make the code run faster, with or without MPI (e.g. how many threads per physical core is the optimal configuration for Phantom?).
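To illustrate why index-based tracking breaks (a hypothetical sketch, not Phantom code): if each particle carried a permanent ID that was written to every dump, a trajectory could be followed no matter how the MPI domain exchange reorders the particle arrays between dumps.

```python
# Hypothetical sketch (not Phantom code): following a particle by a
# persistent ID instead of by its array index, which MPI domain
# decomposition is free to permute between dumps.

# dump 1: positions in array order, plus a permanent-ID column
ids_dump1 = [3, 1, 2]
x_dump1   = [0.3, 0.1, 0.2]

# dump 2: the same particles, reordered by the MPI exchange
ids_dump2 = [2, 3, 1]
x_dump2   = [0.25, 0.35, 0.15]

def position_of(pid, ids, x):
    """Look a particle up by its permanent ID, not its index."""
    return x[ids.index(pid)]

x1 = position_of(3, ids_dump1, x_dump1)  # 0.3
x2 = position_of(3, ids_dump2, x_dump2)  # 0.35
```

With index-based selection, `x_dump2[0]` would pick up a different particle entirely; the ID lookup is immune to the reordering.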
I think it would be good to write up in the documentation a "schema" for the file format.
At the moment the only way to know what should be written to a file is by reading the source code of readwrite_dumps.F90 or readwrite_dumps_hdf5.F90. I think we should write something in the documentation that outlines what should be written to file given a set of compile-time and run-time options.
E.g. if the compile-time option DUST=yes is set, then dustfrac and tstop are written to file. In addition, if using the 1-fluid method (use_dustfrac == .true., which is set by reading the initial conditions dump file), then deltav is written to file.
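Such a schema could even be recorded as data rather than prose. A hypothetical sketch (the option and array names follow the example above; the function and structure are assumptions, not an existing Phantom file):

```python
# Hypothetical machine-readable sketch of a dump-file schema:
# which dust-related arrays must appear given compile-time and
# run-time options. Nothing here is real Phantom code.

def expected_arrays(dust=False, use_dustfrac=False):
    """Return the dust-related arrays that should be present in a dump."""
    arrays = []
    if dust:                      # compile-time option DUST=yes
        arrays += ["dustfrac", "tstop"]
        if use_dustfrac:          # run-time: 1-fluid method, set from the IC dump
            arrays.append("deltav")
    return arrays
```

A table like this, kept in the docs, would let users (and tools) check a dump for completeness without reading the Fortran source.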
from @jameswurster:
for the particle splitting, we will need to introduce ghost particles for each of the two types of gas particles, where the ghosts will be similar but not identical to boundary particles; one could also see a test where we would want proper boundary particles for each type.
To cleanly implement the multiple gas types, and to even clean up the existing boundary types, I wonder what you think about introducing a new integer*1 array where the value is 0 for a normal particle, -1 for a boundary and -2 for a ghost. This would greatly reduce the number of itypes.
Cons: we get a new array. There may be some difficulty making this backwards compatible, and getting this to work properly in splash.
Pros: we naturally allow for boundaries of all particle types, current and future; checking for a boundary particle requires checking only the sign rather than comparing an itype against several possible values; and it removes the need to calculate ibasetype.
@becnealon and I are still pondering the cleanest/safest way of implementing live particle splitting/merging. (As I write this, I am really wondering if we should just stick to adding new itypes, given the potential issues with backwards compatibility and how many modifications would be required...)
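The sign-based status check proposed above can be illustrated with a tiny sketch (hypothetical Python, just to show the convention; the names are assumptions):

```python
# Hypothetical illustration of the proposed integer*1 status array:
# 0 = normal particle, -1 = boundary, -2 = ghost.
STATUS_NORMAL, STATUS_BOUNDARY, STATUS_GHOST = 0, -1, -2

def is_boundary_or_ghost(istatus):
    """One sign check replaces comparing an itype against several values."""
    return istatus < 0

def is_ghost(istatus):
    return istatus == STATUS_GHOST
```

Each particle keeps its physical itype (gas, dust, ...) unchanged in a separate array, so boundaries and ghosts of any current or future type come for free.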
Dear Professor Daniel Price
I was trying to create an accretion disc (SETUP=disc) around a Jupiter-mass planet, with the outer radius of Callisto (this kind of setup is poorly covered in the literature).
The disc is created successfully; however, the simulation doesn't start. It gives me this error:
Initial mass and box size (in code units):
Total mass: 1.000207E+00
x-Box size: 2.671967E-02
y-Box size: 2.671620E-02
z-Box size: 1.032527E-02
Setting initial values (in code units) to verify conservation laws:
Initial total energy: 1.932075E-01
Initial angular momentum: 1.717967E-05
Initial linear momentum: 4.131455E-24
--------> TIME = 0.000 : full dump written to file giove_00000 <--------
input file giove.in written successfully.
---> DELETING temporary dump file giove_00000.tmp <---
FATAL ERROR! maketree: no particles or all particles dead/accreted
I attach a zip with the simulation settings.
I think this happens because Phantom has been built to create larger accretion discs, and at this density it would normally create a sink particle? Is there a way to change that? In the .in file I can't see this option...
Let me know and thank you for your wonderful work!
Francesco
Reported by @themikelau, I can reproduce this, output is corrupted for sink particle arrays when using the showarrays utility on small dumps:
ST:Phantom (hydro): 18/07/2020 20:22:11.5
:: nblocks = 1 array lengths per block = 2
Block 1 array block 1: size 1000000
x real [ 127.394668642202 484548.688319085 144552.467542698 -146443360.281561 -36917512.0077721 ...
y real [ -9897930.01942807 1.06090903733117 -8.60160256573759 114649104.202799 -32248.0586635346 ...
z real [ 9.853782096057617E-003 2.570411436433794E-008 2.011951786073985E-005 -8.43346971341713 2.578014948799434E-008 ...
h real*4 [ 0.1097630 0.2656673 0.2354337 0.1494501 6.4005889E-02 ...
Block 1 array block 2: size 2
x real [ 1.609828643276672E-010 6.013470016999186E-154 ]
N?=M?z real [ 9.324021306496487E-046 6.013470016999171E-154 ]
???m?=h real [ 1.192093173063569E-007 6.013949561859223E-154 ]
maccretereal [ 1.791534939968517E-037 6.013972500207556E-154 ]
>x`????1spiny real [ -1.893948175092820E-077 6.013983880732353E-154 ]
T?I5??X6tlast real [ 0.000000000000000E+000 6.013983880732353E-154 ]
T?I5??X6tlast real [ 0.000000000000000E+000 6.013983880732353E-154 ]
T?I5??X6tlast real [ 0.000000000000000E+000 6.013983880732353E-154 ]
T?I5??X6tlast real [ 0.000000000000000E+000 6.013983880732353E-154 ]
T?I5??X6tlast real [ 0.000000000000000E+000 6.013983880732353E-154 ]
T?I5??X6tlast real [ 0.000000000000000E+000 6.013983880732353E-154 ]
In many of the setup routines, phantomsetup exits after asking the interactive questions. This is to ensure that the setups are written in such a way that the .in and .setup files can be passed to other researchers, who will then be able to reproduce the results. However, there are some setups (e.g. setup_shock or setup_sphereinbox, and possibly others) where the interactive part of the code either asks for values to be written to the .in file, or calls part of the code that defines values for the .in file. This results in many of the values in .in being the code defaults rather than values appropriate for the given setup.
Therefore an elegant solution must be determined, and then several of these setup files will need to be modified; I am only aware of setup_shock and setup_sphereinbox, but please add to this list as appropriate.
The issue seems to be the static allocation of the listneigh array in the density and force routines. It needs to be allocated dynamically:
forrtl: severe (408): fort: (2): Subscript #1 of the array LISTNEIGH has value 1200001 which is greater than the upper bound of 1200000
Image PC Routine Line Source
libifcoremt.so.5 0000151ADCE93F12 for_emit_diagnost Unknown Unknown
phantom 00000000007E49DB kdtree_mp_getneig 1104 kdtree.F90
phantom 00000000007F2FC0 linklist_mp_get_n 251 linklist_kdtree.F90
phantom 000000000090CFF9 densityforce_mp_d 304 dens.F90
To reproduce the error, use the attached setup file and the data.
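One standard pattern for the dynamic allocation is to grow the list on overflow instead of aborting. A minimal sketch of the idea (hypothetical Python; a Fortran version would use an allocatable array resized with move_alloc):

```python
# Hypothetical sketch of a grow-on-overflow neighbour list: instead of a
# fixed-size array that triggers an out-of-bounds abort, double the
# capacity whenever it fills up. Not Phantom code.

def append_neighbour(listneigh, n, j):
    """Store neighbour index j at position n, growing the list if full.

    Returns the (possibly reallocated) list and the new count.
    """
    if n == len(listneigh):
        listneigh = listneigh + [0] * len(listneigh)  # double the capacity
    listneigh[n] = j
    return listneigh, n + 1

# usage: start small, append past the initial capacity
lst, n = [0, 0], 0
for j in (7, 8, 9):
    lst, n = append_neighbour(lst, n, j)
# lst now holds [7, 8, 9, 0] with n = 3 and capacity 4
```

Amortised doubling keeps the cost linear while removing the hard MAXP-style ceiling that causes the subscript error above.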
A 500k particle star was softened on OzSTAR using isoftcore = 2 with the MESA EoS. The output and error message are:
--> ALLOCATING PART ARRAYS
------------------------------------------------------------
Total memory allocated to arrays: 2.738 GB n = 10000000
------------------------------------------------------------
nprocs = 1
opening database from newstar.setup with 17 entries
setup_star: Reading setup options from newstar.setup
Using MESA star from file profile
Reading output_DE_z0.00x0.00.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.00x0.00.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.00x0.20.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.00x0.40.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.00x0.60.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.00x0.80.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.02x0.00.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.02x0.20.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.02x0.40.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.02x0.60.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.02x0.80.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.04x0.00.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.04x0.20.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.04x0.40.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.04x0.60.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading output_DE_z0.04x0.80.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading opacs.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Reading opacs.bindata in /home/mlau/phantom/data/eos/mesa/ (from PHANTOM_DIR setting)
Obtained core mass of 3.16776 Msun
placed 500000 particles in random-but-symmetric sphere
>>>>>> s t r e t c h m a p p i n g <<<<<<
stretching to match tabulated density profile in r direction
density at r = 1.479584740283066E-004 is 2.303693864902408E-005
total mass = 9.48704796586597
>>>>>> done
rstar = 500.621021625117 mstar = 9.48704796589679 tdyn = 4039.27065329865
Centre of mass is at (x,y,z) = (-7.247E-16, -9.229E-18, -1.478E-17)
Particle setup OK
forrtl: severe (174): SIGSEGV, segmentation fault occurred
This error does not appear when running with a lower number of particles (50k, 100k).
Error:
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image PC Routine Line Source
libifcoremt.so.5 00007F027A185F0C for__signal_handl Unknown Unknown
libpthread-2.27.s 00007F0277CD78A0 Unknown Unknown Unknown
phantom 000000000092B464 energies_mp_compu 170 energies.F90
phantom 000000000094CA2D evwrite_mp_write_ 335 evwrite.F90
phantom 00000000009F31C0 initial_mp_startr 592 initial.F90
phantom 00000000009F92B4 MAIN__ 65 phantom.F90
phantom 000000000040256E Unknown Unknown Unknown
libc-2.27.so 00007F02776F1B97 __libc_start_main Unknown Unknown
phantom 000000000040245A Unknown Unknown Unknown
With gfortran v10.1.0, src/setup/stretchmap.f90 fails to compile with the following error:
../src/setup/stretchmap.f90:231:0:
231 | if (present(rhotab)) then
|
Error: 'rhotab' not specified in enclosing 'parallel'
../src/setup/stretchmap.f90:226:0:
226 | if (xold > xmin) then ! if x=0 do nothing
|
Error: enclosing 'parallel'
make[1]: *** [stretchmap.o] Error 1
make: *** [phantomtest] Error 2
The problem appears to be a gcc/gfortran bug in the latest version, related to the if (present(rhotab)) statement combined with default(none) in the OpenMP clause. My version of gfortran is:
% gfortran -v
Using built-in specs.
COLLECT_GCC=gfortran
COLLECT_LTO_WRAPPER=/usr/local/Cellar/gcc/10.1.0/libexec/gcc/x86_64-apple-darwin19/10.1.0/lto-wrapper
Target: x86_64-apple-darwin19
Configured with: ../configure --build=x86_64-apple-darwin19 --prefix=/usr/local/Cellar/gcc/10.1.0 --libdir=/usr/local/Cellar/gcc/10.1.0/lib/gcc/10 --disable-nls --enable-checking=release --enable-languages=c,c++,objc,obj-c++,fortran --program-suffix=-10 --with-gmp=/usr/local/opt/gmp --with-mpfr=/usr/local/opt/mpfr --with-mpc=/usr/local/opt/libmpc --with-isl=/usr/local/opt/isl --with-system-zlib --with-pkgversion='Homebrew GCC 10.1.0' --with-bugurl=https://github.com/Homebrew/homebrew-core/issues --disable-multilib --with-native-system-header-dir=/usr/include --with-sysroot=/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk SED=/usr/bin/sed
Thread model: posix
Supported LTO compression algorithms: zlib
gcc version 10.1.0 (Homebrew GCC 10.1.0)
The .setup file for SETUP=star currently asks for mean molecular weight as well as composition (X,Z) if the MESA equation of state is employed:
There are several issues with this:
The right procedure should be:
The changes here are an improvement, so thanks for the efforts. I will merge to avoid the broken phantom2hdf5, but there are still a number of issues here:
As mentioned, I think there is a much better way to fix (3), which is the ultimate problem here, as per #43
Originally posted by @danieljprice in #39 (comment)
I am having some problems with phantom2hdf5.
The new HDF5 dump produced using phantom2hdf5 does not seem to read either tstop or deltav for one-fluid simulations. The arrays are there, so in the HDF5 dump both dump['particles']['tstop'] and dump['particles']['deltavxyz'] can be accessed, but both arrays are entirely initialised to 0 (they also have the correct dimensions). Splash can read them properly from the original dump, so it looks like a problem only when reading and creating the HDF5 dump.
I tried to dig a bit to fix it myself, but I did not manage to do so.
I think the problem is that when running phantom2hdf5, the code does not read tstop and deltav from the dump, so they remain initialised to 0, but they are then written to the .h5 dump anyway.
I noted that in the subroutine read_phantom_arrays there seems to be no explicit command to load tstop or deltav (while write_fulldump_fortran explicitly writes them). I tried adding
call read_array(tstop(ndustfraci,:),tstop_label(ndustfraci),got_tstop(ndustfraci),&
ik,i1,i2,noffset,idisk1,tag,match,ierr)
after line 1197 of readwrite_dumps_fortran.F90, where dustfrac is read (after adding a declaration of the variable got_tstop and updating the use statements with tstop, tstop_label, etc.).
The compiler does not complain and phantom2hdf5 runs properly, but there is no effect on the reading of tstop, which remains = 0.
Is the command I used wrong?
I am not entirely familiar with read_write dump routines, I would definitely appreciate some guidance from someone more familiar with them.
Thanks!
Error:
Program received signal SIGBUS: Access to an undefined portion of a memory object.
Backtrace for this error:
Segmentation fault: 11
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, address=0x7ffeefbfd847)
* frame #0: 0x00007ffeefbfd847
frame #1: 0x000000010020f6da phantomsetup`__stretchmap_MOD_get_mass at stretchmap.f90:410
frame #2: 0x000000010021457b phantomsetup`__stretchmap_MOD_set_density_profile._omp_fn.0 at stretchmap.f90:237
frame #3: 0x0000000101fa0bd2 libgomp.1.dylib`GOMP_parallel + 66
frame #4: 0x00000001002122ea phantomsetup`__stretchmap_MOD_set_density_profile at stretchmap.f90:224
frame #5: 0x0000000100219a3c phantomsetup`__unifdis_MOD_set_unifdis at set_unifdis.f90:558
frame #6: 0x000000010021aac7 phantomsetup`__setshock_MOD_set_shock at set_shock.f90:83
frame #7: 0x00000001001ed175 phantomsetup`__setup_MOD_setpart at setup_shock.F90:228
frame #8: 0x0000000100203cef phantomsetup`MAIN__ at phantomsetup.F90:134
frame #9: 0x0000000100204713 phantomsetup`main at phantomsetup.F90:22
frame #10: 0x00007fff6ef05cc9 libdyld.dylib`start + 1
In line 338 of energies.F90, in the compute_energies subroutine,
if (gravity) epot = epot + poten(i)
the total potential energy appears to be calculated by summing the poten of each particle, which would double-count each pairwise contribution. We should check whether double-counting is also present in other places where total potential energies are calculated.
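The concern can be demonstrated with a toy calculation (hypothetical Python, not Phantom code; whether Phantom's poten already contains the factor of one half is exactly what needs checking): if poten(i) holds the full pairwise sum for particle i, summing it over all particles gives exactly twice the total potential energy.

```python
# Toy demonstration of the double-counting: three point masses in 1D.
from itertools import combinations

G = 1.0
m = [1.0, 2.0, 3.0]   # toy masses
x = [0.0, 1.0, 3.0]   # toy positions

# total potential energy, counting each pair once
U = sum(-G * m[i] * m[j] / abs(x[i] - x[j])
        for i, j in combinations(range(len(m)), 2))

# per-particle potential where each pair contributes to BOTH members
poten = [sum(-G * m[i] * m[j] / abs(x[i] - x[j])
             for j in range(len(m)) if j != i)
         for i in range(len(m))]

naive = sum(poten)        # = 2U: the double-counted sum
fixed = 0.5 * sum(poten)  # = U: the factor-of-half convention
```

So either poten(i) must already store half of each pairwise contribution, or the loop in compute_energies needs the 0.5 factor; the two conventions differ by exactly a factor of two.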
Hi,
I've run a simulation of an isolated galaxy using Phantom, but surprisingly the energy and the angular momentum of the system are not conserved! The simulation setup (phantom/build/Makefile) is as follows:
"ifeq ($(SETUP), galdisc)
IND_TIMESTEPS=yes
H2CHEM=no
ISOTHERMAL=no
GRAVITY=yes
MHD=no
SETUPFILE= setup_galdisc.f90
KNOWN_SETUP=yes
endif"
The Phantom input file is attached:
PHANTOM_galdisc.pdf
The initial conditions are generated using DICE with the following input file:
DICE_isolated_galaxy.pdf
And these plots show the evolution of the galaxy's energy and angular momentum magnitude:
The left panels show the evolution of the mentioned parameters vs time, and the right panels show the evolution of the relative difference of them vs time.
I can't figure out what is the cause of this problem. I'll greatly appreciate any comments on this issue.
Thanks,
Laya
Hi, everyone,
As we know, Lagrangian methods can handle problems that contain different kinds of materials in the computational domain. I would like to know whether Phantom can support multi-material problems, perhaps by modifying some of the code.
raised by @jameswurster:
The good news is that all my commits now pass the MPI testsuite.
The bad news is that I think we are hiding a bug rather than actually fixing it. Specifically, I had to undo the change where dtclean -> 0.5*dtclean only if we are using overcleaning. It seems that this extra factor of two is inexplicably required in MPI (suggesting a bug somewhere).
While looking, I found another issue (not sure how frequently it appears, since it was previously only shown in a conditional print statement; for now, I have forced the print statement to be always on). Specifically, after the Sedov test, we have npart = 2048, nptot = 4096, so there are -2048 dead particles. Since I was running with 2 MPI processes, it appears that both processes are doing all of energies and then summing the results. This doubling of values may account for the lack of momentum conservation when using a slightly larger timestep.
energies.F90 seems to be called by each process, but the loop limit (for non-sink particles) is do i = 1,npart. Is this correct, or should this range vary with the number of threads, or should energies be called only by id==master? On a similar note, I see there is a do i = 1,npart in force.F90 when we are searching for gas particles to become sink candidates. Since there is no summing in that case, there should be no problem, even if we are duplicating the search.
On OzSTAR, with the module intel/2018.1.163-gcc-6.4.0 loaded, phantom appears to hang after a short while.
phantom was compiled with:
"
export SYSTEM=ifort
make MCFOST=no MAXP=20000000 phantomsetup phantom
"
phantom hangs after 39 dumps (~1 h) with the following setup file:
"
np = 1000000 ! number of gas particles
dist_unit = au ! distance unit (e.g. au,pc,kpc,0.1pc)
mass_unit = solarm ! mass unit (e.g. solarm,jupiterm,earthm)
icentral = 1 ! use sink particles or external potential (0=potential,1=sinks)
nsinks = 1 ! number of sinks
m1 = 2.000 ! star mass
accr1 = 0.500 ! star accretion radius
isetgas = 0 ! how to set gas density profile (0=total disc mass,1=mass within annulus,2=surface density normalisation,3=surface density at reference radius,4=minimum Toomre Q)
itapergas = F ! exponentially taper the outer disc profile
ismoothgas = F ! smooth inner disc
iwarp = F ! warp disc
R_in = 1.000 ! inner radius
R_ref = 100. ! reference radius
R_out = 100. ! outer radius
disc_m = 0.010 ! disc mass
pindex = 0.000 ! p index
qindex = 0.350 ! q index
posangl = 0.000 ! position angle (deg)
incl = 0.000 ! inclination (deg)
H_R = 0.100 ! H/R at R=R_ref
alphaSS = 0.001 ! desired alphaSS
setplanets = 0 ! add planets? (0=no,1=yes)
norbits = 100 ! maximum number of orbits at outer disc
deltat = 0.001 ! output interval as fraction of orbital period
"
João Rocha
P.S. Attached is the compilation output:
out_phantomtest_mpi.txt
There is a nightly build failure with SYSTEM=gfortran in "make test", but I am having trouble reproducing it, even on the build machine.
https://phantomsph.bitbucket.io/nightly/build/20200721.html
where the error is:
--> TESTING STEP MODULE / boundary crossing
---------------- particles set on 50 x 50 x 50 uniform cubic lattice --------------
x: -0.500 -> 0.500 y: -0.500 -> 0.500 z: -0.500 -> 0.500
dx: 2.000E-02 dy: 2.000E-02 dz: 2.000E-02
-----------------------------------------------------------------------------------------
t = 0.20000000000000001 dt = 0.20000000000000001
constructing node 3 : found 62500 particles, expected: 125000 particles for this node
FATAL ERROR! maketree: expected number of particles in node differed from actual number
which suggests a NaN in the particle positions.
If anyone can reproduce this I would be most grateful for some simple steps. Works fine on my laptop, ozstar and monarch with "make test SYSTEM=gfortran", so I am a bit puzzled.
To test MPI, I modelled the gravitational collapse of a magnetised sphere. My runs were pure openMP (mislabelled as 'hybrid' on the line plots) and hybrid MPI+openMP (labelled as MPI). The attached graphs show the kinetic and potential energies over the early stages of the collapse, and the density at dump 10 as plotted by splash (red = openMP, black = hybrid). Clearly the results are not the same, suggesting there is a bug in the MPI implementation.
Both the Passy and growth benchmarks are failing because the results differ slightly from the reference solutions:
phantomSPH/phantom-benchmarks#1
phantomSPH/phantom-benchmarks#2
It needs to be understood why this occurred. The errors are small (~10^-8), but they should be understood.
The Passy failure occurred when we moved the equation of state calculation into cons2prim_everything and stored the pressure instead of computing it on the fly.
The growth failure needs further investigation.
The aim is to get compile-time warnings down to zero on the master branch, then strictly enforce no-new-warnings on pull requests to keep the count at zero.
Something in the initial conditions is screwy here: the initial density profile is not correct when set up with the default answers to phantomsetup (which should run the Sod shock problem).
Looks like an issue with the "smoothed initial conditions" stretch mapping being activated by default, even though no smoothing should occur for this problem.
Hi!
I'm new to phantom, and when running the test suite, the "sink particle/point mass" module fails due to a segmentation error. This was not expected, since the phantom documentation explains that the following variables should be defined in the bash/zsh profile in order to prevent this error:
export OMP_SCHEDULE="dynamic"
export OMP_STACKSIZE=512M
ulimit -s unlimited
More specifically, we get the following error:
--> testing sink particle creation (sin)
>>>>>> s t r e t c h m a p p i n g <<<<<<
stretching to match density profile in r direction
density at r = 0.0000000000000000 is 1.0000000000000000
total mass = 6.9604063530626938E-004
Program received signal SIGBUS: Access to an undefined portion of a memory object.
Backtrace for this error:
Program received signal SIGBUS: Access to an undefined portion of a memory object.
Backtrace for this error:
zsh: segmentation fault ./bin/phantomtest ptmass
After doing some research, we found that ulimit -s unlimited does not work on Mac: Apple has set a hard limit on the stack size (= 65532) and so far we do not know how to circumvent it.
For example, we tried using the launchctl command to raise the stack size, but to no avail, not even in serverperfmode. We were therefore wondering whether a solution to this problem exists, or at least a workaround. I'm not an expert in IT, so I hope I have given enough information for the problem to be clear. Thanks in advance!
Cheers,
Dion
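One thing worth noting about the report above (an assumption about the failure mode, not a confirmed fix): `ulimit -s` only governs the main thread's stack, while OMP_STACKSIZE governs the stacks of the worker threads created by the OpenMP runtime, which the macOS hard limit does not cap. So raising OMP_STACKSIZE further may help even when ulimit is stuck:

```shell
# Inspect the hard limit, then raise the OpenMP worker-thread stack size,
# which is allocated by the OpenMP runtime and is not capped by ulimit.
ulimit -Hs                # hard limit on the main-thread stack (in kB)
export OMP_STACKSIZE=1G   # worker-thread stacks; the 1G value is a guess
echo "OMP_STACKSIZE=$OMP_STACKSIZE"
```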
#ifdefs look horrible in the code and make testing all possible compile-time combinations very difficult. We should avoid introducing any new #ifdefs, preferring the logical flags from the config module where possible, e.g. if (mhd), if (radiation) etc.
Many of these could also be made into runtime options, since they are mainly there to determine whether memory is allocated for certain arrays. This would need a bit more thought.
For now, we should try to keep as many files as possible as .f90 rather than .F90, to disallow the use of ugly and unnecessary #ifdef statements. The only allowed use cases for #ifdefs should be linking with external libraries, e.g. MPI, or where performance really matters (e.g. the use of #ifdef PERIODIC in dens, force or kdtree).
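As a rough way to find candidates for conversion, one can count the #ifdefs per source file. A hedged sketch using stand-in files (`demo_src` and its contents are invented for the demo; in the real tree the grep would run over src/main):

```shell
# Audit #ifdef usage per file; demo_src stands in for the phantom source tree.
mkdir -p demo_src
printf '#ifdef MHD\n#endif\n#ifdef PERIODIC\n#endif\n' > demo_src/force.F90
printf 'if (mhd) then\nendif\n' > demo_src/options.f90
grep -rc '#ifdef' demo_src | sort -t: -k2 -nr
# in the phantom tree: grep -rc '#ifdef' src/main | sort -t: -k2 -nr | head
```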
I'm working on a project about using Regression Forests in SPH, and I need to reproduce the results of the galaxy merger simulation example in the Phantom paper. I've read the documentation files, but I'm not sure about the right configuration parameters. Could someone help me with the code configuration to simulate this problem?
After sink formation in a cluster simulation (setup_cluster, using default values), there is a slow increase in linear momentum until Phantom stops due to lack of conservation. The increase occurs with only one sink, so it is a result of the sink-gas interaction.
First, are the gas & sinks properly synced? A comment in the code suggests that momentum will not be conserved if they are not properly synced during accretion events. In step_extern:
1. gas velocity is updated to the half step and gas position to the full step
2. using this, the new sink-gas acceleration is calculated
3. sink velocity is updated to the half step
4. gas velocity is updated to the full step using the new external forces
5. particles are accreted
6. sink position and velocity are updated to the full step
7. the above repeats as subcycling requires
If this is correct, then aren't the sink positions & velocities a step behind when accreting?
Also, doesn't this mean that the accreted particles affect the sink twice before accretion? Once in calculating the new sink position & velocity through the accretion process and once from their contribution to the sink-gas acceleration calculation?
Second, is the OMP lock around dptmass in ptmass_accrete necessary? Each thread should have its own copy that is summed once the parallel loop is complete. The lock is still required for iosum_ptmass, but this could/should be rewritten to avoid the lock. This is not the cause of the lack of momentum conservation.
Third, it looks like any accretable particle can be accreted. Shouldn't only active particles be allowed to be accreted?
In practice, I don't expect many (if any) inactive particles to be within the accretion radius of a sink, and I don't think this can cause the lack of conservation.
This problem was raised by C. Dobbs and her PhD student; they used their own initial turbulent field rather than the default field. I tested the problem using the default field and confirmed the increasing momentum.
The potential energy is not correctly calculated, which may be due to the way the code calculates the sink-particle potential for gas particles inside the softening length.
I cannot build Phantom on macOS (10.15.6).
gfortran version: 10.2.0 installed via Homebrew
I tried:
make SETUP=empty SYSTEM=gfortran
It fails with:
gfortran -O3 -Wall -Wno-unused-dummy-argument -frecord-marker=4 -gdwarf-2 -finline-functions-called-once -finline-limit=1500 -funroll-loops -ftree-vectorize -std=f2008 -fall-intrinsics -fPIC -fopenmp -fdefault-real-8 -fdefault-double-8 -o ../bin/phantom physcon.o config.o kernel_cubic.o io.o units.o boundary.o mpi_utils.o dtype_kdtree.o utils_omp.o utils_cpuinfo.o utils_allocate.o icosahedron.o utils_mathfunc.o part.o mpi_domain.o utils_timing.o mpi_balance.o setup_params.o timestep.o utils_dumpfiles.o utils_indtimesteps.o utils_infiles.o utils_sort.o utils_supertimestep.o utils_tables.o utils_gravwave.o utils_sphNG.o utils_vectors.o utils_datafiles.o datafiles.o gitinfo.o random.o eos_mesa_microphysics.o eos_mesa.o eos_shen.o eos_helmholtz.o eos_idealplusrad.o eos.o cullendehnen.o nicil.o nicil_supplement.o inverse4x4.o metric_minkowski.o metric_tools.o utils_gr.o cons2primsolver.o checkoptions.o viscosity.o options.o radiation_utils.o cons2prim.o centreofmass.o extern_corotate.o extern_binary.o extern_spiral.o extern_lensethirring.o extern_gnewton.o lumin_nsdisc.o extern_prdrag.o extern_Bfield.o extern_densprofile.o extern_staticsine.o extern_gwinspiral.o externalforces.o damping.o checkconserved.o partinject.o utils_inject.o utils_filenames.o utils_summary.o fs_data.o mol_data.o utils_spline.o h2cooling.o h2chem.o cooling.o dust.o growth.o dust_formation.o ptmass_radiation.o mpi_dens.o mpi_force.o stack.o mpi_derivs.o kdtree.o linklist_kdtree.o memory.o readwrite_dumps_common.o readwrite_dumps_fortran.o readwrite_dumps.o quitdump.o ptmass.o readwrite_infile.o dens.o force.o utils_deriv.o deriv.o energies.o sort_particles.o evwrite.o step_leapfrog.o writeheader.o step_supertimestep.o mf_write.o evolve.o checksetup.o initial.o phantom.o
Undefined symbols for architecture x86_64:
"_GOMP_loop_maybe_nonmonotonic_runtime_next", referenced from:
___timestep_sts_MOD_sts_init_step._omp_fn.0 in utils_supertimestep.o
___densityforce_MOD_densityiterate._omp_fn.0 in dens.o
___forces_MOD_force._omp_fn.0 in force.o
"_GOMP_loop_maybe_nonmonotonic_runtime_start", referenced from:
___timestep_sts_MOD_sts_init_step._omp_fn.0 in utils_supertimestep.o
___densityforce_MOD_densityiterate._omp_fn.0 in dens.o
___forces_MOD_force._omp_fn.0 in force.o
"__gfortran_os_error_at", referenced from:
___dump_utils_MOD_read_array_real8arr in utils_dumpfiles.o
___dump_utils_MOD_read_array_real8 in utils_dumpfiles.o
___table_utils_MOD_diff in utils_tables.o
___table_utils_MOD_flip_array in utils_tables.o
___mesa_microphysics_MOD_get_eos_constants_mesa in eos_mesa_microphysics.o
___mesa_microphysics_MOD_read_opacity_mesa in eos_mesa_microphysics.o
___mesa_microphysics_MOD_get_opacity_constants_mesa in eos_mesa_microphysics.o
...
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status
make[1]: *** [phantom] Error 1
make: *** [phantom] Error 2
For example:
~/phantom/scripts/writemake.sh shock > Makefile
make setup
./phantomsetup shock
produces a krome.setup file instead of the expected shock.setup
In eos.f90, the equationofstate subroutine does not calculate the temperature for ieos=10, so the temperature in dump files is wrong.
To reproduce, set up and run a Sedov blast with the default setup, then make analysis and ./phantomanalysis dump_00000.
Error message:
>>./phantomanalysis blast_00000
Phantom analysis (headerdt): You data, we analyse
opening database from blast.in with 32 entries
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0x1091388cd
#1 0x109137cdd
#2 0x7fff6fc23b5c
[1] 61706 segmentation fault ./phantomanalysis blast_00000
Hello,
I am new to Phantom. I was trying to get it running on my system/cluster here.
I followed the instructions in the bitbucket documentation. I did the following:
git clone https://github.com/danieljprice/phantom
cd phantom
git checkout stable
I get the following : error: pathspec 'stable' did not match any file(s) known to git.
Anyway, I tried to compile the code on two different systems, one with gfortran and the other with ifort,
and with both, make test failed at seemingly the same point. With gfortran, I got the following error:
gfortran -c -O3 -Wall -Wno-unused-dummy-argument -frecord-marker=4 -gdwarf-2 -finline-functions-called-once -finline-limit=1500 -funroll-loops -ftree-vectorize -std=f2008 -fall-intrinsics -mcmodel=medium -fPIC -fopenmp -fdefault-real-8 -fdefault-double-8 -DPERIODIC -DMHD -DDUST -DCONST_ARTRES -DCURLV -DRADIATION ../src/main/eos.F90 -o eos.o
../src/main/eos.F90:390.79:
Can I please get some help on this?
Best,
Pallavi
Illustrative example:
With the following options in the infile:
logfile = L3/dumps/L301.log ! file to which output is directed
dumpfile = L3/dumps/dump_00100 ! dump file to start from
When running Phantom v2021.0.0, the sink ev files are named in the form of, e.g., dumpSink0001N.ev instead of, e.g., dumpSink0001N01.ev for the first file, etc. This resulted in data loss, where subsequent runs overwrite dumpSink0001N.ev instead of creating dumpSink0001N02.ev, dumpSink0001N03.ev, etc.
However, the log files and ev files are named correctly, i.e., L3/dumps/L301.log, L3/dumps/L302.log, ...
and L3/dumps/L301.ev, L3/dumps/L302.ev, ...
I cannot check out Phantom because the repository has exceeded its GitHub git-lfs quota.
To reproduce:
git clone [email protected]:danieljprice/phantom
Here is the output:
Cloning into 'phantom'...
remote: Enumerating objects: 36, done.
remote: Counting objects: 100% (36/36), done.
remote: Compressing objects: 100% (24/24), done.
remote: Total 27519 (delta 21), reused 25 (delta 12), pack-reused 27483
Receiving objects: 100% (27519/27519), 39.50 MiB | 4.60 MiB/s, done.
Resolving deltas: 100% (21182/21182), done.
Updating files: 100% (662/662), done.
Downloading data/cooling/cooltable.dat (28 KB)
Error downloading object: data/cooling/cooltable.dat (79d3e01): Smudge error: Error downloading data/cooling/cooltable.dat (79d3e017deb76c90eb17c1f5784c31bd7947e9932fc17339f617f72df35753c0): batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
Errors logged to /Users/daniel/repos/phantom/.git/lfs/logs/20200801T104918.246695.log
Use `git lfs logs last` to view the log.
error: external filter 'git-lfs filter-process' failed
fatal: data/cooling/cooltable.dat: smudge filter lfs failed
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'
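A possible workaround for the quota error above (assuming the LFS-tracked data files can be fetched later, or are not needed to compile): skip the LFS smudge filter during clone, so the checkout itself succeeds.

```shell
# GIT_LFS_SKIP_SMUDGE=1 tells git-lfs to leave tracked files as small
# pointer stubs instead of downloading them during checkout.
export GIT_LFS_SKIP_SMUDGE=1
echo "GIT_LFS_SKIP_SMUDGE=$GIT_LFS_SKIP_SMUDGE"
# then: git clone https://github.com/danieljprice/phantom
# and later, once quota allows: cd phantom && git lfs pull
```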
Hello,
I am new to Phantom. I have managed to run the simulation successfully, and the output files I got are in the standard format. However, I need them in HDF5 format.
I followed the instructions in the bitbucket documentation for converting to HDF5 format using the phantom2hdf5 utility.
I managed to successfully install the hdf5 package (version: 1.12.1) at the location: HDF5_DIR=/usr/lib/x86_64-linux-gnu/hdf5/serial
I used these parameters:
export CC=gcc
export F9X=gfortran
export CXX=g++
Then I followed these steps:
make HDF5=yes HDF5ROOT=$HDF5_DIR
export LD_LIBRARY_PATH=/usr/local/hdf5/lib:$LD_LIBRARY_PATH
Then in the simulation directory:
make SYSTEM=gfortran HDF5=yes PHANTOM2HDF5=yes HDF5ROOT=/usr/lib/x86_64-linux-gnu/hdf5/serial phantom2hdf5
After following these steps, the following error occurs while compiling the phantom2hdf5 utility:
../src/main/utils_dumpfiles_hdf5.f90:975:10:
975 | if (got) got_arrays%got_orig = .true
| 1
Error: Cannot assign to a named constant at (1)
make[2]: *** [Makefile:1711: utils_dumpfiles_hdf5.o] Error 1
make[2]: Leaving directory '/home/auratrik/phantom-master/build'
make[1]: *** [Makefile:17: phantom2hdf5] Error 2
make[1]: Leaving directory '/home/auratrik/phantom-master'
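For what it's worth, the compiler message points at a Fortran logical literal missing its trailing dot: `.true` should be `.true.`. A sketch of the one-character fix (the file content below is reconstructed from the error message, so treat the exact line as an assumption):

```shell
# Reconstruct the offending line and apply the one-character fix with sed.
printf 'if (got) got_arrays%%got_orig = .true\n' > utils_dumpfiles_hdf5_demo.f90
sed -i 's/= \.true$/= .true./' utils_dumpfiles_hdf5_demo.f90
cat utils_dumpfiles_hdf5_demo.f90
```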
Can I please get some help on this?
Cheers
Auratrik
The sedov test seems to have been broken by recent changes to the div B cleaning. For example, running the following command on my laptop:
make SETUP=testkd && ./bin/phantom test sedov
gives the output:
--> testing Sedov blast wave
---------------- particles set on 16 x 16 x 16 uniform cubic lattice --------------
x: -0.500 -> 0.500 y: -0.500 -> 0.500 z: -0.500 -> 0.500
dx: 6.250E-02 dy: 6.250E-02 dz: 6.250E-02
-----------------------------------------------------------------------------------------
npart = 4096 particle mass = 2.4414062500000000E-004
WARNING! force: radiation may become negative, limiting timestep
WARNING! force: radiation may become negative, limiting timestep
WARNING! force: radiation may become negative, limiting timestep
WARNING! force: radiation may become negative, limiting timestep
WARNING! force: radiation may become negative, limiting timestep
WARNING! force: radiation may become negative, limiting timestep
WARNING! force: radiation may become negative, limiting timestep
WARNING! force: radiation may become negative, limiting timestep
B-cleaning controlling timestep on 56 particles
WARNING! force: radiation may become negative, limiting timestep
(buffering remaining warnings... radiation may become negative, limiting timestep)
B-cleaning controlling timestep on 56 particles
> step 1 / 256 t = 0.3906250E-03 dt = 3.91E-04 moved 4096 in 0.11 cpu-s < | np = 4096 |
B-cleaning controlling timestep on 56 particles
> step 2 / 256 t = 0.7812500E-03 dt = 3.91E-04 moved 4096 in 0.12 cpu-s <
WARNING! evolve: N gas particles with energy = 0 : N = 1664
B-cleaning controlling timestep on 56 particles
> step 3 / 256 t = 0.1171875E-02 dt = 3.91E-04 moved 56 in 0.0064 cpu-s <
radiation diffusion controlling timestep on 72 particles
B-cleaning controlling timestep on 56 particles
> step 4 / 256 t = 0.1562500E-02 dt = 3.91E-04 moved 4096 in 0.11 cpu-s <
B-cleaning controlling timestep on 56 particles
> step 5 / 256 t = 0.1953125E-02 dt = 3.91E-04 moved 56 in 0.0071 cpu-s <
radiation diffusion controlling timestep on 8 particles
B-cleaning controlling timestep on 56 particles
> step 6 / 256 t = 0.2343750E-02 dt = 3.91E-04 moved 400 in 0.018 cpu-s <
B-cleaning controlling timestep on 56 particles
> step 7 / 256 t = 0.2734375E-02 dt = 3.91E-04 moved 64 in 0.0066 cpu-s <
B-cleaning controlling timestep on 64 particles
> step 8 / 256 t = 0.3125000E-02 dt = 3.91E-04 moved 4096 in 0.11 cpu-s <
WARNING! evolve: N gas particles with energy = 0 : N = 160
B-cleaning controlling timestep on 56 particles
> step 9 / 256 t = 0.3515625E-02 dt = 3.91E-04 moved 56 in 0.0065 cpu-s <
B-cleaning controlling timestep on 64 particles
> step 10 / 256 t = 0.3906250E-02 dt = 3.91E-04 moved 408 in 0.016 cpu-s <
B-cleaning controlling timestep on 32 particles
> step 11 / 256 t = 0.4296875E-02 dt = 3.91E-04 moved 32 in 0.0047 cpu-s <
radiation diffusion controlling timestep on 120 particles
B-cleaning controlling timestep on 64 particles
> step 12 / 256 t = 0.4687500E-02 dt = 3.91E-04 moved 2488 in 0.070 cpu-s <
WARNING! evolve: N gas particles with energy = 0 : N = 160
> step 49 / 1024 t = 0.4785156E-02 dt = 9.77E-05 moved 24 in 0.0048 cpu-s <
radiation diffusion controlling timestep on 48 particles
> step 50 / 1024 t = 0.4882812E-02 dt = 9.77E-05 moved 1128 in 0.035 cpu-s <
WARNING! evolve: N gas particles with energy = 0 : N = 160
> step 401 / 8192 t = 0.4895020E-02 dt = 1.22E-05 moved 10 in 0.0051 cpu-s <
radiation diffusion controlling timestep on 12 particles
> step 402 / 8192 t = 0.4907227E-02 dt = 1.22E-05 moved 517 in 0.016 cpu-s <
WARNING! evolve: N gas particles with energy = 0 : N = 160
radiation diffusion controlling timestep on 74 particles
> step 202 / 4096 t = 0.4931641E-02 dt = 2.44E-05 moved 2203 in 0.056 cpu-s <
WARNING! evolve: N gas particles with energy = 0 : N = 160
> step 809 / 16384 t = 0.4937744E-02 dt = 6.10E-06 moved 3 in 0.0045 cpu-s <
radiation diffusion controlling timestep on 4 particles
B-cleaning controlling timestep on 12 particles
> step 810 / 16384 t = 0.4943848E-02 dt = 6.10E-06 moved 137 in 0.0082 cpu-s <
WARNING! evolve: N gas particles with energy = 0 : N = 160
> step 103681 / 2097152 t = 0.4943895E-02 dt = 4.77E-08 moved 1 in 0.0037 cpu-s <
radiation diffusion controlling timestep on 1 particles
B-cleaning controlling timestep on 12 particles
> step 103682 / 2097152 t = 0.4943943E-02 dt = 4.77E-08 moved 59 in 0.0053 cpu-s <
WARNING! evolve: N gas particles with energy = 0 : N = 160
> step 414729 / 8388608 t = 0.4943955E-02 dt = 1.19E-08 moved 1 in 0.0036 cpu-s <
B-cleaning controlling timestep on 16 particles
> step 414730 / 8388608 t = 0.4943967E-02 dt = 1.19E-08 moved 57 in 0.0052 cpu-s <
WARNING! evolve: N gas particles with energy = 0 : N = 160
radiation diffusion controlling timestep on 8 particles
B-cleaning controlling timestep on 57 particles
> step 207366 / 4194304 t = 0.4943991E-02 dt = 2.38E-08 moved 350 in 0.014 cpu-s <
(buffering remaining warnings... N gas particles with energy = 0 : N = 160)
radiation diffusion controlling timestep on 43 particles
B-cleaning controlling timestep on 64 particles
> step 103684 / 2097152 t = 0.4944038E-02 dt = 4.77E-08 moved 1111 in 0.031 cpu-s <
> step 26543105 / 536870912 t = 0.4944039E-02 dt = 1.86E-10 moved 1 in 0.0043 cpu-s <
get_newbin: dt_ibin(0) = 1.0000000E-01
get_newbin: dt_ibin(max) = 1.8626451E-10
get_newbin: dt = dt_radiation
FATAL ERROR! get_newbin: step too small: bin would exceed maximum : dt = 4.852E-11
Currently the test suite is failing with a seg fault on "make testkd":
https://phantomsph.bitbucket.io/nightly/build/20200731.html
The run with DEBUG=yes fails because particle positions are NaN, but the corrupt output also suggests some memory overflow has occurred:
ERROR: density iteration failed after 50 iterations
hnew = 218.350461619599 hi_old = 2.399340097922070E-002 nneighi = 1
rhoi = 3.170171611510350E-012 gradhi = 1.22580131836267
error = 1516.73969166702 tolh = 1.000000000000000E-005
itype = 1
x,y,z = NaN NaN NaN
vx,vy,vz = NaN NaN NaN
ERROR: density iteration failed after 50 iterations
FATAL ERROR! densityiterate: could not converge in density on particle 62: error = 1.517E+03
ERROR: density iteration failed after 50 iterations
0Õ �����∏á�Âj+������������������ 0Õ �����∏á�Âj+������������������ 0Õ �����∏á�Âj+������������������ ERROR: density iteration failed after 50 iterations
Also, GitHub pipelines are failing because of a seg fault in the test suite with MPI; these are likely related.
Both these issues need to be identified and fixed before any further merges occur!