princetonuniversity / athena

Athena++ radiation GRMHD code and adaptive mesh refinement (AMR) framework

Home Page: https://www.athena-astro.app

License: BSD 3-Clause "New" or "Revised" License

C++ 88.28% Python 10.00% C 0.70% Makefile 0.08% Shell 0.94%

athena's Introduction

Project Status: Active – The project has reached a stable, usable state and is being actively developed.

Athena++ radiation GRMHD code and adaptive mesh refinement (AMR) framework

Please read our contributing guidelines for details on how to participate.

Citation

To cite Athena++ in your publication, please use the following BibTeX to refer to the code's method paper:

@article{Stone2020,
	doi = {10.3847/1538-4365/ab929b},
	url = {https://doi.org/10.3847%2F1538-4365%2Fab929b},
	year = 2020,
	month = jun,
	publisher = {American Astronomical Society},
	volume = {249},
	number = {1},
	pages = {4},
	author = {James M. Stone and Kengo Tomida and Christopher J. White and Kyle G. Felker},
	title = {The Athena$\mathplus$$\mathplus$ Adaptive Mesh Refinement Framework: Design and Magnetohydrodynamic Solvers},
	journal = {The Astrophysical Journal Supplement Series},
}

Additionally, you can add a reference to https://github.com/PrincetonUniversity/athena in a footnote.

Finally, we have minted DOIs for each released version of Athena++ on Zenodo. This practice encourages computational reproducibility, since you can specify exactly which version of the code was used to produce the results in your publication. 10.5281/zenodo.4455879 is the DOI which cites all versions of the code; it will always resolve to the latest release. The Zenodo record provides BibTeX and other citation info related to these DOIs, e.g.:

@software{athena,
  author       = {Athena++ development team},
  title        = {PrincetonUniversity/athena: Athena++ v24.0},
  month        = jun,
  year         = 2024,
  publisher    = {Zenodo},
  version      = {24.0},
  doi          = {10.5281/zenodo.11660592},
  url          = {https://doi.org/10.5281/zenodo.11660592}
}

athena's People

Contributors

alwinmao, apbailey, bcaddy, beiwang2003, c-white, changgoo, dgagnier, doraemonho, dradice, felker, forrestglines, jmshi, jmstone, jzuhone, kahoooo, luet, msbc, munan, pdmullen, pgrete, russellgoke, sanghyukmoon, swdavis, tomidakn, tomo-ono, venkat-1, yanfeij, zhuzh1983


athena's Issues

Add code structure diagrams to Wiki

This is just a notification that I've added a wiki page with a number of diagrams detailing the organizational structure of the code. They only cover the main branch, but I can keep them updated as new branches are merged into it.

These are intended to be broad overviews of objects and tasks, perhaps useful to new developers starting to work with the code.

(Sorry about having to download PDFs. Github overrides the HTML to force them to be downloaded rather than viewed. I might wrestle them into another format at some point.)

ParameterInput should be generally accessible

It would be good to be able to access the ParameterInput object whenever desired, specifically to get or set the values of parameters within UserWorkInLoop or something like that.

The use case I had in mind would be a parameter that changes during the course of the simulation that one would want to store in a restart file. For example, I have a moving gravitational potential that has a center that updates its location during the course of the run, but I would need to have the value of its location stored in the restart file.

I had an implementation of this that works for me, where I added a reference to it from the Mesh class, but in case anyone had a better idea I thought I would make an issue and solicit ideas before submitting a pull request.
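For illustration, here is a minimal sketch of the proposed approach, assuming the Mesh keeps a reference to the ParameterInput object (exposed below as a hypothetical pin member, as suggested above). GetReal()/SetReal() are the existing ParameterInput accessors; the "problem"/"center_x1" parameter names and the drift speed are placeholders only.

void MeshBlock::UserWorkInLoop() {
  ParameterInput *pin = pmy_mesh->pin;        // hypothetical reference stored in the Mesh class
  Real xc = pin->GetReal("problem", "center_x1");
  xc += 1.0*pmy_mesh->dt;                     // move the potential center this cycle (example drift)
  pin->SetReal("problem", "center_x1", xc);   // updated value would then land in the next restart dump
}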

Problem with boundary functions in 1D hydro shock tube

The following simple test segfaults:

python configure.py --prob=shock_tube --coord=cartesian -debug
make clean
make
cd bin
./athena -i ../inputs/hydro/athinput.sod

Using a debugger, it seems the problem is in BoundaryValues::ReceiveAndSetFluidBoundary(), line 423 of bvals.cpp, when dir is inner_x2. The function pointer is bad. This is due to the conditionals if (pmb->block_size.nx2 > 1) and if (pmb->block_size.nx3 > 1) (lines 124 and 186) of the BoundaryValues constructor being skipped in 1D problems, so the x2 and x3 boundary functions are never set.

Removing these conditionals does not solve the problem. Things first go wrong when bvals.cpp:423 is forced to actually call PeriodicInnerX2() rather than a dead pointer. If the j dimension is singleton, then js and je will be 0, and line 90 of periodic_fluid.cpp will write out of the array bounds (causing hard-to-diagnose errors where, for example, the Coordinates object's pmy_block pointer will be nulled).

Looking back at older versions of the code, the equivalent of bvals.cpp:423 used to be protected by an extra conditional. Perhaps this line should be changed to

{
  if ((dir == inner_x1 or dir == outer_x1)
      or ((dir == inner_x2 or dir == outer_x2) and pmb->block_size.nx2 > 1)
      or ((dir == inner_x3 or dir == outer_x3) and pmb->block_size.nx3 > 1))
    FluidBoundary_[dir](pmb,dst);
}

Restart output problem due to next_time=0

In restart files, the "next_time" and "file_number" entries for each output block are always 0 for some reason. As a result, no matter which rst file I restart from, the code outputs everything at every timestep at the beginning of the run, because "current time > next_time" is always satisfied. These files then overwrite outputs written before the restart time.

1D Newtonian MHD broken with periodic boundary

The newtonian/mhd_convergence_1d test no longer terminates in a reasonable time, because the timesteps become very small. They should stay roughly constant, since this is a linear wave. The problem can be traced to the commit that fixed the earlier fix to the EMF correction. Something goes wrong due to the second condition in the if statement.

2D MHD, 1D hydro, and 2D hydro all still work.

Add boundary functions based on primitive variables

I have committed the new boundary functions using primitive variables. Users must rewrite their own boundary functions so that they use and return primitive variables. Please note that the boundary functions now take a pointer to the Coordinates class as well.

There are some leftover tasks which should be cleaned up shortly.

  1. Merge Hydro and Field boundary functions
  2. Modify all the problem generators which have their own boundary functions (except dmr and rt)
  3. GR may need additional treatment for SMR boundaries
  4. Tests

I am going to work on AMR for a while, so meanwhile I want users to test the code.
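As a rough illustration only (not the exact interface, which has changed between versions), a user boundary function under the new convention receives the primitive array and a Coordinates pointer and fills the ghost zones in place. The signature below follows the current style and should be checked against src/bvals/ in your version of the code; the outflow logic is just an example.

void MyOutflowInnerX1(MeshBlock *pmb, Coordinates *pco, AthenaArray<Real> &prim,
                      FaceField &b, Real time, Real dt,
                      int il, int iu, int jl, int ju, int kl, int ku, int ngh) {
  for (int n=0; n<(NHYDRO); ++n)
    for (int k=kl; k<=ku; ++k)
      for (int j=jl; j<=ju; ++j)
        for (int i=1; i<=ngh; ++i)
          prim(n,k,j,il-i) = prim(n,k,j,il);  // copy the first active cell into the ghost zones
}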

Compilation failure when using HDF5 without MPI

The failure occurs in src/outputs/athena_hdf5.cpp:

src/outputs/athena_hdf5.cpp:208:40: error: use of undeclared identifier 'num_blocks_active'
    num_blocks_global=num_blocks_local=num_blocks_active;

Context (lines 196-210):

#ifdef MPI_PARALLEL
    int *n_active=new int[Globals::nranks];
    MPI_Allgather(&nba, 1, MPI_INT, n_active, 1, MPI_INT, MPI_COMM_WORLD);
    num_blocks_local=n_active[Globals::my_rank];
    first_block=0;
    for(int n=0; n<Globals::my_rank; n++)
      first_block+=n_active[n];
    num_blocks_global=0;
    for(int n=0; n<Globals::nranks; n++)
      num_blocks_global+=n_active[n];
    delete [] n_active;
#else
    num_blocks_global=num_blocks_local=num_blocks_active;
    first_block=0;
#endif

It appears that num_blocks_active is never defined anywhere.
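A plausible fix, assuming nba holds the local active-block count (as its use in the MPI branch above suggests), is to use it directly in the serial branch:

#else
    num_blocks_global=num_blocks_local=nba;  // nba replaces the undeclared num_blocks_active
    first_block=0;
#endif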

Conflicting restart and input parameters for writing output

I'm not entirely sure if this is a bug or a feature. If you start a simulation from a restart file and provide a new athinput file that changes the cadence (dt) of the outputs, then something unexpected happens. The next_time for each output block is read from the restart file. This was computed using the old dt, so it appears as if the dt setting from the new input file is ignored (until next_time is reached). The workaround is to specify next_time manually in the athinput file, but I think it would be a good idea to adjust next_time automatically when dt conflicts between the restart and athinput files.

Chip bugs

Does anyone have a good handle on how the recent patches to Meltdown/Spectre affect speed? I'm wondering whether there are any changes to how to balance more cores+small meshblocks vs communication cost or any changes to the best compilers / options to use when compiling.

Ultimately I'm wondering whether there's anything we can do other than wait for Intel to release better compilers. At least on Perseus I've already noticed a ~50% slowdown in zone-cycles/second.

Add double precision option for HDF5 output

Currently our HDF5 outputs only support single precision for most data (see the usage of H5T_IEEE_F32BE in athena_hdf5.cpp). The old TODO list had a suggestion for implementing a double precision option. If we do not want this, this issue can be closed.
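A minimal sketch of what such an option could look like, assuming a runtime output flag (here a hypothetical double_prec parameter) selects the on-disk type. H5T_IEEE_F32BE is the type already used in athena_hdf5.cpp and H5T_IEEE_F64BE is its standard double-precision counterpart; callers would also need to stage the data as double.

#include <hdf5.h>

// Pick the on-disk datatype from a runtime flag instead of hard-coding
// single precision.
hid_t SelectHDF5FloatType(bool double_prec) {
  return double_prec ? H5T_IEEE_F64BE : H5T_IEEE_F32BE;
}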

Formatted table output has column ordering inconsistent with internal storage and Athena 4.2

Is there a reason for the pgas data column to be written before the velocity components in the formatted table output files? E.g., for a primitive data .tab file for a 1D adiabatic MHD simulation, the file header is

# i       x1v      j       x2v         rho          press          vel          Bcc

while internally the code stores the primitive hydrodynamic variables with the pressure at index IPR=4, i.e. after the velocity components.

Further, the corresponding Athena 4.2 .tab file has a header with

# [1]=i-zone [2]=x1 [3]=d [4]=V1 [5]=V2 [6]=V3 [7]=P [8]=B1c [9]=B2c [10]=B3c

I was performing some 1D and 2D test comparisons to Athena 4.2, and the inconsistency required some additional bookkeeping in the analysis scripts. Is there a reason for the change?

Regression test suite with MPI causes nonblocking IO write errors on macOS

This is a subtle problem that I have noticed on my MacBook Pro and when using Travis CI with osx VM builds over the past few months. I have just now diagnosed the issue, but it would be good if others can reproduce this and discuss a workaround (if full macOS support is desired).

Bug report

Bug summary

The make() command in tst/regression/scripts/utils/athena.py, invoked by a regression test module's prepare() step, fails in the linker with

make: write error: stdout

being written in the middle of the stdout stream.

The challenge in tracking down this bug is that it manifests in any regression test only after an MPI-enabled regression test (which itself passes). Currently, the only regression test scripts that use MPI are:

  • mpi/mpi_linwave.py
  • grav/jeans_3d.py

Code for reproduction

cd tst/regression
python run_tests.py mpi/mpi_linwave hydro/sod_shock

Or, another example:

cd tst/regression
python run_tests.py grav/jeans_3d outputs/all_outputs

The error only occurs in the make commands invoked in the separate test scripts following the MPI regression test. In other words, even though the mpi_linwave.py first compiles an MPI binary and then a serial binary in the same module, there is no write error in that test.

Furthermore, the bug only occurs either when:

  1. Both tests are run in the same Python command, as above
  2. Multiple python run_tests.py testname commands are executed from the same parent process/script.

So, command line execution of

cd tst/regression
python run_tests.py grav/jeans_3d
python run_tests.py outputs/all_outputs

works fine, but running those commands in a Bash script will produce the error. Hence, in VM environments like Travis CI, which wraps the user's commands in a command called travis_run_script, this issue may appear.

Actual outcome

...

g++  -O3 -o /Users/kfelker/Desktop/athena-trunk-clean/tst/regression/bin/athena /Users/kfelker/Desktop/athena-trunk-clean/tst/regression/obj/main.o /Users/kfelker/Desktop/athena-trunk-clean/tst/regression/obj/globals.o /Users/kfelker/Desktop/athena-trunk-clean/tst/regression/obj/parameter_input.o /Users/kfelker/Desktop/athena-trunk-clean/tst/regression/obj/get_boundary_flag.o /Users/kfelker/Desktop/athena-trunk-clean/tst/regression/obj/reflect.o /Users/kfelker/Desktop/athena-trunk-clean/tst/regression/obj/bvals_mg.o /Users/kfelker/Desktop/athena-trunk-clean/tst/regression/obj/bvals_cc.o /Users/kfelker/Desktop/athena-trunk-clean/tst/regression/obj/polarwedge.o /Users/kfelker/Desktop/athena-trunk-clean/tst/regression/obj/bvals_base.o /Users/kfelker/Desktop/athena-trunk-clean/tst/regression/obj/flux_correction_fc.o /Users/kfelker/Desktop/athena-trunk-clean/tst/regression/obj/bvals_grav.o /Users/kfelker/Desktop/athena-trunk-clean/tst/regression/obj/flux_correction_cc.o /Users/kfelker/Desktop/athena-trunk-clean/tsmake: write error: stdout
Traceback (most recent call last):
  File "run_tests.py", line 76, in main
    module.prepare()
  File "/Users/kfelker/Desktop/athena-trunk-clean/tst/regression/scripts/tests/hydro/hydro_linwave.py", line 21, in prepare
    athena.make()
  File "/Users/kfelker/Desktop/athena-trunk-clean/tst/regression/scripts/utils/athena.py", line 46, in make
    .format(err.returncode,' '.join(err.cmd)))
AthenaError: Return code 1 from command 'make -j EXE_DIR:=/Users/kfelker/Desktop/athena-trunk-clean/tst/regression/bin/ OBJ_DIR:=/Users/kfelker/Desktop/athena-trunk-clean/tst/regression/obj/'
---> Error in scripts/tests/hydro/hydro_linwave.py

Results:
    mpi.mpi_linwave: passed
    hydro.sod_shock: failed -- unexpected failure in prepare() stage
    hydro.hydro_linwave: failed -- unexpected failure in prepare() stage

Summary: 1 out of 3 tests passed

I have also observed:

IOError: [Errno 35] Resource temporarily unavailable

instead of make: write error when using alternative subprocess commands; see below.

Version Information

My current MacBook Pro environment is:

  • Operating System: macOS Sierra 10.12.6
  • Python Version: Python 2.7.13, installed by Homebrew
  • C++ compiler version: macOS system clang
Apple LLVM version 9.0.0 (clang-900.0.38)
Target: x86_64-apple-darwin16.7.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
  • MPI version: MPICH 3.2.0, installed by Homebrew. mpicxx -show:
clang++ -Wl,-flat_namespace -Wl,-commons,use_dylibs -I/usr/local/Cellar/mpich/3.2_3/include -L/usr/local/Cellar/mpich/3.2_3/lib -lmpicxx -lmpi -lpmpi

corresponding to one of these Homebrew-built binaries (bottles):

...
commit d414b50c6744a47b1cbfa72f716bf8b39720684d
Author: BrewTestBot <[email protected]>
Date:   Mon Sep 18 18:37:15 2017 +0000

    mpich: update 3.2_3 bottle.

commit 54c585fbd1d02b4271a5e154e1cda4458944cfb0
Author: BrewTestBot <[email protected]>
Date:   Thu May 4 02:36:36 2017 +0000

    mpich: update 3.2_3 bottle.
...

In the process of debugging, I have tried many different versions of compilers, Python environments, build options, and macOS versions. It also occurs with:

  • gcc 7.1 or 4.9 installed by Homebrew
  • System-managed or user Homebrew-managed Python 2.7 installations. Also tried:
    • virtualenv for Python.
    • Starting Python in unbuffered binary stdout and stderr mode, via python -u
  • OpenMPI or MPICH installed by Homebrew or source, compiled with gcc or clang
  • GNU Make versions 4.2.1, 3.8.1, either macOS system or Homebrew managed
    • Serial or parallel Make
    • Tried --output-sync option for GNU Make version > 4.0 to ensure that parallel make output is buffered and well-ordered.
  • Various releases of macOS 10.10, 10.11, 10.12

Explanation

I have encountered related bug reports in the Travis CI, GNU Make, and macOS communities, but never found a complete explanation until recently. The 12/1/17 reply on travis-ci/travis-ci#4704 explains that this bug is caused by EAGAIN ("try again / data not ready") returned from a nonblocking socket.

To query if O_NONBLOCK is set, you can use:

python -c 'import os,sys,fcntl; flags = fcntl.fcntl(sys.stdout, fcntl.F_GETFL); print(flags&os.O_NONBLOCK);'

Embedding this command in the regression test driver returns 0 (O_NONBLOCK disabled) after every test until an MPI-enabled test executes, then it returns a nonzero number. So, the open question is: why do the MPI-enabled regression test scripts turn on nonblocking IO, and why does it occur only after the overall completion of the test?

Possible fixes

  • Disable nonblocking IO after all regression tests, or after an MPI-enabled regression test. I am currently placing the following command in my driver script:
python -c 'import os,sys,fcntl; flags = fcntl.fcntl(sys.stdout, fcntl.F_GETFL); fcntl.fcntl(sys.stdout, fcntl.F_SETFL, flags&~os.O_NONBLOCK);'
  • Redirect the output of the make() command in athena.py to a file by replacing the subprocess.check_call() calls with subprocess.Popen() and pipes. Or, suppress the output by replacing them with subprocess.check_output() (without ever forwarding the stream to stdout).
  • Figure out how to prevent the MPI-enabled regression test from switching to nonblocking IO in the first place.

Standardize sqrt() and cbrt() calls for C++11 compliance and portability

This note is a reminder that we are inconsistent and technically non-compliant when it comes to square root and cube root functions, though so far no harm has come to us.

Pre-C++11, <cmath> was required to have std::sqrt(), and this was not allowed to be included in the global namespace. There was no cube root at all.

With C++11, <cmath> is allowed to also include sqrt() in the global namespace. std::cbrt() is also included.

Our configure script does not specify a standard, so on all supported compilers we end up using a pre-C++11 standard. Thus the bare sqrt() calls and all the cbrt() calls only work due to the compilers' good graces. Note also that empirically C++11 compilers, while guaranteed to have std::cbrt(), tend not to support bare cbrt().
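The compliant spelling is simply to include <cmath> and qualify both calls, e.g.:

#include <cmath>

// C++11 guarantees std::sqrt() and std::cbrt() in namespace std; the bare
// global-namespace forms are only an (unreliable) implementation courtesy.
double root_examples(double x) {
  return std::sqrt(x) + std::cbrt(x);
}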

Restart regression test fails due to FFT input parameter parsing

Travis CI shows that the outputs/all_outputs.py regression test fails since the FFT PR.

The initial Orszag-Tang athena.run() works fine with the mhd/athinput.test_outputs file, but then the first restart fails.

cycle=80 time=2.06115326723512e-01 dt=2.41836688314429e-03
Terminating on cycle limit
time=2.06115326723512e-01 cycle=80
tlim=1.00000000000000e+00 nlim=80
cpu time used  = 4.42829012870789e-01
zone-cycles/cpu_second = 7.39969562500000e+05
### FATAL ERROR in function [ParameterInput::GetReal]
Parameter name 'dedt' not found in block 'problem'
### FATAL ERROR in function [IOWrapper:Open]
Input file 'TestOutputs.00004.rst' could not be opened

Should the src/fft/ directory be conditionally compiled depending on the configure.py options, or should the GetReal() calls in fft/turbulence.cpp be changed to GetOrAddReal()?

I don't know enough about restart files to understand why this doesn't occur in the normal run, but does occur for the restart.
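For reference, the second option would amount to something like the line below; whether 0.0 is an acceptable default for dedt is a separate physics question, so this is only a sketch.

// In fft/turbulence.cpp: tolerate a missing <problem>/dedt entry on restart.
dedt = pin->GetOrAddReal("problem", "dedt", 0.0);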

B-field prolongation does not preserve curl B

I do not think this is a serious problem, but I am creating this ticket as a reminder. The current implementation of the magnetic field prolongation (based on Toth & Roe) preserves div B (if it is initially 0) but does not guarantee that curl B is preserved in curvilinear coordinates. Because preserving curl B is not a necessary condition for MHD, I guess it is OK, but I'm a bit concerned. Just keep it in mind in case anything happens.

Cell centered magnetic fields

Because the definition of the cell-centered magnetic fields is not perfect, they can look non-uniform in curvilinear coordinates. This is especially prominent with SMR. The error is relatively small and probably does not affect the calculation badly (at least the face-centered magnetic fields are correct), so I believe it is not serious, but I am posting this here just in case.

Problem with fields at physical boundaries in 1D

In 1D SR shock tubes (possibly also Newtonian; have not checked), something is going wrong with the B-fields at the boundary. Using outflowing boundary conditions (so the boundary area should not change at all for many timesteps), face-centered Bz values are wrong after the first half step (at least with MUB shock tube 2, though all MHD shock tubes are failing regression tests for presumably similar reasons). The error is in the two ghost cells and the first active cell at this point.

This is all done in one mesh block on one core.

Multigrid gravity NGHOST > 2 causes artifacts

Kengo is already aware of the issue, but as a warning for anyone else trying to use higher-order reconstruction (PPM), or even just PLM, with NGHOST > 2 and multigrid gravity: there is a bug where artifacts appear along meshblock boundaries. These make the simulation wrong at short times; at long times, NaNs eventually appear in the momentum, or all of the density vacates the box.

Need warnings for improper use of polar boundaries

The caveats with polar boundaries are now enumerated in the wiki. The code does not necessarily check for these. In particular, we should probably do the following (a sketch of these checks appears after the list):

  1. Check that the angle limits are the exact values of 0, pi, 2*pi.
  2. Check that nx3 is even, and that nrbx3 comes out to either 1 or an even number.
  3. Check that all blocks along a pole are at the same level of refinement, in the case of SMR.
  4. Prohibit running the code in polar+AMR until these are made compatible.
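Here is a minimal sketch of checks 1, 2, and 4, assuming they run somewhere the mesh dimensions are known (e.g. the Mesh constructor). The member names follow RegionSize, and the error mechanism mirrors the usual stringstream + runtime_error pattern; check 3 needs the block tree and is omitted.

#include <sstream>    // stringstream
#include <stdexcept>  // runtime_error

void CheckPolarBoundaries(const RegionSize &ms, int nrbx3, bool adaptive) {
  const Real pi_exact = 3.14159265358979323846;
  std::stringstream msg;
  if (ms.x2min != 0.0 || ms.x2max != pi_exact
      || ms.x3min != 0.0 || ms.x3max != 2.0*pi_exact) {
    msg << "Polar boundaries require x2 in [0, pi] and x3 in [0, 2*pi] exactly";
    throw std::runtime_error(msg.str().c_str());
  }
  if (ms.nx3 % 2 != 0 || (nrbx3 != 1 && nrbx3 % 2 != 0)) {
    msg << "Polar boundaries require even nx3 and nrbx3 equal to 1 or an even number";
    throw std::runtime_error(msg.str().c_str());
  }
  if (adaptive) {
    msg << "Polar boundaries are not yet compatible with AMR";
    throw std::runtime_error(msg.str().c_str());
  }
}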

FFTBlock.norm_factor_ may be uninitialized

If the set_norm argument to FFTDriver::InitializeFFTBlock is false and FFTBlock.SetNormFactor is not called then FFTBlock.norm_factor_ remains uninitialized. I suggest either initializing FFTBlock.norm_factor_ to 1, or set it to 0 and raise an error if the value remains zero.
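Either suggestion is a one-line change; both are sketched below using the member and method names quoted above (placement within the FFTBlock constructor or the transform call is left open, and <stdexcept> is needed for the second option).

// Option 1: default to unity so an un-normalized transform is simply unscaled.
norm_factor_ = 1.0;

// Option 2: default to zero and guard before the factor is applied.
if (norm_factor_ == 0.0)
  throw std::runtime_error("FFTBlock::norm_factor_ was never set; "
                           "call SetNormFactor() or pass set_norm=true");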

Need better default behavior when setting start_time

If a user specifies the time/start_time input parameter, then the initial output*/next_time parameters still default to 0 if omitted. Because these are only ever incremented by dt, next_time will always trail the current time by start_time, so output will be written every timestep.

A user can always set all the next_time parameters manually, but I can't imagine ever wanting the default to be 0 when start_time is not 0. I think changing the linked line will fix this, but I wanted to double-check.
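One possible form of the fix, assuming the output parameters are read through ParameterInput (the op/block_name names here are illustrative), is to default next_time to the run's start time rather than to 0:

// Default next_time to start_time when the user omits it from the output block.
op.next_time = pin->GetOrAddReal(op.block_name, "next_time", start_time);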

Add support for counterrotation in GR torus

The GR torus problem generator currently only allows for corotating flows. Counterrotation could be achieved by modifying the CalculateLFromRPeak() and CalculateRPeakFromL() functions.

GR-AMR incompatibility

This is a known issue I keep forgetting to address. When AMR creates or destroys blocks, or even when it is just shuffling blocks around, the conserved variables are copied but the primitives and half-step primitives are not. Currently GR uses these values as part of its variable inversion fallback, and having only zero-initialized primitives can lead to nan values.

The fix should be relatively straightforward.

User physical src term cannot be enrolled when restarting

With the new boundary condition interface, I see that a new “Mesh::InitUserMeshProperties” function has been added to the problem generators; it is called both when we start a new run and when we restart. This is nice, since user boundary conditions can be enrolled in this function when we restart.

However, a user physical source term still cannot be enrolled when we restart. “EnrollSrcTermFunction” belongs to MeshBlock; it is not a property of the whole Mesh. If we call it in “Mesh::InitUserMeshProperties”, we have to iterate over every meshblock, but if we call it in “ProblemGenerator”, it is not called when a run is restarted.

Before the new BC interface was written, my old way around this was to add an extra "RestartInitialization" function to the problem generator, where I enroll the user source term function (like Athena 4.2), and to have the main function call it when res_flag==1. This kind of hack is not good for future users.

Including the pole in cylindrical coordinates

Users have started to bring up use cases for allowing cylindrical coordinates to go all the way to the R=0 pole and treating the pole correctly. This can certainly be done, given that R=0 in cylindrical coordinates behaves exactly like theta=0,pi in spherical coordinates, which we support. Of course the modifications would be extensive. Still, this should be done eventually.

AMR doesn't copy ghost zones?

It looks like when new coarser or finer blocks are created with AMR, or when blocks are moved from one mesh to another, only the active values are set. Is this the case?

This can be an issue if someone has implemented "do-nothing" boundary conditions that rely on the physical ghost zones to stay at their initial values, even when the blocks at the physical boundary are never refined.

Is it possible in theory to simply extend the communication to include these ghost values? Unless/until this is done we should document somewhere that doing the naive thing of defining a no-op function is not compatible with AMR.

On a side note, we should probably make a do-nothing BC available as a default choice.

Blast problem missing header needed to compile

The blast.cpp problem generator (src/pgen/blast.cpp) appears to be missing a header needed for compilation. Adding:

#include <stdexcept>  // runtime_error   

to line 28 fixes the issue for me.

I'm new to this (still working through the tutorial pages), so I'm not sure what the process for contributing fixes is.

Morgan

Table reader/loader

Some of the developmental branches require loading tables of data (e.g. EOS tables, opacity tables for radiation). We should centralize this in some way so that each branch/person isn't writing their own unique loader. I suggest we make an "inputs" directory with options to read various input files (e.g. HDF5 files, plain text tables). I also suggest moving parameter_input.[hc]pp to this inputs directory. At the very least, I think it would be nice to have some sort of guideline for loading input tables.

Alfven wave test fails due to incorrect error thresholds

The mhd/cpaw regression test currently fails with "error in L-going fast wave too large". The error is about 2e-4, while the test calls for 4e-8. Is this really a failure, or just too strict of a test? For what it's worth, the test shows convergence in the R-going wave (error goes down by a factor of 4.75 when resolution is doubled), and the R-going and L-going waves have the same error.

(I was notified of this by a new user trying out the public version, worrying if there was something broken in the code.)

`athena_read.py` HDF5 reader and file-level attributes

athdf() does not have the capability to return file-level HDF5 attributes in its output dictionary, correct?

It would be convenient to query such attributes directly, like Time, without referring to the .hst file.

Need OMP-ready scratch array for relativistic MHD Riemann solver

The relativistic Riemann solvers use a 1D array, b_normal_, to store the normal magnetic field. In SR this is simply extracted from the 3D array given to the solver. It is more useful in GR, where it stores transformed fields (since the 3D array cannot be overwritten).

The other similar scratch arrays allocated by fluid_integrator.cpp are portioned into chunks by van_leer2.cpp, so that the Riemann solver called by each thread has its own chunk to work with. In this way the Riemann solver does not need to know about multithread parallelism.

As written, b_normal_ is shared by all threads, so multithreaded runs should fail. (They happen to not fail in 1D problems where the only variation is in the i-direction, since each Riemann solver is doing the same thing.)

This should be dealt with after the final form of the Riemann solver is settled. (There is still an issue with how to do frame transformations.)
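One way to do the chunking described above, sketched under the assumption that the scratch array gains a leading thread dimension (function and variable names are illustrative):

#ifdef OPENMP_PARALLEL
#include <omp.h>
#endif

// Allocate one 1D slice of the scratch array per OpenMP thread, so the
// Riemann solver never shares b_normal_ between threads.
void AllocateNormalFieldScratch(AthenaArray<Real> &b_normal, int ncells1) {
  int nthreads = 1;
#ifdef OPENMP_PARALLEL
  nthreads = omp_get_max_threads();
#endif
  b_normal.NewAthenaArray(nthreads, ncells1);
}

// Inside the solver, each thread would then index its own row, e.g.
//   b_normal(tid, i) with tid = omp_get_thread_num().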

UserWorkAfterLoop is called before final outputs

Right now this section of code

pmesh->UserWorkAfterLoop(pinput);

comes just before this block from athena/src/main.cpp (lines 402 to 413 at commit 13808e7):

// make the final outputs
try {
  pouts->MakeOutputs(pmesh,pinput,true);
}
catch(std::bad_alloc& ba) {
  std::cout << "### FATAL ERROR in main" << std::endl
            << "memory allocation failed during output: " << ba.what() << std::endl;
#ifdef MPI_PARALLEL
  MPI_Finalize();
#endif
  return(0);
}

The problem occurs when users de-allocate arrays in UserWorkAfterLoop() that are needed in creating output files.

Using the provided ruser_mesh_data arrays and such is a workaround, since these are de-allocated even later when the Mesh is deleted. However, these arrays bloat restart files and cannot be changed in size or number between restarts, so it's nice to have the option to manually allocate and de-allocate arrays in the problem generator.

Unless there are objections I can swap the order of the above lines of code.

Can one output file have multiple slices?

My understanding is that each object of the "OutputType" class corresponds to a single output file (per output time step, per block). It also seems that each OutputType can only have a single slice. If I want to save multiple (~30) adjacent slices to file, is there a way to have all the slices that are in the same block be written to a single file (per output time step)?

Create Slack workspace for Athena++

Would anyone be interested / would try Slack as a platform for discussion of Athena++ development? Issues and pull requests on the GitHub repository would still be the main forum to discuss development details, but Slack could be a useful centralized place to share early results, ask general questions, and learn what others are working on and developing.

I think it would be much better than point-to-point email discussions for many topics. The Free Slack plan stores 10,000 messages of searchable history. Those who already use Slack (which I suspect is many of you) know that it is a very simple but flexible chat platform. You can easily attach plots, files, and inline code in your messages.

I have a private workspace setup under athena-pp.slack.com that I have been using to monitor the commit activity and test results of my forked repository, but this content is in a separate channel. The workspace could be easily extended to serve a more general role.

Compilation error -- accessing invalid member of RegionSize

The following basic compilation results in an error

python configure.py -b --prob shock_tube --coord cartesian
make clean
make

The error with gcc includes

src/mesh.cpp: In constructor ‘Mesh::Mesh(ParameterInput*, int)’:
src/mesh.cpp:414: error: ‘struct RegionSize’ has no member named ‘x1size’
src/mesh.cpp:415: error: ‘struct RegionSize’ has no member named ‘x1size’

Indeed, struct RegionSize defined in mesh.hpp has no x1size member.
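Since RegionSize does carry x1min and x1max, the intent of mesh.cpp:414 can presumably be recovered by computing the extent directly; this is a guess at the fix, not necessarily the committed one:

Real x1len = mesh_size.x1max - mesh_size.x1min;  // extent formerly referred to as x1size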

Allow the regression test suite to accept platform-specific compiler and execution options

I am unable to get the current version of the code to pass the mpi/mpi_linwave.py regression test with Intel compilers, either with OpenMPI or Intel MPI. The code errors after the first MPI run in the test, with:

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 148996 RUNNING AT perseus
=   EXIT CODE: 11
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================

On the Perseus cluster at PICSciE, this occurs with the following compilers loaded via modules:

  • intel-mpi/intel/2017.4/64
  • intel-mpi/intel/2018.1/64
  • intel-mpi/gcc/2018.1/64
  • openmpi/intel-16.0/1.10.2/64
  • openmpi/intel-17.0/1.10.2/64

However, it passes with:

  • openmpi/gcc/1.10.2/64

If anyone can quickly test some of these compilers with the MPI regression test on other platforms, it would be much appreciated.

Setting NGHOST at runtime

Currently NGHOST is hard-coded in defs.hpp.in as a constant fixed at configuration time, but for higher-order reconstruction this must be flexible. Since the reconstruction order can be set at runtime, I think it is better to make NGHOST a normal variable, initialized at the beginning of a simulation according to the reconstruction order and the use of SMR/AMR.

Note that for SMR/AMR, NGHOST must be even. For third-order reconstruction, NGHOST=3 is sufficient on a uniform grid, but it must be 4 with SMR/AMR because of the prolongation algorithm.

Add optional runtime parameter for reducing/suppressing per-cycle summary info

For especially long simulations, having each cycle's time and timestep printed to stdout can be prohibitive for remote runs that save the output stream to an ASCII file. E.g. running the full regression test suite quickly hits the maximum log file size for Travis CI.

The user may still desire some summary info, say every 10,000 timesteps, so a flag that completely turns off the output stream is sub-optimal. It would be simple to add an optional runtime parameter, say dcycle_out=n, that controls the interval, with 0 corresponding to complete suppression. But which input block would it best fit in: <time>, <job>, ...?
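A sketch of how the parameter could be read and used, assuming it ends up in the <time> block (that placement is exactly the open question above); dcycle_out=1 reproduces the current behavior and 0 suppresses the per-cycle line entirely:

#include <iostream>

// Read once at startup (GetOrAddInteger keeps old input files working):
int dcycle_out = pin->GetOrAddInteger("time", "dcycle_out", 1);

// In the main integration loop:
if (dcycle_out != 0 && pmesh->ncycle % dcycle_out == 0) {
  std::cout << "cycle=" << pmesh->ncycle << " time=" << pmesh->time
            << " dt=" << pmesh->dt << std::endl;
}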
