amrvac / amrvac
71 stars · 9 watchers · 43 forks · 210.89 MB

MPI-AMRVAC: A Parallel Adaptive Mesh Refinement Framework

Home Page: https://amrvac.org/

License: GNU General Public License v3.0

Makefile 1.26% Perl 0.48% C 0.01% Shell 0.26% Fortran 89.55% Python 2.88% Assembly 2.96% D2 2.61%

amrvac's People

Contributors

99xc41, alj-mhd, bartripperda4188, beevageeva, chverbeke, diondonne, fabsilfab, finmacdov, floriandriessen, hmeheut, ileyk, ileyk-zz, jannisteunissen, jorishermans96, kiradorn, n-claes, nanami0721, neutrinoceros, nicolasbrughmans, njuguoyang, nmoens, oporth, robbedhondt, ronykeppens, tikob, veronika553, wzruankul, xiaohongli0122

amrvac's Issues

VACPP fails when file does not end with \n

When the last line in the mod_usr.t file does not end in a newline character, the VAC preprocessor ignores this line. Minimal working example:

$ echo    "module mod_usr"      > mod_usr.t
$ echo    "contains"           >> mod_usr.t
$ echo -n "end module mod_usr" >> mod_usr.t # -n flag: "do not append a newline"
$ $AMRVAC_DIR/src/vacpp.pl -d=1 mod_usr.t
module mod_usr
contains
$ echo "" >> mod_usr.t # now it does end in a newline
$ $AMRVAC_DIR/src/vacpp.pl -d=1 mod_usr.t
module mod_usr
contains
end module mod_usr

In an actual simulation, the first version of the file of course produces an error (make returns Error: Unexpected end of file in 'mod_usr.f'). Note that many editors automatically add a newline character upon saving (e.g. Vim, nano, gedit). I ran into this bug with Visual Studio Code, where the default is to not insert a newline character at the end of the file upon saving.

This issue seems broader than AMRVAC, see this stackoverflow post. The POSIX standard defines a line as ending in a newline character, which implies that text files should end with one. A possible fix for the preprocessor (if you decide to support non-POSIX-compliant files) would be to append an artificial newline to the end of input files.
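The suggested workaround could be sketched like this (a hypothetical Python helper, not part of the AMRVAC tree, that normalizes an input file before it is handed to vacpp.pl):

```python
def ensure_trailing_newline(path):
    """Append a newline to `path` if its last byte is not one.

    Hypothetical helper illustrating the proposed preprocessor fix:
    files already ending in a newline are left untouched.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data and not data.endswith(b"\n"):
        with open(path, "ab") as f:
            f.write(b"\n")
```

Calling this before preprocessing would make the minimal example above behave identically with and without the trailing newline.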

hd_gamma set to 1 when no hd_energy

We have many test cases where the energy equation is not solved (i.e. hd_energy=.false.) but where the thermal pressure is deduced from a polytropic approximation (which is different from the isothermal approximation). Consequently, the new line in mod_hd_phys.t at the beginning of subroutine hd_phys_init, which sets hd_gamma to 1 when hd_energy=.false., prevents many configurations from being run. Is there any reason why this was modified, for instance because of compatibility issues with other modules?

Make sure all the tests run with ARCH=debug

Some of the tests currently have problems, mostly with HLLC it seems, for example:

amrvac/tests/mhd/rotor_2D: Program received signal SIGFPE: Floating-point exception - erroneous arithmetic operation. Backtrace for this error: #3 0xD916D9 in __mod_mhd_hllc_MOD_mhd_get_lcd at mod_mhd_hllc.f:133 (discriminator 1)

Missing automated CI/CD on this repo

Running the whole test suite by hand can be a massive source of lag in development.
I'm currently looking for a way to automate the test pipeline on GitHub.

[future compatibility] gcc 10 needs consistent behaviour

I'm flagging this issue for future reference.

Because of changes in the most recent version of GCC, compilation may fail where old behaviour has been tightened to comply with newer Fortran standards.
See also https://gcc.gnu.org/gcc-10/porting_to.html and https://gcc.gnu.org/gcc-10/changes.html
Quite relevant is this part:

Mismatches between actual and dummy argument lists in a single file are now rejected with an error.

One of the examples matches a compilation error I'm currently having (compiling with gcc 10.2), due to the fact that mpistop is called in two different ways in the same module: call mpistop() and call mpistop("error message"). This compiles fine with gcc versions before 10, but errors out from gcc 10 onwards because of the argument inconsistency, throwing
Error: More actual than formal arguments in procedure call

Setting the additional compilation flag -fallow-argument-mismatch (or -std=legacy, which implies this) transforms this error into a warning and allows for successful compilation.

To avoid this we'll have to keep subroutine interfaces and calls consistent from now on (for mpistop, for instance by declaring the message argument optional).
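As an illustration of the kind of audit this requires, here is a hedged Python sketch (a hypothetical helper, not part of the repository) that flags a subroutine called both with and without arguments in Fortran source text. It only counts top-level commas, so string arguments containing commas would be miscounted; it is a rough screening tool, not a parser:

```python
import re

def inconsistent_calls(source, name):
    """Return True if subroutine `name` is called both with and
    without arguments in `source` (Fortran text), which gcc >= 10
    rejects when the interface is not consistent.

    Naive comma counting; hypothetical sketch, not a Fortran parser.
    """
    calls = re.findall(r"call\s+%s\s*\(([^)]*)\)" % re.escape(name),
                       source, re.IGNORECASE)
    # Map each call to its argument count (empty parentheses -> 0).
    arg_counts = {0 if not c.strip() else c.count(",") + 1 for c in calls}
    return len(arg_counts) > 1
```

Running such a check over src/ before a gcc 10 upgrade would list the call sites that need fixing.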

hd_angmomfix not compatible with dust module

This is probably connected to a similar issue I recently solved with the centrifugal force not working with dust.
As of right now, angmomfix in polar_1.5D geometry does not work on dust fluids.

IMEX_Midpoint

Hello,

Is it possible that there is a typo in the advect subroutine in mod_advance.t? In the IMEX_midpoint scheme, I think the implicit step is only executed for half a timestep (and it is called only once):

      call global_implicit_update(half,global_time+half*dt,ps2,ps1)
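For reference, one common textbook formulation of the IMEX midpoint rule, with $f_E$ the explicit and $f_I$ the implicit part (notation assumed here, not taken from mod_advance.t), does involve a single implicit solve over half a timestep:

```latex
% IMEX midpoint rule: one implicit solve at the half step,
% then a full explicit-looking update using the midpoint values.
\begin{align}
  u^{n+1/2} &= u^n + \tfrac{\Delta t}{2}\, f_E(u^n)
                   + \tfrac{\Delta t}{2}\, f_I(u^{n+1/2}), \\
  u^{n+1}   &= u^n + \Delta t\, f_E(u^{n+1/2})
                   + \Delta t\, f_I(u^{n+1/2}).
\end{align}
```

If the implementation follows this formulation, the quoted half-step implicit call may be intentional rather than a typo; the question is then whether the full-step update reuses $f_I(u^{n+1/2})$ correctly.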

Cheers,
Nicolas Moens

Strange behaviour with Intel compiler on VSC cluster

When compiling using setup.pl -d=2 followed by make and mpirun, everything works fine on the VSC cluster.
However, when compiling with the intel option setup.pl -d=2 -arch=intel for the simple tests/rho/vac test case, strange behaviour occurs when running the executable. The startup info gets printed multiple times (as many times as there are processors selected, actually), after which program execution terminates with forrtl: CLOSE error trying to close a file written to unit -129 (which I cannot find anywhere).

The 0000 .dat and .vtu files are present, the .log file however is not. See attached file for detailed error log and reproduction of the issue.
amrvac_crash_intel.txt

VSC modules loaded are intel/2018a, GCC/6.4.0-2.28 and OpenMPI/2.1.2-GCC-6.4.0-2.28.

No warning message when number of blocks along some dimension >32

Hi!
Took some time to figure out that you apparently cannot have more than 32 blocks along a dimension at level 1 (at least in 3D spherical, along r). The run just stalls without error. I haven't tested it for other dimensions/directions. A warning message would be great!

Cheers,

Norbert

Small dt

Hi everyone,
I am trying to simulate a model with an external force and special boundary conditions, but I am getting very small values for dt. I changed the CFL number and set dt_min=1e-20 or lower, yet the run still proceeds with a very small dt and crashes after that.
Any suggestions are welcome.

Compilation issues due to the VACPP

I'm running into compilation errors when trying to compile the code. The issue is that the VAC preprocessor breaks continuation lines and inserts an additional & on the next line, followed by a line break. This produces a line containing only an ampersand character &, so the Fortran compiler throws
f951: Warning: '&' not allowed by itself in line 542
followed by multiple errors and the compilation stops. Below is an example for mod_geometry.f in lib/2d_default:

(screenshot attached: compiler output for mod_geometry.f, 2020-03-04 10:18:35)

I can fix it by manually modifying the corresponding line in the .t file, inserting a continuation character in the proper place, but this issue pops up in quite a few files, meaning I would have to go through each of them, modify them, and do it all again whenever I update the code.

Increasing the maxlen value in the vacpp.pl file also fixes some issues but introduces new ones in other files... It seems to me that this could be fixed by some additional checks in the preprocessor?

@jannisteunissen or @99xc41, any thoughts on this?

Running perl v5.30.1 and gfortran 9.2.0 on macOS. Already tried downgrading perl to 5.18 and gfortran to 8.3 but having the same issue.
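One possible additional check (a hypothetical Python sketch, not part of vacpp.pl, which is Perl) would be to scan the generated .f files for lines consisting of a lone ampersand, which are never valid free-form Fortran and indicate a continuation broken in the wrong place:

```python
def lone_ampersand_lines(text):
    """Return 1-based line numbers of lines whose only content is '&'.

    Hypothetical post-processing check for preprocessor output:
    such lines are invalid Fortran and signal a broken continuation.
    """
    return [i for i, line in enumerate(text.splitlines(), start=1)
            if line.strip() == "&"]
```

A preprocessor that runs this on its own output could fail loudly with the offending line numbers instead of leaving the error for the compiler.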

bug: dust in non-cartesian geometries

I was getting very odd results when using physics/mod_dust.t in polar (1.5D and 2D) geometries.
In hd/mod_hd_phys.t I found this comment:

!> Ileyk : to do :
!>     - address the source term for the dust
subroutine hd_add_source_geom(qdt, ixI^L, ixO^L, wCT, w, x)

Indeed, this routine still completely ignores dust. I'll see if I'm able to fix this.

problems with restarting in the newest git-version

I installed a fresh (20 Oct 2019) version of MPI-AMRVAC from GitHub and observe a problem with restarting from a particular snapshot.

So I run a simulation (in my case based on solar_atmosphere_2.5), then kill it and try to restart it from a given snapshot, either via the command line parameter -if or by specifying "restart_from_file" in the par-file used. In both cases it cannot start, complaining about wrong parameters in the par-file. I'm pretty sure there was no such behaviour with the older (but still v 2.0) version. The output is below; of course the file test/test_mx2_0001.dat is there and I did not change the geometry between the runs.

test_par_dat.zip

sergeis@sergeis-pc1850:~/Projects/Vertical-slab-heating$ mpirun -n 9 ./amrvac -i test.par
 -----------------------------------------------------------------------------
 -----------------------------------------------------------------------------
 |         __  __ ____ ___        _    __  __ ______ ___    ____         |
 |        |  \/  |  _ \_ _|      / \  |  \/  |  _ \ \   / / \  / ___|        |
 |        | |\/| | |_) | |_____ / _ \ | |\/| | |_) \ \ / / _ \| |            |
 |        | |  | |  __/| |_____/ ___ \| |  | |  _ < \ V / ___ \ |___         |
 |        |_|  |_|_|  |___|   /_/   \_\_|  |_|_| \_\ \_/_/   \_\____|        |
 -----------------------------------------------------------------------------
 -----------------------------------------------------------------------------
 Use Colgan & Feldman (2008) cooling curve
 This version only till 10000 K, beware for floor T treatment
 Reading test.par

 Output type | dtsave    | ditsave | itsave(1) | tsave(1)
         log | 0.100E+00 | ******  |      0    | *********
      normal | 0.200E+01 | ******  |      0    | *********
       slice | ********* | ******  | ******    | *********
   collapsed | ********* | ******  | ******    | *********
    analysis | ********* | ******  | ******    | *********

                 typelimited: predictor
         Domain size (cells): 24 48
                Level one dx: 0.833E-02 0.208E-01
           Refine estimation: Lohner's scheme
 Note, Grid is reconstructed once every           3 iterations
           restart_from_file:  test/test_mx2_0001.dat
                  converting: F

 minra  0.56631503347258905
 rhob   7277.0929422705231
 pb   58.216745042170217
 ERROR for processor           0 :
 change in geometry in par file
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 128.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------

dependencies should be automatically updated in the makefile

I encountered this issue several times where compilation is broken on a fresh environment because dependencies are not up to date.
This is already semi automatic thanks to the script src/list_module_deps.sh but this feels like an incomplete task.

I also note that the Travis continuous integration pipeline currently doesn't catch this kind of problem, probably because the runner keeps cached files instead of recompiling from scratch as it should.

broken mhd test prominence_Rayleigh_Taylor_2.5D

While running the whole test suite, I noticed that only one test currently fails on master:

tests/mhd/prominence_Rayleigh_Taylor_2.5D/

I checked the history and got the following info:
d5809d4: fails to build the test (make fails)
06e9083: builds correctly but fails the test itself

Those commits are consecutive, so I could not get closer to the issue without inspecting the commits themselves. As this concerns a part of the code I'm not at all familiar with, I'll put this issue in the hands of @99xc41, as he seems to be the most recent author on this test.

Issues opening the VTK files with the meshio Python module

For details and the vtu file I used (it's just the first file of the vaclogo test), see the bug report in the meshio repo: nschloe/meshio#1135

There are two issues. The first one is the spaces in the "offset" attributes, which cause problems for parsing with Python's XML parser.

The second issue has not been figured out yet; I don't know whether the bug is in meshio or in amrvac.

As posted in the issue above, there are some DataArray elements in Points tags that give a list of data that cannot be reshaped into (-1, NumberOfComponents), i.e. the length of the data is not an integer multiple of NumberOfComponents. I tried to print the data at offset 427932 parsed by meshio (where meshio throws an error, see the other issue): there are 100 values, but NumberOfComponents is 3:

[0.5 0.5 0.5 2.  2.  2.  2.  2.  2.  2.  0.5 0.5 0.5 2.  2.  2.  2.  2.
 2.  2.  0.5 0.5 2.  2.  2.  2.  2.  2.  2.  2.  0.5 0.5 2.  2.  2.  2.
 2.  2.  2.  2.  0.5 0.5 2.  2.  2.  2.  2.  2.  2.  2.  0.5 0.5 2.  2.
 2.  2.  2.  2.  2.  2.  0.5 2.  2.  2.  2.  2.  2.  2.  2.  2.  0.5 2.
 2.  2.  2.  2.  2.  2.  2.  2.  0.5 2.  2.  2.  2.  2.  2.  2.  2.  2.
 2.  2.  2.  2.  2.  2.  2.  2.  2.  2. ] (100,) {'type': 'Float32', 'NumberOfComponents': '3', 'format': 'appended', 'offset': '427932'}

Most of the data is parsed without any errors. The other Points tags contain exactly 363 data points, which is 3 times the NumberOfPoints in this vtu file, e.g.:

[0.15  0.65  0.    0.155 0.65  0.    0.16  0.65  0.    0.165 0.65  0.
 0.17  0.65  0.    0.175 0.65  0.    0.18  0.65  0.    0.185 0.65  0.
 0.19  0.65  0.    0.195 0.65  0.    0.2   0.65  0.    0.15  0.655 0.
 0.155 0.655 0.    0.16  0.655 0.    0.165 0.655 0.    0.17  0.655 0.
 0.175 0.655 0.    0.18  0.655 0.    0.185 0.655 0.    0.19  0.655 0.
 0.195 0.655 0.    0.2   0.655 0.    0.15  0.66  0.    0.155 0.66  0.
 0.16  0.66  0.    0.165 0.66  0.    0.17  0.66  0.    0.175 0.66  0.
 0.18  0.66  0.    0.185 0.66  0.    0.19  0.66  0.    0.195 0.66  0.
 0.2   0.66  0.    0.15  0.665 0.    0.155 0.665 0.    0.16  0.665 0.
 0.165 0.665 0.    0.17  0.665 0.    0.175 0.665 0.    0.18  0.665 0.
 0.185 0.665 0.    0.19  0.665 0.    0.195 0.665 0.    0.2   0.665 0.
 0.15  0.67  0.    0.155 0.67  0.    0.16  0.67  0.    0.165 0.67  0.
 0.17  0.67  0.    0.175 0.67  0.    0.18  0.67  0.    0.185 0.67  0.
 0.19  0.67  0.    0.195 0.67  0.    0.2   0.67  0.    0.15  0.675 0.
 0.155 0.675 0.    0.16  0.675 0.    0.165 0.675 0.    0.17  0.675 0.
 0.175 0.675 0.    0.18  0.675 0.    0.185 0.675 0.    0.19  0.675 0.
 0.195 0.675 0.    0.2   0.675 0.    0.15  0.68  0.    0.155 0.68  0.
 0.16  0.68  0.    0.165 0.68  0.    0.17  0.68  0.    0.175 0.68  0.
 0.18  0.68  0.    0.185 0.68  0.    0.19  0.68  0.    0.195 0.68  0.
 0.2   0.68  0.    0.15  0.685 0.    0.155 0.685 0.    0.16  0.685 0.
 0.165 0.685 0.    0.17  0.685 0.    0.175 0.685 0.    0.18  0.685 0.
 0.185 0.685 0.    0.19  0.685 0.    0.195 0.685 0.    0.2   0.685 0.
 0.15  0.69  0.    0.155 0.69  0.    0.16  0.69  0.    0.165 0.69  0.
 0.17  0.69  0.    0.175 0.69  0.    0.18  0.69  0.    0.185 0.69  0.
 0.19  0.69  0.    0.195 0.69  0.    0.2   0.69  0.    0.15  0.695 0.
 0.155 0.695 0.    0.16  0.695 0.    0.165 0.695 0.    0.17  0.695 0.
 0.175 0.695 0.    0.18  0.695 0.    0.185 0.695 0.    0.19  0.695 0.
 0.195 0.695 0.    0.2   0.695 0.    0.15  0.7   0.    0.155 0.7   0.
 0.16  0.7   0.    0.165 0.7   0.    0.17  0.7   0.    0.175 0.7   0.
 0.18  0.7   0.    0.185 0.7   0.    0.19  0.7   0.    0.195 0.7   0.
 0.2   0.7   0.   ] 363 {'type': 'Float32', 'NumberOfComponents': '3', 'format': 'appended', 'offset': '564692'}
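For context, the consistency condition that meshio effectively enforces can be sketched in Python (a hypothetical helper using numpy; the function name is not from meshio): a Points array with NumberOfComponents=3 must have a length divisible by 3, so the well-formed arrays above (363 values) reshape cleanly into 121 coordinate triples while the bad one (100 values) cannot.

```python
import numpy as np

def reshape_points(data, num_components=3):
    """Reshape a flat Points DataArray into (N, num_components).

    Hypothetical sketch of the check a VTK reader performs; raises if
    the length is not a multiple of NumberOfComponents, as with the
    100-value array at offset 427932 above.
    """
    data = np.asarray(data)
    if data.size % num_components != 0:
        raise ValueError(
            "length %d is not a multiple of NumberOfComponents=%d"
            % (data.size, num_components))
    return data.reshape(-1, num_components)
```

This suggests the writer emitted an appended-data block whose byte length does not match the declared component count, which would point at the amrvac .vtu output rather than meshio.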

There is an error when I run the first test. I don't know how to solve it

I used brew install gcc on my Mac to install gfortran (gcc version 8.2.0, Homebrew GCC 8.2.0). When I run make -j 4 I get this error:

Undefined symbols for architecture x86_64:
"mpi_type_struct", referenced from:
___mod_particle_base_MOD_init_particles_com in libamrvac.a(mod_particle_base.o)
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status
make: *** [amrvac] Error 1
rm mod_usr.o amrvac.f amrvac.o

Python coding style

I have a question regarding changes to the Python tools. When people suggest edits, do you want us to follow a certain coding style like PEP 8, or use tools that automatically format code, like Black?

yt docs: images are broken

In e78809a, @n-claes added documentation for yt+AMRVAC. The automatic build last night seems to have worked except for images.

@jannisteunissen, do you know what's wrong with the way the images were stored? If it helps, could you increase the build frequency for today so we can iterate a bit? Thanks

Discussion: AMRVAC 2.2 and branch separations

Today @jannisteunissen committed a "hotfix" to the master branch, fixing a just-discovered bug in the restart feature. The bug, however, is still present in the 2.2-testing branch.

I would think that merging master into 2.2-testing is okay, but if I remember correctly, @jannisteunissen mentioned that this could lead to undesirable overhead last time we discussed it.

In a fully developed branching model, hotfixes would look like this:
https://nvie.com/posts/a-successful-git-branching-model/#hotfix-branches

In that model, the master branch always contains production-ready code (which is desirable), and hotfixes can easily be propagated to development branches (2.2-testing is such a branch) because they originate on separate branches of their own.

In time, I believe enforcing such a development discipline would be beneficial.
For now, I'm sure there are ways we can propagate this specific hotfix to the development branch without causing potential conflicts in the future, but I'd like to get some input: why is merging master into 2.2-testing a bad idea?

Deprecated and removed OpenMPI prototypes

I ran into this issue when preparing amrvac for Prof. Keppens' class. This should be the same problem as #19, but I think it's better to open a new issue here.

Removed prototypes like MPI_TYPE_STRUCT cause a compilation error with OpenMPI version 4.0.0 (the latest version atm). At first I wanted to inform you of this, but then I saw commit a642cdc: you had changed this before but rolled it back to support old versions of MPI.

Can this be solved for both old and latest/future versions of openmpi (maybe some preprocessing)?

(For now, two workarounds for me are to either downgrade to a previous version of OpenMPI or manually change the source code.)

Slices with interpolation

Currently, no interpolation is performed when saving slices. We should have some functionality to save slices with interpolation.
