mflowcode / mfc

Exascale multiphase flow simulation

Home Page: https://mflowcode.github.io

License: MIT License

Python 7.47% Shell 1.74% MATLAB 0.18% Fortran 88.94% CMake 1.44% Batchfile 0.08% Dockerfile 0.03% Mako 0.12%
multiphase-flow computational-fluid-dynamics gpu openacc hpc-applications exascale fortran amdgpu instinct nvidia-gpu

mfc's Introduction

MFC Banner

Welcome to the home of MFC! MFC simulates compressible multi-component and multi-phase flows, amongst other things. It scales ideally to exascale, running on tens of thousands of GPUs on NVIDIA- and AMD-GPU machines like Oak Ridge's Summit and Frontier. MFC is written in Fortran and makes use of metaprogramming to keep the code short (about 20K lines).

Get in touch with the maintainers, like Spencer, if you have questions! We have an active Slack channel and development team. MFC has high-level documentation, visualizations, and more on its website.

An example

We keep many examples. Here's one! MFC can execute high-fidelity simulations of shock-droplet interaction (see examples/3d_shockdroplet).

Shock Droplet Example

Another example is the high-Mach flow over an airfoil, shown below.

Airfoil Example

Getting started

You can navigate to this webpage to get started using MFC! It's rather straightforward. We'll give a brief intro here for macOS. Using brew, install MFC's modest set of dependencies:

brew install wget python cmake gcc@13 mpich

You're now ready to build and test MFC! Put it in a convenient directory via

git clone https://github.com/mflowcode/MFC.git
cd MFC

and make sure MFC knows what compilers to use by putting the following in your ~/.profile

export CC=gcc-13
export CXX=g++-13
export FC=gfortran-13

and source that file, build, and test!

source ~/.profile
./mfc.sh build -j 8
./mfc.sh test -j 8

And... you're done!

You can learn more about MFC's capabilities via its documentation or play with the examples located in the examples/ directory (some are shown here)!

The shock-droplet interaction case above was run via

./mfc.sh run ./examples/3d_shockdroplet/case.py -n 8

where 8 is the number of cores the example will run on. You can visualize the output data, located in examples/3d_shockdroplet/silo_hdf5, via ParaView, VisIt, or your other favorite software.

Is this really exascale?

OLCF Frontier is the first exascale supercomputer. The weak scaling of MFC on this machine is below, showing near-ideal utilization.

Scaling

What else can this thing do

MFC has many features. They are organized below.

Physics
  • 1-3D
  • Compressible
  • Multi- and single-component
    • 4-, 5-, and 6-equation models for multi-component/phase features
  • Multi- and single-phase
    • Phase change via p, pT, and pTg schemes
  • Grids
    • 1-3D Cartesian, cylindrical, and axisymmetric
    • Arbitrary grid stretching for multiple domain regions available.
    • Complex/arbitrary geometries via immersed boundary methods
    • STL geometry files supported
  • Sub-grid Euler-Euler multiphase models for bubble dynamics and similar
  • Viscous effects (high-order accurate representations)
  • Ideal and stiffened gas equations of state
  • Acoustic wave generation (one- and two-way sound sources)
Numerics
  • Shock and interface capturing schemes
    • First-order upwinding and WENO orders 3 and 5
    • Reliable handling of high density ratios.
  • Exact and approximate (e.g., HLL, HLLC) Riemann solvers
  • Boundary conditions: Periodic, reflective, extrapolation/Neumann, slip/no-slip, non-reflecting characteristic buffers, inflows, outflows, and more.
  • Runge-Kutta orders 1-3 (SSP TVD)
  • Interface sharpening (THINC-like)
Large-scale and accelerated simulation
  • GPU compatible on NVIDIA (P/V/A/H100, etc.) and AMD (MI200+) hardware
  • Ideal weak scaling to 100% of leadership class machines
  • Near roofline behavior
Software robustness and other features
  • Fypp metaprogramming for code readability, performance, and portability
  • Continuous Integration (CI)
    • Regression test cases on CPU and GPU hardware with each PR. Performed with GNU, Intel, and NVIDIA compilers.
    • Benchmarking to avoid performance regressions and identify speed-ups
  • Continuous Deployment (CD) of website and API documentation

Citation

If you use MFC, consider citing it:

S. H. Bryngelson, K. Schmidmayer, V. Coralic, J. Meng, K. Maeda, T. Colonius (2021) Computer Physics Communications 266, 107396

@article{Bryngelson_2021,
  title = {{MFC: A}n open-source high-order multi-component, multi-phase, and multi-scale compressible flow solver},
  author = {Spencer H. Bryngelson and Kevin Schmidmayer and Vedran Coralic and Jomela C. Meng and Kazuki Maeda and Tim Colonius},
  journal = {Computer Physics Communications},
  doi = {10.1016/j.cpc.2020.107396},
  year = {2021},
  pages = {107396},
}

License

Copyright 2021-2024 Spencer Bryngelson and Tim Colonius. MFC is under the MIT license (see LICENSE file for full text).

Acknowledgements

Multiple federal sponsors have supported MFC development, including the US Department of Defense (DOD), National Institutes of Health (NIH), Department of Energy (DOE), and National Science Foundation (NSF). MFC computations use OLCF Frontier, Summit, and Wombat under allocation CFD154 (PI Bryngelson) and ACCESS-CI under allocations TG-CTS120005 (PI Colonius) and TG-PHY210084 (PI Bryngelson).

mfc's People

Contributors

anandrdbz, anshgupta1234, belericant, haochey, henryleberre, jrchreim, js-spratt, lee-hyeoksu, mrodrig6, rasmitdevkota, sbryngelson, wilfonba


mfc's Issues

Derived types

Describe the bug
Some _idx and iXX variables are declared using type(bounds_info) instead of the type(int_bounds_info) derived type. This can lead to unexpected behavior.

I suspect the first six occurrences below have this problem, plus the last one:

MFC/src $ grep -iR '(bounds_info)' ./*
./post_process/m_global_parameters.f90:    type(bounds_info) :: cont_idx                  !< Indexes of first & last continuity eqns.
./post_process/m_global_parameters.f90:    type(bounds_info) :: mom_idx                   !< Indexes of first & last momentum eqns.
./post_process/m_global_parameters.f90:    type(bounds_info) :: adv_idx                   !< Indexes of first & last advection eqns.
./post_process/m_global_parameters.f90:    type(bounds_info) :: internalEnergies_idx      !< Indexes of first & last internal energy eqns.
./post_process/m_global_parameters.f90:    type(bounds_info) :: stress_idx                !< Indices of elastic stresses
./post_process/m_derived_variables.f90:        type(bounds_info) :: iz1
./pre_process/m_global_parameters.fpp:    type(bounds_info) :: x_domain, y_domain, z_domain !<
./pre_process/m_initial_condition.fpp:    type(bounds_info) :: x_boundary, y_boundary, z_boundary  !<
./simulation/m_global_parameters.fpp:    type(bounds_info) :: stress_idx                !< Indexes of first and last shear stress eqns.
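
For reference, a minimal sketch of what the two derived types presumably look like (component names assumed from the %beg/%end usage elsewhere in the codebase); index variables like cont_idx should use the integer variant:

type bounds_info
    real(kind(0d0)) :: beg, end  ! real-valued bounds (e.g., domain extents)
end type bounds_info

type int_bounds_info
    integer :: beg, end          ! integer index ranges (e.g., first/last eqn)
end type int_bounds_info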

Dead code

Dead code needs to be removed. We seem to have thousands of lines of it. Here is one example:

!$acc enter data create(qL_cons_n(i)%vf(1:sys_size))
!$acc enter data create(qR_cons_n(i)%vf(1:sys_size))
!$acc enter data create(qL_prim_n(i)%vf(1:sys_size))
!$acc enter data create(qR_prim_n(i)%vf(1:sys_size))
!$acc enter data create(myflux_vf(i)%vf(1:sys_size))
!$acc enter data create(myflux_src_vf(i)%vf(1:sys_size))
!!!!! Comment out qL_cons and qL_prim since we have flattened
! do l = 1, sys_size
! allocate (myflux_vf(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
! allocate (myflux_src_vf(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
!!$acc enter data create(myflux_vf(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
!!$acc enter data create(myflux_src_vf(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
! end do
! if (i == 1) then
! do l = 1, cont_idx%end
! allocate (qL_cons_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
! allocate (qR_cons_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
!!$acc enter data create(qL_cons_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
!!$acc enter data create(qR_cons_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
! end do
! if (weno_vars == 1) then
! do l = mom_idx%beg, E_idx
! allocate (qL_cons_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
! allocate (qR_cons_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
!!$acc enter data create(qL_cons_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
!!$acc enter data create(qR_cons_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
! end do
! end if
! do l = mom_idx%beg, E_idx
! allocate (qL_prim_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
! allocate (qR_prim_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
!!$acc enter data create(qL_prim_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
!!$acc enter data create(qR_prim_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
! end do
! if (model_eqns == 3) then
! do l = internalEnergies_idx%beg, internalEnergies_idx%end
! allocate (qL_prim_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
! allocate (qR_prim_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
!!$acc enter data create(qL_prim_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
!!$acc enter data create(qR_prim_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
! end do
! end if
! do l = adv_idx%beg, sys_size
! allocate (qL_cons_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
! allocate (qR_cons_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
!!$acc enter data create(qL_cons_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
!!$acc enter data create(qR_cons_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
! end do
! if (bubbles) then
! do l = bub_idx%beg, bub_idx%end
! allocate (qL_prim_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
! allocate (qR_prim_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
!!$acc enter data create(qL_prim_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
!!$acc enter data create(qR_prim_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
! end do
! end if
! else
! ! i /= 1
! do l = 1, sys_size
! qL_cons_n(i)%vf(l)%sf => &
! qL_cons_n(1)%vf(l)%sf
! qR_cons_n(i)%vf(l)%sf => &
! qR_cons_n(1)%vf(l)%sf
! qL_prim_n(i)%vf(l)%sf => &
! qL_prim_n(1)%vf(l)%sf
! qR_prim_n(i)%vf(l)%sf => &
! qR_prim_n(1)%vf(l)%sf
!!$acc enter data attach(qL_cons_n(i)%vf(l)%sf,qR_cons_n(i)%vf(l)%sf,qL_prim_n(i)%vf(l)%sf,qR_prim_n(i)%vf(l)%sf)
! #ifdef _OPENACC
! call acc_attach(qL_cons_n(i)%vf(l)%sf)
! #endif
! end do
! if (any(Re_size > 0)) then
! if (weno_vars == 1) then
! do l = 1, mom_idx%end
! allocate (qL_cons_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
! allocate (qR_cons_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
!!$acc enter data create(qL_cons_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
!!$acc enter data create(qR_cons_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
! end do
! else
! do l = mom_idx%beg, mom_idx%end
! allocate (qL_prim_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
! allocate (qR_prim_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
!!$acc enter data create(qL_prim_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
!!$acc enter data create(qR_prim_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
! end do
! if (model_eqns == 3) then
! do l = internalEnergies_idx%beg, internalEnergies_idx%end
! allocate (qL_prim_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
! allocate (qR_prim_n(i)%vf(l)%sf( &
! ix%beg:ix%end, &
! iy%beg:iy%end, &
! iz%beg:iz%end))
!!$acc enter data create(qL_prim_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
!!$acc enter data create(qR_prim_n(i)%vf(l)%sf(ix%beg:ix%end,iy%beg:iy%end,iz%beg:iz%end))
! end do
! end if
! end if
! end if
! end if
!if (DEBUG) print*, 'pointing prim to cons!'
! do l = 1, cont_idx%end
! qL_prim_n(i)%vf(l)%sf => &
! qL_cons_n(i)%vf(l)%sf
! qR_prim_n(i)%vf(l)%sf => &
! qR_cons_n(i)%vf(l)%sf
!!$acc enter data attach(qL_prim_n(i)%vf(l)%sf,qR_prim_n(i)%vf(l)%sf)
! end do
! do l = adv_idx%beg, adv_idx%end
! qL_prim_n(i)%vf(l)%sf => &
! qL_cons_n(i)%vf(l)%sf
! qR_prim_n(i)%vf(l)%sf => &
! qR_cons_n(i)%vf(l)%sf
!!$acc enter data attach(qL_prim_n(i)%vf(l)%sf,qR_prim_n(i)%vf(l)%sf)
! end do

We also have two entire RHS subroutines:

subroutine s_compute_rhs_full(q_cons_vf, q_prim_vf, rhs_vf, t_step) ! -------

subroutine s_compute_rhs(q_cons_vf, q_prim_vf, rhs_vf, t_step) ! -------

The unused one needs to be deleted.

I'd like to see a PR that deletes thousands of lines of unused code.

`cmake` build from binary issues

Two things

  1. If you attempt to build MFC on a compute node, not every compute node seems to be able to fetch the cmake binary. Expanse seems to be an example of this. Maybe a ping or similar check could verify that a connection is actually working.
  2. Even if you do get the cmake binary, MFC still issues a warning
exp-1-57: project-bryngelsonPI/MFC $ ./mfc.sh
./mfc.sh: line 82: cmake: command not found

even though it actually works in the end.

Small `venv` quirk

If I clone @henryleberre's fork and run ./mfc.sh, I immediately get:

[I]shb-m1pro: Downloads/MFC $ ./mfc.sh
./mfc.sh: line 223: /Users/spencer/Downloads/MFC/build/venv/bin/activate: No such file or directory
Collecting pyyaml
  Using cached PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl (173 kB)
Installing collected packages: pyyaml
Successfully installed pyyaml-6.0
Collecting rich
  Using cached rich-12.5.1-py3-none-any.whl (235 kB)
Requirement already satisfied: pygments<3.0.0,>=2.6.0 in /Users/spencer/Library/Python/3.10/lib/python/site-packages (from rich) (2.13.0)
Collecting commonmark<0.10.0,>=0.9.0
  Using cached commonmark-0.9.1-py2.py3-none-any.whl (51 kB)
Installing collected packages: commonmark, rich
Successfully installed commonmark-0.9.1 rich-12.5.1
Collecting fypp
  Using cached fypp-3.1-py3-none-any.whl
Installing collected packages: fypp
Successfully installed fypp-3.1
usage: ./mfc.sh [-h] {run,test,build,clean} ...

Wecome to the MFC master script. This tool automates and manages building, testing, running, and cleaning of MFC in various configurations on
all supported platforms. The README documents this tool and its various commands in more detail. To get started, run ./mfc.sh build -h.

Thus, it still works fine (seemingly, aside from the 'Wecome' typo), but it emits a warning on the first stdout line. It didn't do this a few weeks ago, so I believe it's a recent change.

It does the same thing on subsequent mfc.sh calls.

Switching to modern and modular precision declaration

MFC is filled with lines like this:

real(kind(0d0)), allocatable, dimension(:, :, :) :: du_dx, du_dy, du_dz
real(kind(0d0)), allocatable, dimension(:, :, :) :: dv_dx, dv_dy, dv_dz
real(kind(0d0)), allocatable, dimension(:, :, :) :: dw_dx, dw_dy, dw_dz

where the precision is declared via kind(0d0).
We also have a ton of this

rho_L = 0d0
gamma_L = 0d0
pi_inf_L = 0d0
rho_R = 0d0
gamma_R = 0d0
pi_inf_R = 0d0
alpha_L_sum = 0d0
alpha_R_sum = 0d0

where inline constants have precision declared in a "hard-coded" way.

Better is to declare a separate constant that we can change as needed, as in this example, though there are many others.

A fix for this issue would remove all cases of 0d0 and kind(0d0) and replace them with a constant declared in the common/ directory. I think this is a suitable task for @anshgupta1234.

I realize one can force precision via compiler flags, but I believe we should avoid this because there is an established language standard.
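
As a sketch, the shared constant could live in a small module in common/ (the module and parameter names here are hypothetical):

! Hypothetical precision module; changing wp here changes the working
! precision across the entire codebase.
module m_precision
    use, intrinsic :: iso_fortran_env, only: real64
    implicit none
    integer, parameter :: wp = real64  ! working precision
end module m_precision

Declarations like real(kind(0d0)) would then become real(wp), and inline constants like 0d0 would become 0._wp.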

Some subroutines are both manually inlined and still exist as separate subroutines

Issue: Some subroutines are both manually inlined and still exist as separate subroutines.
For example, in m_weno, s_preserve_monotonicity exists but is never called because it was manually inlined. I suspect there are others like this; it is just the one I found.

Question: Is it necessary to manually copy/paste this subroutine into the s_weno subroutine? What are the performance tradeoffs?

Expected action: I suspect the additional cost associated with manually placing these routines into the main code is not worth the longer, more confusing code it creates. If we pursue this strategy, we could theoretically just have one extremely long subroutine that "does everything" and reap some small performance benefit, but I suspect that we agree this isn't a good idea.

`./mfc.sh load` doesn't work on Phoenix

./mfc.sh load doesn't work on Phoenix. It exits in the following fashion:

login-phoenix-4: p-sbryngelson3-0/MFC $ ./mfc.sh load
mfc: Select a system:
mfc: ORNL:    Ascent     (a), Crusher (c), Summit (s), Wombat (w)
mfc: ACCESS:  Bridges2   (b), Expanse (e)
mfc: GaTech:  Phoenix    (p)
mfc: CALTECH: Richardson (r)
mfc: (a/c/s/w/b/e/p/r): p
mfc:
mfc: Select configuration:
mfc:  - CPU (c)
mfc:  - GPU (g)
mfc: (c/g): c
mfc:
mfc: Loading modules for CPU mode:
mfc:  - Load intel/19.0.5 - [SUCCESS]
mfc:  - Load mvapich2/2.3.2 [SUCCESS]
mfc:  - Load python/3.7.4 - [SUCCESS]
mfc:  - Load cmake/3.20.3 - [SUCCESS]
mfc: OK > All modules have been loaded.
./mfc.sh: line 211: return: can only `return' from a function or sourced script
mfc: Found CMake: /storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MFC/build/cmake/bin/cmake.
mfc: OK > (venv) Entered the Python virtual environment.
usage: ./mfc.sh [-h] {run,test,build,clean,bench} ...

Welcome to the MFC master script. This tool automates and manages building,
testing, running, and cleaning of MFC in various configurations on all
supported platforms. The README documents this tool and its various commands
in more detail. To get started, run ./mfc.sh build -h.

positional arguments:
  {run,test,build,clean,bench}
    run                 Run a case with MFC.
    test                Run MFC's test suite.
    build               Build MFC and its dependencies.
    clean               Clean build artifacts.
    bench               Benchmark MFC (for CI).

optional arguments:
  -h, --help            show this help message and exit
mfc: (venv) Exiting the Python virtual environment.

where ./mfc.sh: line 211: return: can only return from a function or sourced script is the relevant problem.

In the end, it does not end up loading any of the modules it purports to.

Module files are simply too long

Our module files are simply too long, no matter how you think about the code abstractions.

I suggest that modules should not be longer than 1000 lines, though much shorter or somewhat longer could be appropriate in specific cases. How things should be broken up exactly should be deliberated before starting the refactor.

Also, this issue should probably not be addressed until the GPU-3D-unmanaged branch is merged to master.

Here are the current line counts in the GPU-3D-unmanaged branch:

   36414 total
    3866 ./simulation_code/m_riemann_solvers.f90
    3827 ./simulation_code/m_rhs.f90
    2433 ./pre_process_code/m_initial_condition.f90
    2396 ./simulation_code/m_data_output.f90
    2099 ./pre_process_code/m_start_up.f90
    1779 ./post_process_code/m_mpi_proxy.f90
    1712 ./simulation_code/m_cbc.f90
    1614 ./simulation_code/m_weno.f90
    1547 ./simulation_code/m_mpi_proxy.f90
    1421 ./simulation_code/m_derived_variables.f90
    1137 ./post_process_code/m_data_output.f90
    1134 ./simulation_code/m_start_up.f90
    1086 ./post_process_code/m_data_input.f90
    1047 ./simulation_code/m_global_parameters.f90
     897 ./post_process_code/m_global_parameters.f90
     876 ./pre_process_code/m_global_parameters.f90
     826 ./pre_process_code/m_mpi_proxy.f90
     774 ./post_process_code/m_derived_variables.f90
     689 ./simulation_code/m_variables_conversion.f90
     576 ./post_process_code/m_start_up.f90
     514 ./post_process_code/p_main.f90
     491 ./simulation_code/m_time_steppers.f90
     484 ./simulation_code/m_bubbles.f90
     456 ./pre_process_code/m_variables_conversion.f90
     405 ./post_process_code/m_variables_conversion.f90
     400 ./pre_process_code/m_data_output.f90
     398 ./simulation_code/m_qbmm.f90
     324 ./pre_process_code/m_grid.f90
     301 ./master_scripts/m_silo_proxy.f90
     216 ./master_scripts/m_mpi_proxy.f90
     184 ./simulation_code/p_main.f90
     131 ./pre_process_code/m_derived_types.f90
     125 ./pre_process_code/p_main.f90
      96 ./simulation_code/m_derived_types.f90
      63 ./post_process_code/m_derived_types.f90
      30 ./simulation_code/m_compile_specific.f90
      30 ./pre_process_code/m_compile_specific.f90
      30 ./post_process_code/m_compile_specific.f90

Unnecessary flags in directory deletion

Current Behavior
Removing a directory on *nix systems through a call to the s_delete_directory subroutine invokes rm -rf, removing the directory along with any files contained within. Calling this on the wrong path with elevated permissions can be dangerous. Only directories created by the program should be deleted, which should not necessitate the force flag given correct filesystem permissions. If the filesystem permissions are incorrect, it is not the responsibility of the subroutine to rectify them.

Proposed Change
Change the system call to rm -r.
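
A sketch of the adjusted subroutine (its exact signature in MFC is assumed here):

! Hypothetical sketch: remove a directory the program created, without
! the force flag, so permission problems surface instead of being masked.
subroutine s_delete_directory(path)
    character(len=*), intent(in) :: path

    call execute_command_line('rm -r "'//trim(path)//'"')
end subroutine s_delete_directory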

Doxygen broken with addition of fypp everywhere.

Describe the bug
Doxygen doesn't recognize fypp files and thus doesn't build documentation for them.

To Reproduce
Steps to reproduce the behavior:

  1. Navigate here

Expected behavior
Documentation should be produced for all source files.
If fypp and Doxygen do not play nicely, one can autogenerate the code and use Doxygen on those files.

Installing MFC in ubuntu

Hi MFC team,
I am new to MFC and I have some difficulties installing MFC on my Ubuntu machine. It would be very helpful if you could provide instructions to install MFC on a Linux system. I have read the documentation, but installation was unsuccessful. Looking forward to hearing from you. Thank you.

MFC fickle when building on Phoenix-Slurm

Not really sure what's going on, but builds on Phoenix seem somewhat broken.

Here are the modules

atl1-1-02-009-34-0: p-sbryngelson3-0/MFC $ module list

Currently Loaded Modules:
  1) pace-slurm/2022.06               7) bzip2/1.0.8-z5cmka   (H)  13) gettext/0.19.8.1-yz6qtc       (H)  19) xz/5.2.2-kbeci4       (H)
  2) zlib/1.2.7-s3gked          (H)   8) libmd/1.0.4-wdkbs3   (H)  14) libffi/3.4.2-bvfjil           (H)  20) libxml2/2.9.13-d4fgiv (H)
  3) nvhpc/22.11                      9) libbsd/0.11.5-j4ccxs (H)  15) sqlite/3.38.5-sweldt          (H)  21) cuda/11.6.0-u4jzhg
  4) ncurses/6.2-qhoz4g         (H)  10) expat/2.4.8-kng6xl        16) util-linux-uuid/2.36.2-6u5eni (H)
  5) openssl/1.0.2k-fips-xbtc42 (H)  11) readline/8.1-v3ivmo  (H)  17) python/3.9.12-rkxvr6
  6) cmake/3.23.1-327dbl             12) gdbm/1.19-54ea7n     (H)  18) libiconv/1.16-pbdcxj          (H)

and here is the error

  $ cmake -DMFC_SIMULATION=ON -Wno-dev -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DCMAKE_BUILD_TYPE=Release
-DCMAKE_PREFIX_PATH=/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/install
-DCMAKE_FIND_ROOT_PATH=/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/install
-DCMAKE_INSTALL_PREFIX=/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/install -DMFC_MPI=ON -DMFC_OpenACC=OFF -S
/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/ -B /storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/simulation

-- The C compiler identification is NVHPC 22.11.0
-- The Fortran compiler identification is NVHPC 22.11.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/local/pace-apps/manual/packages/nvhpc/Linux_x86_64/22.11/compilers/bin/nvc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Check for working Fortran compiler: /usr/local/pace-apps/manual/packages/nvhpc/Linux_x86_64/22.11/compilers/bin/nvfortran - skipped
-- Performing Test SUPPORTS_MARCH_NATIVE
-- Performing Test SUPPORTS_MARCH_NATIVE - Success
-- Enabled IPO / LTO
-- Found MPI_Fortran: /storage/pace-apps/manual/packages/nvhpc/Linux_x86_64/22.11/comm_libs/openmpi/openmpi-3.1.5/lib/libmpi_usempif08.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1") found components: Fortran
-- Found CUDAToolkit: /usr/local/pace-apps/spack/packages/linux-rhel7-x86_64/gcc-4.8.5/cuda-11.7.0-7sdye3id7ahz34mzhyzzqbxowjxgxkhu/include (found version "11.7.64")
-- Looking for pthread.h
-- Looking for pthread.h - not found
CMake Error at /storage/pace-apps/spack/packages/linux-rhel7-x86_64/gcc-4.8.5/cmake-3.23.1-327dblnbramviejdezocehqsujhu7yyg/share/cmake-3.23/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
  Could NOT find Threads (missing: Threads_FOUND)
Call Stack (most recent call first):
  /storage/pace-apps/spack/packages/linux-rhel7-x86_64/gcc-4.8.5/cmake-3.23.1-327dblnbramviejdezocehqsujhu7yyg/share/cmake-3.23/Modules/FindPackageHandleStandardArgs.cmake:594 (_FPHSA_FAILURE_MESSAGE)
  /storage/pace-apps/spack/packages/linux-rhel7-x86_64/gcc-4.8.5/cmake-3.23.1-327dblnbramviejdezocehqsujhu7yyg/share/cmake-3.23/Modules/FindThreads.cmake:238 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
  /storage/pace-apps/spack/packages/linux-rhel7-x86_64/gcc-4.8.5/cmake-3.23.1-327dblnbramviejdezocehqsujhu7yyg/share/cmake-3.23/Modules/FindCUDAToolkit.cmake:910 (find_package)
  CMakeLists.txt:259 (find_package)


-- Configuring incomplete, errors occurred!
See also "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/build/simulation/CMakeFiles/CMakeOutput.log".
See also "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/build/simulation/CMakeFiles/CMakeError.log".

This sometimes occurs for regular MFC, sometimes not.

move checks

Move most start_up.f90 checks to the Python parser. Some will probably have to remain at runtime, but most can be moved.

Equation of state is not DRY

The equation of state is used to compute the pressure from the flow variables. This EOS changes slightly for different models. The EOS implementation is repeated in several places in the code, including in m_variables_conversion, m_data_output (including serial data output and probe outputs), and likely more. Following the DRY principle (don't repeat yourself), we should fix this. It has also caused multiple false bugs in the past.
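
As an illustration, a single shared routine along these lines (the name, and the stiffened-gas convention that gamma and pi_inf store the mixture's 1/(γ-1) and γπ∞/(γ-1), are assumptions) could replace the repeated implementations:

! Hypothetical shared EOS routine for common/: stiffened-gas pressure
! from total energy, density, and squared velocity magnitude.
elemental function f_pressure(E, rho, vel2, gamma, pi_inf) result(pres)
    real(kind(0d0)), intent(in) :: E, rho, vel2, gamma, pi_inf
    real(kind(0d0)) :: pres

    pres = (E - 5d-1*rho*vel2 - pi_inf)/gamma
end function f_pressure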

Reading pre_process.inp fails with malformed file

Describe the bug
Some malformed case.py files will be processed into a malformed pre_process.inp file that then fails to be read into the namelist by the read in s_read_input_file(). An unhelpful error message and a stack trace are shown upon program abort.

To Reproduce
One example of a malformed file:

  1. Replace model_eqns in case.py with a non-numeric string
  2. Run mfc.sh run path-to-case-py -t pre_process

Expected behavior
A clear error message informing the user of the malformed pre_process.inp file.

Proposed Fix
Check the iostat flag and print a useful error message before aborting, e.g.:

if (iostatus /= 0) then
    ! Rewind to the offending record and echo it for the user
    backspace (1)
    read (1, fmt='(A)') line
    print '(A)', 'Invalid line in pre_process.inp around: '//trim(line)
    print '(A)', 'Exiting ...'
    call s_mpi_abort()
end if

Helper functions?

It seems like there are "helper functions" peppered around the codebase. These are doing things like computing finite difference coefficients, applying divergence theorems, etc. etc.

Can we (1) make a list of these and then, after review, (2) move them all to a common "helper" module?
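
As one example of what could land in such a module, a sketch of a finite-difference coefficient helper (the name and interface are hypothetical):

! Hypothetical helper: first-derivative weights at the center of a
! 3-point (possibly nonuniform) stencil, from a quadratic fit.
subroutine s_fd_coeffs_2nd(x_l, x_c, x_r, c_l, c_c, c_r)
    real(kind(0d0)), intent(in)  :: x_l, x_c, x_r
    real(kind(0d0)), intent(out) :: c_l, c_c, c_r
    real(kind(0d0)) :: h_l, h_r

    h_l = x_c - x_l
    h_r = x_r - x_c
    c_l = -h_r/(h_l*(h_l + h_r))
    c_r = h_l/(h_r*(h_l + h_r))
    c_c = -(c_l + c_r)  ! weights sum to zero for a derivative
end subroutine s_fd_coeffs_2nd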

`./mfc.sh test` sometimes hangs

./mfc.sh test sometimes hangs, doing nothing. This occurs when I test the GPU build on various servers. Using -b XXXX can often fix the problem. However, a new user will have no idea what to do and will assume the tests are very slow (or have a bug). At the very least, we should kill the test if it isn't going to work and then recommend certain things to try.

Unclear which variables are which

This has been a persisting issue for years. It is unclear which flow variable q_prim_vf(i) corresponds to, for which cases, for each i. Likewise, which q_cons_vf(i) gets paired with it. We need a resolution for this and should add it to the docs. Probably we want a simple table with two columns that lists the variables in the order they would appear, should they exist. For example, I believe sub-grid bubble variables come before hypoelastic ones, but I'm not 100% sure. What about the 6-equation model?
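
For the common 5-equation case, my understanding of the ordering is below (unverified, which is rather the point of this issue):

Slice of i                       q_cons_vf(i)           q_prim_vf(i)
cont_idx%beg : cont_idx%end      partial densities      partial densities
mom_idx%beg  : mom_idx%end       momentum components    velocity components
E_idx                            total energy           pressure
adv_idx%beg  : adv_idx%end       volume fractions       volume fractions

Bubble, hypoelastic, and 6-equation (internal energy) variables would extend this table, in an order the docs should pin down.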

Proposed new modules

I propose moving all monopole source terms in m_rhs.f90 into a new module (m_monopole.f90) and viscous terms (which right now are in a separate subroutine in m_rhs.f90) to yet another new module (m_viscous.f90).

Error in README.md

The command in README.md should be ./input.py pre_process, not python pre_process.

Testing doesn't appear to work on GPUs on Expanse

There appear to be some problems with testing using release-gpu on Expanse. I'm not sure what the correct build and test procedure is, but I've tried all the different -b options and ensured there are enough GPUs on hand. With srun it hangs, and with mpirun and mpiexec it fails.

Refactor for new variable conversion and modularize sound speed

From here:
#13 (comment)

We see that we compute the primitive variables "by hand" several times throughout the code.
This happens in m_data_output and some other places as well.
Now that @anshgupta1234 has made the routines nice and simple for this, we should call those instead.

Also, the speed of sound c is computed by hand in several places, though this could be moved out to common/. c is also sometimes called things like c_avg or c_L or c_R.
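
For the sound speed specifically, a single routine like the following sketch (assuming the same stored gamma = 1/(γ-1) and pi_inf = γπ∞/(γ-1) convention as above) would remove the c_avg/c_L/c_R copies:

! Hypothetical shared sound-speed routine: stiffened-gas mixture
! sound speed, c**2 = γ(p + π∞)/ρ in primitive terms.
elemental function f_sound_speed(pres, rho, gamma, pi_inf) result(c)
    real(kind(0d0)), intent(in) :: pres, rho, gamma, pi_inf
    real(kind(0d0)) :: c

    c = sqrt(((1d0 + 1d0/gamma)*pres + pi_inf/gamma)/rho)
end function f_sound_speed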

Some examples don't work

[I]lawn-100-70-34-65: Downloads/MFC $ ./mfc.sh run ./examples/2D_mixing_nobubble/case.py
      ___            ___          ___
     /__/\          /  /\        /  /\       [email protected] [Darwin]
    |  |::\        /  /:/_      /  /:/       --------------------------------------------------
    |  |:|:\      /  /:/ /\    /  /:/
  __|__|:|\:\    /  /:/ /:/   /  /:/  ___
 /__/::::| \:\  /__/:/ /:/   /__/:/  /  /\   --jobs:    1
 \  \:\~~\__\/  \  \:\/:/    \  \:\ /  /:/   --mode:    release-cpu
  \  \:\         \  \::/      \  \:\  /:/    --targets: pre_process, simulation, and post_process
   \  \:\         \  \:\       \  \:\/:/
    \  \:\         \  \:\       \  \::/
     \__\/          \__\/        \__\/       $ ./mfc.sh [build, run, test, clean] --help

Run
  Acquiring ./examples/2D_mixing_nobubble/case.py...
  Configuration:
    Input               ./examples/2D_mixing_nobubble/case.py
    Job Name      (-#)  unnamed
    Engine        (-e)  interactive
    Nodes         (-N)  1
    CPUs (/node)  (-n)  1
    GPUs (/node)  (-g)  0
    MPI Binary    (-b)  mpirun

  Running pre_process:
    Building pre_process:

      $ cd "/Users/spencer/Downloads/MFC/build/pre_process" && cmake --build . -j 1 --target pre_process --config Release

ninja: no work to do.

      $ cd "/Users/spencer/Downloads/MFC/build/pre_process" && cmake --install .

-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/pre_process

    Running pre_process:

      $ mpirun -np 1  "/Users/spencer/Downloads/MFC/build/install/bin/pre_process"

 Final Time  0.10148899999999997

      Done (in 0:00:05.448766)

  Running simulation:
    Building simulation:

      $ cd "/Users/spencer/Downloads/MFC/build/simulation" && cmake --build . -j 1 --target simulation --config Release

ninja: no work to do.

      $ cd "/Users/spencer/Downloads/MFC/build/simulation" && cmake --install .

-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/simulation

    Running simulation:

      $ mpirun -np 1  "/Users/spencer/Downloads/MFC/build/install/bin/simulation"

At line 114 of file /Users/spencer/Downloads/MFC/src/simulation_code/autogen/m_start_up.f90 (unit = 1, file = './simulation.inp')
Fortran runtime error: Cannot match namelist object name .0t_step_save

Error termination. Backtrace:

Could not print backtrace: executable file is not an executable
#0  0x103753187
#1  0x103753d37
#2  0x103754613
#3  0x103838f9b
#4  0x1038413d3
#5  0x103841687
#6  0x102ebe227
#7  0x102ecd7e3
#8  0x102f5a0cf
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[42162,1],0]
  Exit code:    2
--------------------------------------------------------------------------


Error: Failed to execute command "cd "/Users/spencer/Downloads/MFC/examples/2D_mixing_nobubble" && mpirun -np 1
"/Users/spencer/Downloads/MFC/build/install/bin/simulation"".

Intel compilers: warning #5117: Bad # preprocessor line

When building MicroFC on Phoenix (CPU), I ran into the following warnings.
I expect they also show up with MFC.

/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/include/case.fpp(5): warning #5117: Bad # preprocessor line

(there are several of these). This seems to be a known thing for Intel compilers.

login-phoenix-4: p-sbryngelson3-0/MicroFC $ module list

Currently Loaded Modules:
  1) xalt/2.8.4   2) intel/19.0.5   3) mvapich2/2.3.2   4) gcc-compatibility/8.3.0   5) pace/2020.01
Building simulation:

  $ cmake -DMFC_SIMULATION=ON -Wno-dev -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DCMAKE_BUILD_TYPE=Release
-DCMAKE_PREFIX_PATH=/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/install
-DCMAKE_FIND_ROOT_PATH=/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/install
-DCMAKE_INSTALL_PREFIX=/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/install -DMFC_MPI=ON -DMFC_OpenACC=OFF -S
/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/ -B /storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/simulation

-- The C compiler identification is Intel 19.0.5.20190815
-- The Fortran compiler identification is Intel 19.0.5.20190815
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/local/pace-apps/spack/packages/0.13/linux-rhel7-cascadelake/intel-19.0.5/mvapich2-2.3.2-hpgbkqoytbjh35qn2t63rdorepxcezek/bin/mpicc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Check for working Fortran compiler: /usr/local/pace-apps/spack/packages/0.13/linux-rhel7-cascadelake/intel-19.0.5/mvapich2-2.3.2-hpgbkqoytbjh35qn2t63rdorepxcezek/bin/mpif90 - skipped
-- Performing Test SUPPORTS_MARCH_NATIVE
-- Performing Test SUPPORTS_MARCH_NATIVE - Success
-- Enabled IPO / LTO
-- Found MPI_Fortran: /usr/local/pace-apps/spack/packages/0.13/linux-rhel7-cascadelake/intel-19.0.5/mvapich2-2.3.2-hpgbkqoytbjh35qn2t63rdorepxcezek/bin/mpif90 (found version "3.1")
-- Found MPI: TRUE (found version "3.1") found components: Fortran
-- Configuring done
-- Generating done
-- Build files have been written to: /storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/build/simulation

  $ cmake --build /storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/simulation --target simulation -j 4 --config Release

[  6%] Preprocessing (Fypp) p_main.fpp
[  6%] Preprocessing (Fypp) m_data_output.fpp
[ 13%] Preprocessing (Fypp) m_global_parameters.fpp
[ 13%] Preprocessing (Fypp) m_mpi_proxy.fpp
[ 17%] Preprocessing (Fypp) m_rhs.fpp
[ 20%] Preprocessing (Fypp) m_riemann_solvers.fpp
[ 24%] Preprocessing (Fypp) m_start_up.fpp
[ 27%] Preprocessing (Fypp) m_time_steppers.fpp
[ 31%] Preprocessing (Fypp) m_variables_conversion.fpp
[ 41%] Preprocessing (Fypp) m_viscous.fpp
[ 41%] Preprocessing (Fypp) m_weno.fpp
[ 41%] Preprocessing (Fypp) macros.fpp
Scanning dependencies of target simulation
[ 48%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/m_nvtx.f90.o
[ 48%] Building Fortran object CMakeFiles/simulation.dir/src/common/m_derived_types.f90.o
[ 51%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/macros.fpp.f90.o
[ 55%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_global_parameters.fpp.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/include/case.fpp(5): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_global_parameters.fpp" 2
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
[ 58%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_mpi_proxy.fpp.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/include/case.fpp(5): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_mpi_proxy.fpp" 2
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_mpi_proxy.fpp(592): warning #6843: A dummy argument with an explicit INTENT(OUT) declaration is not given an explicit value.   [CCFL_MAX_GLB]
                                                       ccfl_max_glb, &
-------------------------------------------------------^
[ 65%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_variables_conversion.fpp.f90.o
[ 65%] Building Fortran object CMakeFiles/simulation.dir/src/common/m_compile_specific.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/macros.fpp(28): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/m_variables_conversion.fpp" 2
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
[ 79%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_riemann_solvers.fpp.f90.o
[ 79%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_data_output.fpp.f90.o
[ 79%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_start_up.fpp.f90.o
[ 79%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_weno.fpp.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/include/case.fpp(5): warning #5117: Bad # preprocessor line
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/macros.fpp(28): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_start_up.fpp" 2
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_data_output.fpp" 2
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/macros.fpp(28): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_weno.fpp" 2
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
[ 82%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_viscous.fpp.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/macros.fpp(28): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_viscous.fpp" 2
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
[ 86%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_rhs.fpp.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/macros.fpp(28): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_rhs.fpp" 2
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
[ 89%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_time_steppers.fpp.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/macros.fpp(28): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_time_steppers.fpp" 2
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
[ 93%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/m_derived_variables.f90.o
[ 96%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/p_main.fpp.f90.o
[100%] Linking Fortran executable simulation
[100%] Built target simulation

Compiler env variables not set

If cmake finds an issue with your compiler, then throw an additional error message that tells the user to consider which modules are loaded and to check whether the env variables are set, e.g.:

CC=gcc CXX=g++ FC=gfortran

Stop self-hosted CI jobs from running on forks

Currently, if you have a fork of MFC with CI/Workflows enabled, GitHub tries to run the self-hosted job (from the matrix configuration) and stalls for 10+ hours waiting for a self-hosted runner to become available, before ultimately failing.

`np 2` parallel test cases don't work with `--no-mpi` build option

Also, notice this specific line below:

[36m[m: Entering the Python virtual environment (venv).

The fix is just to skip any np=2 parallel test cases if the --no-mpi build option is enabled.

[I]shb-m1pro: Downloads/MFC $ ./mfc.sh test -j 8
[mfc.sh]: Entering the Python virtual environment (venv).
      ___            ___          ___
     /__/\          /  /\        /  /\       [email protected] [Darwin]
    |  |::\        /  /:/_      /  /:/       ---------------------------------------
    |  |:|:\      /  /:/ /\    /  /:/
  __|__|:|\:\    /  /:/ /:/   /  /:/  ___
 /__/::::| \:\  /__/:/ /:/   /__/:/  /  /\   --jobs:    8
 \  \:\~~\__\/  \  \:\/:/    \  \:\ /  /:/   --mode:    release-cpu
  \  \:\         \  \::/      \  \:\  /:/
   \  \:\         \  \:\       \  \:\/:/
    \  \:\         \  \:\       \  \::/
     \__\/          \__\/        \__\/       $ ./mfc.sh [build, run, test, clean] --help

Building pre_process:

  $ cd "/Users/spencer/Downloads/MFC/build/pre_process" && cmake --build . -j 8 --target pre_process --config Release

ninja: no work to do.

  $ cd "/Users/spencer/Downloads/MFC/build/pre_process" && cmake --install .

-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/pre_process

Building simulation:

  $ cd "/Users/spencer/Downloads/MFC/build/simulation" && cmake --build . -j 8 --target simulation --config Release

ninja: no work to do.

  $ cd "/Users/spencer/Downloads/MFC/build/simulation" && cmake --install .

-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/simulation

Test | from 5EB1467A to 177B85F6 (136 tests)

   tests/UUID    Summary

    5EB1467A    1D (m=299,n=0,p=0) -> bc=-1
    7633CC50    1D (m=299,n=0,p=0) -> bc=-7
    B20A6EDF    1D (m=299,n=0,p=0) -> bc=-5
    B9B1D51C    1D (m=299,n=0,p=0) -> bc=-2
    BB633DEF    1D (m=299,n=0,p=0) -> bc=-6
    DE580877    1D (m=299,n=0,p=0) -> bc=-9
    C7B4AC8B    1D (m=299,n=0,p=0) -> bc=-8
    A9612DE8    1D (m=299,n=0,p=0) -> bc=-4
    187180D7    1D (m=299,n=0,p=0) -> bc=-10
    A2B1419E    1D (m=299,n=0,p=0) -> bc=-11
    1199FE98    1D (m=299,n=0,p=0) -> bc=-12
    48F2140A    1D (m=299,n=0,p=0) -> bc=-3
    986F8670    1D (m=299,n=0,p=0) -> bc=-3 -> weno_order=3 -> (mapped_weno=F,mp_weno=F)
    DEC0D29F    1D (m=299,n=0,p=0) -> bc=-3 -> weno_order=3 -> (mapped_weno=T,mp_weno=F)
    2CD5CD51    1D (m=299,n=0,p=0) -> bc=-3 -> weno_order=5 -> (mapped_weno=F,mp_weno=F)
    9B2B644F    1D (m=299,n=0,p=0) -> bc=-3 -> weno_order=5 -> (mapped_weno=T,mp_weno=F)
    3DF3CF18    1D (m=299,n=0,p=0) -> bc=-3 -> weno_order=5 -> (mapped_weno=F,mp_weno=T)
    05677938    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=1 -> mixture_err=T
    2594B368    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=1 -> avg_state=1
    CB13F6BE    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=1 -> wave_speeds=2
    17B60C53    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=2 -> mixture_err=T
    70F8A4AE    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=2 -> avg_state=1
    BA7CD3C6    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=2 -> wave_speeds=2
    9E3BD925    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=2 -> model_eqns=3
    29C5E4F8    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=1 -> mixture_err=T
    09C40057    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=1 -> avg_state=1
    93D76284    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=1 -> wave_speeds=2
    FC4F775E    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=1 -> mpp_lim=T
    195C6466    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=2 -> mixture_err=T
    68979A4C    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=2 -> avg_state=1
    14ACFE2E    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=2 -> wave_speeds=2
    9F0CEFA7    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=2 -> model_eqns=3
    93C304EB    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=2 -> alt_soundspeed=T
    EAFB27F8    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=2 -> mpp_lim=T
    C2C2056C    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> Viscous -> weno_Re_flux=F
    D7DBAB59    1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> Viscous -> weno_Re_flux=T
    A85CEC37    1D (m=299,n=0,p=0) -> bc=-3 -> bubbles=T -> Monopole=T -> polytropic=T -> bubble_model=3
    A8D0F761    1D (m=299,n=0,p=0) -> bc=-3 -> bubbles=T -> Monopole=T -> polytropic=T -> bubble_model=2
    FF085677    1D (m=299,n=0,p=0) -> bc=-3 -> bubbles=T -> Monopole=T -> polytropic=F -> bubble_model=2
    1A6B6EB3    1D (m=299,n=0,p=0) -> bc=-3 -> bubbles=T -> Monopole=T -> nb=1
    74FE6AA7    2D (m=49,n=39,p=0) -> bc=-1
    C435F933    1D (m=299,n=0,p=0) -> bc=-3 -> bubbles=T -> Monopole=T -> qbmm=T -> bubble_model=3
    3E60C4D1    2D (m=49,n=39,p=0) -> bc=-2
    06601C13    1D (m=299,n=0,p=0) -> bc=-3 -> bubbles=T -> Monopole=T -> qbmm=T
    F1051537    2D (m=49,n=39,p=0) -> bc=-5
    23FE7630    2D (m=49,n=39,p=0) -> bc=-4
  [36m[m: Entering the Python virtual environment (venv).
        ___            ___          ___
       /__/\          /  /\        /  /\       [email protected] [Darwin]
      |  |::\        /  /:/_      /  /:/       ---------------------------------------
      |  |:|:\      /  /:/ /\    /  /:/
    __|__|:|\:\    /  /:/ /:/   /  /:/  ___
   /__/::::| \:\  /__/:/ /:/   /__/:/  /  /\   --jobs:    1
   \  \:\~~\__\/  \  \:\/:/    \  \:\ /  /:/   --mode:    release-cpu
    \  \:\         \  \::/      \  \:\  /:/    --targets: pre_process and simulation
     \  \:\         \  \:\       \  \:\/:/
      \  \:\         \  \:\       \  \::/
       \__\/          \__\/        \__\/       $ ./mfc.sh  --help

  Run
    Acquiring /Users/spencer/Downloads/MFC/tests/9A665F13/case.py...
    Configuration:
      Input               /Users/spencer/Downloads/MFC/tests/9A665F13/case.py
      Job Name      (-#)  unnamed
      Engine        (-e)  interactive
      Nodes         (-N)  1
      CPUs (/node)  (-n)  2
      GPUs (/node)  (-g)  0
      MPI Binary    (-b)  mpirun

    Running pre_process:
      Running pre_process:

        $ mpirun -np 2  "/Users/spencer/Downloads/MFC/build/install/bin/pre_process"

   s_mpi_bcast_user_inputs not supported without MPI.
   s_mpi_decompose_computational_domain not supported without MPI.
   s_mpi_bcast_user_inputs not supported without MPI.
   s_mpi_decompose_computational_domain not supported without MPI.
  At line 99 of file /Users/spencer/Downloads/MFC/src/pre_process/m_data_output.f90 (unit = 1)
  Fortran runtime error: Cannot open file './p_all/p0/0/x_cb.dat': File exists

  Error termination. Backtrace:
   s_mpi_barrier not supported without MPI.
   Final Time   3.4029999999999993E-003
   s_mpi_finalize not supported without MPI.
  #0  0x102a53187
  #1  0x102a53d37
  #2  0x102a54613
  #3  0x102b3a1e3
  #4  0x102b3a3fb
  #5  0x1025b39cf
  #6  0x1025b93a7
  #7  0x1025d407f
  --------------------------------------------------------------------------
  Primary job  terminated normally, but 1 process returned
  a non-zero exit code. Per user-direction, the job has been aborted.
  --------------------------------------------------------------------------
  --------------------------------------------------------------------------
  mpirun detected that one or more processes exited with non-zero status, thus causing
  the job to be terminated. The first process to do so was:

    Process name: [[60189,1],1]
    Exit code:    2
  --------------------------------------------------------------------------

New issue with wingtip

Documenting this here. The Wingtip runner is failing to build MFC because it hangs here:

/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(510):  function(_CUDAToolkit_find_root_dir )
   Called from: [2]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [1]	/nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(563):  function(_CUDAToolkit_find_version_file result_variable )
   Called from: [2]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [1]	/nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(577):  if(CMAKE_CUDA_COMPILER_LOADED AND NOT CUDAToolkit_BIN_DIR AND CMAKE_CUDA_COMPILER_ID STREQUAL NVIDIA )
   Called from: [2]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [1]	/nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(586):  if(NOT CUDAToolkit_ROOT_DIR AND CUDAToolkit_ROOT )
   Called from: [2]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [1]	/nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(589):  if(NOT CUDAToolkit_ROOT_DIR )
   Called from: [2]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [1]	/nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(590):  _CUDAToolkit_find_root_dir(FIND_FLAGS PATHS ENV CUDA_PATH PATH_SUFFIXES bin )
   Called from: [2]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [1]	/nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(511):  cmake_parse_arguments(arg   SEARCH_PATHS;FIND_FLAGS ${ARGN} )
   Called from: [3]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [2]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [1]	/nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(513):  if(NOT CUDAToolkit_BIN_DIR )
   Called from: [3]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [2]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [1]	/nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(514):  if(NOT CUDAToolkit_SENTINEL_FILE )
   Called from: [3]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [2]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [1]	/nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(515):  find_program(CUDAToolkit_NVCC_EXECUTABLE NAMES nvcc nvcc.exe PATHS ${arg_SEARCH_PATHS} ${arg_FIND_FLAGS} )
   Called from: [3]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [2]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [1]	/nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(522):  if(NOT CUDAToolkit_NVCC_EXECUTABLE )
   Called from: [3]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [2]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [1]	/nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(530):  if(EXISTS ${CUDAToolkit_NVCC_EXECUTABLE} )
   Called from: [3]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [2]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [1]	/nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(534):  execute_process(COMMAND ${CUDAToolkit_NVCC_EXECUTABLE} -v __cmake_determine_cuda OUTPUT_VARIABLE _CUDA_NVCC_OUT ERROR_VARIABLE _CUDA_NVCC_OUT )
   Called from: [3]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [2]	/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
                [1]	/nethome/sbryngelson3/MFC/CMakeLists.txt
^CTerminated

The trace stops inside the execute_process call that invokes nvcc, so it appears that nvcc itself hangs when CMake probes it.

MFC build fails on MacOS due to attempting to fetch aarch64 cmake

[I]shb-m1pro: Downloads/MFC $ ./mfc.sh build -j 1
Traceback (most recent call last):
  File "/opt/homebrew/bin/cmake", line 8, in <module>
    sys.exit(cmake())
  File "/opt/homebrew/lib/python3.10/site-packages/cmake/__init__.py", line 46, in cmake
    raise SystemExit(_program('cmake', sys.argv[1:]))
  File "/opt/homebrew/lib/python3.10/site-packages/cmake/__init__.py", line 42, in _program
    return subprocess.call([os.path.join(CMAKE_BIN_DIR, name)] + args)
  File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 345, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 971, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1847, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/opt/homebrew/lib/python3.10/site-packages/cmake/data/CMake.app/Contents/bin/cmake'
[mfc.sh]: CMake is out of date (current:  < minimum: 3.18).
[mfc.sh]: Downloading CMake v3.24.2 for arm64 from https://github.com/Kitware/CMake.
--2022-10-28 21:29:09--  https://github.com/Kitware/CMake/releases/download/v3.24.2/cmake-3.24.2-linux-arm64.sh
Resolving github.com (github.com)... 140.82.114.4
Connecting to github.com (github.com)|140.82.114.4|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2022-10-28 21:29:09 ERROR 404: Not Found.

[mfc.sh]: Error: Failed to download a compatible version of CMake.
CMake is not discoverable or is an older release, incompatible with MFC. Please download
or install a recent version of CMake to get past this step. If you are currently on a
managed system like a cluster, provided there is no suitable environment module, you can
either build it from source, or get it via Spack.
- The minimum required version is currently CMake v3.18.0.
- We attempted to download CMake v3.24.2 from https://github.com/Kitware/CMake/releases/download/v3.24.2/cmake-3.24.2-linux-arm64.sh.

Document undocumented patches

Currently, patch types 14 through 19 are undocumented, and they probably should be documented.

While on the topic, wouldn't it be clearer if each patch type had its own derived type within patch_icpp(i)? For example, an STL patch (#83) could be defined as:

patch_icpp(2)%geometry:      20,
patch_icpp(2)%stl%filepath:  'path/to/stl/file',
patch_icpp(2)%stl%scale(2):  1.0,
patch_icpp(2)%stl%offset(3): 0.25,

This, in a sense, self-documents which properties each patch type can have. Every attribute common to all patches would remain in the base patch_icpp(i)% scope.
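
For concreteness, here is a minimal, compilable sketch of what such nested derived types might look like; all type and component names here (t_stl_data, t_patch, num_patches_max) are hypothetical and purely illustrative, not MFC's actual declarations:

program patch_type_demo
    implicit none

    integer, parameter :: num_patches_max = 10   ! illustrative bound

    type t_stl_data
        character(len=128) :: filepath = ''      ! path to the STL geometry file
        real(kind(0d0))    :: scale(3)  = 1.0d0  ! per-axis scaling factors
        real(kind(0d0))    :: offset(3) = 0.0d0  ! translation applied to the geometry
    end type t_stl_data

    type t_patch
        integer :: geometry = 0          ! patch type identifier (e.g., 20 for STL)
        ! ... attributes common to all patch types would live here ...
        type(t_stl_data) :: stl          ! meaningful only when geometry selects an STL patch
    end type t_patch

    type(t_patch) :: patch_icpp(num_patches_max)

    ! The case-file assignments above would then map onto:
    patch_icpp(2)%geometry      = 20
    patch_icpp(2)%stl%filepath  = 'path/to/stl/file'
    patch_icpp(2)%stl%scale(2)  = 1.0d0
    patch_icpp(2)%stl%offset(3) = 0.25d0
end program patch_type_demo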

`m_riemann_solvers` refactor made simple

Part of the reason m_riemann_solvers is so long is that, for HLLC, there are two separate code paths: one for model_eqns == 2 and one for model_eqns == 2 .and. bubbles. These could be condensed into a single path guarded by appropriate if (bubbles) then statements, though doing so requires some care.
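
As a rough, self-contained sketch of the shape such a refactor could take (all variable names here, such as flux_common and flux_bubble_correction, are illustrative placeholders rather than the actual quantities in m_riemann_solvers):

program condense_demo
    implicit none
    integer :: model_eqns
    logical :: bubbles
    real(kind(0d0)) :: flux, flux_common, flux_bubble_correction

    model_eqns = 2
    bubbles = .true.
    flux_common = 1.0d0             ! terms shared by the bubbly and non-bubbly paths
    flux_bubble_correction = 0.1d0  ! terms that exist only when bubbles are modeled

    if (model_eqns == 2) then
        flux = flux_common
        if (bubbles) then
            ! Only the bubble-specific terms live behind the guard,
            ! instead of duplicating the entire branch.
            flux = flux + flux_bubble_correction
        end if
    end if
    print *, 'flux = ', flux
end program condense_demo

The point is that the shared flux assembly is written once, and the bubbles case only adds its extra terms, rather than maintaining two near-identical branches.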

Bubble code is not in `m_rhs`?

This code used to be in the m_bubbles module, but now it also appears in s_compute_rhs? Also, s_compute_rhs is 1863 lines long.

MFC/src/simulation/m_rhs.f90, lines 1113 to 1321 at ecbcb72:

if (bubbles) then
    if (qbmm) then
        ! advection source
        ! bubble sources
        !$acc parallel loop collapse(3) gang vector default(present)
        do l = 0, p
            do q = 0, n
                do i = 0, m
                    rhs_vf(alf_idx)%sf(i, q, l) = rhs_vf(alf_idx)%sf(i, q, l) + mom_sp(2)%sf(i, q, l)
                    j = bubxb
                    !$acc loop seq
                    do k = 1, nb
                        rhs_vf(j)%sf(i, q, l) = &
                            rhs_vf(j)%sf(i, q, l) + mom_3d(0, 0, k)%sf(i, q, l)
                        rhs_vf(j + 1)%sf(i, q, l) = &
                            rhs_vf(j + 1)%sf(i, q, l) + mom_3d(1, 0, k)%sf(i, q, l)
                        rhs_vf(j + 2)%sf(i, q, l) = &
                            rhs_vf(j + 2)%sf(i, q, l) + mom_3d(0, 1, k)%sf(i, q, l)
                        rhs_vf(j + 3)%sf(i, q, l) = &
                            rhs_vf(j + 3)%sf(i, q, l) + mom_3d(2, 0, k)%sf(i, q, l)
                        rhs_vf(j + 4)%sf(i, q, l) = &
                            rhs_vf(j + 4)%sf(i, q, l) + mom_3d(1, 1, k)%sf(i, q, l)
                        rhs_vf(j + 5)%sf(i, q, l) = &
                            rhs_vf(j + 5)%sf(i, q, l) + mom_3d(0, 2, k)%sf(i, q, l)
                        j = j + 6
                    end do
                end do
            end do
        end do
    else
        !$acc parallel loop collapse(3) gang vector default(present)
        do l = 0, p
            do k = 0, n
                do j = 0, m
                    divu%sf(j, k, l) = 0d0
                    divu%sf(j, k, l) = &
                        5d-1/dx(j)*(q_prim_qp%vf(contxe + id)%sf(j + 1, k, l) - &
                                    q_prim_qp%vf(contxe + id)%sf(j - 1, k, l))
                end do
            end do
        end do
        !$acc parallel loop collapse(3) gang vector default(present) private(Rtmp, Vtmp)
        do l = 0, p
            do k = 0, n
                do j = 0, m
                    bub_adv_src(j, k, l) = 0d0
                    !$acc loop seq
                    do q = 1, nb
                        bub_r_src(j, k, l, q) = 0d0
                        bub_v_src(j, k, l, q) = 0d0
                        bub_p_src(j, k, l, q) = 0d0
                        bub_m_src(j, k, l, q) = 0d0
                    end do
                end do
            end do
        end do
        ndirs = 1; if (n > 0) ndirs = 2; if (p > 0) ndirs = 3
        if (id == ndirs) then
            !$acc parallel loop collapse(3) gang vector default(present) private(Rtmp, Vtmp)
            do l = 0, p
                do k = 0, n
                    do j = 0, m
                        !$acc loop seq
                        do q = 1, nb
                            Rtmp(q) = q_prim_qp%vf(rs(q))%sf(j, k, l)
                            Vtmp(q) = q_prim_qp%vf(vs(q))%sf(j, k, l)
                        end do
                        call s_comp_n_from_prim(q_prim_qp%vf(alf_idx)%sf(j, k, l), &
                                                Rtmp, nbub(j, k, l))
                        call s_quad((Rtmp**2.d0)*Vtmp, R2Vav)
                        bub_adv_src(j, k, l) = 4.d0*pi*nbub(j, k, l)*R2Vav
                    end do
                end do
            end do
            !$acc parallel loop collapse(3) gang vector default(present) private(myalpha_rho, myalpha)
            do l = 0, p
                do k = 0, n
                    do j = 0, m
                        !$acc loop seq
                        do q = 1, nb
                            bub_r_src(j, k, l, q) = q_cons_qp%vf(vs(q))%sf(j, k, l)
                            !$acc loop seq
                            do ii = 1, num_fluids
                                myalpha_rho(ii) = q_cons_qp%vf(ii)%sf(j, k, l)
                                myalpha(ii) = q_cons_qp%vf(advxb + ii - 1)%sf(j, k, l)
                            end do
                            myRho = 0d0
                            n_tait = 0d0
                            B_tait = 0d0
                            if (mpp_lim .and. (num_fluids > 2)) then
                                !$acc loop seq
                                do ii = 1, num_fluids
                                    myRho = myRho + myalpha_rho(ii)
                                    n_tait = n_tait + myalpha(ii)*gammas(ii)
                                    B_tait = B_tait + myalpha(ii)*pi_infs(ii)
                                end do
                            else if (num_fluids > 2) then
                                !$acc loop seq
                                do ii = 1, num_fluids - 1
                                    myRho = myRho + myalpha_rho(ii)
                                    n_tait = n_tait + myalpha(ii)*gammas(ii)
                                    B_tait = B_tait + myalpha(ii)*pi_infs(ii)
                                end do
                            else
                                myRho = myalpha_rho(1)
                                n_tait = gammas(1)
                                B_tait = pi_infs(1)
                            end if
                            n_tait = 1.d0/n_tait + 1.d0 !make this the usual little 'gamma'
                            myRho = q_prim_qp%vf(1)%sf(j, k, l)
                            myP = q_prim_qp%vf(E_idx)%sf(j, k, l)
                            alf = q_prim_qp%vf(alf_idx)%sf(j, k, l)
                            myR = q_prim_qp%vf(rs(q))%sf(j, k, l)
                            myV = q_prim_qp%vf(vs(q))%sf(j, k, l)
                            if (.not. polytropic) then
                                pb = q_prim_qp%vf(ps(q))%sf(j, k, l)
                                mv = q_prim_qp%vf(ms(q))%sf(j, k, l)
                                call s_bwproperty(pb, q)
                                vflux = f_vflux(myR, myV, mv, q)
                                pbdot = f_bpres_dot(vflux, myR, myV, pb, mv, q)
                                bub_p_src(j, k, l, q) = nbub(j, k, l)*pbdot
                                bub_m_src(j, k, l, q) = nbub(j, k, l)*vflux*4.d0*pi*(myR**2.d0)
                            else
                                pb = 0d0; mv = 0d0; vflux = 0d0; pbdot = 0d0
                            end if
                            if (bubble_model == 1) then
                                ! Gilmore bubbles
                                Cpinf = myP - pref
                                Cpbw = f_cpbw(R0(q), myR, myV, pb)
                                myH = f_H(Cpbw, Cpinf, n_tait, B_tait)
                                c_gas = f_cgas(Cpinf, n_tait, B_tait, myH)
                                Cpinf_dot = f_cpinfdot(myRho, myP, alf, n_tait, B_tait, bub_adv_src(j, k, l), divu%sf(j, k, l))
                                myHdot = f_Hdot(Cpbw, Cpinf, Cpinf_dot, n_tait, B_tait, myR, myV, R0(q), pbdot)
                                rddot = f_rddot(Cpbw, myR, myV, myH, myHdot, c_gas, n_tait, B_tait)
                            else if (bubble_model == 2) then
                                ! Keller-Miksis bubbles
                                Cpinf = myP
                                Cpbw = f_cpbw_KM(R0(q), myR, myV, pb)
                                ! c_gas = dsqrt( n_tait*(Cpbw+B_tait) / myRho)
                                c_liquid = DSQRT(n_tait*(myP + B_tait)/(myRho*(1.d0 - alf)))
                                rddot = f_rddot_KM(pbdot, Cpinf, Cpbw, myRho, myR, myV, R0(q), c_liquid)
                            else if (bubble_model == 3) then
                                ! Rayleigh-Plesset bubbles
                                Cpbw = f_cpbw_KM(R0(q), myR, myV, pb)
                                rddot = f_rddot_RP(myP, myRho, myR, myV, R0(q), Cpbw)
                            end if
                            bub_v_src(j, k, l, q) = nbub(j, k, l)*rddot
                            if (alf < 1.d-11) then
                                bub_adv_src(j, k, l) = 0d0
                                bub_r_src(j, k, l, q) = 0d0
                                bub_v_src(j, k, l, q) = 0d0
                                if (.not. polytropic) then
                                    bub_p_src(j, k, l, q) = 0d0
                                    bub_m_src(j, k, l, q) = 0d0
                                end if
                            end if
                        end do
                    end do
                end do
            end do
        end if
        !$acc parallel loop collapse(3) gang vector default(present)
        do l = 0, p
            do q = 0, n
                do i = 0, m
                    rhs_vf(alf_idx)%sf(i, q, l) = rhs_vf(alf_idx)%sf(i, q, l) + bub_adv_src(i, q, l)
                    if (num_fluids > 1) rhs_vf(advxb)%sf(i, q, l) = &
                        rhs_vf(advxb)%sf(i, q, l) - bub_adv_src(i, q, l)
                    !$acc loop seq
                    do k = 1, nb
                        rhs_vf(rs(k))%sf(i, q, l) = rhs_vf(rs(k))%sf(i, q, l) + bub_r_src(i, q, l, k)
                        rhs_vf(vs(k))%sf(i, q, l) = rhs_vf(vs(k))%sf(i, q, l) + bub_v_src(i, q, l, k)
                        if (polytropic .neqv. .true.) then
                            rhs_vf(ps(k))%sf(i, q, l) = rhs_vf(ps(k))%sf(i, q, l) + bub_p_src(i, q, l, k)
                            rhs_vf(ms(k))%sf(i, q, l) = rhs_vf(ms(k))%sf(i, q, l) + bub_m_src(i, q, l, k)
                        end if
                    end do
                end do
            end do
        end do
    end if
end if
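
One way to address this, sketched below under assumed names (the entry point s_compute_bubble_rhs is hypothetical and the argument list is simplified), would be to move the block back into m_bubbles as a single subroutine and leave only a call site in s_compute_rhs:

! Hypothetical refactor sketch (names illustrative, not MFC's actual interface):
module m_bubbles_sketch
    implicit none
contains
    subroutine s_compute_bubble_rhs(id)
        integer, intent(in) :: id  ! coordinate direction currently being processed
        ! ... the ~200-line bubble-source block quoted above would live here,
        !     operating on the module's field data as before ...
    end subroutine s_compute_bubble_rhs
end module m_bubbles_sketch

! In s_compute_rhs, the whole block then collapses to:
!     if (bubbles) call s_compute_bubble_rhs(id)

That would both shorten s_compute_rhs and keep all of the bubble physics in one module.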

MFC testing fails on aarch64 MacOS due to a false hang when `-j X` is used with `X` equal to the maximum number of available threads

[I]shb-m1pro: Downloads/MFC $ ./mfc.sh test -j 8
[mfc.sh]: Entering the Python virtual environment (venv).
      ___            ___          ___
     /__/\          /  /\        /  /\       [email protected] [Darwin]
    |  |::\        /  /:/_      /  /:/       ---------------------------------------
    |  |:|:\      /  /:/ /\    /  /:/
  __|__|:|\:\    /  /:/ /:/   /  /:/  ___
 /__/::::| \:\  /__/:/ /:/   /__/:/  /  /\   --jobs:    8
 \  \:\~~\__\/  \  \:\/:/    \  \:\ /  /:/   --mode:    release-cpu
  \  \:\         \  \::/      \  \:\  /:/
   \  \:\         \  \:\       \  \:\/:/
    \  \:\         \  \:\       \  \::/
     \__\/          \__\/        \__\/       $ ./mfc.sh [build, run, test, clean] --help

Building pre_process:

  $ cmake --build /Users/spencer/Downloads/MFC/build/pre_process --target pre_process -j 8 --config Release

[0/2] Re-checking globbed directories...
ninja: no work to do.

  $ cmake --install /Users/spencer/Downloads/MFC/build/pre_process

-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/pre_process

Building simulation:

  $ cmake --build /Users/spencer/Downloads/MFC/build/simulation --target simulation -j 8 --config Release

[0/2] Re-checking globbed directories...
ninja: no work to do.

  $ cmake --install /Users/spencer/Downloads/MFC/build/simulation

-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/simulation

Test | from D79C3E6F to BDD3411B (142 tests)

   tests/UUID    Summary

    3AE495F4    1D -> bc=-5
    C5B79059    1D -> bc=-9
    70DAE9E8    1D -> bc=-4
    D79C3E6F    1D -> bc=-1
    48CCE072    1D -> bc=-7
    5EC236F2    1D -> bc=-6
    AED93D34    1D -> bc=-8
    8A59E8E6    1D -> bc=-2
    727F72ED    1D -> bc=-10
    A60691E7    1D -> bc=-11
    3FC6FC4A    1D -> bc=-12
    2AB32975    1D -> bc=-3
    B3C85904    1D -> weno_order=3 -> mapped_weno=F -> mp_weno=F
    7077C99F    1D -> weno_order=3 -> mapped_weno=T -> mp_weno=F
    84017671    1D -> weno_order=5 -> mapped_weno=F -> mp_weno=F
    F5890628    1D -> weno_order=5 -> mapped_weno=T -> mp_weno=F
    34580912    1D -> weno_order=5 -> mapped_weno=F -> mp_weno=T
    5527832F    1D -> 1 Fluid(s) -> riemann_solver=1 -> mixture_err
    4AEF478A    1D -> 1 Fluid(s) -> riemann_solver=1 -> avg_state=1
    32D0F235    1D -> 1 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
    18BDCBC8    1D -> 1 Fluid(s) -> riemann_solver=2 -> mixture_err
    F97573DB    1D -> 1 Fluid(s) -> riemann_solver=2 -> avg_state=1
    F4F6AC27    1D -> 1 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
    2F35A1FE    1D -> 1 Fluid(s) -> riemann_solver=2 -> model_eqns=3
    1E738705    1D -> 2 Fluid(s) -> riemann_solver=1 -> mixture_err
    0879E062    1D -> 2 Fluid(s) -> riemann_solver=1 -> avg_state=1
    83EFC30C    1D -> 2 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
    1CCA82F5    1D -> 2 Fluid(s) -> riemann_solver=1 -> mpp_lim
    3A8359F6    1D -> 2 Fluid(s) -> riemann_solver=2 -> mixture_err
    6D24B115    1D -> 2 Fluid(s) -> riemann_solver=2 -> avg_state=1
    461DCB09    1D -> 2 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
    FD891191    1D -> 2 Fluid(s) -> riemann_solver=2 -> model_eqns=3
    9DAC4DDC    1D -> 2 Fluid(s) -> riemann_solver=2 -> alt_soundspeed
    C4907722    1D -> 2 Fluid(s) -> riemann_solver=2 -> mpp_lim
    C79E1D3C    1D -> 2 Fluid(s) -> Viscous
    CD9D3050    1D -> 2 Fluid(s) -> Viscous -> weno_Re_flux
    0FCCE9F1    1D -> 2 MPI Ranks
    EF54219C    1D -> Bubbles -> Monopole -> Polytropic -> bubble_model=3
    7FC6826B    1D -> Bubbles -> Monopole -> Polytropic -> bubble_model=2
    6B22A317    1D -> Bubbles -> Monopole -> bubble_model=2
    59D05DE9    1D -> Bubbles -> Monopole -> nb=1
    9EB947DB    1D -> Hypoelasticity -> 1 Fluid(s)
    AF0BCEE4    1D -> Hypoelasticity -> 2 Fluid(s)
    AF46C382    1D -> Bubbles -> Monopole -> QBMM -> bubble_model=3
    55533234    2D -> bc=-1
    EAA53889    2D -> bc=-2
    46AA7AF8    1D -> Bubbles -> Monopole -> QBMM
    20AE0551    2D -> bc=-4
    A6E65782    2D -> bc=-5
    4129A23A    2D -> bc=-6
    E84967E7    2D -> bc=-7
    5F877BC9    2D -> bc=-8
    16C03D8E    2D -> bc=-9
    B96AC58F    2D -> bc=-10
    8FDEE23A    2D -> bc=-11
    BF46F657    2D -> bc=-12
    D972BA0F    2D -> bc=-3
    E4EFEDB2    2D -> weno_order=3 -> mapped_weno=F -> mp_weno=F
    CD3D9660    2D -> weno_order=3 -> mapped_weno=T -> mp_weno=F
    3974AC7B    2D -> weno_order=5 -> mapped_weno=F -> mp_weno=F
    C04741B4    2D -> weno_order=5 -> mapped_weno=T -> mp_weno=F
    E76D41CE    2D -> weno_order=5 -> mapped_weno=F -> mp_weno=T
    7374E266    2D -> 1 Fluid(s) -> riemann_solver=1 -> mixture_err
    3BFEAC19    2D -> 1 Fluid(s) -> riemann_solver=1 -> avg_state=1
    FBF808BE    2D -> 1 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
  [mfc.sh]: Entering the Python virtual environment (venv).
        ___            ___          ___
       /__/\          /  /\        /  /\       [email protected] [Darwin]
      |  |::\        /  /:/_      /  /:/       ---------------------------------------
      |  |:|:\      /  /:/ /\    /  /:/
    __|__|:|\:\    /  /:/ /:/   /  /:/  ___
   /__/::::| \:\  /__/:/ /:/   /__/:/  /  /\   --jobs:    1
   \  \:\~~\__\/  \  \:\/:/    \  \:\ /  /:/   --mode:    release-cpu
    \  \:\         \  \::/      \  \:\  /:/    --targets: pre_process and simulation
     \  \:\         \  \:\       \  \:\/:/
      \  \:\         \  \:\       \  \::/
       \__\/          \__\/        \__\/       $ ./mfc.sh  --help

  Run
    Acquiring /Users/spencer/Downloads/MFC/tests/043B535A/case.py...
    Configuration:
      Input               /Users/spencer/Downloads/MFC/tests/043B535A/case.py
      Job Name      (-#)  unnamed
      Engine        (-e)  interactive
      Nodes         (-N)  1
      CPUs (/node)  (-n)  1
      GPUs (/node)  (-g)  0
      MPI Binary    (-b)  mpirun

    Running pre_process:
      Ensuring the Interactive Engine works (10s timeout):

  $ mpirun -np 1 hostname



  Error: The Interactive Engine appears to hang or exit with a non-zero status code. This may indicate that the wrong MPI binary
  is being used to launch parallel jobs. You can specify the correct one for your system using the <-b,--binary> option. For example:
   - ./mfc.sh run <myfile.py> -b mpirun
   - ./mfc.sh run <myfile.py> -b srun
  Reason: Exit code.

  Terminated: 15
  [mfc.sh]: Exiting the Python virtual environment.
  /opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224:
UserWarning: resource_tracker: There appear to be 3 leaked semaphore objects to clean up at shutdown
    warnings.warn('resource_tracker: There appear to be %d '

    3B414AF0    2D -> 1 Fluid(s) -> riemann_solver=2 -> mixture_err


Error: Test tests/043B535A: 2D -> 1 Fluid(s) -> riemann_solver=2 -> model_eqns=3: Failed to execute MFC. You can find the run's output in
/Users/spencer/Downloads/MFC/tests/043B535A/out.txt, and the case dictionary in /Users/spencer/Downloads/MFC/tests/043B535A/case.py.

Terminated: 15
[mfc.sh]: Exiting the Python virtual environment.
[I]shb-m1pro: Downloads/MFC $
[I]shb-m1pro: Downloads/MFC $ mpif90
gfortran: fatal error: no input files
compilation terminated.
[I]shb-m1pro: Downloads/MFC $ mpif90 --version
GNU Fortran (Homebrew GCC 12.2.0) 12.2.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

The above fails, but the below works:

[I]shb-m1pro: Downloads/MFC $ ./mfc.sh test -j 4
[mfc.sh]: Entering the Python virtual environment (venv).
      ___            ___          ___
     /__/\          /  /\        /  /\       [email protected] [Darwin]
    |  |::\        /  /:/_      /  /:/       ---------------------------------------
    |  |:|:\      /  /:/ /\    /  /:/
  __|__|:|\:\    /  /:/ /:/   /  /:/  ___
 /__/::::| \:\  /__/:/ /:/   /__/:/  /  /\   --jobs:    4
 \  \:\~~\__\/  \  \:\/:/    \  \:\ /  /:/   --mode:    release-cpu
  \  \:\         \  \::/      \  \:\  /:/
   \  \:\         \  \:\       \  \:\/:/
    \  \:\         \  \:\       \  \::/
     \__\/          \__\/        \__\/       $ ./mfc.sh [build, run, test, clean] --help

Building pre_process:

  $ cmake --build /Users/spencer/Downloads/MFC/build/pre_process --target pre_process -j 4 --config Release

[0/2] Re-checking globbed directories...
ninja: no work to do.

  $ cmake --install /Users/spencer/Downloads/MFC/build/pre_process

-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/pre_process

Building simulation:

  $ cmake --build /Users/spencer/Downloads/MFC/build/simulation --target simulation -j 4 --config Release

[0/2] Re-checking globbed directories...
ninja: no work to do.

  $ cmake --install /Users/spencer/Downloads/MFC/build/simulation

-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/simulation

Test | from D79C3E6F to BDD3411B (142 tests)

   tests/UUID    Summary

    3AE495F4    1D -> bc=-5
    70DAE9E8    1D -> bc=-4
    8A59E8E6    1D -> bc=-2
    D79C3E6F    1D -> bc=-1
    5EC236F2    1D -> bc=-6
    48CCE072    1D -> bc=-7
    AED93D34    1D -> bc=-8
    C5B79059    1D -> bc=-9
    727F72ED    1D -> bc=-10
    A60691E7    1D -> bc=-11
    3FC6FC4A    1D -> bc=-12
    2AB32975    1D -> bc=-3
    B3C85904    1D -> weno_order=3 -> mapped_weno=F -> mp_weno=F
    7077C99F    1D -> weno_order=3 -> mapped_weno=T -> mp_weno=F
    84017671    1D -> weno_order=5 -> mapped_weno=F -> mp_weno=F
    F5890628    1D -> weno_order=5 -> mapped_weno=T -> mp_weno=F
    34580912    1D -> weno_order=5 -> mapped_weno=F -> mp_weno=T
    5527832F    1D -> 1 Fluid(s) -> riemann_solver=1 -> mixture_err
    4AEF478A    1D -> 1 Fluid(s) -> riemann_solver=1 -> avg_state=1
    32D0F235    1D -> 1 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
    18BDCBC8    1D -> 1 Fluid(s) -> riemann_solver=2 -> mixture_err
    F97573DB    1D -> 1 Fluid(s) -> riemann_solver=2 -> avg_state=1
    F4F6AC27    1D -> 1 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
    2F35A1FE    1D -> 1 Fluid(s) -> riemann_solver=2 -> model_eqns=3
    1E738705    1D -> 2 Fluid(s) -> riemann_solver=1 -> mixture_err
    0879E062    1D -> 2 Fluid(s) -> riemann_solver=1 -> avg_state=1
    83EFC30C    1D -> 2 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
    1CCA82F5    1D -> 2 Fluid(s) -> riemann_solver=1 -> mpp_lim
    3A8359F6    1D -> 2 Fluid(s) -> riemann_solver=2 -> mixture_err
    6D24B115    1D -> 2 Fluid(s) -> riemann_solver=2 -> avg_state=1
    461DCB09    1D -> 2 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
    FD891191    1D -> 2 Fluid(s) -> riemann_solver=2 -> model_eqns=3
    9DAC4DDC    1D -> 2 Fluid(s) -> riemann_solver=2 -> alt_soundspeed
    C4907722    1D -> 2 Fluid(s) -> riemann_solver=2 -> mpp_lim
    C79E1D3C    1D -> 2 Fluid(s) -> Viscous
    CD9D3050    1D -> 2 Fluid(s) -> Viscous -> weno_Re_flux
    0FCCE9F1    1D -> 2 MPI Ranks
    EF54219C    1D -> Bubbles -> Monopole -> Polytropic -> bubble_model=3
    7FC6826B    1D -> Bubbles -> Monopole -> Polytropic -> bubble_model=2
    59D05DE9    1D -> Bubbles -> Monopole -> nb=1
    6B22A317    1D -> Bubbles -> Monopole -> bubble_model=2
    AF46C382    1D -> Bubbles -> Monopole -> QBMM -> bubble_model=3
    46AA7AF8    1D -> Bubbles -> Monopole -> QBMM
    9EB947DB    1D -> Hypoelasticity -> 1 Fluid(s)
    AF0BCEE4    1D -> Hypoelasticity -> 2 Fluid(s)
    55533234    2D -> bc=-1
    EAA53889    2D -> bc=-2
    20AE0551    2D -> bc=-4
    A6E65782    2D -> bc=-5
    4129A23A    2D -> bc=-6
    E84967E7    2D -> bc=-7
    5F877BC9    2D -> bc=-8
    16C03D8E    2D -> bc=-9
    B96AC58F    2D -> bc=-10
    8FDEE23A    2D -> bc=-11
    BF46F657    2D -> bc=-12
    D972BA0F    2D -> bc=-3
    E4EFEDB2    2D -> weno_order=3 -> mapped_weno=F -> mp_weno=F
    CD3D9660    2D -> weno_order=3 -> mapped_weno=T -> mp_weno=F
    3974AC7B    2D -> weno_order=5 -> mapped_weno=F -> mp_weno=F
    C04741B4    2D -> weno_order=5 -> mapped_weno=T -> mp_weno=F
    E76D41CE    2D -> weno_order=5 -> mapped_weno=F -> mp_weno=T
    7374E266    2D -> 1 Fluid(s) -> riemann_solver=1 -> mixture_err
    3BFEAC19    2D -> 1 Fluid(s) -> riemann_solver=1 -> avg_state=1
    FBF808BE    2D -> 1 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
    3B414AF0    2D -> 1 Fluid(s) -> riemann_solver=2 -> mixture_err
    3C00B89D    2D -> 1 Fluid(s) -> riemann_solver=2 -> avg_state=1
    345A94C0    2D -> 1 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
    043B535A    2D -> 1 Fluid(s) -> riemann_solver=2 -> model_eqns=3
    16FBF4C8    2D -> 2 Fluid(s) -> riemann_solver=1 -> mixture_err
    DC9CB97E    2D -> 2 Fluid(s) -> riemann_solver=1 -> avg_state=1
    A5C93D62    2D -> 2 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
    A6AC2E06    2D -> 2 Fluid(s) -> riemann_solver=1 -> mpp_lim
    5781A4C2    2D -> 2 Fluid(s) -> riemann_solver=2 -> mixture_err
    645A26E3    2D -> 2 Fluid(s) -> riemann_solver=2 -> avg_state=1
    FC4D07B6    2D -> 2 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
    4F2F4ACE    2D -> 2 Fluid(s) -> riemann_solver=2 -> model_eqns=3
    5DAB50B2    2D -> 2 Fluid(s) -> riemann_solver=2 -> alt_soundspeed
    F0F175B2    2D -> 2 Fluid(s) -> riemann_solver=2 -> mpp_lim
    9CB03CEF    2D -> 2 Fluid(s) -> Viscous
    D6BAC936    2D -> 2 Fluid(s) -> Viscous -> weno_Re_flux
    DB670E50    2D -> Axisymmetric -> model_eqns=2
    B89B8C70    2D -> Axisymmetric -> model_eqns=3
    FB822062    2D -> Axisymmetric -> Viscous
    8C7AA13B    2D -> 2 MPI Ranks
    B3AAC9C8    2D -> Axisymmetric -> Viscous -> weno_Re_flux
    34DBFE14    2D -> Bubbles -> Monopole -> Polytropic -> bubble_model=3
    AE37D842    2D -> Bubbles -> Monopole -> nb=1
    14B6198D    2D -> Bubbles -> Monopole -> Polytropic -> bubble_model=2
    CC4F7C44    2D -> Bubbles -> Monopole -> bubble_model=2
    122713AA    2D -> Hypoelasticity -> 1 Fluid(s)
    5281BD7B    2D -> Hypoelasticity -> 2 Fluid(s)
    66CFF8CC    2D -> Bubbles -> Monopole -> QBMM -> bubble_model=3
    6FC6A809    3D -> bc=-1
    09DAFEBA    3D -> bc=-2
    303B925A    2D -> Bubbles -> Monopole -> QBMM
    F99FBB36    3D -> bc=-4
    E09A12D9    3D -> bc=-5
    5010B814    3D -> bc=-6
    730DFD6D    3D -> bc=-7
    ABAC3AE3    3D -> bc=-8
    C93BE9B5    3D -> bc=-9
    D0045756    3D -> bc=-10
    557FF170    3D -> bc=-11
    61FFF3D3    3D -> bc=-12
    6B4B738B    3D -> bc=-3
    E1352143    3D -> weno_order=3 -> mapped_weno=F -> mp_weno=F
    13DFC31D    3D -> weno_order=3 -> mapped_weno=T -> mp_weno=F
    728A2A5B    3D -> weno_order=5 -> mapped_weno=F -> mp_weno=F
    42B169F5    3D -> weno_order=5 -> mapped_weno=F -> mp_weno=T
    19E33853    3D -> weno_order=5 -> mapped_weno=T -> mp_weno=F
    9ACD5174    3D -> 1 Fluid(s) -> riemann_solver=1 -> mixture_err
    73B0539E    3D -> 1 Fluid(s) -> riemann_solver=1 -> avg_state=1
    2A523AC1    3D -> 1 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
    C06849AD    3D -> 1 Fluid(s) -> riemann_solver=2 -> mixture_err
    AB0BE4E4    3D -> 1 Fluid(s) -> riemann_solver=2 -> avg_state=1
    C36F18FB    3D -> 1 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
    6241177B    3D -> 1 Fluid(s) -> riemann_solver=2 -> model_eqns=3
    C4A2FAA3    3D -> 2 Fluid(s) -> riemann_solver=1 -> mixture_err
    851F7AE2    3D -> 2 Fluid(s) -> riemann_solver=1 -> avg_state=1
    BD8004FF    3D -> 2 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
    758D0268    3D -> 2 Fluid(s) -> riemann_solver=1 -> mpp_lim
    AACF1BC5    3D -> 2 Fluid(s) -> riemann_solver=2 -> mixture_err
    B33E256A    3D -> 2 Fluid(s) -> riemann_solver=2 -> avg_state=1
    B8F5F1C8    3D -> 2 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
    7C8F1BA9    3D -> 2 Fluid(s) -> riemann_solver=2 -> alt_soundspeed
    F0E6771E    3D -> 2 Fluid(s) -> riemann_solver=2 -> model_eqns=3
    A0B82851    3D -> 2 Fluid(s) -> riemann_solver=2 -> mpp_lim
    1C0780C8    3D -> 2 Fluid(s) -> Viscous
    301B9153    3D -> Cylindrical -> model_eqns=2
    2060F55A    3D -> 2 Fluid(s) -> Viscous -> weno_Re_flux
    07C33719    3D -> Cylindrical -> Viscous
    CE232828    3D -> 2 MPI Ranks
    939D6718    3D -> Cylindrical -> Viscous -> weno_Re_flux
    36256906    3D -> Bubbles -> Monopole -> Polytropic -> bubble_model=3
    8A341282    3D -> Bubbles -> Monopole -> nb=1
    AD63A4A5    3D -> Bubbles -> Monopole -> Polytropic -> bubble_model=2
    622DEC78    3D -> Bubbles -> Monopole -> bubble_model=2
    7EFBCDAE    3D -> Hypoelasticity -> 1 Fluid(s)
    BDD3411B    3D -> Hypoelasticity -> 2 Fluid(s)
    63850240    3D -> Bubbles -> Monopole -> QBMM -> bubble_model=3
    AB04C64D    3D -> Bubbles -> Monopole -> QBMM

  Tested ✓
[mfc.sh]: Exiting the Python virtual environment.

All physical quantities become "infinity".

Hi,

I tried to reproduce the case of S. Sembian (2016), but the icfl condition in run_time.inf became infinity, and all of the physical quantities, including alpha, pressure, and velocity, also became infinity.

The infinities first appear in a few grid cells and then spread across the whole grid.
[Attached: five screenshots showing the infinite values appearing and spreading through the solution fields.]

My input files are attached:
input.zip

What am I doing wrong? Could you advise, please?
