
mom6's People

Contributors

adcroft, alex-huth, alperaltuntas, andrew-c-ross, angus-g, ashao, bmater, breichl, carolinecardinale, deniseworthen, gustavo-marques, hallberg-noaa, herrwang0, hmkhatri, jiandewang, jkrasting, kshedstrom, marshallward, mjharrison-gfdl, nichannah, nikizadehgfdl, noraloose, olgasergienko, pjpegion, raphaeldussin, sdbachman, stephengriffies, wfcooke, wrongkindofdoctor, zhi-liang


mom6's Issues

read_netCDF_data leads to cryptic model failure when rescale value argument is not 1

Calls to read_netCDF_data() are failing for some fields when the value of the rescale optional argument is not 1. The error messages that are returned are a cryptic warning about invalid memory in the call to handle%close() at about line 2109 of MOM_io.F90, but I suspect that this is the result of a segmentation fault earlier in read_netCDF_data_2d().

Z_INIT_HOMOGENIZE=True uses non-reproducing and potentially incorrect averages

The averaging used to generate spatially homogeneous initial conditions, which is triggered in the horiz_interp_and_extrap_tracer() routines or in MOM_temp_salt_initialize_from_Z() by setting Z_INIT_HOMOGENIZE=True, uses simple sums that are non-reproducing across rotation, PE count or layout, and it does not account for the possibility that some of the values come from land points. Ideally this would be modified to use properly masked and area-weighted reproducing sums, as is done in global_layer_mean(), but this would lead to answer changes at the level of roundoff.

The one known experiment where Z_INIT_HOMOGENIZE=True is used - MOM6-examples/ocean_only/single_column - is only ever run on a single PE and has just 4 columns that are (always?) ocean points, so this might not be too bad of an issue there, but it is easy to envision other cases where the current version of the code could lead to undesirable consequences.

Love number parameter array exceeds Fortran line limit

The Love_data parameter in MOM_load_Love_numbers.F90 is split over ~1440 continuation lines. A standard-compliant compiler is only required to support 255 continuation lines per statement. Although this is not generally a problem, it is preventing us from enabling the -pedantic flag for testing.

Unfortunately, I cannot think of any way to split a parameter initialization across multiple statements. Even if I could, I am not sure it is in our interest to have these numbers hard-coded into the executable.

We could change the array from a parameter to a variable, and fill in the records. We could also store the numbers in an input file, although I think this would be the first instance of such a large dataset added to the codebase. We don't use namelists very much, but this might be a good use of one.
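A minimal sketch of the "parameter to variable" option (the module and array names reuse those from the issue for illustration only; the array shape and the numeric values below are placeholders, not the real Love numbers):

module MOM_load_Love_numbers
  implicit none ; private
  public :: Love_data, Love_data_init

  real, allocatable :: Love_data(:,:)  !< Formerly a parameter, now filled at run time

contains

  !> Allocate and fill Love_data with ordinary assignments (or a file read), so that no
  !! single statement needs thousands of continuation lines.
  subroutine Love_data_init()
    if (allocated(Love_data)) return
    allocate(Love_data(4, 1440))
    Love_data(:,1) = (/ 0.0, 0.0, 0.0, 0.0 /)   ! Placeholder values, not the real numbers
    Love_data(:,2) = (/ 0.0, 0.0, 0.0, 0.0 /)   ! Placeholder values, not the real numbers
    ! ... remaining records filled the same way, or read from an input file ...
  end subroutine Love_data_init

end module MOM_load_Love_numbers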

@herrwang0 We discussed this problem back when it was first merged. Do you have any ideas or thoughts on how we might handle this array?

INPUT_STR_LENGTH is too short for many open boundary segment data specifications

The maximum length of a line in an input parameter file is set in MOM_file_parser:

integer, parameter :: INPUT_STR_LENGTH = 320 !< Maximum line length in parameter file.

For regional models, it is very easy for the necessary OBC_SEGMENT_*_DATA specifications to exceed this length. We have been using cryptically short file names for the segment data to keep the overall length inside the limit. Furthermore, if the length is exceeded the returned error is usually a red herring ("There is a mismatched quote in the parameter file" because the line is not read to the end containing the closing quote).

An easy solution is to increase INPUT_STR_LENGTH in MOM_file_parser.
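For example, the minimal version of that change would just be (the particular value is only a suggestion):

integer, parameter :: INPUT_STR_LENGTH = 1024 !< Maximum line length in parameter file.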

Is this fine, or is there a better solution? I've seen the discussion at mom-ocean #1254, but I'm not sure if and how that could be implemented for this case.

Also, I thought this had been brought up before, but I was unable to find anything, so apologies if this is a duplicate.

May also be relevant to #74 .

HOMOGENIZE_FORCINGS does not rescale properly for large factors

HOMOGENIZE_FORCINGS does not rescale properly for sufficiently large rescaling factors. In particular, for sufficiently large or small rescaling factors, the homogenized fields can come back as 0 or as values too large to be represented, while modest rescaling factors give the right answers.

The issue is that the homogenize_field() routines (appropriately) use the reproducing sums via the global_area_mean() routines to give domain-independence and rotational invariance, and the reproducing_sums() routines in turn use extended fixed-point arithmetic.

The solution would seem to be to temporarily rescale the variables inside global_area_mean() back to mks units while taking the spatial means, before restoring the scaling for the result. To accomplish this, I am proposing a new optional argument, tmp_scale, to global_area_mean(), global_area_mean_u() and global_area_mean_v() in MOM_spatial_means.F90, and a similar argument to the various homogenize_field routines in MOM_forcing_type. The appropriate rescaling factor would also have to be added to all of the calls to these routines in homogenize_forcing() and homogenize_mech_forcing().
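A minimal sketch of what the proposed tmp_scale argument could look like in global_area_mean() (the structure loosely follows MOM_spatial_means.F90, but the grid members and argument lists here are approximate rather than copied from the current code):

function global_area_mean(var, G, tmp_scale) result(avg)
  type(ocean_grid_type), intent(in) :: G                     !< Ocean grid structure
  real,                  intent(in) :: var(SZI_(G),SZJ_(G))  !< Field in rescaled units [A ~> a]
  real,        optional, intent(in) :: tmp_scale             !< Factor that converts var back to
                                                             !! mks units while summing [a A-1 ~> 1]
  real :: avg  !< Area-weighted mean of var, in the same rescaled units as var [A ~> a]

  real, dimension(SZI_(G),SZJ_(G)) :: tmpForSumming, areaSum
  real :: scale    ! Temporary unscaling factor [a A-1 ~> 1]
  integer :: i, j

  scale = 1.0 ; if (present(tmp_scale)) scale = tmp_scale

  tmpForSumming(:,:) = 0.0 ; areaSum(:,:) = 0.0
  do j=G%jsc,G%jec ; do i=G%isc,G%iec
    ! Undo the arbitrary rescaling before the extended fixed-point reproducing sum,
    ! so that very large or small scaling factors cannot overflow or underflow it.
    tmpForSumming(i,j) = (scale * var(i,j)) * (G%areaT(i,j) * G%mask2dT(i,j))
    areaSum(i,j) = G%areaT(i,j) * G%mask2dT(i,j)
  enddo ; enddo

  ! Restore the original scaling of the result after the reproducing sums.
  avg = (reproducing_sum(tmpForSumming) / reproducing_sum(areaSum)) / scale
end function global_area_mean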

Imperfect restarts

In trying to debug what we thought was an OBC issue, I have found a problem in the vertical viscosity on tile boundaries on restart. Running through an hour on 24 cores vs running after reading the one hour restart on four cores shows differences in visc_rem_u on a tile boundary. I have tracked it as far as a_cpl coming out of find_coupling_coef. More tomorrow, perhaps.

h_new needs a halo update before call to remap_OBC_fields in step_MOM_thermo

With the gnu compiler, the halos are full of zeros, but with intel, they have bad values leading to this stack trace:

forrtl: error (65): floating invalid
Image              PC                Routine            Line        Source                  
libpthread-2.31.s  00007F46708248C0  Unknown               Unknown  Unknown
MOM6               0000000000E8F987  mom_remapping_mp_         549  MOM_remapping.F90
MOM6               0000000000E8142F  mom_remapping_mp_         195  MOM_remapping.F90
MOM6               000000000078F741  mom_open_boundary        5541  MOM_open_boundary.F90
MOM6               0000000002322031  mom_mp_step_mom_t        1597  MOM.F90
MOM6               00000000022FE811  mom_mp_step_mom_          812  MOM.F90
MOM6               0000000002D60006  ocean_model_mod_m         594  ocean_model_MOM.F90
MOM6               0000000004BD31D8  MAIN__                   1062  coupler_main.F90
MOM6               00000000004132CD  Unknown               Unknown  Unknown
libc-2.31.so       00007F467050324D  __libc_start_main     Unknown  Unknown
MOM6               00000000004131FA  Unknown               Unknown  Unknown

Note that this section of MOM_open_boundary is just for the tracer reservoirs.
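A minimal sketch of the suggested fix in step_MOM_thermo, assuming h_new and G are in scope at that point (the halo width that remap_OBC_fields actually needs has not been checked):

  ! Fill the halos of h_new before the OBC tracer-reservoir remapping; with the
  ! Intel compiler the halos otherwise hold uninitialized (bad) values.
  call pass_var(h_new, G%Domain)
  ! ... the existing call to remap_OBC_fields() follows here ...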

High resolution MOM6 in ensemble mode mis-labels restart file

When using ensembles with higher-resolution MOM6, the restart files that are output are mislabeled.

./MOM.res.ens_01.ens_01_1.nc
./MOM.res.ens_01.ens_01.ens_01_2.nc
./MOM.res.ens_01.ens_01.ens_01.ens_01_3.nc
./MOM.res.ens_01.nc
./MOM.res.ens_02.ens_02_1.nc
./MOM.res.ens_02.ens_02.ens_02_2.nc
./MOM.res.ens_02.ens_02.ens_02.ens_02_3.nc
./MOM.res.ens_02.nc

while I would expect:
./MOM.res.ens_01_1.nc
./MOM.res.ens_01_2.nc
./MOM.res.ens_01_3.nc
./MOM.res.ens_01.nc
./MOM.res.ens_02_1.nc
./MOM.res.ens_02_2.nc
./MOM.res.ens_02_3.nc
./MOM.res.ens_02.nc

I believe this is with FMS2 and should also be testable with an ice/ocean only setup.
In order to run in ensemble mode, you need to use
ensemble_nml : ensemble_size=2

Any initial condition files need to have ens_01 and ens_02 in the file names
e.g. /fv_core.res.ens_02.tile2.nc rather than /fv_core.res.tile2.nc

This is why using an ice-ocean-only configuration and "cold-starting" the model might be easier.
This only seems to be an issue when the restart file exceeds whatever limit is imposed, so lower resolution (1 degree) does not trigger this.

Not all 3-d restart variables are being remapped

The restart files include all of the variables that are required to give a bitwise identical restart between run segments. Because these restarts are typically written immediately after the model has been remapped, they should also give a reliable indicator of all of the 3-d variables that need to be remapped.

An examination of the code shows that while the basic state variables (layer thickness, velocities and tracers) are being remapped, there are a number of other 3-d variables that are not being remapped, but probably should be. These can be grouped into several categories:

  • Layer mass transports, time-step averaged thicknesses, time-step averaged velocities, and horizontal viscosity accelerations that are used in the predictor step momentum equation with the split_RK2 time stepping scheme. The first three variables are combined to give estimates of the Coriolis and momentum self-advection accelerations, so perhaps these estimated accelerations should be remapped, and not the fields themselves.
  • Shear-driven viscosities, as stored in visc%Kv_shear and visc%Kv_shear_Bu. These are defined at interfaces, not as layer averages, and they are not necessarily conserved quantities, so they will probably not be remapped using the same expressions as for other variables.
  • Open boundary condition radiation terms and tracer reservoirs.
  • Previous 3-dimensional Stokes drift velocities with active wave coupling. (This one only seems to be used in a diagnostic, but it may also be a part of the incomplete development of a new capability.)

redundant points when using oblique_tan OBC option

When using the OBLIQUE_TAN OBC option on a J boundary (J=0 or J=N) there is a bug producing redundant values at u points. Some of these redundant values are round off level errors, but some are comparable to the size of the variable.

I found this issue while testing the ESMG Bering case using FMS2. Note that the version on github is using ORLANSKI and ORLANSKI_TAN OBCs.

The redundant points first occur after the horizontal viscosity update in MOM_dynamics_split_RK2 in CS%diffu and then are incorporated into the u_bc_accel. There are several instances where the OBCs are used in the horizontal_viscosity subroutine that could be the source of this error.

With the OBC options: FLATHER,OBLIQUE,NUDGED, and NUDGED_TAN this issue did not occur, it was only when OBLIQUE_TAN was added that the redundant points occurred.

Other things to note:
(1) only an issue when OBCs were at J boundaries
(2) impacts all layers (only a subset of the errors are shown below)
(3) may only be impacting points in the row i=4 (the PE number is cut off in the error messages)
(4) only impacted u components/velocities
(5) this case used a land mask and variable mixing

Example of the redundant point output:

before corr pre-btstep CS%diff Layer 1 redundant u-components -2.5538E-09 -2.5470E-09 differ by  -6.7866E-12 at i,j =    4   5 on 
before corr pre-btstep CS%diff Layer 1 redundant u-components  2.6922E-08  2.6419E-08 differ by   5.0270E-10 at i,j =    4   5 on 
before corr pre-btstep CS%diff Layer 1 redundant u-components -1.7640E-08 -1.7735E-08 differ by   9.5510E-11 at i,j =    4   5 on 

before corr pre-btstep u_bc_accel Layer 1 redundant u-components  2.7154E-06  2.7153E-06 differ by   9.5510E-11 at i,j =    4   5 
before corr pre-btstep u_bc_accel Layer 1 redundant u-components  2.6223E-06  2.6223E-06 differ by  -6.7866E-12 at i,j =    4   5 
before corr pre-btstep u_bc_accel Layer 1 redundant u-components  6.5393E-06  6.5388E-06 differ by   5.0270E-10 at i,j =    4   5 

Removal of `public post_data_1d_k` is incompatible with ocean_BGC

We recently removed the statement

public post_data_1d_k

from src/framework/MOM_diag_mediator.F90. Unfortunately, ocean_BGC/generic_tracers/generic_tracer_utils.F90 has the line:

    use MOM_diag_mediator, only : post_data_MOM=>post_data, post_data_1d_k

so our BGC models no longer compile.

New diagnostic: Laplacian of velocity fields to detect noise

It has been proposed for OM5 development to diagnose the Laplacian of the velocity components to detect noise in the flow. We could just look at the tendency due to dissipation but that depends on a non-zero viscosity. The idea is that if we set USE_LAPLACIAN=True but with zero viscosity, then we could enable a diagnostic of the Laplacian with little extra code.
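For reference, a sketch of the kind of MOM_input settings this idea relies on (the parameter names follow the issue's wording and the usual MOM_input conventions; the exact names, and any new diagnostic name, would need to be confirmed against MOM_parameter_doc):

USE_LAPLACIAN = True            !   [Boolean] Enable the Laplacian operator so that its
                                !   diagnostics can be registered.
KH = 0.0                        !   [m2 s-1] Zero background Laplacian viscosity, so
                                !   enabling the operator adds no dissipation.
SMAGORINSKY_KH = False          !   [Boolean] Leave all Laplacian coefficients at zero.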

Oblique OBCs have problematic restarts in some cases

With oblique open boundary conditions, both u-face (east-west) and v-face (north-south) segments have both segment%rx_norm_obl and segment%ry_norm_obl fields. In order to have reproducing restarts, these are copied into the 3-d arrays OBC%rx_oblique and OBC%ry_oblique. Unfortunately, OBCs at both faces can have the same global indices, so if active east-west and north-south segments join in the north-east corner of a tracer cell, their restart fields would overwrite each other. A similar situation applies for the OBC%cff_normal fields that are also used for oblique OBCs.

The appropriate solution here would seem to be to use separate restart fields for the oblique OBCs for u- and v-face OBC segments. This would add 3 more restart fields when oblique OBCs are used, but then the staggering of all variables would be unambiguous and cases with pairs of oblique OBCs joining in the northeast corner of tracer cells would reproduce across restarts.

I do not have any examples where this situation arises, but that may be because the oblique OBCs are not very widely used.

Note that normal radiation open boundary conditions do not have this problem.

@kshedstrom or @MJHarrison-GFDL, I would particularly appreciate your thoughts on whether this diagnosis seems correct.

Unicode characters in source?

Issue #244 inadvertently discovered three Unicode dash characters:

  • src/parameterizations/lateral/MOM_load_love_numbers.F90 has a hyphen (U+2010)

  • src/tracer/boundary_impulse_tracer.F90 has two en dashes (U+2013).

There is not necessarily an issue with using them, but the makedep tool fails on them in some Python versions, and at the least that needs to be resolved. We could also just replace them with the ASCII dash - (aka "hyphen-minus", U+002D).

make -j in ac/deps Error: Can't open included file 'mpif-config.h' [Makefile.dep:157: mpp_data.o] Error 1 in cluster env with modules

I am using RHEL 8 on a cluster where we can load modules ad hoc; however, I am getting the errors below from make -j in ~/MOM/ac/deps:

 make -j
make -C fms/build libFMS.a
make[1]: Entering directory '/path/to/MOM6/ac/deps/fms/build'
gfortran -DPACKAGE_NAME=\"FMS\" -DPACKAGE_TARNAME=\"fms\" -DPACKAGE_VERSION=\"\ \" -DPACKAGE_STRING=\"FMS\ \ \" -DPACKAGE_BUGREPORT=\"https://github.com/NOAA-GFDL/FMS/issues\" -DPACKAGE_URL=\"\" -DHAVE_SCHED_GETAFFINITY=1 -Duse_libMPI=1 -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_NETCDF_H=1 -Duse_netCDF=1 -g -O2 -I/burg/opt/netcdf-fortran-4.5.3/include -fcray-pointer -fdefault-real-8 -fdefault-double-8 -ffree-line-length-none   -I/burg/opt/netcdf-c-4.7.4/include -c ../src/mpp/mpp_data.F90 -I../src/include -I../src/mpp/include
/usr/mpi/gcc/openmpi-4.0.3rc4/include/mpif.h:56: Error: Can't open included file 'mpif-config.h'
make[1]: *** [Makefile.dep:157: mpp_data.o] Error 1
make[1]: Leaving directory '/path/to/MOM6/ac/deps/fms/build'
make: *** [Makefile:46: fms/build/libFMS.a] Error 2

mpif-config.h is definitely in /usr/mpi/gcc/openmpi-4.0.3rc4/include/mpif-config.h

Is there an environment variable I can use to get around this?

Temp/salinity and thickness bug when initializing with an ice shelf

Hello, I've come across strange features in the initialization of temperature/salinity and layer thickness/interface levels when an ice shelf is present. I initialized the model with a simple triangular ice shelf and TS_CONFIG = "ISOMIP" and THICKNESS_CONFIG = "ISOMIP" (though the same happens with other config options, e.g. TS=linear or Thickness=uniform) which is meant to produce a linear stratification according to given temperature and salinity bounds. However, although it looks mostly linear, there are strange periodic deviations from linearity in the initial conditions file which are highlighted below in this figure, showing the anomaly of the salt initial conditions from the theoretical linear profile.
[Figure: anomaly of the salt initial conditions from the theoretical linear profile]

Sea level/layer thicknesses also have very slight perturbations from their expected smooth distributions (for sigma_shelf_zstar coordinates) when TRIM_IC_FOR_P_SURF is not used (turning it on seems to fix those discontinuities but not the T/S ones). When I run the model, horizontal velocities emerge very quickly with the same spatial frequency as the salt anomalies suggesting that the anomalous horizontal density gradients drive this unwanted flow.

A temporary fix: I can get around this issue by using netcdf files as input for the thicknesses (with the ice shelf accounted for), and by asking the model to skip the calc_sfc_displacement step in MOM_state_initialization.F90. This somehow results in a linear T/S profile and very low (order 10^(-8) m/s) velocities for the simple triangular ice shelf, but it requires running the model twice. If it is helpful, I have linked a github repo that includes the files/MOM_input/override used in my test cases.

Undocumented interfaces in MOM_interpolate.F90 and MOM_interp_infra.F90

All MOM6 interfaces are supposed to have documentation of their purpose and arguments within the MOM6 code. Because of the uneven documentation standards in FMS and other external packages, we cannot rely on any of them for the documentation of the interfaces as used within the MOM6 code.

The interfaces to time_interp_external_init() and horiz_interp_init() need to be documented in either MOM_interpolate.F90 or MOM_interp_infra.F90 (or both) by eliminating the direct pass-through of the underlying FMS routines, instead using appropriately named wrappers. The direct use of the FMS type horiz_interp_type (which is not documented in MOM6 code either) should similarly be avoided.

In addition, the distinct (and, in the case of config_src/infra/FMS2, inconsistent) interfaces to axistype and get_axis_data() in MOM_interp_infra.F90 and MOM_io_infra.F90 could be confusing. Here, building MOM_interp_infra upon MOM_io_infra would seem to make sense.

This raises the broader issue that these undocumented interfaces and types were not automatically detected and flagged as problematic. Suggestions for how to automatically detect them would be appreciated.

Runs failing with FMS2

Only my complicated Arctic and Bering domains fail, not any of the simple tests. They fail with:

FATAL from PE    92: open_file: FMS I/O requires a domain input.

With a stack trace pointing to:

Image              PC                Routine            Line        Source
MOM6               000000000249DCE5  Unknown               Unknown  Unknown
MOM6               0000000001B71E9C  mpp_mod_mp_mpp_er          72  mpp_util_mpi.inc
MOM6               000000000079FAD9  mom_io_infra_mp_o         312  MOM_io_infra.F90
MOM6               00000000007369D8  sis_restart_mp_op         312  SIS_restart.F90
MOM6               000000000073CC30  sis_restart_mp_re        1272  SIS_restart.F90
MOM6               0000000000C86952  ice_model_mod_mp_        2269  ice_model.F90
MOM6               0000000000853F20  coupler_main_IP_c        1719  coupler_main.F90
MOM6               000000000084901E  MAIN__                    589  coupler_main.F90
MOM6               0000000000401BB2  Unknown               Unknown  Unknown
MOM6               00000000025760CF  Unknown               Unknown  Unknown
MOM6               0000000000401A9A  Unknown               Unknown  Unknown

CM4 crashes due to incorrect initialization after reading old ice restart file when compiled with infra/FMS2

We were trying to restart CM4 using old CMIP6 piControl restarts and a recent MOM6 tag.
When compiling with infra/FMS2 we got the following warnings:

WARNING from PE     0: categorize_axes: Failed to identify x- and y- axes in the axis list (xaxis_1, yaxis_1, Time) of a variable being read from INPUT/ice_model.res.nc

Subsequently the model crashed with:

FATAL from PE   321: compute_qs: saturation vapor pressure table overflow, nbad=    231

The same exact model ran fine when compiled with infra/FMS1.

I have a simple fix for this and I'll make a PR shortly for review.

OBC problem in tangential_vel

Some runs with OBCs get differing answers with differing processor counts. The value at the tile boundary for tangential_vel has been seen not to match from one tile to the next - but only at some depths. This leads to a mismatch in q for the Coriolis computations.

get_param() with %blocks not working as expected

The code

  call openParameterBlock(param_file,'ABC') ! Prepend ABC% to all parameters
  call get_param(param_file, mdl, "USE_XYZ", use_xyz, "Comment", default=.false.)
  call closeParameterBlock(param_file)

correctly generates and reads parameters formatted as

ABC%
USE_XYZ = True                  !   [Boolean] default = False
                                ! Comment
%ABC

The above is the normal form but a user can also write a shortcut of the form

ABC%USE_XYZ = True

which will also be read correctly (I do not think we have a mechanism to write in this shortcut form).

A problem arises when we need to read a parameter from another module. Currently, if we call openParameterBlock(), the doc file gets a %block entry inserted, which would then be out of place. I thought that

  call get_param(param_file, mdl, "ABC%USE_XYZ", use_xyz, do_not_log=.true., default=.false.)

would work, but it is looking for lines with ABC%USE_XYZ and not seeing the non-shortcut form. It seems we need to either

  1. add do_not_log= to openParameterBlock(); or
  2. extend get_param() to parse the ABC% prefix (this seems ugly to me); or
  3. add a block='ABC' optional argument to get_param() which would silently look inside the block.
    The last option seems best to me because we could then also check that we have no "%" characters in parameter names.

Summary:

  • Current bug is that call get_param(..., "ABC%USE_XYZ", ...) does not work as expected.
  • Enhancement suggestion is to catch this and provide a mechanism to read a parameter from outside of the declared block.
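For illustration, option 3 above might look like this from the calling module (the block= argument does not exist today; this is only the proposed interface):

  ! Proposed: read ABC%USE_XYZ from another module without opening the block and
  ! without a spurious %ABC entry appearing in the doc files.
  call get_param(param_file, mdl, "USE_XYZ", use_xyz, do_not_log=.true., &
                 default=.false., block="ABC")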

Possible race condition in file parser and/or unit tests

In our GitHub Actions CI, the MOM_file_parser unit tests will intermittently produce the following error when run over two PEs:

$ mpirun -n 2 ../../build/unit/MOM_unit_tests
<... output ...>

=== test_open_param_file_no_doc

=== test_open_param_file_no_doc
NOTE from PE     0: open_param_file: TEST_input has been opened successfully.
NOTE from PE     0: close_param_file: TEST_input has been closed successfully.

=== test_read_param_int

=== test_read_param_int
NOTE from PE     0: open_param_file: TEST_input has been opened successfully.
NOTE from PE     0: close_param_file: TEST_input has been closed successfully.

WARNING from PE     0: open_param_file: file TEST_input has already been opened. This should NOT happen! Did you specify the same file twice in a namelist?


FATAL from PE     1: open_param_file: Input file 'TEST_input' does not exist.

application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1

This suggests some race condition related to either this specific test (test_read_param_int), the prior test (test_open_param_file_no_doc), or something more fundamental inside of open_param_file.

! Check that this file has not already been opened
if (CS%nfiles > 0) then
  reopened_file = .false.
  inquire(file=trim(filename), number=iounit)
  if (iounit /= -1) then
    do i = 1, CS%nfiles
      if (CS%iounit(i) == iounit) then
        call assert(trim(CS%filename(1)) == trim(filename), &
            "open_param_file: internal inconsistency! "//trim(filename)// &
            " is registered as open but has the wrong unit number!")
        call MOM_error(WARNING, &
            "open_param_file: file "//trim(filename)// &
            " has already been opened. This should NOT happen!"// &
            " Did you specify the same file twice in a namelist?")
        reopened_file = .true.
      endif ! unit numbers
    enddo ! i
  endif
  if (any_across_PEs(reopened_file)) return
endif

The code block raising this issue should only trigger if CS%nfiles is positive. This ought not to be possible, since param is a new local variable on the stack of test_read_param_int and the function is only called once. (Each rank does call the function, but CS should be local to the rank.)

There are potential issues inside the code block, since inquire() could detect a file created on the other rank, or an IO unit could be left open from a previous test. But given that only a nonzero nfiles should execute these tests, it is confusing that it is even happening.

I don't yet know how to replicate this error, but would like to start tracking this issue as it happens in our CI.

inconsistency between bug flags

Some bugfix runtime parameters use the form FIX_SOME_BUG = True while the majority use the form SOME_OTHER_BUG = False, which can lead to confusion/mistakes. The offenders are:

 grep -r FIX * | grep get_param | grep -i bug
config_src/drivers/mct_cap/mom_surface_forcing_mct.F90:  call get_param(param_file, mdl, "FIX_USTAR_GUSTLESS_BUG", CS%fix_ustar_gustless_bug, &
config_src/drivers/nuopc_cap/mom_surface_forcing_nuopc.F90:  call get_param(param_file, mdl, "FIX_USTAR_GUSTLESS_BUG", CS%fix_ustar_gustless_bug, &
config_src/drivers/FMS_cap/MOM_surface_forcing_gfdl.F90:  call get_param(param_file, mdl, "FIX_USTAR_GUSTLESS_BUG", CS%fix_ustar_gustless_bug, &
config_src/drivers/solo_driver/MOM_surface_forcing.F90:  call get_param(param_file, mdl, "FIX_USTAR_GUSTLESS_BUG", CS%fix_ustar_gustless_bug, &
src/core/MOM_dynamics_unsplit.F90:  call get_param(param_file, mdl, "FIX_UNSPLIT_DT_VISC_BUG", CS%use_correct_dt_visc, &
src/core/MOM_dynamics_unsplit_RK2.F90:  call get_param(param_file, mdl, "FIX_UNSPLIT_DT_VISC_BUG", CS%use_correct_dt_visc, &

cgrid based experiments have a restart issue under Intel avx2

When we compile MOM6-SIS2 with the AVX2 instruction set, e.g. using the compiler switch -march=core-avx2 instead of -xsse2 on c4 with Intel 21, MOM6_SIS2_cgrid does not reproduce across a restart (the 1x2days answer != the 2x1day answer).
MOM6_SIS2 (bgrid) has no such issue.

Potential bug related the add_LES_viscosity option

I think there is a bug with the Smagorinsky and Leith viscosity (for Laplacian) when add_LES_viscosity is true. This may explain the issue described in #62.

if (CS%add_LES_viscosity) then
  do J=js-1,Jeq ; do I=is-1,Ieq
    Kh(I,J) = Kh(I,J) + CS%Laplac2_const_xx(i,j) * Shear_mag(i,j)
  enddo ; enddo
else
  do J=js-1,Jeq ; do I=is-1,Ieq
    Kh(I,J) = max(Kh(I,J), CS%Laplac2_const_xy(I,J) * Shear_mag(i,j) )
  enddo ; enddo
endif

The term in line 1273 should be CS%Laplac3_const_xy instead of CS%Laplac3_const_xx, because this part of the code is for q-points (the cited if-block only controls whether one takes the maximum or adds the Smag viscosity, so the expressions should otherwise be the same).

For the same reason, it should be Shear_mag(I,J) (Shear_mag here has been recalculated on q-points).

This also applies to Leith_Kh a few lines below.

Providing OBC data that doesn't span all of 0:N causes trouble

I can reproduce Liz Drenkard's report of not being able to read values for an open boundary that doesn't span the whole global domain extent:

At line 3896 of file //import/c1/AKWATERS/kate/ESMG/ESMG-configs/src/MOM6/src/core/MOM_open_boundary.F90
Fortran runtime error: Index '281' of dimension 2 of array 'tmp_buffer' outside of expected range (1:275)

Error termination. Backtrace:
At line 3902 of file //import/c1/AKWATERS/kate/ESMG/ESMG-configs/src/MOM6/src/core/MOM_open_boundary.F90
Fortran runtime error: Index '371' of dimension 1 of array 'tmp_buffer' outside of expected range (1:105)

I will dig more.

Non-reproducible trigonometric functions in viscous BBL

There are trigonometric functions in the calculation of the bottom boundary layer viscosity, set_viscous_BBL(), of the form cos(acos(x)/3 - 2*pi/3).

These functions may be responsible for an answer change when transitioning from an Intel to an AMD processor:

@@ -298,11 +298,11 @@
 h-point: mean=   2.5651029259221975E+01 min=   0.0000000000000000E+00 max=   4.0983512515433851E+01 Start set_viscous_BBL S
 h-point: c= 937092751 sw= 934896663 se= 934896663 nw= 939288839 ne= 939288839 Start set_viscous_BBL S
 u-point: mean=   4.7372083769301478E-07 min=   0.0000000000000000E+00 max=   5.1746049326678756E-02 u Ray [uv]
-u-point: c= 575868012 u Ray [uv]
+u-point: c= 575868009 u Ray [uv]
 v-point: mean=   4.5227642461442761E-07 min=   0.0000000000000000E+00 max=   1.1190647973926611E-02 v Ray [uv]
-v-point: c= 574147674 v Ray [uv]
+v-point: c= 574147672 v Ray [uv]
 u-point: mean=   9.2146868327139069E-04 min=   0.0000000000000000E+00 max=   1.5000000000000007E-03 u kv_bbl_[uv]

Compilers were identical across machines. AFAIK, Intel does not use the C math library to compute these functions.

CHANNEL_DRAG = False restores the answers, hinting that these functions are responsible for the change. In any case, they are recognized as a potential source of answer changes.

We may want to consider replacing these expressions with a bit-reproducible approximation.

A snippet containing the lines is shown below.

elseif (crv > 0) then
  ! There may be a minimum depth, and there are
  ! analytic expressions for L for all cases.
  if (vol < Vol_2_reg) then
    ! In this case, there is a contiguous open region and
    ! vol = 0.5*L^2*(slope + crv/3*(3-4L)).
    if (a2x48_apb3*vol < 1e-8) then ! Could be 1e-7?
      ! There is a very good approximation here for massless layers.
      L0 = sqrt(2.0*vol*Iapb) ; L(K) = L0*(1.0 + ax2_3apb*L0)
    else
      L(K) = apb_4a * (1.0 - &
             2.0 * cos(C1_3*acos(a2x48_apb3*vol - 1.0) - C2pi_3))
    endif
    ! To check the answers.
    ! Vol_err = 0.5*(L(K)*L(K))*(slope + crv_3*(3.0-4.0*L(K))) - vol
  else ! There are two separate open regions.
    ! vol = slope^2/4crv + crv/12 - (crv/12)*(1-L)^2*(1+2L)
    ! At the deepest volume, L = slope/crv, at the top L = 1.
    !L(K) = 0.5 - cos(C1_3*acos(1.0 - C24_crv*(Vol_open - vol)) - C2pi_3)
    tmp_val_m1_to_p1 = 1.0 - C24_crv*(Vol_open - vol)
    tmp_val_m1_to_p1 = max(-1., min(1., tmp_val_m1_to_p1))
    L(K) = 0.5 - cos(C1_3*acos(tmp_val_m1_to_p1) - C2pi_3)
    ! To check the answers.
    ! Vol_err = Vol_open - 0.25*crv_3*(1.0+2.0*L(K)) * (1.0-L(K))**2 - vol
  endif
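For context, the cos(acos(...)/3 - 2*pi/3) expressions are Viète's trigonometric solution of the depressed cubic that the volume relations above reduce to (a standard identity, not something specific to MOM6):

$$t^3 + p\,t + q = 0,\quad 4p^3 + 27q^2 < 0 \;\;\Rightarrow\;\; t_k = 2\sqrt{-\tfrac{p}{3}}\;\cos\!\left(\tfrac{1}{3}\arccos\!\left(\tfrac{3q}{2p}\sqrt{\tfrac{-3}{p}}\right) - \tfrac{2\pi k}{3}\right),\quad k = 0, 1, 2.$$

Any platform-dependent rounding in acos() or cos() therefore feeds directly into L(K), which is consistent with the Intel/AMD differences shown above; a bit-reproducible replacement would need to approximate this root directly.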

Excess pass_var calls in ZB2020

The recently merged ZB2020 implementation is currently usable but appears to suffer from performance issues. The following changes have been suggested:

  • Halo updates applied for individual 2D layers could be deferred and applied to the full 3D field.

  • There are instances of halo updates applied before and after a computation. The halo should account for previous computation and only one should be required.

  • Many individual halo updates could be bundled into a do_group_pass.

  • Expensive collective min_max tests for monotonicity may be better suited under a debug-like flag (either the global MOM debug flag or a ZB2020-specific flag).

  • CPU clocks around calls to ZB2020 would be useful for diagnosing future issues.

These are discussed in detail in #356.
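A minimal sketch of the bundled halo update suggested above, using the existing MOM6 group-pass machinery (the ZB2020 field names here are placeholders, not the actual variable names):

  type(group_pass_type) :: pass_ZB   ! Bundles several halo updates into one exchange

  ! Register the full 3-d fields once, rather than updating each 2-d layer separately.
  call create_group_pass(pass_ZB, Txx, G%Domain)
  call create_group_pass(pass_ZB, Tyy, G%Domain)
  call create_group_pass(pass_ZB, Txy, G%Domain)

  ! A single communication step then updates the halos of all registered fields.
  call do_group_pass(pass_ZB, G%Domain)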

Model dies with segmentation fault if depth greater than 9250 m

The model dies with a segmentation fault if the ocean depth is anywhere greater than 9250 m unless non-obvious, non-default settings are used.

Reproduction

Run the "double_gyre" example out of the box, except set TOPO_CONFIG = "flat" and MAXIMUM_DEPTH to anything greater than 9250.

Error message

The error message depends on the Fortran runtime and optimization level. Optimized Intel output is unintelligible; gfortran helpfully produces:

Error termination. Backtrace:
At line 465 of file ../../../../src/MOM6/src/ALE/MOM_regridding.F90
Fortran runtime error: Index '41' of dimension 1 of array 'woa09_dz' above upper bound of 40

Expected behavior

No segmentation faults. If the parameters are inconsistent, model should post an error message and exit gracefully.

Origin

initialize_regridding, called by the diagnostics mediator, will default (for convenience) to WOA09 vertical levels if MAXIMUM_DEPTH > 3000. The offending lines then look for an entry in the list of WOA09 vertical levels that exceeds the maximum depth of the domain. After 9250 m, the list ends but initialize_regridding keeps looking until it runs off the end of the list.
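A sketch of the kind of guard that would turn the out-of-bounds access into a graceful error (woa09_dz is the array named in the traceback; the other variable names here are hypothetical, and the real logic in initialize_regridding differs):

  ! Walk down the WOA09 level thicknesses, but stop at the end of the table and
  ! fail with a readable message if the requested depth is deeper than it covers.
  total_dz = 0.0 ; nk_woa = 0
  do k=1,size(woa09_dz)
    total_dz = total_dz + woa09_dz(k) ; nk_woa = k
    if (total_dz >= max_depth) exit
  enddo
  if (total_dz < max_depth) call MOM_error(FATAL, &
      "initialize_regridding: MAXIMUM_DEPTH exceeds the deepest default WOA09 "//&
      "level; set DIAG_COORD_DEF_Z explicitly.")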

Workaround

Set DIAG_COORD_DEF_Z explicitly. If you don't need diagnostics on anything other than the native grid, DIAG_COORD_DEF_Z = "UNIFORM" works fine.

Latest FMS causes trouble

With gfortran, I get:

MOM_io_infra.o:MOM_io_infra.F90:function __mom_io_infra_MOD_write_time_if_later: error: undefined reference to '__fms_io_utils_mod_MOD___vtab_REAL_8_'
MOM_io_infra.o:MOM_io_infra.F90:function __mom_io_infra_MOD_write_metadata_axis: error: undefined reference to '__fms_io_utils_mod_MOD___vtab_INTEGER_4_'
MOM_io_infra.o:MOM_io_infra.F90:function __mom_io_infra_MOD_mom_write_axis: error: undefined reference to '__fms_io_utils_mod_MOD___vtab_REAL_8_'
MOM_io_infra.o:MOM_io_infra.F90:function __mom_io_infra_MOD_mom_write_axis: error: undefined reference to '__fms_io_utils_mod_MOD___vtab_REAL_8_'
MOM_io_infra.o:MOM_io_infra.F90:function __mom_io_infra_MOD_write_field_0d: error: undefined reference to '__fms_io_utils_mod_MOD___vtab_REAL_8_'
MOM_io_infra.o:MOM_io_infra.F90:function __mom_io_infra_MOD_read_field_1d_int: error: undefined reference to '__fms_io_utils_mod_MOD___vtab_INTEGER_4_'
MOM_io_infra.o:MOM_io_infra.F90:function __mom_io_infra_MOD_read_field_1d_int: error: undefined reference to '__fms_io_utils_mod_MOD___vtab_INTEGER_4_'
MOM_io_infra.o:MOM_io_infra.F90:function __mom_io_infra_MOD_read_field_0d_int: error: undefined reference to '__fms_io_utils_mod_MOD___vtab_INTEGER_4_'

Generic tracer runoff fluxes are incorrect if DT_THERM < dt_cpld

For generic tracers that have a non-zero runoff concentration, the contribution of the runoff to the tracer surface flux is computed separately in MOM_generic_tracer_column_physics:

call g_tracer_get_pointer(g_tracer,g_tracer_name,'stf', stf_array)
call g_tracer_get_pointer(g_tracer,g_tracer_name,'trunoff',trunoff_array)
call g_tracer_get_pointer(g_tracer,g_tracer_name,'runoff_tracer_flux',runoff_tracer_flux_array)
!nnz: Why is fluxes%river = 0?
runoff_tracer_flux_array(:,:) = trunoff_array(:,:) * &
US%RZ_T_to_kg_m2s*fluxes%lrunoff(:,:)
stf_array = stf_array + runoff_tracer_flux_array

The problem is that this routine is called every thermodynamics timestep, but stf is refreshed from the coupler every coupling time step. This means that if DT_THERM < dt_cpld, the tracer runoff is added to stf multiple times. So, on the first thermo timestep after a coupler step, the runoff input is correct, on the second thermo timestep it is twice what it should be, etc.

This is not an issue for GFDL's previous global earth system model runs, which have used a DT_THERM much greater than dt_cpld. This bug has only been encountered now that we've started running regional models with short timesteps.

One way to fix it that I found works is to not add the runoff tracer flux to stf and instead pass it as the optional in_flux_optional argument to applyTracerBoundaryFluxesInOut later in this subroutine.

Is this the best way to fix the issue (I will send a PR if yes)? It seems to me it would be better to have g_tracer_coupler_get do the addition of the runoff tracer flux to the surface flux, but that routine is unaware of the liquid runoff flux and has a note that:

!runoff contributes to %stf in GOLD but not in MOM, 
!so it will be added later in the model-dependent driver code (GOLD_generic_tracer.F90)
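A sketch of the first proposed fix in MOM_generic_tracer_column_physics (the surrounding code is abbreviated, and the argument list of applyTracerBoundaryFluxesInOut is not reproduced here):

! Compute the runoff tracer flux as before ...
runoff_tracer_flux_array(:,:) = trunoff_array(:,:) * &
    US%RZ_T_to_kg_m2s*fluxes%lrunoff(:,:)
! ... but do not add it to stf_array here (i.e. delete the line
!       stf_array = stf_array + runoff_tracer_flux_array
! ), and instead pass runoff_tracer_flux_array to the later call to
! applyTracerBoundaryFluxesInOut() through its in_flux_optional argument, so the
! runoff enters once per thermodynamic step rather than accumulating in stf.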

INTERPOLATE_SPONGE_TIME_SPACE not recorded in MOM_parameter_doc.short

The result of this code:

  call get_param(param_file, mdl, "NEW_SPONGES", time_space_interp_sponge, &
                 "Set True if using the newer sponging code which "//&
                 "performs on-the-fly regridding in lat-lon-time.",&
                 "of sponge restoring data.", default=.false.)
  if (time_space_interp_sponge) then 
    call MOM_error(WARNING, " initialize_sponges:  NEW_SPONGES has been deprecated. "//&
                   "Please use INTERPOLATE_SPONGE_TIME_SPACE instead. Setting "//&
                   "INTERPOLATE_SPONGE_TIME_SPACE = True.")
  endif
  call get_param(param_file, mdl, "INTERPOLATE_SPONGE_TIME_SPACE", time_space_interp_sponge, &
                 "Set True if using the newer sponging code which "//&
                 "performs on-the-fly regridding in lat-lon-time.",&
                 "of sponge restoring data.", default=time_space_interp_sponge)

is that INTERPOLATE_SPONGE_TIME_SPACE registers with default = True and is therefore not recorded in the .short file.

Kd_interface diagnostic incomplete in newer diabatic_ALE

The Kd_interface diagnostic in diabatic_ALE is a copy of Kd_heat before the double diffusion contribution, but also before the ePBL contribution. One could instead have it be Kd_heat minus KT_extra at the end, or just note that Kd_interface isn't what one should be asking for.

Regional ice/ocean restart issue

The ice gets its view of the ocean surface currents via Ocean%[uv]_surf and Ice%[uv]_surf. These things have no halo points and in fact assume symmetric=.false. so they are missing the points at one edge each. If one wants MERGED_CONTINUITY=True and ice OBCs, this will have to change. On restart, things are subtly different in these surface velocities, leading to small changes in the flux calculations.

Another tracer OBC bug at tile boundaries

I have found a spot in the Arctic domain in which the value of one point inside the halo doesn't match the value in the next tile over. This is on the eastern boundary after advect_x. This point then infects the tracer update in advect_y, but only sometimes, only below k=26. I'm still investigating.

For @adcroft, this is when going back to the old FMS1 code - which made the ice troubles go away.

MOM6 with FMS2 fails when restart's Time axis is not unlimited.

If MOM is compiled with FMS2 and uses a restart file whose time axis is not unlimited, there is an FMS2 error:

NOTE from PE     0: MOM_restart: MOM run restarted using : INPUT/MOM.res.nc

FATAL from PE     0: NetCDF: Invalid dimension ID or name: get_unlimited_dimension_name: file:INPUT/MOM.res.nc


FATAL from PE     0: NetCDF: Invalid dimension ID or name: get_unlimited_dimension_name: file:INPUT/MOM.res.nc

#0  0x2d6d87e in __mpp_mod_MOD_mpp_error_basic
	at MOM6-examples/src/FMS/mpp/include/mpp_util_mpi.inc:72
#1  0x330d3d5 in __fms_io_utils_mod_MOD_error
	at MOM6-examples/src/FMS/fms2_io/fms_io_utils.F90:193
#2  0x2234edf in __netcdf_io_mod_MOD_check_netcdf_code
	at MOM6-examples/src/FMS/fms2_io/netcdf_io.F90:371
#3  0x22283a3 in __netcdf_io_mod_MOD_get_unlimited_dimension_name
	at MOM6-examples/src/FMS/fms2_io/netcdf_io.F90:1316
#4  0xf56cf4 in __mom_io_infra_MOD_get_file_info
	at MOM6-examples/src/MOM6/config_src/infra/FMS2/MOM_io_infra.F90:480
#5  0xf568ef in __mom_io_infra_MOD_get_file_times
	at MOM6-examples/src/MOM6/config_src/infra/FMS2/MOM_io_infra.F90:488
#6  0x18f72c3 in __mom_io_file_MOD_get_file_times_infra
	at MOM6-examples/src/MOM6/src/framework/MOM_io_file.F90:1249
#7  0xe762ae in __mom_restart_MOD_restore_state
	at MOM6-examples/src/MOM6/src/framework/MOM_restart.F90:1540
#8  0x1ac1054 in __mom_state_initialization_MOD_mom_initialize_state
	at MOM6-examples/src/MOM6/src/initialization/MOM_state_initialization.F90:524
#9  0xba1382 in __mom_MOD_initialize_mom
	at MOM6-examples/src/MOM6/src/core/MOM.F90:2834
#10  0x6d5dac in __ocean_model_mod_MOD_ocean_model_init
	at MOM6-examples/src/MOM6/config_src/drivers/FMS_cap/ocean_model_MOM.F90:284
#11  0x10adcb5 in coupler_init
	at MOM6-examples/src/coupler/coupler_main.F90:1843
#12  0x109d67e in coupler_main
	at MOM6-examples/src/coupler/coupler_main.F90:614
#13  0x10afa40 in main
	at MOM6-examples/src/coupler/coupler_main.F90:313

The issue seems to be that get_file_info(file, ntimes=...) inside of restore_state assumes that an unlimited dimension exists, and raises an error if it does not.

The FMS1 implementation (mpp_get_times()) reads the time levels from internally stored data, which is compiled when the file is opened, and has a fallback when there is no "recdim" (i.e. unlimited dimension).

I am still unsure to what extent FMS2 handles an absent unlimited dimension, but we will need to provide similar support where it is missing from FMS2.

Thanks to @jiandewang for reporting this.

Recent update broke DOME

DOME runs fine when compiled for debugging, but fails in repro mode:

At line 1285 of file //import/c1/AKWATERS/kate/ESMG/ESMG-configs/src/MOM6/src/core/MOM_continuity_PPM.F90
Fortran runtime error: Index '26' of dimension 3 of array 'por_face_areav' above upper bound of 25
chinook01.rcs.alaska.edu 303% which mpif90
/usr/local/pkg/mpi/OpenMPI/1.10.3-GCC-8.3.0-2.32/bin/mpif90

This is in dev/gfdl code du jour.

Wetting and drying in z* coordinates - bad idea?

For historical reasons, our regional domains started with the z* vertical coordinate. Most have continued using z* because the OBCs work better that way. We now have a Gulf of Alaska domain in which we would eventually like to have wetting and drying. Both Cook Inlet and the Copper River delta have tidal mud flats. Setting the MASKING_DEPTH to -20 and MINIMUM_DEPTH to -10 causes the model to blow up:

FATAL from PE    42: MOM_regridding: adjust_interface_motion() - implied h<0 is larger than roundoff!

Trying again with MASKING_DEPTH = 0 and MINIMUM_DEPTH = 1 allows it to run. Does it make sense to have these values match? Am I setting them wrong or is z* the problem?

More flexibility when writing restart files

At the moment, save_restart is called directly by the drivers, e.g. here. I would like the option to save non-standard restart files that contain particle locations, but I cannot currently do so without writing new code in all of the drivers (which doesn't seem ideal).

I would like it if the driver could call a new subroutine that we could add to MOM.F90, and that subroutine could call save_restart. Then any extra steps in the save_restart process could be added only in one place in the code. This would help me directly, but is also likely to help others who might want to add to the code in the future.

I talked to @adcroft about this several months ago. He seemed to be on board with the idea. I am posting here to create a paper trail and to start a discussion on whether/how to make this happen.
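A sketch of the kind of wrapper being proposed for MOM.F90 (the subroutine name is hypothetical, and the types and argument lists here are approximate rather than copied from the current code):

!> Called by the drivers in place of save_restart(), so that any extra
!! restart-time output (e.g. particle locations) only has to be added here.
subroutine MOM_write_restarts(dirs, Time, G, CS)
  type(directories),        intent(in) :: dirs  !< Run-time directory paths
  type(time_type),          intent(in) :: Time  !< Current model time
  type(ocean_grid_type),    intent(in) :: G     !< Ocean grid structure
  type(MOM_control_struct), pointer    :: CS    !< MOM control structure

  ! The existing restart write, exactly as the drivers do it today.
  call save_restart(dirs%restart_output_dir, Time, G, CS%restart_CS, GV=CS%GV)

  ! Extra steps (writing particle positions, etc.) would be added here, in one
  ! place, instead of in every driver.
end subroutine MOM_write_restarts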

No underscores in `sigsetjmp` on MacOS 12

Recent updates have included unit testing routines which reference sigsetjmp, defined in src/framework/posix.h as __sigsetjmp. This causes linking to fail with an "unresolved symbol" error because sigsetjmp is defined without underscores on MacOS 12 (possibly previous versions as well).

There are a couple of ways to resolve this. I define __APPLE__ in my mkmf template and test for this definition in posix.h. Not sure what would fit best with the GFDL coding style.

Slopes may have sign error, but thickness parameterization is fine (double sign error)

I was looking through a few of the MOM6 F90 files where slopes are defined and used:

  • MOM_isopycnal_slopes.F90 (e.g. line 325)
  • MOM_thickness_diffuse.F90 (e.g. line 991).

I noticed that the slope is defined as (for example):

slope = (e(i,j,K)-e(i+1,j,K)) * G%IdxCu(I,j)

I am wondering why this definition of the slope is used, which in the x direction is $= -d\eta/dx$ and also corresponds to $\partial_x \rho / \partial_z \rho$, rather than the more standard definition with a minus sign in front of what is used ($d\eta/dx$, or $-\partial_x \rho / \partial_z \rho$). Is this an error?

So, I checked what the slope diagnostic looks like in an example where the slope should be physically clear, assuming that MOM6 uses the usual physical definition of slope: isopycnals shoaling towards the surface as we move north should give a positive slope. This is the setup in Phillips_2Layer (also drawn in Figure 2 of Hallberg 2013).

Here is a figure of the diagnostics from the model:
[Figure: slope diagnostic from the Phillips_2Layer case]
The slope is negative, even though it should be positive.

The concern then was whether the parameterized transport resulting from the thickness diffusion package was right. The answer is yes: the model manages to get the right parameterized transport, even with a sign error in the slope. This happens because there is seemingly an error in the definition of the streamfunction as well, at least relative to most of the literature, e.g. Ferrari et al 2010. From that paper, equation 9 for $\gamma$ should be considered, and the corresponding transport is in equation 5, which matches the definition of $uh^*$ here (the doc denotes $\gamma$ from Ferrari et al 2010 as $\Psi$).
This streamfunction is $\gamma = - K S$ (called $\psi$ in the MOM6 doc).
However, in the code (e.g. line 994 in MOM_thickness_diffuse.F90) the streamfunction is defined as $KS$. Since the slope has a sign error and there is an error in the sign of the streamfunction too, the parameterized streamfunction and transport end up with the right signs.

The doc here is also confusing. First it defines $\psi$ as $KS$, which does match what is in the code but not what we want. Then $N^2$ is defined as $-g \partial_z \rho/ \rho_0$, but $M^2$ is defined as $g \nabla \rho/ \rho_0$ (is this sign difference between $M^2$ and $N^2$ conventional?).
I checked the definitions of the streamfunctions in Hallberg 2013 and I believe they are consistent with the rest of the literature (e.g. Ferrari et al 2010), but not with the code or the doc in MOM6.
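To spell out the double sign cancellation described above (using the issue's notation, with $S_{\mathrm{code}}$ and $\psi_{\mathrm{code}}$ denoting what the code computes and $S = d\eta/dx$ the conventional slope):

$$S_{\mathrm{code}} = -\frac{d\eta}{dx} = -S, \qquad \psi_{\mathrm{code}} = K\,S_{\mathrm{code}} = -K S = \gamma,$$

so the transport $u h^* = \partial_z \gamma$ (Ferrari et al 2010, Eqs. 5 and 9) comes out with the correct sign even though the slope and the streamfunction each individually carry the opposite sign to the usual convention.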

Error with runoff_added_to_stf flag

Hi,

I was updating our group's fork of MOM6 this morning, and I'm getting a compiler error from a recent commit:

/MOM6-examples-merge-12162022-2/src/MOM6/src/tracer/MOM_generic_tracer.F90(515): error #6460: This is not a field name that is defined in the encompassing structure.   [RUNOFF_ADDED_TO_STF]
if (allocated(g_tracer%trunoff) .and. (.NOT. g_tracer%runoff_added_to_stf)) then
------------------------------------------------------------^

I can see that runoff_added_to_stf was added to the g_tracer_type in /config_src/external/GFDL_ocean_BGC/generic_tracer_utils.F90, so I'm not sure what the issue is.

Did anyone else experience this?

Dimensionally incorrect expressions in apply_oda_tracer_increment()?

The lines in apply_oda_tracer_increment() where the temperature and salinity increments are applied to the temperature and salinity appear to me to be dimensionally inconsistent: the increments (apparently in [C ~> degC] or [S ~> ppt]) are being multiplied by a timestep (in [s]) before being added to the temperatures and salinities themselves (in [C ~> degC] or [S ~> ppt]). This code is at about lines 717 and 718 of src/ocean_data_assim/MOM_oda_driver.F90, which were added on March 18, 2021 as a part of MOM6 PR#1453.

I believe that the correction might simply be to eliminate the multiplication by the timestep on these lines, but we might need to figure out whether these lines are being used for any important projects, and hence whether we would need to add a run-time bug-fix flag.
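A sketch of the suspected correction (the variable names here are hypothetical; the real code is at about lines 717-718 of MOM_oda_driver.F90):

! Currently (dimensionally inconsistent: an increment in [C ~> degC] times a time in [s]):
!   tv%T(i,j,k) = tv%T(i,j,k) + dt * T_inc(i,j,k)
! Suspected correction: apply the increment directly, with no timestep factor, possibly
! behind a runtime bug-fix flag if existing projects depend on the old behavior:
tv%T(i,j,k) = tv%T(i,j,k) + T_inc(i,j,k)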

@MJHarrison-GFDL, could you please take a look at this to see whether I am correct that these lines are dimensionally inconsistent, and also assess which projects might be using them.

Advice on implementing sorting prior to regrid-remap (to allow diagnostic grids to be built for non-monotonic tracers)

Intention/scope

I am planning to implement the capacity to build diagnostic grids for tracers that are non-monotonic in the vertical dimension. The purpose is to be able to output diagnostics on a grid of any arbitrary tracer, e.g. temperature. I am proposing that this be used for diagnostic regrid-remap only, not for regrid-remap of the prognostic model fields and grid, for which the feature would be switched off.

I am implementing this initially for the rho coordinate type, and will subsequently extend it to anything in the tracer registry. I am only implementing this at the moment for diagnostics on the tracer points, not yet on the u, v, or w points.

Proposed procedure

To do this requires that the tracer, e.g. rho, has the option to be sorted prior to the regrid step. The regrid will then derive interface heights and thicknesses associated with the sorted column. This will be implemented with a simple sorting algorithm in build_*_column (the algorithm itself would be housed in MOM_regridding.F90). A boolean held in the coordinate control structure, e.g. needs_sorting=.true., could be used to turn this on and off as required.

To appropriately remap to this new grid requires that the model field is also sorted prior to its remap. The most efficient way to do this would be to create an array of the sorting indices and pass it to remapping_core_h. A key question is where and how to build this array. One option would be to create it as a 3d array at the same time as the 3d array of new grid thickness is created (i.e. in diag_remap_update for diagnostic grids and build_*_grid for prognostic grids), and carry this array in the remap control structure. This would mean additional 3d array(s) being carried around in memory.
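A sketch of the sort of index-returning column sort that could live in MOM_regridding.F90 (the routine name and interface are hypothetical; an insertion sort is shown only because the columns are short and typically nearly sorted):

!> Return a permutation idx such that tr(idx(1:nk)) increases monotonically,
!! without modifying the tracer column itself; idx can then be reused to
!! reorder the model field before remapping_core_h.
subroutine sort_column_indices(nk, tr, idx)
  integer, intent(in)  :: nk       !< Number of layers in the column
  real,    intent(in)  :: tr(nk)   !< Tracer values in the column
  integer, intent(out) :: idx(nk)  !< Sorting indices

  integer :: k, m, tmp

  do k=1,nk ; idx(k) = k ; enddo
  ! Insertion sort on the indices: stable, so ties keep their original vertical order.
  do k=2,nk
    tmp = idx(k) ; m = k - 1
    do while (m >= 1)
      if (tr(idx(m)) <= tr(tmp)) exit
      idx(m+1) = idx(m) ; m = m - 1
    enddo
    idx(m+1) = tmp
  enddo
end subroutine sort_column_indices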

I am posting this here to invite comments/thoughts/opinions before I start implementing this. Any thoughts welcome.

Unit test failure with ifort 2023.1 on chinook

I thought I would try Intel's 2023 compilers to see if I still have some of my optimizer troubles with 2022. Instead, it has spawned some new troubles:

  • netcdf failures with the system libraries
  • netcdf failure to build the latest netcdf-fortran from source (4.6.1)
  • netcdf failure to "make check" cleanly from source for 4.6.0
  • all seems well with 4.5.3 (didn't try 4.5.4)

But now I get an error from the unit_tests:

 ==== remapping_attic: remapping_attic_unit_tests =================
 h0 (test data)
i=         1         2         3         4         5
x=  0.00E+00  7.50E-01  1.50E+00  2.25E+00  3.00E+00
i=              1         2         3         4
h=       7.50E-01  7.50E-01  7.50E-01  7.50E-01
u=       9.00E+00  3.00E+00 -3.00E+00 -9.00E+00
 h1 (by delta)
i=         1         2         3         4
x=  0.00E+00  1.00E+00  2.00E+00  3.00E+00
i=              1         2         3
h=       1.00E+00  1.00E+00  1.00E+00
u=       8.00E+00 -1.11E-16 -8.00E+00
 h2
i=         1         2         3         4         5         6         7
x=  0.00E+00  5.00E-01  1.00E+00  1.50E+00  2.00E+00  2.50E+00  3.00E+00
i=              1         2         3         4         5         6
h=       5.00E-01  5.00E-01  5.00E-01  5.00E-01  5.00E-01  5.00E-01
u=       1.00E+01  6.00E+00  2.00E+00 -2.00E+00 -6.00E+00 -1.00E+01
 hn2
i=         1         2         3         4         5         6         7
x=  0.00E+00  5.00E-01  1.00E+00  1.50E+00  2.00E+00  2.50E+00  3.00E+00
i=              1         2         3         4         5         6
h=       5.00E-01  5.00E-01  5.00E-01  5.00E-01  5.00E-01  5.00E-01
u=       1.00E+01  6.00E+00  2.00E+00 -2.00E+00 -6.00E+00 -1.00E+01
 remapping_attic_unit_tests: Failed remapByDeltaZ() 2
 === MOM_remapping: interpolation and reintegration unit tests ===
 ==== remapping_attic: remapping_attic_unit_tests =================
 h0 (test data)
i=         1         2         3         4         5
x=  0.00E+00  7.50E-01  1.50E+00  2.25E+00  3.00E+00
i=              1         2         3         4
h=       7.50E-01  7.50E-01  7.50E-01  7.50E-01
u=       9.00E+00  3.00E+00 -3.00E+00 -9.00E+00
 h1 (by delta)
i=         1         2         3         4
x=  0.00E+00  1.00E+00  2.00E+00  3.00E+00
i=              1         2         3
h=       1.00E+00  1.00E+00  1.00E+00
u=       8.00E+00 -1.11E-16 -8.00E+00
 h2
i=         1         2         3         4         5         6         7
x=  0.00E+00  5.00E-01  1.00E+00  1.50E+00  2.00E+00  2.50E+00  3.00E+00
i=              1         2         3         4         5         6
h=       5.00E-01  5.00E-01  5.00E-01  5.00E-01  5.00E-01  5.00E-01
u=       1.00E+01  6.00E+00  2.00E+00 -2.00E+00 -6.00E+00 -1.00E+01
 hn2
i=         1         2         3         4         5         6         7
x=  0.00E+00  5.00E-01  1.00E+00  1.50E+00  2.00E+00  2.50E+00  3.00E+00
i=              1         2         3         4         5         6
h=       5.00E-01  5.00E-01  5.00E-01  5.00E-01  5.00E-01  5.00E-01
u=       1.00E+01  6.00E+00  2.00E+00 -2.00E+00 -6.00E+00 -1.00E+01
 remapping_attic_unit_tests: Failed remapByDeltaZ() 2
 === MOM_remapping: interpolation and reintegration unit tests ===
