shield_physics's Issues

`ccnorm` namelist parameter ignored when `cloud_gfdl` and `pdfcld` are true

During SHiELD physics/microphysics setup, the choices of cloud_gfdl and pdfcld determine which progcld subroutine is run. If both are set to true, progcld6 is run. Unlike the other progcld choices, progcld6 does not scale up condensate by cloud fraction when ccnorm is set to true, despite indicating that it uses the ccnorm parameter setting. This is unexpected for users and inconsistent with the behavior of the other progcld routines in SHiELD's radiation/microphysics code.

Expected behavior
The ccnorm logic (scaling condensate up by cloud fraction when ccnorm is true) should be added to progcld6, both to make it possible to run simulations in which the cloud condensate's effect on radiation is physically consistent (with ccnorm false it is not) and to match the behavior of the other radiation-scheme/microphysics namelist choices.
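For illustration, a minimal sketch of the kind of scaling progcld6 would need, loosely following the pattern used by the other progcld routines; the variable names here (cwp, cip, cldtot, climit, climit2) are assumptions and may differ from the actual SHiELD code:

! hypothetical sketch: convert grid-mean condensate paths to in-cloud values
if ( ccnorm ) then
  do k = 1, NLAY
    do i = 1, IX
      if ( cldtot(i,k) >= climit ) then
        tem1     = 1.0 / max(climit2, cldtot(i,k))
        cwp(i,k) = cwp(i,k) * tem1   ! cloud liquid water path
        cip(i,k) = cip(i,k) * tem1   ! cloud ice path
      endif
    enddo
  enddo
endif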

Recommended Issues to Address from PR #22

Describe the bug
I am opening this issue to document the recommended changes from Rusty's review of PR #22, to be addressed after PR #22 is merged.

The following are comments copied from the PR:

! -- CHECK for ntke if using satmedmf
if (Model%satmedmf) then
  if (Model%ntke < 1 .or. Model%ntke > Model%ntrac) then
    write(*,*) ' FATAL GFS_typedefs: TKE PBL scheme enabled (satmedmf) but TKE tracer not found in field_table.'
    write(*,*) ' Stopping execution.'
    stop 999
  endif
endif

From bensonr: Stop statements may not abort a program correctly when using MPI. Should either import and use mpp_error or make a call to MPI_Finalize to properly abort execution.

!--- dynamical core parameters
logical :: dycore_hydrostatic = .true. !< whether the dynamical core is hydrostatic

From bensonr: The designed way to get this information to the physics from outside is to use the GFS_init_type. This type is populated with the dycore and other external-to-the-physics information within the atmos_model.F90::atmos_model_init function, and it is passed to GFS_Initialize, where it is passed to the Model%init procedure.

From bensonr: Since we no longer need compatibility with UFS, as they've embraced CCPP, we could start using FMS functions and change the write statements to mpp_error calls. With the appropriate FATAL condition, MPI will be properly terminated, and any future coupled components running concurrently will also get the termination signal.
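For illustration, a hedged sketch of the check above rewritten to use FMS error handling instead of a bare stop; mpp_error and FATAL come from FMS's mpp_mod, and the message text is illustrative:

use mpp_mod, only: mpp_error, FATAL

! -- CHECK for ntke if using satmedmf
if (Model%satmedmf) then
  if (Model%ntke < 1 .or. Model%ntke > Model%ntrac) then
    ! FATAL triggers a proper MPI abort instead of a bare stop
    call mpp_error(FATAL, 'GFS_typedefs: TKE PBL scheme enabled (satmedmf) '// &
                          'but TKE tracer not found in field_table.')
  endif
endif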

To Reproduce
N/A

Expected behavior
N/A

System Environment
N/A

Additional context
N/A

Units for fhcyc in inline comment incorrect?

real(kind=kind_phys) :: fhcyc !< frequency for surface data cycling (secs)

The inline comment here indicates that fhcyc has units of seconds; however, later in the same file it is multiplied by 3600., implying that the unit is actually hours. I also see that the regional_Laura test case supplies '24.', which I assume is intended as a daily update.

Model%nscyc = nint(fhcyc*3600./Model%dtp)
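As a concrete check of the implied units: with fhcyc = 24. and a physics time step dtp of, say, 450 s, this gives nscyc = nint(24.*3600./450.) = 192, i.e. the surface data are cycled every 192 physics steps, or once per day. That only makes sense if fhcyc is in hours, so presumably the inline comment should read '(hours)' rather than '(secs)'.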

How to produce diagnostic winds at time step 0?

Is your question related to a problem? Please describe.
From our own external driver routine, we access the 10 m winds through IPD_Data(nb)%intdiag%u10m and IPD_Data(nb)%intdiag%v10m. This works fine for us after the first model time step. However, we have noticed that at time step 0 (upon initialization of the model) these fields are empty. They appear to be diagnostic fields that only get populated after the first model time step completes. We would like realistic u10m/v10m fields to be populated at time 0, e.g. based on the initial condition file or restart file.

Is there a straightforward way to trigger the diagnostic computation of u10m/v10m winds when the model initializes so that they are not empty at time step 0?

Describe what you have tried
We've tried accessing the Atm(mygrid)%u_srf and Atm(mygrid)%v_srf fields from the FV3 dynamical core at time step 0, but they appear to be represented slightly differently on the grid. It's not clear to me whether these are on a C-grid or D-grid stagger, while the diagnostic 10 m winds above appear to be at cell centers. Ideally, we'd like them to be on the same grid at the same associated height. The bigger problem for us was identifying the appropriate means of collecting the parallel-distributed data, which appears to differ from what is used for the IPD_Data(nb)%intdiag data structure.

We attempted something like:

      if (varchar=='u') then
          do j=jsc,jec
              do i=isc,iec
                  dataPtr_r8(i,j) = real(Atm(mygrid)%u_srf(i,j),kind=8)
              enddo
          enddo
      elseif (varchar=='v') then
          do j=jsc,jec
              do i=isc,iec
                  dataPtr_r8(i,j) = real(Atm(mygrid)%v_srf(i,j),kind=8)
              enddo
          enddo
      endif

but the resulting fields were not reconstructed correctly. Further, we presume these fields are defined at the model's bottom level and not at the desired 10m height.

New release breaks existing docker build using GFDL_atmos_cubed_sphere main

System:
Ubuntu docker container using arm64/m1.

The new release (April 1, 2022) breaks my current build:

#44 7.251 /fv3_gfsphysics/GFS_layer/GFS_typedefs.F90:8:12:
#44 7.251
#44 7.251 8 | use gfdl_cld_mp_mod, only: rhow
#44 7.251 | 1
#44 7.251 Fatal Error: Cannot open module file 'gfdl_cld_mp_mod.mod' for reading at (1): No such file or directory
#44 7.251 compilation terminated.
#44 7.253 make: *** [Makefile_gfs:31: GFS_typedefs.o] Error 1
#44 7.254 make: *** Waiting for unfinished jobs....

The old branch 'old_main' still builds with no problem.

Where is gfdl_cld_mp_mod?

In GFS_driver, it seems like main is replacing:

use gfdl_cloud_microphys_mod, only: gfdl_cloud_microphys_init
#ifndef fvGFS_2017
  use cloud_diagnosis_mod,      only: cloud_diagnosis_init

with:

use gfdl_cld_mp_mod,          only: gfdl_cld_mp_init
#ifndef fvGFS_2017
  use cld_eff_rad_mod,          only: cld_eff_rad_init
#endif

However, in https://github.com/NOAA-GFDL/GFDL_atmos_cubed_sphere/blob/main/driver/SHiELD/gfdl_cloud_microphys.F90
this still has the old name:

module gfdl_cloud_microphys_mod
...
! =======================================================================
! initialization of gfdl cloud microphysics
! =======================================================================

!subroutine gfdl_cloud_microphys_init (id, jd, kd, axes, time)
subroutine gfdl_cloud_microphys_init (me, master, nlunit, input_nml_file, logunit, fn_nml)

Requested aerosol data file "INPUT/aerosol.dat" not found!

I am running with the new 202204 version, and I now get this runtime error:

Note: The following floating-point exceptions are signalling: IEEE_UNDERFLOW_FLAG
     Requested aerosol data file "INPUT/aerosol.dat               " not found!
     *** Stopped in subroutine aero_init !!
Note: The following floating-point exceptions are signalling: IEEE_UNDERFLOW_FLAG
     Requested aerosol data file "INPUT/aerosol.dat               " not found!
     *** Stopped in subroutine aero_init !!
     Requested aerosol data file "INPUT/aerosol.dat               " not found!
Note: The following floating-point exceptions are signalling: IEEE_UNDERFLOW_FLAG
     *** Stopped in subroutine aero_init !!
     Requested aerosol data file "INPUT/aerosol.dat               " not found!
     *** Stopped in subroutine aero_init !!
Note: The following floating-point exceptions are signalling: IEEE_UNDERFLOW_FLAG

Where can I get the "aerosol.dat" file, or instructions for creating it?

GFS physics radiation: solar hour incorrect if model initialized not on the hour

The GFS physics radiation routine sets the solar hour (Model%solhr) used to determine the cosine of the solar zenith angle and thus the radiative fluxes. The way it is computed will round the solar hour backwards by a fraction of an hour if the model's initialization time is not on the hour (i.e., the minutes and/or seconds of the initial date are nonzero). This is because the solar hour is computed as the fractional hours elapsed since initialization plus the integer hour of initialization. There is no problem if the model is initialized on the hour, but if initialized off the hour, the physics will use a solar hour that is shifted by the sub-hourly amount. Thus any forecast outputs and diagnostics involving time and the solar diurnal cycle will be nominally incorrect (the internal dynamics and physics of the model presumably aren't affected).

To Reproduce
Initialize SHiELD with a not-on-the-hour initialization time; the peak downward shortwave flux at 12Z will then not be at 0° longitude, for example (neglecting the equation of time), but will instead be shifted east by 0.0 < subhour < 1.0 hours (0.0° to 15.0° of longitude), where subhour is the fraction of an hour past the last full hour in the initialization time.

Expected behavior
The solar hour result should not be shifted in this way. Instead, the solar hour should be computed from the fractional hours elapsed since initialization plus the fractional hour of initialization (instead of the integer hour of initialization).
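A minimal sketch of the suggested computation, with illustrative variable names (the actual code works with Model%solhr and the initial/current times from the model clock):

! hypothetical sketch; phour = fractional hours elapsed since initialization,
! init_hr/init_min/init_sec = hour, minute, second of the initial date
solhr = mod( phour + real(init_hr)          &
                   + real(init_min)/60.0    &
                   + real(init_sec)/3600.0, 24.0 )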

atmos_drivers and simple_coupler need to be removed

Is your feature request related to a problem? Please describe.
The files in atmos_drivers and simple_coupler are not used in the SHiELD model build, so these files should be removed.

Describe the solution you'd like
Delete these files

Describe alternatives you've considered
If we leave the files, people may update them here instead of the appropriate FMScoupler and atmos_drivers repos.

Additional context
N/A

Make `Diag` structure within FV3GFS_io.F90 public?

In AI2's Python-wrapped version of FV3GFS, we have some functionality that enables getting the values of physics diagnostics. While it can be a little dangerous (e.g., I would not recommend getting the values of physics diagnostics that are accumulated), this functionality can be useful for some purposes, including testing some override capabilities (e.g., prescribing surface radiative fluxes or the sea surface temperature from the wrapper).

This feature requires access to the Diag data structure in the FV3GFS_io.F90 module, which is an array of gfdl_diag_type structures containing metadata and pointers to the data for the main physics diagnostics of the model. In SHiELD, neither this data structure nor its internal attributes are tagged as public. Would we be open to making this data structure public in SHiELD, so that I can implement similar functionality for the Python-wrapped version of SHiELD?
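For concreteness, a hedged sketch of the requested change in FV3GFS_io.F90, assuming both the array and its type are declared in that module:

! hypothetical sketch: expose the module-level diagnostic array and its type
! (if the type declares its components private, that would also need relaxing)
public :: Diag
public :: gfdl_diag_type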

202204 release - exited on signal 11 (Segmentation fault)

Running on Linux Ubuntu with GNU compilers (Docker container):

I'm still working through getting the 202204 release running successfully (i.e. at least roughly replicating the pre-202204 version). I'm currently getting this segmentation fault. A previous segmentation fault was corrected by updating the FMS package build to the 'main' branch of the FMS repo. I've symbolically linked aerosol.txt, solarconstant_noaa_an.txt, co2historicaldata_*.txt, and a few other key files into the INPUT/ directory (from their previous location in the main experiment directory):

  Updating solar constant with cycle approx
    Opened solar constant data file: INPUT/solarconstant_noaa_an.txt 
  CHECK: Solar constant data used for year        2020   1361.0400000000000        1361.0400000000000     
0 FORECAST DATE          26 AUG.  2020 AT 12 HRS  0.00 MINS
  JULIAN DAY             2459088  PLUS   0.000000
  RADIUS VECTOR          1.0104738
  RIGHT ASCENSION OF SUN  10.3754267 HRS, OR  10 HRS  22 MINS  31.5 SECS
  DECLINATION OF THE SUN  10.1408708 DEGS, OR   10 DEGS   8 MINS  27.1 SECS
  EQUATION OF TIME        -1.7063098 MINS, OR   -102.38 SECS, OR-0.007466 RADIANS
  SOLAR CONSTANT        1332.9711572 (DISTANCE AJUSTED)


    for cosz calculations: nswr,deltim,deltsw,dtswh =           8   450.00000000000000        3600.0000000000000        1.0000000000000000        anginc,nstp =   3.2724923474893676E-002           9
    Opened aerosol data file: INPUT/aerosol.dat               
   --- Reading  MONTH OF AUGUST    CLIMATOLOGICAL AEROSOL GLOBAL DISTRIBUTION                  
    Request volcanic date out of range, optical depth set to lowest value
  CHECK: Sample Volcanic data used for month, year:           8        2020
           1           1           1           1
    Opened co2 data file: INPUT/co2historicaldata_2020.txt
        2020  MONTHLY CO2 (PPMV)   24  12  LON/LAT (N-S/0-360E) IN 15 DEGREE RESOLUTION,  GLB ANNUAL MEAN =   412.81000000000000        GROWTH RATE =   2.5200000000000000     
    Global annual mean CO2 data for year        2020   4.1281000000000000E-004
  CHECK: Sample of selected months of CO2 data used for year:        2020
         Month =           1
   4.1894999999999996E-004   4.1873000000000002E-004   4.1708999999999995E-004   4.1537999999999997E-004   4.1341000000000001E-004   4.1173000000000002E-004   4.1005000000000002E-004   4.0923000000000001E-004   4.0920999999999997E-004   4.0912999999999995E-004   4.0892000000000001E-004   4.0863000000000000E-004
         Month =           4
   4.2148000000000001E-004   4.1961000000000000E-004   4.1841000000000003E-004   4.1831999999999997E-004   4.1779000000000002E-004   4.1539999999999996E-004   4.1255999999999997E-004   4.1018000000000001E-004   4.1001999999999998E-004   4.0969999999999998E-004   4.0936999999999999E-004   4.0924000000000001E-004
         Month =           7
   4.0852999999999994E-004   4.0848000000000002E-004   4.0861000000000001E-004   4.0970999999999998E-004   4.1144000000000000E-004   4.1177999999999994E-004   4.1160999999999997E-004   4.1099999999999996E-004   4.1077999999999997E-004   4.1047000000000002E-004   4.1013999999999997E-004   4.1000999999999999E-004
         Month =          10
   4.1172000000000002E-004   4.1114999999999994E-004   4.1237999999999995E-004   4.1209999999999999E-004   4.1077999999999997E-004   4.1110000000000002E-004   4.1175999999999995E-004   4.1212999999999997E-004   4.1164999999999995E-004   4.1120999999999996E-004   4.1104999999999999E-004   4.1089999999999996E-004
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 2 with PID 0 on node e90980d4b77e exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

For verification, I've also tried running the regional_Laura test case, and get a similar error:

   Updating solar constant with cycle approx
    Opened solar constant data file: INPUT/solarconstant_noaa_an.txt 
  CHECK: Solar constant data used for year        2020   1361.0400000000000        1361.0400000000000     
0 FORECAST DATE          26 AUG.  2020 AT 12 HRS  0.00 MINS
  JULIAN DAY             2459088  PLUS   0.000000
  RADIUS VECTOR          1.0104738
  RIGHT ASCENSION OF SUN  10.3754267 HRS, OR  10 HRS  22 MINS  31.5 SECS
  DECLINATION OF THE SUN  10.1408708 DEGS, OR   10 DEGS   8 MINS  27.1 SECS
  EQUATION OF TIME        -1.7063098 MINS, OR   -102.38 SECS, OR-0.007466 RADIANS
  SOLAR CONSTANT        1332.9711572 (DISTANCE AJUSTED)


    for cosz calculations: nswr,deltim,deltsw,dtswh =           8   450.00000000000000        3600.0000000000000        1.0000000000000000        anginc,nstp =   3.2724923474893676E-002           9
    Opened aerosol data file: INPUT/aerosol.dat               
   --- Reading  MONTH OF AUGUST    CLIMATOLOGICAL AEROSOL GLOBAL DISTRIBUTION                  
    Request volcanic date out of range, optical depth set to lowest value
  CHECK: Sample Volcanic data used for month, year:           8        2020
           1           1           1           1
    Opened co2 data file: INPUT/co2historicaldata_2020.txt
        2020  MONTHLY CO2 (PPMV)   24  12  LON/LAT (N-S/0-360E) IN 15 DEGREE RESOLUTION,  GLB ANNUAL MEAN =   412.81000000000000        GROWTH RATE =   2.5200000000000000     
    Global annual mean CO2 data for year        2020   4.1281000000000000E-004
  CHECK: Sample of selected months of CO2 data used for year:        2020
         Month =           1
   4.1894999999999996E-004   4.1873000000000002E-004   4.1708999999999995E-004   4.1537999999999997E-004   4.1341000000000001E-004   4.1173000000000002E-004   4.1005000000000002E-004   4.0923000000000001E-004   4.0920999999999997E-004   4.0912999999999995E-004   4.0892000000000001E-004   4.0863000000000000E-004
         Month =           4
   4.2148000000000001E-004   4.1961000000000000E-004   4.1841000000000003E-004   4.1831999999999997E-004   4.1779000000000002E-004   4.1539999999999996E-004   4.1255999999999997E-004   4.1018000000000001E-004   4.1001999999999998E-004   4.0969999999999998E-004   4.0936999999999999E-004   4.0924000000000001E-004
         Month =           7
   4.0852999999999994E-004   4.0848000000000002E-004   4.0861000000000001E-004   4.0970999999999998E-004   4.1144000000000000E-004   4.1177999999999994E-004   4.1160999999999997E-004   4.1099999999999996E-004   4.1077999999999997E-004   4.1047000000000002E-004   4.1013999999999997E-004   4.1000999999999999E-004
         Month =          10
   4.1172000000000002E-004   4.1114999999999994E-004   4.1237999999999995E-004   4.1209999999999999E-004   4.1077999999999997E-004   4.1110000000000002E-004   4.1175999999999995E-004   4.1212999999999997E-004   4.1164999999999995E-004   4.1120999999999996E-004   4.1104999999999999E-004   4.1089999999999996E-004
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 0 on node d312f888f66b exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

Microphysics always initialized with hydrostatic=.true.

When running without inline microphysics, the GFS layer initializes the microphysics by creating a Statein and passing Statein(1)%dycore_hydrostatic to gfdl_cld_mp_init. The problem is that Statein%dycore_hydrostatic is set to `.true.` by default and is not overwritten with the actual model configuration before the microphysics is initialized:

call gfdl_cld_mp_init (Model%input_nml_file, Init_parm%logunit, Statein(1)%dycore_hydrostatic)

Statein%dycore_hydrostatic = .true.

This causes parameters such as c_air to be set to their hydrostatic values instead of their nonhydrostatic values, and overrides do_sedi_w with .false. regardless of what is in the namelist.
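A hedged sketch of one possible fix, assuming the dycore's true hydrostatic setting is delivered through the GFS_init_type as suggested in the PR #22 review comments above; the field name Init_parm%hydrostatic is an assumption:

! hypothetical: copy the dycore's actual setting before initializing the microphysics
Statein(1)%dycore_hydrostatic = Init_parm%hydrostatic   ! assumed field name

call gfdl_cld_mp_init (Model%input_nml_file, Init_parm%logunit, &
                       Statein(1)%dycore_hydrostatic)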

Refresh README to include attribution for external code added after 2017

The README currently only references derivative code from the "baseline 2017 GFS," but a fair number of substantive updates have been made since then, including the incorporation of several newer schemes.

It would be good to refresh the README to include attribution for these updates. This issue is meant to track the discussion started in #42 and collect any further details on the provenance of these schemes that we would like to include in the README.

cc: @lharris4 @linjiongzhou @gaokun227 @kaiyuan-cheng

Release 202204 Segmentation fault vs previous successful run

Running in an Ubuntu arm64/M1 Docker container, using gfortran 10. I am building the most recent release (202204) and comparing it to the previous build I had working prior to this release. I am currently using the "global_nest_Laura" example case, with the local nesting removed and debugging turned on (and some changes to the initial condition files: GFS initial conditions and a slightly higher-resolution vertical grid).

I did have to switch the name of one namelist in input.nml:

! &gfdl_mp_nml  !STEVE: old SHiELD version
 &gfdl_cloud_microphysics_nml

Previously, this ran to completion without any (technical) problems. Now it produces identical output up to this point but halts near the beginning of the run:

 End of n_split loop
before remap k_split   1/  1
 T_ldyn      78.478391285918278        8.8873567833316294        10.878095240412982
 SPHUM_ldyn      2.0456158130582887E-002  -2.3376208964733159E-005   9.0576554536492115E-003
 liq_wat_ldyn      2.7913915846044191E-002  -2.5788033483946651E-005   2.2906830702015271E-006
 rainwat_ldyn      1.0313284705839292E-003  -1.1338639638732191E-005   1.9124890324397074E-006
 ice_wat_ldyn      5.9994534477211239E-004  -1.7104096388928812E-005   8.3709672235963491E-007
 snowwat_ldyn      1.0408907457975733E-003  -2.7539140115526158E-005   6.2223806557010597E-007
 graupel_ldyn      1.7203779455244067E-003  -3.7421470632618380E-005   2.4008761847598749E-008
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 0 on node 814c394abe25 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

The old version produces this, and continues running successfully:

 End of n_split loop
before remap k_split   1/  1
 T_ldyn      78.478391285918278        8.8873567833316294        10.878095240412982
 SPHUM_ldyn      2.0456158130582887E-002  -2.3376208964733159E-005   9.0576554536492115E-003
 liq_wat_ldyn      2.7913915846044191E-002  -2.5788033483946651E-005   2.2906830702015271E-006
 rainwat_ldyn      1.0313284705839292E-003  -1.1338639638732191E-005   1.9124890324397074E-006
 ice_wat_ldyn      5.9994534477211239E-004  -1.7104096388928812E-005   8.3709672235963491E-007
 snowwat_ldyn      1.0408907457975733E-003  -2.7539140115526158E-005   6.2223806557010597E-007
 graupel_ldyn      1.7203779455244067E-003  -3.7421470632618380E-005   2.4008761847598749E-008
finished k_split   1/  1
 T_dyn_a4      308.46838683524317        187.91507966888838        286.59423040071437
 SPHUM_dyn      2.0454645146700073E-002   0.0000000000000000        9.0570780004288545E-003
 liq_wat_dyn      2.7778811395944871E-002   0.0000000000000000        2.9528982915372590E-006
 rainwat_dyn      1.0389292975611255E-003  -1.0836948664597023E-005   1.9127682992166113E-006
 ice_wat_dyn      1.8211343453849303E-002   0.0000000000000000        6.9624147905132555E-007
 snowwat_dyn      9.2623223882762947E-003  -1.1088728007638951E-005   6.2731752123961645E-007
 graupel_dyn      1.7202152583613056E-003  -3.0970360241741023E-005   2.3464375391651926E-008
...

Are there any insights as to what might be happening? I have previously tracked unexplained halts of the code to running out of memory. However, this is a fairly low-resolution configuration, I'm dedicating my machine's entire 32 GB to this process, and when I monitor the memory usage during the run it does not rise much above 1 GB. That said, "signal 11 (Segmentation fault)" looks problematic.

gnu.mk file OPENMP, Undefined reference to `omp_get_ ...

The mk_make script builds with:

(cd exec/${CONFIG}_${COMPILER} ; make -j 8 OPENMP=Y NETCDF=3 ${COMP} AVX=${AVX} ${BIT} NCEPLIBS="${NCEPLIBS}" -f Makefile_fv3)

which indicates the use of OPENMP=Y. However, the gnu.mk template file
https://github.com/NOAA-GFDL/SHiELD_build/blob/main/site/gnu.mk
has the OpenMP flags commented out, for example:
FFLAGS_OPENMP = #-fopenmp
at:
https://github.com/NOAA-GFDL/SHiELD_build/blob/d6581a4610adf3961bb987061358bed1f40a80de/site/gnu.mk#L51
https://github.com/NOAA-GFDL/SHiELD_build/blob/d6581a4610adf3961bb987061358bed1f40a80de/site/gnu.mk#L58
https://github.com/NOAA-GFDL/SHiELD_build/blob/d6581a4610adf3961bb987061358bed1f40a80de/site/gnu.mk#L67

This causes a series of link-time errors of the form "undefined reference to `omp_get_...".

At the moment I'm changing this using:

sed "s/#-fopenmp/-fopenmp/g" < ${BUILD_ROOT}/${TEMPLATE} > OUT \
  && mv OUT ${BUILD_ROOT}/${TEMPLATE}

and the model seems to compile successfully.

Is there a reason the OpenMP flags are commented out?

Slab ocean Q-flux is read in from file, but not propagated to `Sfcprop%qfluxadj`

Describe the bug

Within sfcsub.F, infrastructure is in place to read an ocean Q-flux from a file into a locally defined qflux variable in the clima subroutine:

!
! qflux for slab ocean model
!
if(fnqfluxc(1:8).ne.' ') then
kpd7=-1
call fixrdc(lugb,fnqfluxc,kpdqflux,kpd7,mon,slmask,
& qflux(1,nn),len,iret
&, imsk, jmsk, slmskh, gaus,blno, blto
&, outlat, outlon, me)
if (me .eq. 0) write(6,*) 'climatological ocean
& mixed layer depth read in.'
endif

However, data from this local qflux variable is not propagated to the output qfluxadj variable, and therefore the Q-flux assigned to Sfcprop%qfluxadj (for eventual use elsewhere in the model) is always zero.

To Reproduce

Run SHiELD with a prescribed Q-flux file, i.e. with a file specified for the namsfc.fnqfluxc namelist parameter. It is useful to define a diagnostic pointing to Sfcprop%qfluxadj to assess this (I have done this in a fork).

Expected behavior

qfluxadj should be populated in a similar manner to mldclm, i.e. interpolated between monthly means loaded in from a file:

if(fnmldc(1:8).ne.' ') then
do i=1,len
mldclm(i) = wei1m * mld(i,k1) + wei2m * mld(i,k2)
enddo
endif

I have implemented this in a fork, and it seems to address this issue. I will make a PR after a little more testing.
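For reference, a minimal sketch of the analogous assignment, assuming the same monthly weights (wei1m, wei2m) and month indices (k1, k2) as the mldclm block, and that qfluxadj is the array ultimately copied into Sfcprop%qfluxadj:

! hypothetical sketch, mirroring the mldclm interpolation above
if(fnqfluxc(1:8).ne.' ') then
  do i=1,len
    qfluxadj(i) = wei1m * qflux(i,k1) + wei2m * qflux(i,k2)
  enddo
endif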
