
ccpp-scm's People

Contributors

climbfuji, dependabot[bot], domheinzeller, dustinswales, grantfirl, gthompsnwrf, ligiabernardet, lisa-bengtsson, llpcarson, michelleharrold, microted, mkavulich, mzhangw, nickszap, pjpegion, robertpincus, scrasmussen, t-brown, tanyasmirnova, wangevine


ccpp-scm's Issues

Python plotting scripts broken on Cheyenne

The Python environment on Cheyenne has changed and the machine setup scripts need to be updated to use it (see https://www2.cisl.ucar.edu/resources/computational-systems/cheyenne/software/python).

It would be best to eliminate the need for f90nml altogether so that users are not required to install it.

However, since the plotting will be redone in the near future, it may not be necessary to fix this.

Revisit Thompson MP table generation in the SCM

After communication with @gthompsonWRF, we should revisit the computation of the Thompson MP tables in the SCM. Years ago, when Thompson MP was first put in the CCPP, we tried to let the SCM generate the tables, but it was computationally prohibitive. I think this was tried with and without OpenMP enabled in the SCM, without much luck. The solution was to include pre-computed tables (generated in an FV3 run) in the scm/data/physics_input_data directory. @gthompsonWRF, in association with the code changes in NCAR/ccpp-physics#567, would like the ability to generate new tables with the SCM.

Alarming result differences using shorter timesteps (GFS_v16)

Included in this issue is a graphic to demonstrate the point I am trying to make.

Here is a comparison of a run I call Control, which uses the suite definition file and namelist exactly as they appear in the ccpp-scm repo with GFS_v16 and the ARM-SGP case (about 27 days simulated), versus a run with shorter timesteps and a different column_area size. The top panel of the graphic shows Control, whereas the bottom panel shows Exp1 with the following four changes:

  1. dynamics timestep reduced from 600 s to 60 s
  2. microphysics timestep reduced from 150 s to 60 s
  3. radiation timestep reduced from 3600 s to 600 s
  4. column_area reduced from 2E9 m^2 (about 45 km DX) to 1E7 m^2 (about 3 km DX)

The rationale for these changes is a fairer comparison at HRRR resolution when switching from GFS physics to GSD physics. I was attempting to determine whether the large change of physics suite from GFS to GSD was primarily responsible for the large changes I am seeing.
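For reference, a minimal sketch of how the changes above could be scripted with the f90nml package. The group and variable names here (case_config, dt, column_area, fhswr, fhlwr) and the file names are assumptions about the SCM case and GFS physics namelists and should be checked against the actual files; the microphysics sub-timestep control is scheme-dependent and not shown.

    import f90nml  # namelist parser already used by the SCM run scripts

    # Assumed group/variable/file names -- verify against the real case and physics namelists.
    case = f90nml.read('arm_sgp_summer_1997_A.nml')   # SCM case configuration (hypothetical filename)
    case['case_config']['dt'] = 60.0                  # dynamics/physics timestep [s]
    case['case_config']['column_area'] = 1.0e7        # ~3 km effective grid spacing [m^2]
    case.write('arm_sgp_summer_1997_A_exp1.nml', force=True)

    phys = f90nml.read('input_GFS_v16.nml')           # physics namelist (hypothetical filename)
    phys['gfs_physics_nml']['fhswr'] = 600.0          # SW radiation call interval [s]
    phys['gfs_physics_nml']['fhlwr'] = 600.0          # LW radiation call interval [s]
    phys.write('input_GFS_v16_exp1.nml', force=True)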

The issue I am raising is two-fold. The most immediately obvious feature of the cloud water content plot is an explosion in the frequency of low clouds. There may actually be nothing wrong with this, and while the change might be a bit dramatic, the far larger issue I am seeing is the very dramatic rise in temperature of nearly 5 degrees, particularly in the middle atmosphere. The solid lines in warm colors (reds/oranges) are drawn at 5C intervals, the solid light blue line centered near 600 hPa is 0C, and the dashed lines in cool colors (blues/purples) are at 5C intervals below zero. There is also a "compaction" of temperature lines aloft near 100-200 hPa.

I am very alarmed by this. Remember, the time period is mid-June to mid-July 1997, and I expect very hot weather in central Oklahoma, but approaching 0C at 500 hPa is extremely rare anywhere in the USA. For context, the truly massive heat bubble of Summer 2021 had core 500 hPa temperatures around -2C.

I do not believe that shorter timesteps, which seem entirely valid, should be producing warming such as this. The problem is even worse when switching to the GSD physics suite. Can anyone offer some explanations?
[Attached figure: tstep_comparison]

Pull thirdparty libraries out or reconfigure build system

The thirdparty libraries required to compile the SCM are currently compiled with a mixture of compiler flags and preprocessor definitions coming from the calling CMakeLists.txt in scm/src and from within the individual directories (e.g. external/w3nco/v2.0.6/src/CMakeLists.txt).

I would like to propose pulling the thirdparty libraries out of gmtb-scm and installing them separately, the same way we do for FV3. We have a GitHub repository for all NCEP libraries that compiles them with compiler flags and preprocessor options independent of what is chosen for the SCM (https://github.com/climbfuji/NCEPlibs). This will hopefully be consolidated with EMC's effort in the near future, but until then we could use this repository and the instructions therein to build on all platforms with Intel, GNU, or PGI.

This would allow us to tidy up the build system for gmtb-scm and also for ccpp-physics.

Thoughts, opinions?

clw standard names

Update standard names for the clw variables to correspond with their counterparts in Statein%qgrs and Stateout%gq0 with decoration of "convectively_transported" or similar.

Remove fv3_model_point_noahmp case

The fv3_model_point_noahmp case is obsolete and can be removed since NoahMP can be initialized from Noah ICs within the physics now. The file scm/data/processed_case_input/fv3_model_point.nc is also obsolete and can be removed.

Update SCM to work with new ccpp-framework (no-cdata static build)

NCAR/ccpp-framework#168 and the PRs referenced therein require several changes to the host model variable metadata and the module use statements. This has not been tested with the SCM; most likely the SCM does not work with the current master versions of ccpp-framework and ccpp-physics (even though the static build is not used).

Related to this, we could consider adding a static build option for SCM.

Cleanup of metadata tables

The metadata tables in scm/src/gmtb_scm_type_defs.F90 are no longer in sync with those in NEMSfv3gfs and in particular do not contain entries for all the latest developments made in NEMSfv3gfs.

Before releasing SCM+CCPP at the end of July, this should be cleaned up.

Transient in surface temperature when using LSM (even with UFS ICs)

Qingfu Liu discovered that the surface temperature goes down to 150C shortly after initialization, even when using UFS initial conditions to initialize the Noah LSM. This resembles a strong spinup-related transient, although I would expect there not to be LSM spinup when using UFS ICs. Check whether LSM variables are receiving the correct data at initialization time.

Moving the data files to a submodule?

Hi! I wanted to start by thanking you for making this excellently documented tool available. I've been running the model successfully, but I have been noticing some potential improvements that would make this easier to use, especially for docker + cloud workflows.

It takes a long time to check out this repository because it includes many large binary data files in scm/data. This makes it slow to download the code and poke around. It also prevents users from developing their own more efficient/more parsimonious ways of provisioning the data files at runtime; we have been using https://github.com/VulcanClimateModeling/fv3config/ to do this for FV3 workflows. Also, someone may conceivably want to change this data or add new cases for an experiment, in which case the git repo would hold multiple versions of the data files, leading to further repo bloat.

Would you consider removing the data files from this repo? Moving them to a submodule would be a minimal change to the current workflow, or you could host them on an HTTP server and provide a download_data.sh script. Unfortunately, it's pretty tricky to remove large files from git history, but I have followed this answer in my own work in the past.

Got an error when running "UFS_IC_generator.py"

I got an error when running "UFS_IC_generator.py" to create ICs on NOAA's Hera HPC. For example, I tried a sample case as follows:

./UFS_IC_generator.py -l 261.51 38.2 -d 201610030000 -i ../../data/raw_case_input/FV3_C96_example_ICs -g ../../data/raw_case_input/FV3_C96_example_ICs -n fv3_model_point_noah -oc

The error message shows as follows:
File "./UFS_IC_generator.py", line 2045
print 'Tile found: {0}'.format(tile)
^
SyntaxError: invalid syntax
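The traceback indicates Python 2 print syntax being parsed by a Python 3 interpreter; under Python 3 the statement at line 2045 would need to be a function call, roughly:

    # Python 3 form of the print statement at line 2045 of UFS_IC_generator.py
    # (alternatively, running the script with a Python 2 interpreter avoids the SyntaxError).
    print('Tile found: {0}'.format(tile))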

BTW, the shapely Python package is installed on Hera, and it does not seem to be the problem as far as I can tell.

Thanks,

Weizhong

unable to follow tutorial with singularity on hpc

Hi, I have tried to follow the tutorial using Singularity, but I couldn't get it to work as the tutorial shows. Could you please look into this and let me know if I am doing something wrong or if additional support is required for Singularity?

  1. I made a directory and set the path: export SCM_WORK=`pwd`
  2. singularity pull docker://dtcenter/ccpp-scm:v5.0.0-tutorial
  3. export OUT_DIR=/path/to/output
  4. I tried a variety of Singularity calls, e.g. singularity shell ccpp-scm_v5.0.0-tutorial.sif and singularity run ccpp-scm_v5.0.0-tutorial.sif and singularity exec --bind ${OUT_DIR}:/home ../ccpp-scm_v5.0.0-tutorial.sif
  5. Once the Singularity image is activated (i.e., at the Singularity> prompt), I couldn't find the folders referenced in the tutorial; that is, cd $SCM_WORK/ccpp-scm/ccpp/suites didn't work. The folder ccpp-scm didn't exist anywhere. I even tried find . ccpp-scm inside and outside the image in multiple directories, to no avail.

Do I need to pull the container in a specific way? (I don't think that was part of the tutorial.) Or do I need to bind the directories in a different way than with --bind? Generally, I think it would be much more helpful to have instructions for Singularity rather than Docker, due to the sudo rights associated with the latter.

Thank you!

Add LSM initialization

Currently, using sfc_type = 1 without using prescribed surface fluxes will result in a segmentation fault due to uninitialized variables.

Standard names need to be streamlined and documented

The standard names used in the physics metadata were created over the years to address the needs of each physics scheme. Whenever possible, CF conventions were followed. However, the CF conventions were insufficient and new names had to be created. It is necessary to establish rules for CCPP standard names, review the names currently used, make any changes necessary to follow the rules, and document the rules and standard names in a CCPP Standard Names dictionary that can be used by the community of CCPP developers.

Separate memory for statein and stateout variables

Currently, both "statein" and "stateout" state variables are pointers pointing to the same space in memory for the forward Euler scheme. For both stochastic physics and the calculation of non-physics tendencies, it would make sense to allocate separate memory for these.

Need software license

There is no license file associated with CCPP SCM. How about adding Apache 2.0, such as used for CCPP Physics and Framework?

Forcing variable output

When the new DEPHY input format (input_type=1) is used, all the relevant forcing variables should be in the output. This is somewhat hit-and-miss now; for example, W is included but omega is not, even if forc_w = 0 and forc_omega = 1.

multi_run_gmtb_scm.py failing silently

For certain errors, multi_run_gmtb_scm.py fails silently. I tried to run gmtb_scm after merging the GFSv16 changes but without the CIRES UGWP namelist bugfix (NCAR/ccpp-physics#343). The error reported by the model was that the namelist file input.nml could not be found when running the model for a specific setup using

./run_gmtb_scm.py -c twpice -s SCM_GSD_v0

However, when I used multi_run_gmtb_scm.py, it reported that all runs finished successfully. I got suspicious because the model runs finished so quickly, given that I had forgotten to copy the Thompson MP lookup tables.

multi_run_scm.py not catching errors

The multi_run_scm.py script doesn't catch model execution errors. To reproduce: Set up everything correctly, but don't check out the Thompson tables. Then run:

./run_scm.py -c twpice -s SCM_RRFS_v1alpha

Model aborts with:

Called ccpp_physics_init with suite 'SCM_RRFS_v1alpha', ierr=1
An error occurred in ccpp_physics_init: An error occured in mp_thompson_init: module_mp_thompson: error opening CCN_ACTIVATE.BIN on unit 20. Exiting...
Note: The following floating-point exceptions are signalling: IEEE_UNDERFLOW_FLAG
STOP 1
INFO: Execution of ./scm failed!

Then, run:

./multi_run_scm.py -t -vv 2>&1 | tee multi_run_scm.log

Output is:

INFO: elapsed time: 2.8397841789999916 s for case: twpice suite: SCM_RRFS_v1alpha
INFO: Process "./run_scm.py -c arm_sgp_summer_1997_A -s SCM_GFS_v15p2" completed successfully
INFO: Process "./run_scm.py -c arm_sgp_summer_1997_A -s SCM_GFS_v16" completed successfully
INFO: Process "./run_scm.py -c arm_sgp_summer_1997_A -s SCM_csawmg" completed successfully
INFO: Process "./run_scm.py -c arm_sgp_summer_1997_A -s SCM_GSD_v1" completed successfully
INFO: Process "./run_scm.py -c arm_sgp_summer_1997_A -s SCM_RRFS_v1alpha" completed successfully
INFO: Process "./run_scm.py -c astex -s SCM_GFS_v15p2" completed successfully
INFO: Process "./run_scm.py -c astex -s SCM_GFS_v16" completed successfully
INFO: Process "./run_scm.py -c astex -s SCM_csawmg" completed successfully
INFO: Process "./run_scm.py -c astex -s SCM_GSD_v1" completed successfully
INFO: Process "./run_scm.py -c astex -s SCM_RRFS_v1alpha" completed successfully
INFO: Process "./run_scm.py -c bomex -s SCM_GFS_v15p2" completed successfully
INFO: Process "./run_scm.py -c bomex -s SCM_GFS_v16" completed successfully
INFO: Process "./run_scm.py -c bomex -s SCM_csawmg" completed successfully
INFO: Process "./run_scm.py -c bomex -s SCM_GSD_v1" completed successfully
INFO: Process "./run_scm.py -c bomex -s SCM_RRFS_v1alpha" completed successfully
INFO: Process "./run_scm.py -c LASSO_2016051812 -s SCM_GFS_v15p2" completed successfully
INFO: Process "./run_scm.py -c LASSO_2016051812 -s SCM_GFS_v16" completed successfully
INFO: Process "./run_scm.py -c LASSO_2016051812 -s SCM_csawmg" completed successfully
INFO: Process "./run_scm.py -c LASSO_2016051812 -s SCM_GSD_v1" completed successfully
INFO: Process "./run_scm.py -c LASSO_2016051812 -s SCM_RRFS_v1alpha" completed successfully
INFO: Process "./run_scm.py -c twpice -s SCM_GFS_v15p2" completed successfully
INFO: Process "./run_scm.py -c twpice -s SCM_GFS_v16" completed successfully
INFO: Process "./run_scm.py -c twpice -s SCM_csawmg" completed successfully
INFO: Process "./run_scm.py -c twpice -s SCM_GSD_v1" completed successfully
INFO: Process "./run_scm.py -c twpice -s SCM_RRFS_v1alpha" completed successfully
INFO: Done
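A minimal sketch (hypothetical; not the actual multi_run_scm.py code) of how the wrapper could check each child run's exit status and propagate failures instead of always reporting success:

    import subprocess
    import sys

    # Hypothetical driver loop: the real multi_run_scm.py options and bookkeeping
    # are not reproduced here; this only illustrates return-code handling.
    cases = ['twpice']
    suites = ['SCM_RRFS_v1alpha']

    failures = []
    for case in cases:
        for suite in suites:
            cmd = ['./run_scm.py', '-c', case, '-s', suite]
            cmd_str = ' '.join(cmd)
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                failures.append((case, suite, result.returncode))
                print('ERROR: "{0}" exited with status {1}'.format(cmd_str, result.returncode))
            else:
                print('INFO: Process "{0}" completed successfully'.format(cmd_str))

    # Exit nonzero if any run failed, so calling shells and CI see the failure.
    sys.exit(1 if failures else 0)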

Pressure field in SCM gabls3 case doesn't change

Good evening,

I find that the column and surface pressure fields do not change when running the single column model for the gabls3 case. I determined this after attempting to use the column pressure for density calculations required for an experiment that I would like to test eventually. Ideally, I would like to have both the hydrostatic and non-hydrostatic fields output. Currently, however, I do not feel comfortable using the pressure field as it is represented right now. Is this expected with the single column model? Thanks.

Running time of ~7 hours versus about 15 minutes

Description

I don't even know how this would be possible, but when I run ccpp-scm with the ARM-SGP case and the GSD physics suite on Cheyenne under my /glade/work directory, it takes over 7 hours to run. Yet when I run the same thing within my /glade/home area, it takes roughly 15 minutes. I have no idea how this could be possible.

Steps to Reproduce

I will test this again to see whether it is reproducible: I will do a fresh git clone in a directory under Cheyenne's /glade/work and another under /glade/home and see if I can reliably reproduce the difference.

Need to have ability to specify run and output directory

Description

A major disadvantage of the structure of the SCM is the hardwired location where everything runs (scm/bin). This makes it very inflexible for running a set of parallel runs (of the same case but using different physics or namelists) and for redirecting the output files away from the disk where the source code resides.

Solution

Either a command-line option (to run_scm.py) or a namelist variable to redirect the output would make it easy to run parallel instances of the Python script, using a series of parallel job scripts that change the suite definition files and/or namelists for simultaneous runs of the SCM. It would also let the source code be kept in HPC home directory spaces, which are backed up frequently, as opposed to expendable model output directories, where output can easily fill up disk quotas and prevent further SCM runs without a lot of manual file shuffling. An example is the Cheyenne computer at NCAR, where home space is quite limited but is backed up with high frequency and is where I keep code, versus the /glade/scratch, /glade/work, or /glade/project spaces where I keep model runs that can easily be regenerated in the event of a disk failure. The inability to keep the ccpp-scm source code in my home directory and the output of its many simulations elsewhere, alongside many other source codes, is causing me strife, because I have to manage the disk space manually and semi-frequently due to quotas. I always prefer source code in /home areas and model output in expendable places for this reason.

Part of this issue is the fact that the tracer text files live in etc/tracer_config and yet are linked at run time to a hard-wired file name in the run directory. Specifying that tracer file as a relative or absolute path results in failure. The same is true of the suite configuration files, which are hardwired into ccpp/suites, and the namelists in ccpp/physics_namelists, all of which are linked at run time. Permitting relative and absolute paths would seriously improve disk mobility and make the SCM far more flexible for parallel job scripting of a single case while changing suites and namelist options. Otherwise, users need to make copies of directories in various places to overcome this obstacle.

Example: I want to run the ARM-SGP case, but set up a series of 5-10 experiments to run in parallel (since each run takes only 1 CPU) while changing the SDF (suite definition file) and namelist. I don't want to run 5-10 different cases, which is how the multi-run script is set up now, and even that doesn't solve the disk quota problem of writing output into the scm/bin directory.

In other words, the data flow infrastructure lacks flexibility in its current implementation. With WRF, I compile in my /home directory, then copy executables to run directories for sensitivity experiments to avoid filling up the home partition/quota with my model results. Maybe this is possible with the existing ccpp-scm, but it's not immediately obvious to me how to do so.
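A rough sketch of what such an option could look like in a run script. The --run-dir and --output-dir flags and the file names are hypothetical, not existing run_scm.py options:

    import argparse
    import os
    import shutil

    # Hypothetical sketch only: run_scm.py does not currently provide these flags.
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--case', required=True)
    parser.add_argument('-s', '--suite', required=True)
    parser.add_argument('--run-dir', default=os.path.join('scm', 'bin'),
                        help='where the executable runs and scratch files live')
    parser.add_argument('--output-dir', default=None,
                        help='where output files are copied after the run')
    args = parser.parse_args()

    os.makedirs(args.run_dir, exist_ok=True)
    # ... set up links/namelists in args.run_dir and launch the SCM executable there ...

    if args.output_dir:
        os.makedirs(args.output_dir, exist_ok=True)
        # Copy the run products off the source-code disk, e.g. to /glade/scratch.
        # File names are placeholders for whatever the run actually produces.
        for fname in ('output.nc',):
            src = os.path.join(args.run_dir, fname)
            if os.path.exists(src):
                shutil.copy2(src, args.output_dir)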

Fix setting of bl_mynn_tkebudget in GFS_typedefs.F90

@egrell found a bug related to setting the bl_mynn_tkebudget control variable in GFS_typedefs.F90. In particular, the bl_mynn_tkebudget variable is neither read in via the namelist nor set. This should only affect diagnostic output of the MYNN PBL scheme.

To fix, simply add a couple of lines:

  1. Add the bl_mynn_tkebudget variable around line 3323.
  2. Add Model%bl_mynn_tkebudget = bl_mynn_tkebudget around line 3873.

This bug affects both the SCM and FV3 versions of the GFS_typedefs.F90 file. Ideally, it should be fixed in both repositories simultaneously.

Suggested improvements from Evelyn Grell

The following arrived from Evelyn Grell via the RT help desk. This is copied here to remind me to address some/all of these concerns.

Message starts here:

I just wanted to report back on a couple of things that I ran into while using the UFS_IC_generator.py script to create initial conditions for the SCMv4.0.

I ran using specified i,j points, and a specified tile.

Caveats: a) I am using an older dataset (2014 case), and the 3d model was run by someone else, so perhaps there is something different about those files.
b) my python skills are extremely limited.

  1. The first issue occurred at line 1542 in the UFS_IC_generator.py script -- "'filename' referenced before assignment"
    This occurred because the script was searching for the grid.tile.nc file in the in_dir directory, and mine was in the grid_dir directory.
    When I copied my C768_grid.tile.3.ncl from the "fix" directory to the input directory, I got a few lines further.

  2. line 1547 -- basically the same problem as #1 -- the script looks for the oro_data file in the in_dir, while mine was in the grid_dir. But there is one more issue here (see the sketch after this list)...
    For the oro_data file, the filename_pattern is:
    filename_pattern = 'oro_data.tile{0}.nc'.format(tile) (line 485)
    But I think there should be a * before oro (i.e. *oro_data.tile{0}.nc),
    because the files I had were called, e.g., C768_oro_data.tile3.nc.
    So I copied C768_oro_data.tile3.nc from the "fix" directory to my input directory, renamed it oro_data.tile3.nc,
    and that worked.
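A glob-based match would tolerate resolution prefixes like C768_. A minimal sketch under that assumption (not the actual UFS_IC_generator.py code; in_dir and tile are stand-ins for the script's variables):

    import glob
    import os
    import sys

    # Hypothetical sketch: accept both 'oro_data.tile3.nc' and 'C768_oro_data.tile3.nc'.
    in_dir = '../../data/raw_case_input/FV3_C96_example_ICs'
    tile = 3

    matches = glob.glob(os.path.join(in_dir, '*oro_data.tile{0}.nc'.format(tile)))
    if not matches:
        sys.exit('Could not find an oro_data file for tile {0} in {1}'.format(tile, in_dir))
    filename = matches[0]
    print('Using orography file: {0}'.format(filename))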

The documentation noted that for the example, the input directory was the same as the grid directory, so that is probably why these issues did not show up when you tested.
I chose to move the files rather than change the script, but I am reporting back because I expect you would rather change the script for the next release (or change the documentation to specify the correct location of the files). (All of this assumes caveat (a) is not the real problem.)

Otherwise, the SCM has been working pretty well for me. Unfortunately, it is not doing a great job of reproducing the 3D results, but I have no forcing, so that is not a huge surprise.

I would like to make two unsolicited suggestions for minor changes for a future release:

  1. For the documentation -- it would be helpful to have a brief subsection describing the changes needed to use an existing physics package in a new suite. For example, I used the v15p2 suite but changed the microphysics scheme to Thompson. All the code already exists, but I had to make an SDF, change a couple of scripts, and recompile.
  2. In my opinion, it would be better if the default output location was not under the bin directory. I always move it up a directory level or two so that it would not be lost if recompiling is necessary.

Hope that is helpful! And thank you!
Evelyn

More cleanup for ozone, h2o, aerosols, and ICCN

Related to issues #59 and #70: update the build system to be able to compile aerinterp.F90 and iccninterp.F90 using netCDF; modify GFS_typedefs.F90 to more closely resemble the FV3 version (especially as it relates to ozone, stratospheric h2o, aerosols, and ICCN); and stage global_o3prodlos.f77 and similar files for h2o, aerosols, and ICCN.

Diagnostics clearing order

The order of operations in diagnostics clearing is causing a problem: the diagnostics are written out AFTER being cleared and then filled for only one timestep. Therefore, as the code sits today, when n_itt_diag /= 1, the diagnostic variables that are written out are incorrect. This can be corrected by changing the GFS_phys_time_vary modulo statements back to 1 from 0; we need to verify that this change doesn't break other functionality.

Expand continuous integration tests

Currently there are two CI tests. One runs the entire SCM workflow: install software, build the SCM, run the regression tests, and compare results to baselines. The other is a small test that checks the current physics code against the overlying host, ensuring CCPP compliance.

To bring the SCM documentation and the CI tests closer together, we need to add the following CI tests:

  • Build SCM w/o HPC-stack (w3EMC/BACIO/SP libraries individually)
  • Build SCM w/ HPC-stack
  • Expand gfortran build tests to use more gfortran versions (add to fortran-compiler matrix)
  • Add Intel?

Tracer output

Along with #229, add logic to output all tracers that were used in the run.

GNU config for Cheyenne and Theia

For both Cheyenne and Theia, configurations for Intel and PGI are available, but not for GNU. This is straightforward (fingers crossed) and should be included in the next release.

Create release branch for CCPP v6 public release

The CCPP v6 public release, along with the SCM, is planned for June 2022. A release branch needs to be created; a precise date will be provided soon. Developers: please make sure all code needed for this release gets to the main branch ASAP. The schemes/suites for this release are described here. If you need permissions, please ask.

Change in loading Anaconda module on Hera

It has been reported that loading the Anaconda module on Hera has changed from:

module load contrib
module load anaconda/anaconda2

to

module use -a /contrib/anaconda/modulefiles
module load anaconda/2.3.0

Verify and change as necessary.

Problem to install a shapely python module

I got a problem when I set the environment variables for CCPP-SCM on Hera with icc/ifort as follows:

% source scm/etc/Hera_setup_intel.csh

"Hera_setup_intel.csh" needs to install a shapely python module:

  pip install --index-url http://anaconda.rdhpcs.noaa.gov/simple --trusted-host [anaconda.rdhpcs.noaa.gov](http://anaconda.rdhpcs.noaa.gov/) shapely --user

The error message shows:
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.requests.packages.urllib3.connection.HTTPConnection object at 0x7f8d26951828>, 'Connection to anaconda.rdhpcs.noaa.gov timed out. (connect timeout=15)')': /simple/shapely/

How can I fix this problem? Thanks!

Weizhong

Enable SIONlib support in SCM

Testing the Thompson MP schemes on macOS platforms benefits from writing the lookup tables to disk when they are computed for the first time. This has been implemented using the SIONlib I/O library for efficiency and future scalability reasons. NEMSfv3gfs now supports the SIONlib I/O library and correctly enables it in ccpp-framework and ccpp-physics, but the SCM does not yet.

All debug runs fail with gcc+gfortran on macOS

As of 2022/03/27, all debug runs fail on macOS with gcc+gfortran with

An error occurred in ccpp_physics_init: Detected size mismatch for variable physics%Model%input_nml_file in group physics before cires_ugwp_init, expected        0 but got        1. Exiting...

This is because Fortran thinks that the uninitialized array

    character(len=:), pointer, dimension(:) :: input_nml_file => null() !< character string containing full namelist
                                                                        !< for use with internal file reads

has size one and not size zero.

I am not sure whether this is the case with other compilers or on other systems; therefore, the best way to fix this is to add an active attribute in GFS_typedefs.meta (last line below):

[input_nml_file_length]
  standard_name = number_of_lines_in_internal_namelist
  long_name = lines in namelist file for internal file reads
  units = count
  dimensions = ()
  type = integer
[input_nml_file]
  standard_name = filename_of_internal_namelist
  long_name = namelist filename for internal file reads
  units = none
  dimensions = (number_of_lines_in_internal_namelist)
  type = character
  kind = len=256
  active = (number_of_lines_in_internal_namelist > 0)

This tells the debug checks in the autogenerated caps to bypass filename_of_internal_namelist if number_of_lines_in_internal_namelist == 0.

Hera/intel cap subroutine names too long

When compiling on Hera/intel in static mode, the compiler complains that many auto-generated cap subroutine names are too long. They are truncated and don't appear to cause a problem; it's just ugly in the compilation output.

/scratch1/BMC/gmtb/Grant.Firl/gmtb-scm/ccpp/physics/physics/ccpp_SCM_GFS_v15plus_prescribed_surface_cap.F90(29): warning #5462: Global name too long, shortened from: ccpp_scm_gfs_v15plus_prescribed_surface_physics_cap_mp_SCM_GFS_V15PLUS_PRESCRIBED_SURFACE_PHYSICS_FINALIZE_CAP to: s_prescribed_surface_physics_cap_mp_SCM_GFS_V15PLUS_PRESCRIBED_SURFACE_PHYSICS_FINALIZE_CAP use ccpp_SCM_GFS_v15plus_prescribed_surface_physics_cap, only: SCM_GFS_v15plus_prescribed_surface_physics_finalize_cap

In addition, the compilation is reporting the following for several files:
fpp: warning: keyword redefined: STATIC

Repository name change needed (gmtb-scm -> ccpp-scm)

Over the past year, we have been directed to remove the name "GMTB" from our software and documentation, since it refers to a particular project that the software will likely outlive. We have chosen to name this repository "ccpp-scm" because this SCM is very closely tied to the CCPP software framework.

Setting surface temperature from forcing file even with UFS ICs

Qingfu Liu pointed out that the surface temperature is being set from the forcing data. While this is appropriate for observationally-based cases that use specified surface fluxes (and perhaps even those that do not), it is not appropriate when the SCM is using UFS ICs -- the surface temperature should be allowed to vary according to the surface energy budget as calculated in the physics.

Surface friction in input_type=1

I am testing the use of the new DEPHY input format (input_type=1). It appears that no surface friction is being calculated. It is certainly not being output; the variable ustar is identically zero. This happens in the ARMCU_REF case with the GSD_v1 suite. I have also tried it with an LLJ case I am developing, with both the GSD_v1 and RRFS_v1beta suites. The results are consistent with no surface friction. It was calculated correctly in the same case with the old input type. I dug through the code a bit and did not find an obvious problem. The relevant attributes in the input file are:
:z0 = 0.035 ;
:surfaceType = "land" ;
:surfaceForcing = "surfaceFlux" ;
:surfaceForcingWind = "z0" ;
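A quick way to confirm the symptom from the run output. The output file name (output.nc) and the variable name (ustar) are assumptions about the SCM output layout, not verified here:

    import netCDF4
    import numpy as np

    # Hypothetical check of the SCM output file in the run directory.
    with netCDF4.Dataset('output.nc') as ds:
        ustar = ds.variables['ustar'][:]
        # A maximum near zero indicates no surface friction is being computed/output.
        print('max |ustar| =', float(np.max(np.abs(ustar))))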

Add mkavulich to CODEOWNERS

Add mkavulich to CODEOWNERS so that there are more people who understand the code and who can provide reviews that allow merging (not just Grant and Dom).

Need to update the SCM Users Guide for v6 release

Need to update the SCM Users Guide for the v6 release. Some of the specific aspects that need updating are:

  • Build with HPC Stack (with or without SPACK - likely without) instead of NCEPLIBS.
  • Please add other aspects here
  • Who? Potentially a new NCAR SE, TBD after the 4/18 starting date, possibly with Grant's guidance and review from Dom and others.
