ncar / ccpp-scm

CCPP Single Column Model
License: Other
The Python environment on Cheyenne has changed and the machine setup scripts need to be updated to use it. [See https://www2.cisl.ucar.edu/resources/computational-systems/cheyenne/software/python]
It would be best to eliminate the dependency on f90nml altogether so that users do not need to install it.
However, since plotting will be redone in the near future, fixing this may not be necessary.
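If the plotting scripts only need a few scalar values from the namelist, a small helper could remove the f90nml dependency entirely. A minimal sketch, assuming only simple `name = value` lines need to be handled (the function name is illustrative; f90nml covers far more of the Fortran namelist grammar):

```python
import re

def read_namelist_value(path, group, name):
    """Read a single scalar value from a Fortran namelist file without
    f90nml. Handles only simple 'name = value' lines inside one group;
    this is a sketch, not a general namelist parser."""
    in_group = False
    with open(path) as f:
        for line in f:
            line = line.split('!')[0].strip()   # drop Fortran comments
            if line.lower().startswith('&' + group.lower()):
                in_group = True
            elif line.startswith('/'):
                in_group = False
            elif in_group:
                m = re.match(r'{0}\s*=\s*(.+?),?$'.format(re.escape(name)),
                             line, re.I)
                if m:
                    return m.group(1)
    return None
```

This trades generality for zero dependencies; whether that trade is worth it depends on how much of the namelist the plotting code actually reads.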
The CCPP team would like to rename the "master" branch to "main" to be sensitive to anti-racism practices.
Here is an article about this topic: https://www.zdnet.com/article/github-to-replace-master-with-alternative-term-to-avoid-slavery-references/.
Here is the GitHub page with steps for the renaming: https://github.com/github/renaming.
Upstream repositories that refer to the master branch will have to start referring to the new name (main).
After communication with @gthompsonWRF, we should revisit the computation of the Thompson MP tables in the SCM. Years ago, when Thompson MP was first put in the CCPP, we tried to let the SCM generate the tables, but it was computationally prohibitive. I think this was tried with and without OpenMP enabled in the SCM, without much luck. The solution was to include pre-computed tables (generated in an FV3 run) in the scm/data/physics_input_data directory. @gthompsonWRF, in association with the code changes in NCAR/ccpp-physics#567, would like the ability to generate new tables with the SCM.
The original attempt at integrating ozone and h2o data in a similar fashion to FV3 is confusing and probably not correct. Revisit and clean up this code.
If the dynamic build is being abandoned, the SCM must use the static build going forward.
Included in this issue is a graphic to demonstrate the point I am trying to make. Here is a comparison of a run I call Control, which uses the exact same suite definition file and namelist, unedited from the ccpp-scm repo, with GFS_v16 and the ARM-SGP case (about 27 days simulated), versus running with shorter timesteps and a different column_area size. The top panel of the graphic shows Control, whereas the bottom panel shows Exp1, with the following list of 4 items changed.
The rationale for these changes is a more fair comparison at HRRR resolution when switching from GFS physics to GSD physics. I was attempting to determine whether the large change of physics suite from GFS to GSD was primarily responsible for the large changes I am seeing.
The issue I am raising is two-fold. The immediately obvious plot of cloud water content clearly shows an explosion in the frequency of low clouds. There may actually be nothing wrong with this, and while the change might be a bit dramatic, the far larger issue I am seeing is the very dramatic rise in temperature, particularly in the middle atmosphere, of nearly 5 degrees. The solid lines in warm colors (reds/oranges) are at 5C intervals, the solid light blue line centered near 600 hPa is 0C, and the dashed lines in cool colors (blues/purples) are at 5C intervals below zero. There is also a "compaction" of temperature lines aloft near 100-200 hPa.
I am very alarmed by this. Remember, the time period is middle June to middle July (1997) and I expect very hot weather in central Oklahoma, but approaching 0C at 500 hPa is extremely rare anywhere in the USA. For context, the truly massive heat bubble of Summer 2021 had core 500-hPa temperatures around -2C.
I do not believe that shorter timesteps, which seem entirely valid, should produce warming like this. The problem is even worse when switching to the GSD physics suite. Can anyone offer some explanations?
The third-party libraries required to compile the SCM are currently compiled with a mixture of compiler flags and preprocessor definitions coming from the calling CMakeLists.txt in scm/src and from within the individual directories (e.g. external/w3nco/v2.0.6/src/CMakeLists.txt).
I would like to propose pulling the third-party libraries out of gmtb-scm and installing them separately, in the same way as we do for FV3. We have a GitHub repository for all NCEP libraries that compiles them with compiler flags and preprocessor options independent of what is chosen for the SCM (https://github.com/climbfuji/NCEPlibs). This will hopefully be consolidated with EMC's effort in the near future, but until then we could use this repository and the instructions therein to build on all platforms with Intel, GNU, or PGI.
This would allow us to tidy up the build system for gmtb-scm and also for ccpp-physics.
Thoughts, opinions?
Update standard names for the clw variables to correspond with their counterparts in Statein%qgrs and Stateout%gq0 with decoration of "convectively_transported" or similar.
The fv3_model_point_noahmp case is obsolete and can be removed since NoahMP can be initialized from Noah ICs within the physics now. The file scm/data/processed_case_input/fv3_model_point.nc is also obsolete and can be removed.
NCAR/ccpp-framework#168 and PRs referenced therein require several changes to the host model variable metadata and the module use statements. This has not been tested with SCM, most likely SCM is not working with the current master versions of ccpp-framework and ccpp-physics (even though the static build is not used).
Related to this, we could consider adding a static build option for SCM.
Need to remove GOCART from the p8c namelist.
Namelist option iaer should be 5,1,1,1.
The metadata tables in scm/src/gmtb_scm_type_defs.F90 are no longer in sync with those in NEMSfv3gfs and in particular do not contain entries for all the latest developments made in NEMSfv3gfs.
Before releasing SCM+CCPP at the end of July, this should be cleaned up.
Qingfu Liu discovered that the surface temperature goes down to 150C shortly after initialization, even when using UFS initial conditions to initialize the Noah LSM. This resembles a strong spinup-related transient, although I would expect there not to be LSM spinup when using UFS ICs. Check whether LSM variables are receiving the correct data at initialization time.
Hi! I wanted to start by thanking you for making this excellently documented tool available. I've been running the model successfully, but I have been noticing some potential improvements that would make this easier to use, especially for docker + cloud workflows.
It takes a long time to check out this repository because it includes many large binary data files in scm/data. This makes it slow to download the code quickly and poke around. It also prevents users from developing their own more efficient/more parsimonious ways of provisioning the data files at runtime; we have been using https://github.com/VulcanClimateModeling/fv3config/ to do this for FV3 workflows. Also, conceivably someone may want to change this data or add new cases for some experiment, in which case the git repo will hold multiple versions of the data files, which could lead to further repo bloat.
Would you consider removing the data files from this repo? Moving them to a submodule would be a minimal change to the current workflow, or you could host them on an HTTP server and provide a download_data.sh script. Unfortunately, it's pretty tricky to remove large files from git history, but I have followed this answer in my own work in the past.
I got an error when I ran UFS_IC_generator.py to create ICs on NOAA's Hera HPC. For example, I tried a sample case as follows:
./UFS_IC_generator.py -l 261.51 38.2 -d 201610030000 -i ../../data/raw_case_input/FV3_C96_example_ICs -g ../../data/raw_case_input/FV3_C96_example_ICs -n fv3_model_point_noah -oc
The error message shows as follows:
File "./UFS_IC_generator.py", line 2045
print 'Tile found: {0}'.format(tile)
^
SyntaxError: invalid syntax
BTW, the shapely Python package is installed on Hera, and it does not seem to be the problem after checking it.
Thanks,
Weizhong
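The traceback above is the classic Python 2 print statement failing under a Python 3 interpreter. Assuming line 2045 of the script just needs its print call modernized (the tile value below is a stand-in for illustration), the fix is:

```python
# Python 2 form that raises SyntaxError under Python 3:
#     print 'Tile found: {0}'.format(tile)
# Python 3 form -- print is a function, so the call must be parenthesized:
tile = 3  # hypothetical tile number, for illustration only
print('Tile found: {0}'.format(tile))
```

The parenthesized form is also valid Python 2, so converting the script's print statements this way would keep it runnable under either interpreter.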
Hi, I have tried to follow the tutorial using Singularity, but I couldn't get it to work as the tutorial shows. Could you please look into this and let me know if I am doing something wrong or if additional support is required for Singularity?
export SCM_WORK='pwd'
singularity pull docker://dtcenter/ccpp-scm:v5.0.0-tutorial
export OUT_DIR=/path/to/output
singularity shell ccpp-scm_v5.0.0-tutorial.sif
singularity run ccpp-scm_v5.0.0-tutorial.sif
singularity exec --bind ${OUT_DIR}:/home ../ccpp-scm_v5.0.0-tutorial.sif
After each of these (at the Singularity> prompt), I couldn't find the correct folders from the tutorial; that is, cd $SCM_WORK/ccpp-scm/ccpp/suites didn't work. The folder ccpp-scm didn't exist anywhere. I even tried find . ccpp-scm inside and outside the image in multiple directories, to no avail. Do I need to pull the container in a specific way? (I don't think that was part of the tutorial.) Or do I need to bind the directories in a different way than with --bind? Generally, I think it would be much more helpful to have instructions for Singularity rather than Docker, given the sudo rights associated with the latter.
Thank you!
Currently, using sfc_type = 1 without using prescribed surface fluxes will result in a segmentation fault due to uninitialized variables.
The standard names used in the physics metadata were created over the years to address the needs of each physics scheme. Whenever possible, CF conventions were followed. However, CF conventions were insufficient and new names had to be created. It is necessary to establish rules for CCPP standard names, review the names currently used, make necessary changes to follow the rules, and document the rules and standard names in a CCPP Standard Names dictionary that can be used by the community of CCPP developers.
Currently, both "statein" and "stateout" state variables are pointers pointing to the same space in memory for the forward Euler scheme. For both stochastic physics and the calculation of non-physics tendencies, it would make sense to allocate separate memory for these.
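The aliasing problem can be illustrated in NumPy terms (a conceptual sketch only; the actual code uses Fortran pointers in GFS_typedefs.F90, and the variable names here just echo Statein%qgrs/Stateout%gq0):

```python
import numpy as np

qgrs = np.zeros(3)   # stand-in for Statein%qgrs
gq0 = qgrs           # alias: both names share the same memory,
                     # like the current forward-Euler pointers
gq0[0] = 1.0         # "updating stateout" silently changes statein too
assert qgrs[0] == 1.0

gq0 = qgrs.copy()    # separately allocated memory, as proposed
gq0[1] = 2.0         # statein is now untouched
assert qgrs[1] == 0.0
```

With separate allocations, non-physics tendencies can be computed as stateout minus statein, and stochastic perturbations can be applied to one without corrupting the other.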
There is no license file associated with the CCPP SCM. How about adding Apache 2.0, as used for CCPP Physics and Framework?
The filtered-leapfrog scheme adds complexity and is not regularly tested. Remove it to follow the KISS principle and simplify the codebase.
When the new DEPHY input format (input_type=1) is used, all the relevant forcing variables should be in the output. This is somewhat hit-and-miss at the moment; for example, W is included but omega is not, even if forc_w = 0 and forc_omega = 1.
For certain errors, multi_run_gmtb_scm.py fails silently. I tried to run gmtb_scm after merging the GFSv16 changes but without the CIRES UGWP namelist bugfix (NCAR/ccpp-physics#343). The error reported from the model was that the namelist file input.nml could not be found when running the model for a specific setup using
./run_gmtb_scm.py -c twpice -s SCM_GSD_v0
However, when I used multi_run_gmtb_scm.py, it reported that all runs finished successfully. I got suspicious because the model run finished so quickly, even though I had forgotten to copy the Thompson MP lookup tables.
The multi_run_scm.py script doesn't catch model execution errors. To reproduce: set up everything correctly, but don't check out the Thompson tables. Then run:
./run_scm.py -c twpice -s SCM_RRFS_v1alpha
Model aborts with:
Called ccpp_physics_init with suite 'SCM_RRFS_v1alpha', ierr=1
An error occurred in ccpp_physics_init: An error occured in mp_thompson_init: module_mp_thompson: error opening CCN_ACTIVATE.BIN on unit 20. Exiting...
Note: The following floating-point exceptions are signalling: IEEE_UNDERFLOW_FLAG
STOP 1
INFO: Execution of ./scm failed!
Then, run:
./multi_run_scm.py -t -vv 2>&1 | tee multi_run_scm.log
Output is:
INFO: elapsed time: 2.8397841789999916 s for case: twpice suite: SCM_RRFS_v1alpha
INFO: Process "./run_scm.py -c arm_sgp_summer_1997_A -s SCM_GFS_v15p2" completed successfully
INFO: Process "./run_scm.py -c arm_sgp_summer_1997_A -s SCM_GFS_v16" completed successfully
INFO: Process "./run_scm.py -c arm_sgp_summer_1997_A -s SCM_csawmg" completed successfully
INFO: Process "./run_scm.py -c arm_sgp_summer_1997_A -s SCM_GSD_v1" completed successfully
INFO: Process "./run_scm.py -c arm_sgp_summer_1997_A -s SCM_RRFS_v1alpha" completed successfully
INFO: Process "./run_scm.py -c astex -s SCM_GFS_v15p2" completed successfully
INFO: Process "./run_scm.py -c astex -s SCM_GFS_v16" completed successfully
INFO: Process "./run_scm.py -c astex -s SCM_csawmg" completed successfully
INFO: Process "./run_scm.py -c astex -s SCM_GSD_v1" completed successfully
INFO: Process "./run_scm.py -c astex -s SCM_RRFS_v1alpha" completed successfully
INFO: Process "./run_scm.py -c bomex -s SCM_GFS_v15p2" completed successfully
INFO: Process "./run_scm.py -c bomex -s SCM_GFS_v16" completed successfully
INFO: Process "./run_scm.py -c bomex -s SCM_csawmg" completed successfully
INFO: Process "./run_scm.py -c bomex -s SCM_GSD_v1" completed successfully
INFO: Process "./run_scm.py -c bomex -s SCM_RRFS_v1alpha" completed successfully
INFO: Process "./run_scm.py -c LASSO_2016051812 -s SCM_GFS_v15p2" completed successfully
INFO: Process "./run_scm.py -c LASSO_2016051812 -s SCM_GFS_v16" completed successfully
INFO: Process "./run_scm.py -c LASSO_2016051812 -s SCM_csawmg" completed successfully
INFO: Process "./run_scm.py -c LASSO_2016051812 -s SCM_GSD_v1" completed successfully
INFO: Process "./run_scm.py -c LASSO_2016051812 -s SCM_RRFS_v1alpha" completed successfully
INFO: Process "./run_scm.py -c twpice -s SCM_GFS_v15p2" completed successfully
INFO: Process "./run_scm.py -c twpice -s SCM_GFS_v16" completed successfully
INFO: Process "./run_scm.py -c twpice -s SCM_csawmg" completed successfully
INFO: Process "./run_scm.py -c twpice -s SCM_GSD_v1" completed successfully
INFO: Process "./run_scm.py -c twpice -s SCM_RRFS_v1alpha" completed successfully
INFO: Done
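A sketch of the missing check, assuming multi_run_scm.py launches each case via subprocess (run_and_check and its messages are illustrative, not the script's actual code):

```python
import subprocess

def run_and_check(cmd):
    """Run one SCM case and return True only if it exits with status 0.

    subprocess.run exposes the child's exit code via returncode; the
    log above suggests the current script reports success without
    inspecting it.
    """
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print('ERROR: "{0}" failed with exit code {1}'.format(
            ' '.join(cmd), result.returncode))
        return False
    print('INFO: "{0}" completed successfully'.format(' '.join(cmd)))
    return True
```

Aggregating the booleans over all case/suite pairs would let the multi-run driver exit nonzero when any individual run aborts, so CI and batch jobs can detect the failure.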
Good evening,
I find that the column and surface pressure fields do not change when running the single column model for the gabls3 case. This was determined after attempting to use the column pressure for density calculations required for an experiment that I would like to run eventually. Ideally I would like to have both the hydrostatic and non-hydrostatic fields output. Currently, however, I do not feel comfortable using the pressure field as it is represented right now. Is this expected with the single column model? Thanks.
I don't even know how it would be possible, but when I run ccpp-scm with the ARM-SGP case and the GSD physics suite on Cheyenne under my /glade/work directory, it takes over 7 hours to run. Yet when I run the same thing within my /glade/home area, it takes roughly 15 minutes. No idea how this could be possible.
I will now test this again to see if it is entirely reproducible: I will do a git clone into a fresh directory within Cheyenne's /glade/work and another in /glade/home and see if I can reliably reproduce it.
A major disadvantage of the structure of the SCM is the hardwired place for everything to run (scm/bin), which makes it very inflexible for running a set of parallel runs (of the same case but using different physics or namelists) and for redirecting the output files away from the disk where the source code resides.
Either a command-line option (to run_scm.py) or a namelist variable to redirect the output would permit an easy way to run parallel instances of the Python script, using a series of parallel job scripts that change suite definition files and/or namelists for simultaneous runs of the SCM. The source code could then be kept in HPC systems' home directory spaces, which are backed up frequently, rather than alongside expendable model output, which can easily fill up disk quotas and prevent further SCM runs without a bunch of manual file moving. An example is the Cheyenne computer at NCAR, where home space is quite limited but backed up with high frequency, and is where I keep code, versus the /glade/scratch, /glade/work, or /glade/project spaces, where I keep model runs that can be regenerated in the event of a disk failure. The inability to keep the ccpp-scm source code in my home dir, with multiple outputs of its simulations elsewhere, is causing me strife, because I have to manage the disk space manually and semi-frequently due to quotas. I always prefer source code in /home areas and model outputs in expendable places for this reason.
Part of this issue is that the tracer text files live in etc/tracer_config, yet they are linked at run time to a hardwired file name in the run dir. Specifying that tracer file as a relative or absolute path results in failure. The same is true of the suite definition file being hardwired into ccpp/suites and the namelists into ccpp/physics_namelists, all of which are linked at run time. Permitting relative and absolute paths would seriously improve flexibility and make the SCM far easier to script in parallel for a single case while changing suites and namelist options. Otherwise, users need to make copies of directories in various places to overcome this obstacle.
Example: I want to run the ARM-SGP case, but set up a series of 5-10 experiments to run in parallel (since each run takes only 1 CPU), changing the SDF (suite definition file) and namelist. I don't want to run 5-10 different cases, which is how multi-run is set up now, and even that doesn't solve the disk quota problem of writing output into the scm/bin directory.
In other words, the workflow infrastructure lacks flexibility in its current implementation. With WRF, I compile in my /home directory, then copy executables to run directories for sensitivity experiments, to avoid filling up the home partition/quota with my model results. Maybe this is possible with the existing ccpp-scm, but it's not immediately obvious to me how to do so.
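Accepting user-supplied paths for run-time inputs could be as small as a resolver like the following (a hypothetical helper, not part of the current run script; names and the fallback directory are illustrative):

```python
import os

def resolve_input(path, default_dir):
    """Resolve a run-time input (tracer config, SDF, namelist) given as
    an absolute path, a relative path, or a bare file name.

    Absolute paths are honored as given; existing relative paths are
    made absolute; otherwise fall back to the repository's default
    directory (e.g. etc/tracer_config).
    """
    if os.path.isabs(path):
        return path
    if os.path.exists(path):
        return os.path.abspath(path)
    return os.path.join(default_dir, path)
```

With this, job scripts could point every parallel instance at its own namelist and output directory without copying the source tree, while the bare-name fallback keeps existing invocations working unchanged.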
The hpc-stack repo (https://github.com/NOAA-EMC/hpc-stack) should provide a one-stop shop for installing Fortran libraries needed by the SCM to run. Change the machine setup scripts to use pre-installed hpc-stack libraries on Hera and Cheyenne and consider updating documentation for other machines (generic Mac, Linux, etc.) to use hpc-stack as well.
@egrell found a bug related to setting the bl_mynn_tkebudget control variable in GFS_typedefs.F90. In particular, the bl_mynn_tkebudget variable is not read in via namelist nor set. This should only affect diagnostic output of the MYNN PBL scheme.
To fix, simply add a couple of lines:
This bug affects both the SCM and FV3 versions of the GFS_typedefs.F90 file. Ideally, it should be fixed in both repositories simultaneously.
@grantfirl
I think I found an error in ccpp/physics_namelists/input_GSD_v1.nml.
When using Thompson MP (imp_physics == 8) and do_mynnedmf .or. imfdeepcnv == imfdeepcnv_gf, effr_in must be set to true; otherwise progclduni() will not use the input particle sizes, but will instead set them to default values.
This doesn't appear to be an issue in the UFS: https://github.com/ufs-community/ufs-weather-model/blob/develop/tests/parm/ccpp_gsd.nml.IN
The following arrived from Evelyn Grell via the RT help desk. This is copied here to remind me to address some/all of these concerns.
Messages starts here:
I just wanted to report back on a couple of things that I ran into while using the UFS_IC_generator.py script to create initial conditions for the SCMv4.0.
I ran using specified i,j points, and a specified tile.
Caveats: a) I am using an older dataset (2014 case), and the 3d model was run by someone else, so perhaps there is something different about those files.
b) my python skills are extremely limited.
The first issue occurred at line 1542 in the UFS_IC_generator.py script -- "'filename' referenced before assignment"
This occurred because the script was searching for the grid.tile.nc file in the in_dir directory, and mine was in the grid_dir directory.
When I copied my C768_grid.tile.3.ncl from the "fix" directory to the input directory, I got a few lines further.
line 1547 -- basically the same problem as #1 -- the script is looking for the oro_data file in in_dir, while mine was in grid_dir. But there is one more issue here....
For the oro_data file, the filename_pattern is:
filename_pattern = 'oro_data.tile{0}.nc'.format(tile) (line 485)
But I think there should be a * before oro (i.e. *oro_data.tile{0}.nc), because the files I had were named, e.g., C768_oro_data.tile3.nc.
So, I copied C768_oro_data.tile3.nc from the "fix" directory to my input directory, renamed it oro_data.tile3.nc, and that worked.
The documentation noted that for the example, the input directory was the same as the grid directory, so that is probably why these issues did not show up when you tested.
I chose to move the files rather than change the script, but I am reporting back because I expect you would rather change the script for the next release (or change the documentation to specify the correct location of the files). (All assuming caveat (a) is not the real problem)
Otherwise, the SCM has been working pretty well for me. Unfortunately, it is not doing a great job of reproducing the 3D results, but I have no forcing, so that is not a huge surprise.
I would like to make 2 unsolicited suggestions for minor changes for a future release:
Hope that is helpful! And thank you!
Evelyn
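Evelyn's leading-wildcard suggestion could look like this in the script (a sketch only: find_oro_data is a hypothetical helper, and the real UFS_IC_generator.py may organize its file search differently):

```python
import glob
import os

def find_oro_data(search_dirs, tile):
    """Find an oro_data file for the given tile in any of search_dirs,
    tolerating resolution prefixes such as 'C768_' via a leading
    wildcard, per the report above."""
    pattern = '*oro_data.tile{0}.nc'.format(tile)
    for d in search_dirs:
        matches = sorted(glob.glob(os.path.join(d, pattern)))
        if matches:
            return matches[0]
    raise FileNotFoundError(
        'no file matching {0} in {1}'.format(pattern, search_dirs))
```

Searching both in_dir and grid_dir (in that order) would also address the "referenced before assignment" failures at lines 1542 and 1547, since the files would be found wherever the user staged them.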
Related to issues #59 and #70: update the build system to be able to compile aerinterp.F90 and iccninterp.F90 using netCDF; modify GFS_typedefs.F90 to more closely resemble the FV3 version (especially as it relates to ozone, stratospheric h2o, aerosols, and ICCN); and stage global_o3prodlos.f77 and similar files for h2o, aerosols, and ICCN.
See changes in PR #194. These need to find their way back to master.
The order of operations of diagnostics clearing is causing a problem where the diagnostics are written out AFTER being cleared and filled for one timestep. Therefore, as the code sits today, when n_itt_diag /= 1, the diagnostic variables that are written out are incorrect. This can be corrected by changing the GFS_phys_time_vary modulo statements back to 1 from 0; we need to verify that this change doesn't break other functionality.
Currently there are two CI tests. One runs the entire SCM workflow: install software, build the SCM, run the regression tests, and compare results to baselines. Another small test builds the current physics code with the overlying host, ensuring CCPP compliance.
To bring the SCM documentation and the CI tests closer together, we need to add the following CI tests:
Along with #229, add logic to output all tracers that were used in the run.
For both Cheyenne and Theia, configurations for Intel and PGI are available, but not for GNU. This should be straightforward (fingers crossed) and should be included in the next release.
The CCPP v6 public release, along with the SCM, is planned for June 2022. A release branch needs to be created; a precise date will be provided soon. Developers: please make sure all code needed for this release gets to the main branch ASAP. The schemes/suites for this release are described here. If you need permissions, please ask.
It has been reported that loading the Anaconda module on Hera has changed from:
module load contrib
module load anaconda/anaconda2
to
module use -a /contrib/anaconda/modulefiles
module load anaconda/2.3.0
Verify and change as necessary.
The functions run_gmtb_scm.py/execute and multi_run_gmtb_scm.py/subprocess_work could be combined into one function in a common Python module; there may be other commonalities that could be shared between the two scripts.
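The shared function could look like this (a sketch under the assumption that both scripts only need the exit status and captured output; the module and function names are illustrative):

```python
import subprocess

def execute(cmd):
    """Shared helper: run a command, capture its output, and return
    (returncode, stdout, stderr) so each caller can decide how to
    report results. Both run_gmtb_scm.py and multi_run_gmtb_scm.py
    could import this from a common module instead of duplicating it.
    """
    p = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return p.returncode, p.stdout.decode(), p.stderr.decode()
```

Returning the exit code rather than swallowing it would also help with the silent-failure behavior reported for multi_run_gmtb_scm.py.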
I hit a problem when setting the environment variables for CCPP-SCM on Hera with icc/ifort, as follows:
% source scm/etc/Hera_setup_intel.csh
Hera_setup_intel.csh needs to install the shapely Python module:
pip install --index-url http://anaconda.rdhpcs.noaa.gov/simple --trusted-host anaconda.rdhpcs.noaa.gov shapely --user
The error message shows:
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.requests.packages.urllib3.connection.HTTPConnection object at 0x7f8d26951828>, 'Connection to anaconda.rdhpcs.noaa.gov timed out. (connect timeout=15)')': /simple/shapely/
How to fix this problem? Thanks!!
Weizhong
Testing the Thompson MP schemes on MacOSX platforms benefits from writing the lookup tables to disk when they are computed the first time. This has been implemented using the SIONlib I/O library for efficiency and future scalability reasons. NEMSfv3gfs now supports the SIONlib I/O library and correctly enables it in ccpp-framework and ccpp-physics, but SCM does not yet.
As of 2022/03/27, all debug runs fail on macOS with gcc+gfortran with
An error occurred in ccpp_physics_init: Detected size mismatch for variable physics%Model%input_nml_file in group physics before cires_ugwp_init, expected 0 but got 1. Exiting...
This is because Fortran thinks that the uninitialized array
character(len=:), pointer, dimension(:) :: input_nml_file => null() !< character string containing full namelist
!< for use with internal file reads
has length one and not length zero.
I am not sure if this is the case with other compilers/on other systems; therefore, the best way to fix this is to add an active attribute in GFS_typedefs.meta (last line below):
[input_nml_file_length]
standard_name = number_of_lines_in_internal_namelist
long_name = lines in namelist file for internal file reads
units = count
dimensions = ()
type = integer
[input_nml_file]
standard_name = filename_of_internal_namelist
long_name = namelist filename for internal file reads
units = none
dimensions = (number_of_lines_in_internal_namelist)
type = character
kind = len=256
active = (number_of_lines_in_internal_namelist > 0)
This tells the debug checks in the autogenerated caps to bypass filename_of_internal_namelist if number_of_lines_in_internal_namelist == 0.
When compiling on Hera/intel in static mode, the compiler complains about many auto-generated cap subroutine names being too long. They are truncated and don't appear to cause a problem; it's just ugly in the compilation output.
/scratch1/BMC/gmtb/Grant.Firl/gmtb-scm/ccpp/physics/physics/ccpp_SCM_GFS_v15plus_prescribed_surface_cap.F90(29): warning #5462: Global name too long, shortened from: ccpp_scm_gfs_v15plus_prescribed_surface_physics_cap_mp_SCM_GFS_V15PLUS_PRESCRIBED_SURFACE_PHYSICS_FINALIZE_CAP to: s_prescribed_surface_physics_cap_mp_SCM_GFS_V15PLUS_PRESCRIBED_SURFACE_PHYSICS_FINALIZE_CAP use ccpp_SCM_GFS_v15plus_prescribed_surface_physics_cap, only: SCM_GFS_v15plus_prescribed_surface_physics_finalize_cap
In addition, the compilation is reporting the following for several files:
fpp: warning: keyword redefined: STATIC
Over the past year, we have been directed to remove the name "GMTB" from our software and documentation, since it refers to a particular project that the software will likely outlive. We have chosen to name this repository "ccpp-scm" because this SCM is very closely tied to the CCPP software framework.
Qingfu Liu pointed out that the surface temperature is being set from the forcing data. While this is appropriate for observationally-based cases that use specified surface fluxes (and even perhaps those that do not), it is not appropriate for when the SCM is using UFS ICs -- the surface temperature should be allowed to vary according to the surface energy budget as calculated in the physics.
I am testing the use of the new DEPHY input format (input_type=1). It appears that no surface friction is being calculated. It is certainly not being output: the variable ustar is identically zero. This happens in the ARMCU_REF case with the GSD_v1 suite. I have also tried it with an LLJ case I am developing, with both the GSD_v1 and RRFS_v1beta suites. The results are consistent with no surface friction. It was calculated correctly in the same case with the old input type. I dug through the code some and did not find an obvious problem. The relevant attributes in the input file are:
:z0 = 0.035 ;
:surfaceType = "land" ;
:surfaceForcing = "surfaceFlux" ;
:surfaceForcingWind = "z0" ;
Add mkavulich to CODEOWNERS so that there are more people who understand the code and who can provide reviews that allow merging (not just Grant and Dom).
Need to update the SCM Users Guide for the v6 release. Some of the specific aspects that need updating are: