escomp / cesm_cice5
CESM2 Version of CICE
The valid_values and settings for logicals in cime are all upper case. xmlchange will not work if these values are expressed as lower case.
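A minimal sketch of the failure mode, mimicking the case-sensitive comparison described above; the value list below is illustrative, not copied from CIME's source:

```python
# Illustrative sketch of a case-sensitive valid_values check; the tuple below
# stands in for the valid_values list CIME reads from its XML definitions.
VALID_LOGICALS = ("TRUE", "FALSE")

def xmlchange_accepts(value: str) -> bool:
    # The comparison is a literal string match, so "true" does not match "TRUE"
    return value in VALID_LOGICALS

print(xmlchange_accepts("TRUE"))   # True: upper case matches
print(xmlchange_accepts("true"))   # False: lower case is rejected
```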
Do these variables need to be in the CICE history output by default, and if so, is there currently a namelist method available to turn them off?
blkmask, lonu_bounds, latu_bounds
The CICE component of CMEPS gives an error when PIO2 is used to write the history files on TACC's Stampede2. This is not the case for PIO1 (./xmlchange PIO_VERSION=1), which runs successfully. The CESM log shows the following trace:
...
cesm.exe 0000000001D39953 piolib_mod_mp_clo 1438 piolib_mod.F90
cesm.exe 000000000104855E ice_history_write 1319 ice_history_write.F90
cesm.exe 0000000001022866 ice_history_mp_ac 3849 ice_history.F90
cesm.exe 00000000010BBEEE cice_runmod_mp_ic 245 CICE_RunMod.F90
cesm.exe 00000000010BAC30 cice_runmod_mp_ci 76 CICE_RunMod.F90
cesm.exe 0000000000FDF898 ice_comp_nuopc_mp 974 ice_comp_nuopc.F90
...
It appears the model fails in components/cice/src/io_pio/ice_history_write.F90, at the call to pio_closefile(File).
I found in testing for SLIM that tests with CICE5 fail on the "GENERATE" phase because the history files have "cice" in the name rather than "cice5".
See the SLIM issue.
I don't know if we should care enough to fix it, but I did run into it as an issue.
It appears that cice is incompatible with Python 3.x, but there are no checks for the Python version in use. This can lead to misleading and unhelpful error messages when the wrong version of Python (i.e., 3.x) is accidentally used. In my case, I need to explicitly load the proper Python module before using any of the CIME build and run scripts, or I run into misleading errors. For example, if I try to run case.build without first loading the Python 2.7.9 module in my environment, I get the following error:
ERROR: Command: './generate_cice_decomp.pl -ccsmroot /home/bhillma/codes/e3sm/branches/add-rrtmgp -res ne4np4 -nx 866 -ny 1 -nproc 16.0 -thrds 1 -output all' failed with error 'b'Value "16.0" invalid for option nproc (number expected)
SYNOPSIS
    generate_cice_decomp.pl [options]
OPTIONS
    -ccsmroot <path>           Full pathname for ccsmroot (required)
    -nproc <number> (or -n)    Number of mpi tasks used (required)
    -nx <number>               number of lons (optional, default is 320)
    -ny <number>               number of lats (optional, default is 384)
    -res <resolution> (or -r)  Horizontal resolution (optional, default gx1v6)
    -thrds <number> (or -t)    Number of threads per mpi task (optional, default 1)
    -output <type> (or -o)     Either output: all, maxblocks, bsize-x, bsize-y, or decomptype (optional, default all)
EXAMPLES
    generate_cice_decomp.pl -res gx1v6 -nx 320 -ny 384 -nproc 80 -output maxblocks
    will return a single value -- t
The actual error is that things are parsed differently in Python 2 vs Python 3, but it would be most useful to have a check at the beginning that makes sure the user is invoking the script with the correct Python version if the code requires it.
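One plausible explanation for the "16.0" above is that "/" performs true division in Python 3, so an integer task count silently becomes a float; a guard plus the portable spelling might look like the sketch below (not the actual CIME code):

```python
import sys

def require_python2():
    # Sketch of the version check the report asks for; these scripts could
    # call this before doing any argument handling.
    if sys.version_info[0] != 2:
        raise SystemExit("ERROR: this script requires Python 2.x")

# Likely source of the bad "-nproc 16.0" value: "/" is true division in
# Python 3, so an integer task count silently becomes a float.
ntasks, nthreads = 16, 1
print(ntasks / nthreads)    # 16 under Python 2, 16.0 under Python 3
print(ntasks // nthreads)   # 16 under both; "//" is the portable spelling
```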
The Fortran allows for multiple files in the variable stream_fldfilename; however, there is currently no way to set more than one file without modifying the code.
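For illustration, the kind of namelist entry the Fortran could in principle accept is something like the following; the group name and file names here are made up for the example:

```
&ice_prescribed_nml
  stream_fldfilename = 'forcing_year1.nc', 'forcing_year2.nc'
/
```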
It looks like workx and worky should be included in the OMP PRIVATE declaration in the following block from subroutine ocn_data_hdgem of src/source/ice_forcing.F90 (which I don't know if anyone is actively using):
!$OMP PARALLEL DO PRIVATE(iblk,i,j)
do iblk = 1, nblocks
   do j = 1, ny_block
      do i = 1, nx_block
         workx = uocn(i,j,iblk)
         worky = vocn(i,j,iblk)
         uocn(i,j,iblk) = workx*cos(ANGLET(i,j,iblk)) &
                        + worky*sin(ANGLET(i,j,iblk))
         vocn(i,j,iblk) = worky*cos(ANGLET(i,j,iblk)) &
                        - workx*sin(ANGLET(i,j,iblk))
         uocn(i,j,iblk) = uocn(i,j,iblk) * cm_to_m
         vocn(i,j,iblk) = vocn(i,j,iblk) * cm_to_m
      enddo ! i
   enddo ! j
enddo ! nblocks
!$OMP END PARALLEL DO
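For clarity, the fix would just add the two scratch variables to the private list:

```fortran
! workx and worky are per-point scratch variables; without PRIVATE, all
! threads share one copy and the transformed velocities can be corrupted.
!$OMP PARALLEL DO PRIVATE(iblk,i,j,workx,worky)
```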
Is it possible to automatically set update_ocn_f = .true. when coupling with MOM? I thought this had already been done, but in my latest cases with cesm2_3_alpha05c, update_ocn_f = .false. by default.
pinging @alperaltuntas and @dabail10
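One place this could live is the namelist-generation logic; a hypothetical sketch follows, where the function name and the comparison against the ocean component are assumptions, not existing CIME code:

```python
def default_update_ocn_f(comp_ocn: str) -> str:
    # Hypothetical default rule: force update_ocn_f on when the active
    # ocean component is MOM, and keep the current default otherwise.
    if comp_ocn.lower() == "mom":
        return ".true."
    return ".false."

print(default_update_ocn_f("mom"))   # .true.
print(default_update_ocn_f("pop"))   # .false.
```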
To date, all the CESM2 high-res G runs that I've heard about have used CICE4 physics. It seems like users have had to include
$ ./xmlchange CICE_CONFIG_OPTS="-phys cice4"
in their setup scripts. If config_component.xml had something like
$ git diff config_component.xml
diff --git a/cime_config/config_component.xml b/cime_config/config_component.xml
index f6e62ec..0980067 100644
--- a/cime_config/config_component.xml
+++ b/cime_config/config_component.xml
@@ -5,9 +5,10 @@
<entry_id version="3.0">
<description>
- <desc ice="CICE[%PRES][%CMIP6]">Sea ICE (cice) model version 5</desc>
+ <desc ice="CICE[%PRES][%CMIP6][%CICE4]">Sea ICE (cice) model version 5</desc>
<desc option="PRES" > :prescribed cice</desc>
<desc option="CMIP6"> :with modifications appropriate for CMIP6 experiments</desc>
+ <desc option="CICE4"> :running with cice4 physics</desc>
</description>
<entry id="COMP_ICE">
@@ -34,8 +35,9 @@
<entry id="CICE_CONFIG_OPTS">
<type>char</type>
<default_value></default_value>
- <values>
+ <values match="last">
<value compset="_CICE[_%]"> -phys cice5 </value>
+ <value compset="_CICE%[^_]*CICE4"> -phys cice4 </value>
</values>
<group>build_component_cice</group>
<file>env_build.xml</file>
Then we could just define the high-res compsets with CICE%CICE4
and make it obvious to newcomers that the older physics will be used.
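To sanity-check the proposed patterns, here is a small emulation of how the match="last" values would resolve; the compset long names are invented for the example:

```python
import re

def cice_config_opts(compset: str) -> str:
    # Emulate <values match="last">: a later matching entry overrides an
    # earlier one, so the more specific CICE4 pattern wins when present.
    opts = ""
    if re.search(r"_CICE[_%]", compset):
        opts = "-phys cice5"
    if re.search(r"_CICE%[^_]*CICE4", compset):
        opts = "-phys cice4"
    return opts

print(cice_config_opts("2000_DATM%JRA_CICE_DOCN%SOM"))        # -phys cice5
print(cice_config_opts("2000_DATM%JRA_CICE%CICE4_DOCN%SOM"))  # -phys cice4
```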
I have turned on actual error trapping, rather than warnings, to compare the input mesh lat/lon values to the internally generated values in cice5. I am finding some differences that are much bigger than I would expect for the tx0.66v1 mesh. These errors were present before, but only warnings were printed; now the model actually stops.
The mesh file
/glade/p/cesmdata/cseg/inputdata/share/meshes/tx0.66v1_190314_ESMFmesh.nc
was generated from the following SCRIP input file: /glade/p/cesmdata/cseg/mapping/grids/tx0.66v1_SCRIP_190314.nc
which @alperaltuntas created in March of 2019.
The internal ice grid is
/glade/p/cesmdata/cseg/inputdata/ocn/mom/tx0.66v1/grid/horiz_grid_20190315.ieeer8
Errors of the following type are seen:
ERROR: CICE n, lonmesh, lon, diff_lon = 352 140.2982497816467 140.2993809030784 0.11311D-02
ERROR: CICE n, lonmesh, lon, diff_lon = 353 143.3065378697909 143.3075936600567 0.10558D-02
ERROR: CICE n, lonmesh, lon, diff_lon = 390 110.5901027413569 110.5911242235564 0.10215D-02
I am not seeing any differences in latitude that are trapped - just in longitude.
I expect this same type of problem would also be present in cice6, but that needs to be verified.
For now I can turn off the aborts, but we really need to resolve this issue.
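The reported numbers can be checked directly from the log; a ~1e-3 degree longitude difference is many orders of magnitude above double-precision round-off, which suggests a genuine inconsistency between the mesh and the internal grid rather than an overly tight tolerance (an interpretation worth verifying):

```python
# Values copied from the first ERROR line above.
lonmesh = 140.2982497816467   # longitude read from the ESMF mesh file
lon     = 140.2993809030784   # internally generated CICE longitude
diff = abs(lon - lonmesh)
print(diff)  # ~1.13e-3 degrees, matching the logged 0.11311D-02
```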
I'm putting together a high-res GIAF run with JRA forcing and the ocean BGC turned on. My first attempt to run crashed in cice:
162: istep1, my_task, iblk = 17665 162 1
162: category n = 1
162: Global block: 430
162: Global i and j: 1047 198
162: Lat, Lon: -70.1472839712437 -5.35000000000000
162: aice: 3.442529845610359E-002
162: n: 1 aicen: 2.803949195110218E-003
162: ice: Vertical thermo error
162: ERROR: ice: Vertical thermo error
162:Image PC Routine Line Source
162:cesm.exe 000000000167EA9A Unknown Unknown Unknown
162:cesm.exe 0000000000D768F0 shr_abort_mod_mp_ 114 shr_abort_mod.F90
162:cesm.exe 00000000005058C4 ice_exit_mp_abort 46 ice_exit.F90
162:cesm.exe 000000000070427E ice_step_mod_mp_s 578 ice_step_mod.F90
162:cesm.exe 00000000005E1748 cice_runmod_mp_ci 172 CICE_RunMod.F90
162:cesm.exe 00000000004F7559 ice_comp_mct_mp_i 584 ice_comp_mct.F90
162:cesm.exe 0000000000428324 component_mod_mp_ 737 component_mod.F90
162:cesm.exe 0000000000409D5C cime_comp_mod_mp_ 2603 cime_comp_mod.F90
162:cesm.exe 0000000000427F5C MAIN__ 133 cime_driver.F90
162:cesm.exe 0000000000407D22 Unknown Unknown Unknown
162:libc.so.6 00002AD9A84DA6E5 __libc_start_main Unknown Unknown
162:cesm.exe 0000000000407C29 Unknown Unknown Unknown
Talking to other folks who have run high-res G compsets, the kludgy fix was to update ice_import_export.F90:
@@ -140,21 +143,24 @@ subroutine ice_import( x2i )
zlvl (i,j,iblk) = aflds(i,j, 3,iblk)
potT (i,j,iblk) = aflds(i,j, 4,iblk)
Tair (i,j,iblk) = aflds(i,j, 5,iblk)
- Qa (i,j,iblk) = aflds(i,j, 6,iblk)
- rhoa (i,j,iblk) = aflds(i,j, 7,iblk)
+ Qa (i,j,iblk) = max(aflds(i,j, 6,iblk), c0)
+ rhoa (i,j,iblk) = max(aflds(i,j, 7,iblk), c0)
frzmlt (i,j,iblk) = aflds(i,j, 8,iblk)
- swvdr(i,j,iblk) = aflds(i,j, 9,iblk)
- swidr(i,j,iblk) = aflds(i,j,10,iblk)
- swvdf(i,j,iblk) = aflds(i,j,11,iblk)
- swidf(i,j,iblk) = aflds(i,j,12,iblk)
+ swvdr(i,j,iblk) = max(aflds(i,j, 9,iblk), c0)
+ swidr(i,j,iblk) = max(aflds(i,j,10,iblk), c0)
+ swvdf(i,j,iblk) = max(aflds(i,j,11,iblk), c0)
+ swidf(i,j,iblk) = max(aflds(i,j,12,iblk), c0)
flw (i,j,iblk) = aflds(i,j,13,iblk)
- frain(i,j,iblk) = aflds(i,j,14,iblk)
- fsnow(i,j,iblk) = aflds(i,j,15,iblk)
+ frain(i,j,iblk) = max(aflds(i,j,14,iblk), c0)
+ fsnow(i,j,iblk) = max(aflds(i,j,15,iblk), c0)
enddo !i
enddo !j
enddo !iblk
I asked @dabail10 about it, and he pointed out that the patch map used in the coupler to map atmosphere fields to the 0.1 degree ocean is not positive-definite, so the above lines prevent negative values where they are not expected. We also talked about a more permanent fix -- having something like
if (.not.a2i_smap_is_pos_def) then
Qa = max(Qa , c0)
rhoa = max(rhoa, c0)
end if
if (.not.a2i_fmap_is_pos_def) then
swvdr = max(swvdr, c0)
swidr = max(swidr, c0)
swvdf = max(swvdf, c0)
swidf = max(swidf, c0)
frain = max(frain, c0)
fsnow = max(fsnow, c0)
end if
would be great, but the catch is defining a2i_smap_is_pos_def and a2i_fmap_is_pos_def. Unfortunately, at run time the name of the mapping file being used is only available in the driver -- CICE can't use it because that would cause a circular dependency: you can't build CICE until the driver has been built, but the driver build requires CICE.
I think the best approach would be to add two new variables to the CICE namelist and have the default values depend on the mapping file in env_run.xml. I think the value would have to be set via the value argument of nmlgen.add_default(), so Python logic could be used to determine whether the map is positive definite. That said, I don't know the CICE code well enough to know where such namelist variables belong. &setup_nml in ice_init.F90, maybe?
Anyway, I'm happy to tackle this issue and submit a PR as I put together our sandbox... but if namelist variables in &setup_nml aren't the way to do this, then let's use this issue ticket to figure out a better approach.
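As a starting point, the Python logic behind such a default might look like the sketch below; the assumption that patch-map filenames contain "patc" is mine, not an established convention, and would need checking against the actual map names in env_run.xml:

```python
def map_is_pos_def(map_file: str) -> bool:
    # Hypothetical test: patch-interpolation maps can produce negative
    # weights (hence negative mapped fluxes), while bilinear/conservative
    # maps are assumed positive definite here.
    return "patc" not in map_file

print(map_is_pos_def("map_TL319_to_tx0.1v3_patc.nc"))  # False -> clamp inputs
print(map_is_pos_def("map_TL319_to_tx0.1v3_blin.nc"))  # True  -> no clamping
```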