mpas-dev / compass
Configuration Of MPAS Setups
License: Other
Either as part of #28 or shortly after it is merged, there will still be a lot to do!
Here is a space for keeping track of things that still need to be done:
- legacy branch
- master branch
- setup.py
- compass package within conda env

These are errors about a nonexistent stream for mixed-layer depth. The desired behavior was to have this stream disabled, but type = 'none' doesn't seem to have done the trick. output_interval = 'none' does seem to have the desired behavior. I have a fix in: https://github.com/xylar/compass/tree/fix_ssh_adjustment_errors
ImportError: libnsl.so.1: cannot open shared object file: No such file or directory
conda-forge/fiona-feedstock#138
It seems like we will need to get the admins to install it.
The damped_adjustment_* steps run successfully, but there are NaNs in state validation during the simulation step.
We don't want to be renormalizing the Earth sphere, we want to be leaving it at the CIME value.
Currently, all streams produced by compass named output are single precision:
<stream name="output"
...
precision="single"
This line must either be changed to "double" or removed ("double" is the default).
All bit-for-bit tests that used the output.nc file for comparison have only been valid to ~7 digits, not 14 (restart-file comparisons are still fine). This could have opened us to merging bugs that pass at single precision, but luckily I've been testing with compass legacy for most PRs until early August, and compass legacy was correctly writing output.nc as double. I also redid the same tests summarized here today: E3SM-Project/E3SM#4518 (comment) but with double precision, and the results were the same, indicating there were no bugs missed due to this single/double problem since August 5.
@matthewhoffman and I discussed that many test cases first run the steps, then perform validation, and that is all. It might simplify things to make validate() a separate method. This way, the default run() could be used, so there would often be no need for the added complexity of explaining to developers that they need to call super().run() to make sure the steps get run. Since validate() in the base class wouldn't do anything, you wouldn't need to call super().validate().
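A minimal sketch of the proposed split, assuming hypothetical class and method names (this is not the actual compass API):

```python
# Sketch only: 'TestCase' and 'BaroclinicChannel' are illustrative names.
class TestCase:
    def __init__(self, steps):
        self.steps = steps

    def run(self):
        # default behavior: run each step in order, then validate
        for step in self.steps:
            step()
        self.validate()

    def validate(self):
        # no-op in the base class, so subclasses never need super().validate()
        pass


class BaroclinicChannel(TestCase):
    def validate(self):
        # override only validation; the default run() is reused unchanged
        self.validated = True
```

With this design, most test cases would only define validate() and inherit run() as-is.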
It would be really helpful for suites with -b
to report the test result and the comparison result separately:
ocean/baroclinic_channel/10km/default
PASS compare: PASS
ocean/baroclinic_channel/10km/threads_test
PASS compare: FAIL
ocean/baroclinic_channel/10km/decomp_test
FAIL compare: FAIL
Right now, FAIL could indicate a failure in either the comparison or the run itself. The two are obviously both checked lower down, but I think only one is passed up to the suite function for reporting.
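A small sketch of how the two results could be kept separate in the report (the function name and format are assumptions, not the actual compass code):

```python
# Sketch: report the run result and the baseline comparison separately.
def format_result(name, run_passed, compare_passed=None):
    line = f"{name}  {'PASS' if run_passed else 'FAIL'}"
    if compare_passed is not None:
        # only shown when a baseline (-b) was provided
        line += f"  compare: {'PASS' if compare_passed else 'FAIL'}"
    return line
```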
In trying to reproduce previous meshes with different vertical grids or initial conditions, the critical_passages.nc requirement makes this impossible because that file is not saved long-term. I have rerun a spin-up of WC14 with 80 layers with the critical_passages portion of streams.ocean removed, and it completed successfully. This suggests this dependency can be removed from initial_state.
I think it would be a really good idea to add git tags to specific commits as the place to go to reproduce specific E3SM meshes. My idea for tag names would be something like mesh_ECwISC30to60E2r1, i.e. mesh_ followed by the E3SM short name.
Thoughts?
If you use the compass repo as a work directory (the default), you get an error because compass tries to make a symlink to the existing compass directory.
https://xesmf.readthedocs.io/en/latest/index.html
It could likely replace pyremap.
README files aren't getting included in package data. As a result, certain test cases are producing errors like:
Traceback (most recent call last):
File "/usr/projects/climate/SHARED_CLIMATE/anaconda_envs/base/envs/test_compass_1.0/bin/compass", line 10, in <module>
sys.exit(main())
File "/usr/projects/climate/SHARED_CLIMATE/anaconda_envs/base/envs/test_compass_1.0/lib/python3.8/site-packages/compass/__main__.py", line 55, in main
commands[args.command]()
File "/usr/projects/climate/SHARED_CLIMATE/anaconda_envs/base/envs/test_compass_1.0/lib/python3.8/site-packages/compass/suite/__init__.py", line 276, in main
setup_suite(core=args.core, suite_name=args.test_suite,
File "/usr/projects/climate/SHARED_CLIMATE/anaconda_envs/base/envs/test_compass_1.0/lib/python3.8/site-packages/compass/suite/__init__.py", line 69, in setup_suite
testcases = setup_cases(tests, config_file=config_file, machine=machine,
File "/usr/projects/climate/SHARED_CLIMATE/anaconda_envs/base/envs/test_compass_1.0/lib/python3.8/site-packages/compass/setup.py", line 83, in setup_cases
setup_case(path, testcase, config_file, machine, work_dir,
File "/usr/projects/climate/SHARED_CLIMATE/anaconda_envs/base/envs/test_compass_1.0/lib/python3.8/site-packages/compass/setup.py", line 154, in setup_case
configure(testcase, config)
File "/usr/projects/climate/SHARED_CLIMATE/anaconda_envs/base/envs/test_compass_1.0/lib/python3.8/site-packages/compass/landice/tests/enthalpy_benchmark/A/__init__.py", line 58, in configure
with path('compass.landice.tests.enthalpy_benchmark', 'README') as \
File "/usr/projects/climate/SHARED_CLIMATE/anaconda_envs/base/envs/test_compass_1.0/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/usr/projects/climate/SHARED_CLIMATE/anaconda_envs/base/envs/test_compass_1.0/lib/python3.8/importlib/resources.py", line 203, in path
with open_binary(package, resource) as fp:
File "/usr/projects/climate/SHARED_CLIMATE/anaconda_envs/base/envs/test_compass_1.0/lib/python3.8/importlib/resources.py", line 91, in open_binary
return reader.open_resource(resource)
File "<frozen importlib._bootstrap_external>", line 988, in open_resource
FileNotFoundError: [Errno 2] No such file or directory: '/usr/projects/climate/SHARED_CLIMATE/anaconda_envs/base/envs/test_compass_1.0/lib/python3.8/site-packages/compass/landice/tests/enthalpy_benchmark/README'
A fix is in https://github.com/xylar/compass/commits/add_README_to_package
It would be convenient if a test suite printed out steps, not just test cases, as progress is being made. Some test cases (e.g. cosine bell and dynamic adjustment) have many time-consuming steps, and it can be really helpful to know how far along we are.
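One way this could look, as a sketch (the function, step names, and format are illustrative assumptions, not the actual compass suite runner):

```python
# Sketch: log each step of a test case as the suite progresses.
def run_with_progress(test_name, steps, log=print):
    total = len(steps)
    for index, (name, func) in enumerate(steps, start=1):
        # report which step is about to run, e.g. "cosine_bell: step 1/2 - mesh"
        log(f'{test_name}: step {index}/{total} - {name}')
        func()
```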
I tried to run compass 1.0 on compy and running the model itself just hangs. This is an issue to track the problem as I try to debug it.
It would be nice if config files from test cases were loaded automatically without needing a configure() method. This should mean a lot of test cases don't need configure() at all.
config files for MPAS cores and test groups are already loaded automatically as part of setting up test cases in the work directory. There's no reason the same couldn't be the default behavior for test cases, and it would be an easy update.
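The layering could be sketched with the standard library's configparser, where later files override earlier ones (file roles here are assumptions for illustration, not the real compass config machinery):

```python
import configparser

# Sketch: core and test-group configs load first; a test case's own
# config (if present) overrides them.
def load_config(core_cfg, group_cfg, test_case_cfg=None):
    config = configparser.ConfigParser()
    files = [core_cfg, group_cfg]
    if test_case_cfg is not None:
        files.append(test_case_cfg)
    # ConfigParser.read applies files in order; later files win
    config.read(files)
    return config
```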
The wealth of sphere_transport tests introduced in #210 would benefit from some form of pass/fail tests. This would enable easier integration into nightly suites and would leave no room for interpretation as to whether a test succeeded.
After #28 there are a number of additional tests to be added to compass. Here is a list to help us keep track. Almost all have been created and tested, but are in MPAS-Model branches and the old COMPASS format.
General
Single-layer MPAS-Ocean, with @nairita87
Barotropic solver tests with convergence against exact solutions, with @siddharthabishnu
neverworld2 idealized Atlantic Basin, with @alicebarthel and Mesoscale eddies CPT (Oct 2022: neverworld2 is not a priority)
- [ ] re-entrant Drake Passage on a sphere
- [ ] Initial condition and spin-up
Vertical advection convergence test with @scalandr and @cbegeman
(I'll use username in the subsequent examples):
git clone git@github.com:username/compass.git
cd compass
git remote rename origin username/compass
git remote add username/MPAS-Model git@github.com:username/MPAS-Model.git
git fetch --all -p
This part would apply to any branches you want to move over to the COMPASS repo (I'll use my_branch for the name of the branch):
git worktree add ../my_branch -b my_branch
cd ../my_branch
git reset --hard username/MPAS-Model/my_branch
git filter-branch --force --prune-empty --subdirectory-filter testing_and_setup/compass/ my_branch
git push username/compass my_branch
Currently, compass supports only bash and similar shells, not csh and its variants. There has been some interest in support for csh. Developers who would like this should comment below and, if there is sufficient demand, I will add it.
This takes over for MPAS-Dev/MPAS-Model#585
I would like to move the e3sm_coupling step to be its own "test" (at the same level as init and spin_up) in each global_ocean test case.
My reasoning for renaming is that we want to make clear that, once this code is more robust, it will generate an E3SM ocean (and possibly sea-ice, though that can be part of this discussion) initial condition. I don't have a great suggestion for the new name, but something like e3sm_ocean_init.
My reasoning for moving it to its own "test" is that it runs after init and typically also after spin_up. See the closed issue on MPAS-Model for the rest of the discussion.
When running enthalpy_benchmark/A, the run fails and the file log.landice.0000.err shows:
----------------------------------------------------------------------
Beginning MPAS-landice Error Log File for task 0 of 1
Opened at 2021/05/12 14:57:41
----------------------------------------------------------------------
ERROR: MALI requires that config_num_halos >= 3. (edgeMask calculations require it.)
ERROR: An error has occurred in landice_init_block.
CRITICAL ERROR: An error has occurred in li_core_init. Aborting...
Logging complete. Closing file at 2021/05/12 14:57:41
This seems to be because of landice/develop.
From MPAS-Dev/MPAS-Model#564 (comment), we need to document for users that they need to load a compass conda environment before they load modules for the system MPI; otherwise, they can end up with problems from using a bad combination of conda and system MPI.
When setting up a test suite from the conda package, you see errors like:
fatal: not a git repository (or any parent up to mount point /turquoise)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
We shouldn't call git describe at all if we're not in a git repo. A fix is in: https://github.com/xylar/compass/commits/no_git_commands_from_package
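A sketch of the guard, assuming a hypothetical helper (the real fix is in the branch linked above):

```python
import subprocess

# Sketch: only call `git describe` when we are actually inside a git
# repository; otherwise fall back to a package version string.
def describe_version(path, fallback='unknown'):
    try:
        inside = subprocess.run(
            ['git', '-C', path, 'rev-parse', '--is-inside-work-tree'],
            capture_output=True, text=True)
    except FileNotFoundError:
        # git itself is not installed
        return fallback
    if inside.returncode != 0 or inside.stdout.strip() != 'true':
        return fallback
    described = subprocess.run(
        ['git', '-C', path, 'describe', '--tags', '--dirty'],
        capture_output=True, text=True)
    return described.stdout.strip() if described.returncode == 0 else fallback
```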
When @mark-petersen downloaded cached output files on Grizzly, both the files and directories ended up in his group, not the shared climate group. We need a chown/chmod step for files downloaded to the shared space on supported machines.
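A minimal sketch of such a step, assuming a hypothetical helper and group name ('climate' is the example from above):

```python
import os
import shutil
import stat

# Sketch: make a downloaded file readable by the shared group and,
# optionally, move it into that group.
def fix_shared_permissions(path, group=None):
    mode = os.stat(path).st_mode
    # add group-read permission on top of whatever is already set
    os.chmod(path, mode | stat.S_IRGRP)
    if group is not None:
        # e.g. group='climate' on LANL machines
        shutil.chown(path, group=group)
```

A real step would walk the downloaded directory tree and also set group-execute on directories.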
Tried with 2 test cases. "Steps" is empty in both.
compass list -v -n 39
path: ocean/global_convergence/cosine_bell
name: cosine_bell
MPAS core: ocean
test group: global_convergence
subdir: cosine_bell
steps:
@mark-petersen mentioned in E3SM-Project/E3SM#4552 (comment)
Passes pr suite, matches compass baselines for optimized versions (except didn't test cosine bell because it takes too long).
We should make the necessary changes so that the cosine bell test doesn't have to be skipped.
When validating a test against a baseline, if the validation fails because the baseline mesh has a different number of cells than the comparison run, the user sees an error in test execution rather than in the baseline comparison. This is confusing and should be fixed:
test execution: ERROR
The traceback looks something like:
Exception raised in validate()
Traceback (most recent call last):
File "/gpfs/fs1/home/ac.xylar/mpas-work/compass/test_kdtree/compass/run.py", line 117, in run_suite
test_case.validate()
File "/gpfs/fs1/home/ac.xylar/mpas-work/compass/test_kdtree/compass/ocean/tests/global_ocean/mesh/__init__.py", line 88, in validate
compare_variables(test_case=self, variables=variables,
File "/gpfs/fs1/home/ac.xylar/mpas-work/compass/test_kdtree/compass/validate.py", line 137, in compare_variables
result = _compare_variables(
File "/gpfs/fs1/home/ac.xylar/mpas-work/compass/test_kdtree/compass/validate.py", line 231, in _compare_variables
raise ValueError("Field sizes for variable {} don't match "
ValueError: Field sizes for variable xCell don't match files /lcrc/group/e3sm/ac.xylar/compass_1.0/chrysalis/test_20210923/pr_kdtree/ocean/global_ocean/SOwISC12to60/mesh/mesh/culled_mesh.nc and /lcrc/group/e3sm/ac.xylar/compass_1.0/chrysalis/test_20210923/pr_libnetcdf_481/ocean/global_ocean/SOwISC12to60/mesh/mesh/culled_mesh.nc.
When creating a new grid, the landIceMask variable currently does not get carried into the sea ice grid file from the ocean grid file, but it's needed by MPAS-Seaice. Could this be added?
It would be good to have a way for a user to alter the tests in a test suite after setup without having to edit the code and setup again.
One of my common workflows is to run the full nightly regression suite once, then rerun it with other compilers or debug, openmp settings, but turn off the QU240 mesh and init to save time. Is there a way to do that? I think this PR does not address omitting a test case from a suite altogether.
In spinning up WC14 with 80 layers, I ran into an issue in every stage of the dynamic_adjustment test looking for output.nc. The test ran fine, but the output file was not generated (as expected from streams.ocean), resulting in the error. This required running each stage individually. It would be helpful to remove that file from the expected output.
Here is a list of test cases from legacy COMPASS that could be ported to the new compass python package. I would like help putting them in order of priority.
A checked box below means the test case has already been ported. Strike-through means we don't plan to port them.
legacy COMPASS landice test cases to port:
Land-ice tests not yet slated for porting:
legacy COMPASS ocean test cases to port:
Ocean tests not yet slated for porting:
Ocean tests that will not be ported:
The compare_variables function in the validate.py module supports optional nonzero values of the l1, l2, and linf norms. However, if nonzero norm values are required, they are unlikely to be the same for each variable being compared. The old compass system let each variable have its own norms defined, e.g.:
<field name="normalVelocity" l2_norm="1.0e-13"/>
I can imagine two ways to accomplish this:
1. allow per-variable norm values to be passed to compare_variables
2. call compare_variables multiple times, once with each norm value required
I think 1. is sufficient (and certainly easier to implement), but I wanted to open an issue to see if there are other thoughts.
A related item is that the old system allowed a subset of the l1, l2, and linf norms to be checked for a given variable, but the new system seems to only check all 3 norms. That's less flexible, but I think that's an ok restriction.
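Option 1 could be sketched as a per-variable mapping of norm thresholds, which also restores the flexibility of checking only a subset of the norms. The helper passed in here is a stand-in, not the real compass internals:

```python
# Sketch: norms maps each variable name to the subset of norm
# thresholds to check; compare_one is the per-variable comparison.
def compare_variables_with_norms(norms, compare_one):
    results = {}
    for variable, thresholds in norms.items():
        # thresholds may specify any subset of 'l1', 'l2', 'linf'
        results[variable] = compare_one(variable, **thresholds)
    return all(results.values()), results

# Example thresholds in the spirit of the old XML entries above:
norms = {
    'temperature': {'l2': 1.0e-12},
    'normalVelocity': {'l2': 1.0e-13, 'linf': 1.0e-12},
}
```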
@matthewhoffman came up with a great suggestion: no longer generate the ./run.py scripts (which can't usefully be edited to make changes to the test case or step anyway) and instead go with a command compass run that runs a suite, test case or step.
The main consideration is how the information unique to the suite, test case or step that is currently in run.py gets stored and retrieved. Presumably, the pickle files for test cases and steps could be given more generic names, and then compass could recognize whether a directory is a test case or a step by whether there's a test_case.pickle or a step.pickle.
Test suites are a little more complicated because several suites can be set up in the same directory. In that case, it probably makes sense to have a pickle file for each suite and call compass run <suite>, where <suite> is the (hopefully unique) name of the suite.
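The directory detection described above could be sketched like this (file names follow the discussion; the function itself is hypothetical):

```python
import os

# Sketch: decide what `compass run` should do based on which pickle
# files are present in the current directory.
def detect_run_target(directory):
    if os.path.exists(os.path.join(directory, 'step.pickle')):
        return 'step'
    if os.path.exists(os.path.join(directory, 'test_case.pickle')):
        return 'test_case'
    # a suite base directory would have one <suite_name>.pickle per suite
    suites = sorted(f[:-len('.pickle')] for f in os.listdir(directory)
                    if f.endswith('.pickle'))
    if suites:
        return 'suite'
    raise OSError(f'nothing to run in {directory}')
```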
Having struggled off and on yesterday and today, I have not been able to get a working MPAS-Ocean build on Grizzly with either gnu or intel and SCORPIO.
For gnu, here is the module stack I tried:
module purge
module load git
module load friendly-testing
module load gcc/7.4.0 openmpi/2.1.2 hdf5-parallel/1.8.16 pnetcdf/1.11.2 \
netcdf-h5parallel/4.7.3
...
I am able to build SCORPIO successfully.
Then, in debug mode, I get:
#0 0x2ad2ea4093ff in ???
#1 0x8e7fbe in __ocn_time_integration_split_MOD_ocn_time_integration_split_init
at /users/xylar/climate/mpas_work/compass/compass/MPAS-Model/ocean/develop/src/core_ocean/mode_forward/mpas_ocn_time_integration_split.F:2272
#2 0x8e421d in __ocn_forward_mode_MOD_ocn_forward_mode_init
at /users/xylar/climate/mpas_work/compass/compass/MPAS-Model/ocean/develop/src/core_ocean/mode_forward/mpas_ocn_forward_mode.F:265
#3 0xa89389 in __ocn_core_MOD_ocn_core_init
at /users/xylar/climate/mpas_work/compass/compass/MPAS-Model/ocean/develop/src/core_ocean/driver/mpas_ocn_core.F:76
#4 0x40c321 in __mpas_subdriver_MOD_mpas_init
at /users/xylar/climate/mpas_work/compass/compass/MPAS-Model/ocean/develop/src/driver/mpas_subdriver.F:331
#5 0x4090af in mpas
at /users/xylar/climate/mpas_work/compass/compass/MPAS-Model/ocean/develop/src/driver/mpas.F:14
#6 0x409110 in main
at /users/xylar/climate/mpas_work/compass/compass/MPAS-Model/ocean/develop/src/driver/mpas.F:10
The error is on a division by zero here:
https://github.com/MPAS-Dev/MPAS-Model/blob/ocean/develop/src/core_ocean/mode_forward/mpas_ocn_time_integration_split.F#L2261
It seems to be caused by layerThickness being zero everywhere in the halo because there hasn't been a halo update yet; thus, the sum of layer thickness is zero as well.
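The failure mode can be illustrated in isolation (this is a numpy sketch of the division-by-zero pattern, not the MPAS Fortran fix, which is to do the halo update before the division):

```python
import numpy as np

# Sketch: dividing by the column sum of layer thickness fails when halo
# cells are still zero-initialized, so guard the division.
def normalized_thickness(layerThickness):
    columnSum = layerThickness.sum(axis=0)
    # substitute 1.0 where the column sum is zero to avoid dividing by zero
    safe = np.where(columnSum == 0.0, 1.0, columnSum)
    result = layerThickness / safe
    # zero out the (not yet updated) halo columns explicitly
    result[:, columnSum == 0.0] = 0.0
    return result
```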
For intel, I tried the same stack that Phil Wolfram suggested in his build (what is currently in the documentation: https://mpas-dev.github.io/compass/stable/machine_specific_instructions/lanl.html#building-scorpio-on-grizzly)
module purge
module load git
module load friendly-testing
module load intel/19.0.4 intel-mpi/2019.4 hdf5-parallel/1.8.16 pnetcdf/1.11.2 \
netcdf-h5parallel/4.7.3 mkl/2019.0.4
To be continued shortly...
In this conversation about CIME:
ESMCI/cime#3886 (comment)
@jhkennedy made clear that there are better ways for developers to run compass as a package while still developing.
First, we should have a yaml file that defines the development environment, which users can easily install for themselves, e.g.:
conda env create -f ./environment.yml
This way, we don't have to maintain a list of dependencies in the documentation itself. We would still need to have a yaml file for the development environment and another for the conda recipe, as far as I can tell.
Second, we should install the package to our own test environment in edit mode with:
python -m pip install -e .
This way, we can just do conda list, etc. instead of python -m conda list, but we can still edit the code (in the repo) and have it update in the conda environment (which just points to the repo, as I understand it).
It would be nice if Step.add_model_as_input() could be called from __init__() like other methods (e.g. Step.add_input_file()) instead of in Step.setup(). That would save a lot of steps from needing a setup() method at all.
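One way to make that possible is to record the request at construction time and resolve it later, once the model path is known. This is a sketch with hypothetical names, not the actual compass Step class:

```python
# Sketch: defer resolving the model executable until setup time so
# add_model_as_input() can be called from __init__().
class Step:
    def __init__(self):
        self.inputs = []
        self._model_requested = False

    def add_model_as_input(self):
        # record the intent now; the executable path isn't known yet
        self._model_requested = True

    def setup(self, config):
        # the default setup resolves deferred inputs, so most subclasses
        # would no longer need to override setup() at all
        if self._model_requested:
            self.inputs.append(config['model'])
```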
This line really means the compass path now, not mpas_model:
general.config.ocean:39:
mpas_model = FULL_PATH_TO_MPAS_MODEL_REPO
But it is only used in two places:
ocean/soma/32km/default/config_analysis.xml:3:
<add_link source_path="mpas_model" source="testing_and_setup/compass/ocean/soma/32km/default/analysis/check_particle_sampling.py" dest="check_particle_sampling.py"/>
ocean/ziso/20km/default/config_forward.xml:8:
<add_link source_path="mpas_model" source="testing_and_setup/compass/ocean/scripts/LIGHTparticles/make_particle_file.py" dest="make_particles.py"/>
so I think we can just get rid of that config flag completely, as we should be able to specify those paths in a different way. Also, the testing_and_setup/compass path is incorrect now anyway.
In the https://github.com/MPAS-Dev/compass/tree/legacy format, it is obvious to the user that the run steps are contained in run.py, and it is easy to copy from that script to the command line, or to only run parts of it.
In this new setup, it is not obvious how to see and alter the steps that occur in the command python -m compass run.
See conversation at #76
The seemingly eternal battle to get conda-forge MPI working on supported machines continues. Anvil is refusing to run ESMF_RegridWeightGen in parallel. The error message isn't particularly important; it's clearly just that Anvil doesn't like the conda-forge mpirun.
Right now, any tests run in compass must first generate their own mesh and initial condition files. This is fine for QU240, but for other global resolutions the init step can be lengthy, so it is more practical to point to cached initial condition files on disk. We could have a cached location on each supported machine (most already do), and compass could download files if they are not already there. We could point to E3SM files already here:
https://web.lcrc.anl.gov/public/e3sm/inputdata/ocn/mpas-o/
or add files here:
https://web.lcrc.anl.gov/public/e3sm/mpas_standalonedata/mpas-ocean/
My motivation here is performance testing at varying resolutions, but this could be useful for any test or stand-alone simulation at resolutions higher than QU240.
This may be the case in other global_ocean test cases as well.
The offending step is ocean/global_ocean/QU240/init/base_mesh. The links that shouldn't be there are:
arctic_high_res_region.geojson
north_mid_res_region.geojson
These two test cases fail with gnu and intel debug. They pass with optimized:
ocean/ice_shelf_2d/5km/z-star/restart_test
test execution: ERROR
see: case_outputs/ocean_ice_shelf_2d_5km_z-star_restart_test.log
ocean/ice_shelf_2d/5km/z-level/restart_test
test execution: ERROR
see: case_outputs/ocean_ice_shelf_2d_5km_z-level_restart_test.log
Using compass f5dcaf9 and E3SM 0590a25151
This issue replaces MPAS-Dev/MPAS-Model#534
The file ocean/global_ocean/template_high_frequency_output.xml should be removed; it is currently not used.
That one was added by mistake. This one is used and correct:
ocean/templates/analysis_members/high_frequency_output.xml
@mark-petersen will remove that other one when he gets a chance.
It appears that the MLD analysis member was turned on in legacy compass in PR MPAS-Dev/MPAS-Model#445 and set to output results every time step. I presume this was a debugging step for that particular PR that got committed by mistake. We inherited this in the current compass:
compass/compass/ocean/tests/global_ocean/namelist.forward
Lines 19 to 20 in 36166e7
This creates some enormous output files and likely slows down performance in compass fairly substantially.
Add a run of a QU240 test case (e.g. performance) using the unculled base mesh. This will ensure that MPAS-Ocean still works when land is masked using maxLevelCell <= 0.
The test could run as normal, then map the non-land cells to the same indices as the culled version to perform bit-for-bit checks.
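The mapping step might look like the following sketch, under the assumption that culling preserves cell order so that selecting ocean cells reproduces the culled ordering (the function name is illustrative):

```python
import numpy as np

# Sketch: extract values on non-land cells of the unculled mesh in the
# order they would have on the culled mesh.
def extract_culled_order(field_unculled, maxLevelCell):
    # cells with maxLevelCell <= 0 are land on the unculled mesh
    ocean_mask = maxLevelCell > 0
    return field_unculled[ocean_mask]
```

A bit-for-bit check would then compare this extracted array directly against the field from the culled run.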
This was prompted from a discussion here:
MPAS-Dev/MPAS-Model#804 (comment)
I am seeing failures in the following:
EC30to60/PHC/files_for_e3sm
ECwISC30to60/PHC/files_for_e3sm
* Running diagnostics_files
Failed
Traceback (most recent call last):
File "/home/ac.xylar/chrysalis/miniconda3/envs/fix_ec30to60_dyn/bin/compass", line 33, in <module>
sys.exit(load_entry_point('compass', 'console_scripts', 'compass')())
File "/gpfs/fs1/home/ac.xylar/mpas-work/compass/fix_ec30to60_dynamical_adjustment/compass/__main__.py", line 62, in main
commands[args.command]()
File "/gpfs/fs1/home/ac.xylar/mpas-work/compass/fix_ec30to60_dynamical_adjustment/compass/run.py", line 272, in main
run_test_case(args.steps, args.no_steps)
File "/gpfs/fs1/home/ac.xylar/mpas-work/compass/fix_ec30to60_dynamical_adjustment/compass/run.py", line 226, in run_test_case
test_case.run()
File "/gpfs/fs1/home/ac.xylar/mpas-work/compass/fix_ec30to60_dynamical_adjustment/compass/testcase.py", line 166, in run
self._run_step(step, new_log_file)
File "/gpfs/fs1/home/ac.xylar/mpas-work/compass/fix_ec30to60_dynamical_adjustment/compass/testcase.py", line 273, in _run_step
step.run()
File "/gpfs/fs1/home/ac.xylar/mpas-work/compass/fix_ec30to60_dynamical_adjustment/compass/ocean/tests/global_ocean/files_for_e3sm/diagnostics_files.py", line 66, in run
make_diagnostics_files(self.config, self.logger, mesh_short_name,
File "/gpfs/fs1/home/ac.xylar/mpas-work/compass/fix_ec30to60_dynamical_adjustment/compass/ocean/tests/global_ocean/files_for_e3sm/diagnostics_files.py", line 102, in make_diagnostics_files
_make_moc_masks(mesh_short_name, logger, cores)
File "/gpfs/fs1/home/ac.xylar/mpas-work/compass/fix_ec30to60_dynamical_adjustment/compass/ocean/tests/global_ocean/files_for_e3sm/diagnostics_files.py", line 302, in _make_moc_masks
dsMasksAndTransects = add_moc_southern_boundary_transects(
File "/home/ac.xylar/chrysalis/miniconda3/envs/fix_ec30to60_dyn/lib/python3.8/site-packages/mpas_tools/ocean/moc.py", line 103, in add_moc_southern_boundary_transects
_extract_southern_boundary(dsMesh, dsMask, latBuffer=3.*numpy.pi/180.,
File "/home/ac.xylar/chrysalis/miniconda3/envs/fix_ec30to60_dyn/lib/python3.8/site-packages/mpas_tools/ocean/moc.py", line 179, in _extract_southern_boundary
_get_edge_sequence_on_boundary(startEdge, edgeSign, edgesOnVertex,
File "/home/ac.xylar/chrysalis/miniconda3/envs/fix_ec30to60_dyn/lib/python3.8/site-packages/mpas_tools/ocean/moc.py", line 372, in _get_edge_sequence_on_boundary
assert(nextEdge != -1)
AssertionError
This would save some of the trouble of having to use importlib.resources in individual steps, and it fits the style of URLs and databases already handled by this method.
I don't yet know what the cause is or when this behavior started. There have been updates to libnetcdf and xarray, among other related libraries, since the last time I tried to run the SOwISC12to60 test cases. My guess is that it's something specific to the NC_FORMAT_64BIT_OFFSET format that we continue to have to use because of E3SM standards.
So far, I have only seen the behavior during the SOwISC12to60/PHC/init/ssh_adjustment step on Chrysalis. I don't believe other test cases we currently have will face this problem, either because they don't use xarray to write large files (e.g. WC14) or because they don't have meshes this large.