metoffice / inline_model_metrics

A tenten project Rose app to run TempestExtremes inline with a climate model.

Home Page: https://inline-model-metrics.readthedocs.io/

License: BSD 3-Clause "New" or "Revised" License

Python 100.00%

inline_model_metrics's People

Contributors

jonseddon, robertsmalcolm


inline_model_metrics's Issues

Specify the input data

Need to figure out how to specify the input data, and hence configure the IO part of the detect step
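
A minimal sketch of one way this could look, assuming the chosen files end up being passed to the TempestExtremes detect step via a list file; the helper name and the --in_data_list mechanism below are illustrative only, not a decided design:

import subprocess
import tempfile


def run_detect(input_files, extra_args=None):
    # Illustrative only: write the selected netCDF files to a list file and
    # hand it to the detect step. How the files are chosen (stream, period,
    # resolution) is exactly what this issue needs to decide.
    extra_args = extra_args or []
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as handle:
        handle.write("\n".join(input_files) + "\n")
        list_file = handle.name
    subprocess.run(["DetectNodes", "--in_data_list", list_file] + extra_args,
                   check=True)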

Check performance

Once the code has settled, look at the performance, perhaps at several different resolutions. All inline activities on the critical path must complete within the time that it takes the model to run, so that we don't end up with a backlog of data on disk.

Read the Docs: Iris is pinned to 2.4 because of the lack of a recent Afterburner conda-forge package

In the Read the Docs conda environment in .readthedocs_conda_env.yml, Iris has been pinned to version 2.4. This is because the most recent version of metoffice-afterburner available on conda-forge is 1.3.2b1.post4, which is limited to Iris < 3.0. Once a more recent version of Afterburner has been released to conda-forge, this Iris pin for RtD should be removed.
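
For reference, the relevant part of .readthedocs_conda_env.yml looks roughly like this (an illustrative excerpt, not the full file):

channels:
  - conda-forge
dependencies:
  # Pinned because metoffice-afterburner 1.3.2b1.post4 on conda-forge
  # requires Iris < 3.0; remove this pin once a newer Afterburner is released.
  - iris=2.4
  - metoffice-afterburner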

Is it possible to remove the hardcoded filenames?

The structure of the filenames is hardcoded into the Python code. To allow use with different ways of outputting files from the UM, or even with different climate models, could the filename structure be specified in rose-app.conf instead?
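
One possible shape for such a setting, purely as a sketch (the section name, option name and template placeholders below are hypothetical, not part of the current app):

# Hypothetical addition to rose-app.conf
[common]
input_filename_template={runid}a.p{stream}{date}.nc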

Conversion period in netCDF files does not equal model resubmission period

The netCDF conversion runs from the pt file, and I think the conversion happens at the frequency of that file's renewal (currently one month). This can differ from the model resubmission period, and it then becomes difficult to tie the model timestep to the data. It would be better if the pt file were renewed on the same period as the resubmission, but I don't know how to automate that.

Parallelisation of tracking

Running all recipes in series at high resolution will take too long, so at some point we need to work out the best way to parallelise (at least the identification step), noting that multiple read access to the same file may cause problems.
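
A minimal sketch of one possible approach, using a process pool so that the detection for several files runs at once; the command-line arguments are placeholders, and each worker is given its own input file to avoid concurrent reads of the same netCDF file:

import subprocess
from concurrent.futures import ProcessPoolExecutor


def detect_one(input_file, output_file):
    # Illustrative: run the TempestExtremes detect step on a single file.
    subprocess.run(["DetectNodes", "--in_data", input_file, "--out", output_file],
                   check=True)


def detect_all(pairs, max_workers=4):
    # pairs is an iterable of (input_file, output_file) tuples.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(detect_one, inp, out) for inp, out in pairs]
        for future in futures:
            future.result()  # re-raise any failure from a worker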

Separate tracking and file handling logic

At the moment there's a single class that does everything. To isolate the tracking from the filename handling and other logic, an alternative approach could be:

One class that is passed the time period, filenames, etc., and that then calls TempestExtremes.

An outer class, specific to the UM, that handles the filename logic and decides which period to track.

An optional intermediate class could perform the regridding and generate the files required by the first class, which calls TempestExtremes.
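
A rough sketch of how that split might look; all class and method names below are hypothetical:

class TempestTracker:
    """Inner class: given explicit filenames and a time period, call
    TempestExtremes. Knows nothing about the UM or its file naming."""

    def __init__(self, start, end, input_files, output_file):
        self.start = start
        self.end = end
        self.input_files = input_files
        self.output_file = output_file

    def run(self):
        # Call DetectNodes/StitchNodes here (omitted in this sketch).
        ...


class UMTrackingDriver:
    """Outer, UM-specific class: works out which period to track and which
    files exist, then hands plain filenames to TempestTracker. The optional
    regridding class could sit between these two."""

    def run(self):
        start, end = self.period_to_track()
        files = self.find_input_files(start, end)
        TempestTracker(start, end, files, self.output_path(start, end)).run()

    def period_to_track(self):
        # The resubmission/reinitialisation logic that is currently mixed in
        # with the tracking code would live here.
        raise NotImplementedError

    def find_input_files(self, start, end):
        raise NotImplementedError

    def output_path(self, start, end):
        raise NotImplementedError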

Add updating instructions

In the Installing section of the users' guide, add a description of how to update to the latest version.

Modify tracking one step behind model using suite.rc

To run the tracking when the netCDF files are known to be available, there is currently some logic in suite.rc:

postproc[+{{EXPT_RESUB}}] => tracking

This is undesirable because, among other reasons, the log files have been deleted by then.

It would be better to have logic in the Python code that looks at the model resubmission period and the stream reinitialisation period, and so knows which files have become available and need to be tracked.

An approach similar to postproc, which creates .arch files, could be used: write a marker file (perhaps with a .track suffix) to record which files have been tracked.
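
A minimal sketch of the marker-file idea; the .track suffix and the helper names are hypothetical:

from pathlib import Path


def needs_tracking(data_file):
    # A data file needs tracking if no corresponding marker file exists yet.
    return not Path(str(data_file) + ".track").exists()


def mark_tracked(data_file):
    # Written once tracking of this file has completed successfully,
    # mirroring the .arch files that postproc creates.
    Path(str(data_file) + ".track").touch()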

StitchNodes --format argument deprecated

StitchNodes currently gives the warning:

WARNING: --format is deprecated.  Consider using --in_fmt and --in_connect instead

Consider changing the call to use these newer options.
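
As a sketch, if the call is built as a list of arguments in the Python code, the change would be along these lines (the column list and the other arguments are placeholders, not the app's actual StitchNodes call):

import subprocess

in_fmt = "lon,lat,slp"  # placeholder column list

# Old form, which triggers the deprecation warning:
#   ["StitchNodes", ..., "--format", in_fmt]
# Newer form:
cmd = ["StitchNodes", "--in", "detected_nodes.txt", "--out", "tracks.txt",
       "--in_fmt", in_fmt]
subprocess.run(cmd, check=True)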

Read the Docs gets tempest-helper from PyPI

In .readthedocs_conda_env.yml, tempest-helper is installed through pip from PyPI rather than through conda, because RtD's conda install process was giving the error:

ResolvePackageNotFound: 
  - tempest-helper

tempest-helper is available from conda-forge, and the same conda environment did not give this error when run locally. Perhaps RtD was using a cached copy of the conda-forge channel, since tempest-helper had been added to it relatively recently.

At some point the RtD conda environment should be updated to see if tempest-helper can be installed from conda rather than PyPI.
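
The current workaround and the eventual fix would look roughly like this in .readthedocs_conda_env.yml (an illustrative excerpt, not the full file):

dependencies:
  - pip
  - pip:
      # Current workaround: install from PyPI to avoid ResolvePackageNotFound.
      - tempest-helper
# Once the RtD cache of conda-forge has caught up, the pip entries above
# should be replaced by a plain conda-forge dependency:
#   - tempest-helper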
