reactionmechanismgenerator / rmg-tests
Continuous Integration Testing Platform for RMG-Py
HEAD is now at 9d3c4df... Deprecate $CONDA_ENV_PATH in .travis.yml
Fetching package metadata .........
Solving package specifications: ..........
Fetching packages ...
libgcc-5.2.0-0 100% |################################| Time: 0:00:00 36.15 MB/s
glib-2.43.0-0. 100% |################################| Time: 0:00:00 48.87 MB/s
pandas-0.18.1- 100% |################################| Time: 0:00:00 50.73 MB/s
harfbuzz-0.9.3 100% |################################| Time: 0:00:00 22.74 MB/s
pango-1.39.0-0 100% |################################| Time: 0:00:00 30.07 MB/s
coolprop-6.0.1 100% |################################| Time: 0:00:00 39.94 MB/s
graphviz-2.38. 100% |################################| Time: 0:00:00 36.49 MB/s
Extracting packages ...
[ COMPLETE ]|###################################################| 100%
Linking packages ...
[ COMPLETE ]|###################################################| 100%
No psutil available.
To proceed, please conda install psutil
#
# To activate this environment, use:
# $ source activate rmg_env
#
# To deactivate this environment, use:
# $ source deactivate
#
test version of RMG: /home/travis/build/ReactionMechanismGenerator/code/tested/RMG-Py
could not find environment: tested
I believe the following happened: in the install.sh of RMG-tests, we create an anaconda environment tested from scratch by fetching the dependencies defined within the RMG-Py repository. For some unknown reason, the psutil dependency could not be fetched, resulting in an incomplete creation of the tested environment, thereby crashing the Travis build.
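As a possible mitigation, here is a minimal sketch (under assumptions; this is not the current install.sh behavior, and conda run needs a reasonably recent conda) of verifying the environment right after creating it, so the build fails fast with a clear message:

```python
# Minimal sketch: check that each required package imports inside the freshly
# created conda environment before the rest of the build proceeds.
import subprocess
import sys

REQUIRED = ["psutil", "numpy", "scipy"]  # hypothetical subset of RMG-Py's dependencies

def verify_env(env_name):
    for pkg in REQUIRED:
        result = subprocess.run(
            ["conda", "run", "-n", env_name, "python", "-c", "import " + pkg],
            capture_output=True,
        )
        if result.returncode != 0:
            print("Package {} is missing from environment {}".format(pkg, env_name),
                  file=sys.stderr)
            return False
    return True

if __name__ == "__main__":
    if not verify_env("tested"):
        sys.exit(1)  # fail fast instead of crashing later in the Travis build
```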
All the P-dep simulation cases (e.g., eg5) run during the Travis CI build suffer from the same type of error:
Test model folder: /home/travis/build/ReactionMechanismGenerator/RMG-tests/testing/testmodel/eg5
Traceback (most recent call last):
File "/home/travis/build/ReactionMechanismGenerator/code/tested/RMG-Py/scripts/checkModels.py", line 288, in <module>
main()
File "/home/travis/build/ReactionMechanismGenerator/code/tested/RMG-Py/scripts/checkModels.py", line 80, in main
check(name, benchChemkin, benchSpeciesDict, testChemkin, testSpeciesDict)
File "/home/travis/build/ReactionMechanismGenerator/code/tested/RMG-Py/scripts/checkModels.py", line 98, in check
errorReactions = checkReactions(commonReactions, uniqueReactionsTest, uniqueReactionsOrig)
File "/home/travis/build/ReactionMechanismGenerator/code/tested/RMG-Py/scripts/checkModels.py", line 186, in checkReactions
[printReaction(rxn) for rxn in uniqueReactionsTest]
File "/home/travis/build/ReactionMechanismGenerator/code/tested/RMG-Py/scripts/checkModels.py", line 263, in printReaction
logger.error('rxn: {}\t\tfamily: {}'.format(rxn, rxn.family))
AttributeError: 'PDepReaction' object has no attribute 'family'
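A minimal sketch of a defensive fix, assuming the logger used in checkModels.py; PDepReaction objects come from pressure-dependent networks rather than reaction families, so printReaction should fall back gracefully instead of raising AttributeError:

```python
# Minimal sketch: use getattr so reactions without a `family` attribute
# (e.g. PDepReaction) are reported instead of crashing the comparison.
import logging

logger = logging.getLogger('checkModels')

def print_reaction(rxn):
    family = getattr(rxn, 'family', None)
    if family is not None:
        logger.error('rxn: {}\t\tfamily: {}'.format(rxn, family))
    else:
        logger.error('rxn: {}\t\t(pressure-dependent network reaction)'.format(rxn))
```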
It is not clear what all the examples we have in RMG-tests are supposed to test. We should probably review what features we want each job to test. Here is a list of the ones we are currently running on Travis:
- eg1: ethane pyrolysis, no p-dep
- eg3: liquid octane oxidation, octane as solvent, very light pruning
- eg5: n-heptane pyrolysis with p-dep
- eg6: ethane oxidation with p-dep
- eg7: ethane pyrolysis, no p-dep, tighter addToCore tolerance, but looser finish criteria than eg1
- MCH: methyl cyclohexane oxidation, no p-dep
- NC: methyl amine oxidation, no p-dep
- solvent_hexane: liquid hexane oxidation, hexane as solvent
- methane: methane oxidation, no p-dep

Although I don't know the story behind all of these, it seems the pairs eg1/eg7, eg5/eg6, eg3/solvent_hexane, and methane/eg6 have some degree of unnecessary redundancy.
Features currently tested:
Features not currently tested sufficiently:
While we are asleep, we should use that time to trigger more extensive builds and tests.
After running RMG-tests, if I try to delete the repository with the dangerous rm -r *, I get an error that some files are write-protected. I can get rid of them with the more dangerous rm -rf *. Ideally, it should be possible to clean up safely without force-removing protected files.
Maybe having a script for cleaning the directory safely would be helpful in avoiding removal of important files.
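For illustration, a minimal sketch of such a cleanup helper, assuming the write-protected files are ordinary read-only files left behind by a run; the 'testing' directory name is a guess:

```python
# Minimal sketch: clear the read-only bit before deleting, so cleanup succeeds
# without resorting to a blanket rm -rf.
import os
import shutil
import stat

def _make_writable(func, path, exc_info):
    """shutil.rmtree onerror handler: clear the read-only bit and retry."""
    os.chmod(path, stat.S_IWRITE)
    func(path)

def clean_directory(path):
    for entry in os.listdir(path):
        target = os.path.join(path, entry)
        if os.path.isdir(target):
            shutil.rmtree(target, onerror=_make_writable)
        else:
            os.chmod(target, stat.S_IWRITE)
            os.remove(target)

clean_directory('testing')  # hypothetical output directory
```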
I ran a local job and I think it had an error, since it didn't create all the files in data_dir. However, the slurm.out file didn't state where the error occurred. It would be helpful for the slurm output to mention where the error likely occurred; increased logging may help. If there is already a way to get logging at the source of the error, documenting it would be useful.
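For illustration, a minimal sketch of step-level logging in the job driver so that slurm.out at least names the step that failed; the step names and commands here are hypothetical:

```python
# Minimal sketch: log each step before running it and log a clear error with
# the exit code on failure, so the slurm output points at the failing step.
import logging
import subprocess
import sys

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

STEPS = [
    ("generate model", ["python", "rmg.py", "input.py"]),   # hypothetical
    ("check model", ["python", "scripts/checkModels.py"]),  # hypothetical
]

for name, cmd in STEPS:
    logging.info("starting step: %s", name)
    result = subprocess.run(cmd)
    if result.returncode != 0:
        logging.error("step '%s' failed with exit code %d", name, result.returncode)
        sys.exit(result.returncode)
```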
RMG-Py has already moved to an updated testing system that no longer uses this repository. When ReactionMechanismGenerator/RMG-database#635 is merged on RMG-database, we will no longer have any use for this repository and should archive it to indicate to other developers and anyone else who happens across it that it is out of use.
Tracking:
These warnings pop up in recent Travis builds. I believe these warnings refer to the calls in canteraModel.
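Assuming these are the MatplotlibDeprecationWarning messages visible in the MCH log further down, the likely culprit is a string 'on'/'off' being passed where matplotlib now expects a real boolean. A hedged sketch of the fix; the plotting calls are illustrative, not the actual canteraModel code:

```python
# Hedged sketch: pass an actual boolean instead of the deprecated 'on'/'off'.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
ax.grid(True)  # instead of the deprecated ax.grid('on')
fig.savefig('example.png')
```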
The MCH example is timing out most runs right now:
The command "./run.sh examples/eg1 no" exited with 0.
571.08s$ ./run.sh examples/MCH no
Running examples/MCH test case
Benchmark code directory: /home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py
Running benchmark job
/home/travis/miniconda/envs/benchmark/lib/python2.7/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead.
warnings.warn(message, mplDeprecation, stacklevel=1)
Benchmark job timed out
The command "./run.sh examples/MCH no" exited with 1.
cache.2
store build cache
0.01s
31.45s
change detected (content changed, file is created, or file is deleted):
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-database/.git/ORIG_HEAD
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/bridge/cclib2biopython.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/bridge/cclib2openbabel.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/bridge/__init__.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/__init__.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/method/calculationmethod.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/method/cda.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/method/cspa.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/method/dens
...
changes detected, packing new archive
.
.
.
.
uploading archive
after_failure
2.77s$ . ./after_failure.sh
Executing after script tasks...
GIT_NAME: Travis Deploy
GIT_EMAIL: [email protected]
To https://github.com/ReactionMechanismGenerator/RMG-tests.git
- [deleted] rmgdb-rulestotraining
Done. Your build exited with 1.
We should probably fix it or remove it.
The following error occurs in a Travis CI build of a branch of RMG-database, after it seemingly successfully installed the benchmark conda environment.
SHA1: fd954f1528cf010e97fd56c4970601e571bc883d
Fetching package metadata: ....
Fetching package metadata: ....
src_prefix: '/home/travis/miniconda/envs/benchmark'
dst_prefix: '/home/travis/miniconda/envs/tested'
Packages: 67
Files: 0
An unexpected error has occurred, please consider sending the
following traceback to the conda GitHub issue tracker at:
https://github.com/conda/conda/issues
Include the output of the command 'conda info' in your report.
Traceback (most recent call last):
File "/home/travis/miniconda/bin/conda", line 6, in <module>
sys.exit(main())
File "/home/travis/miniconda/lib/python2.7/site-packages/conda/cli/main.py", line 139, in main
args_func(args, p)
File "/home/travis/miniconda/lib/python2.7/site-packages/conda/cli/main.py", line 146, in args_func
args.func(args, p)
File "/home/travis/miniconda/lib/python2.7/site-packages/conda/cli/main_create.py", line 49, in execute
install.install(args, parser, 'create')
File "/home/travis/miniconda/lib/python2.7/site-packages/conda/cli/install.py", line 247, in install
clone(args.clone, prefix, json=args.json, quiet=args.quiet, index=index)
File "/home/travis/miniconda/lib/python2.7/site-packages/conda/cli/install.py", line 84, in clone
quiet=quiet, index=index)
File "/home/travis/miniconda/lib/python2.7/site-packages/conda/misc.py", line 246, in clone_env
sorted_dists = r.dependency_sort(dists)
File "/home/travis/miniconda/lib/python2.7/site-packages/conda/resolve.py", line 729, in dependency_sort
depends = lookup(value)
File "/home/travis/miniconda/lib/python2.7/site-packages/conda/resolve.py", line 724, in lookup
return set(ms.name for ms in self.ms_depends(value + '.tar.bz2'))
File "/home/travis/miniconda/lib/python2.7/site-packages/conda/resolve.py", line 573, in ms_depends
deps = [MatchSpec(d) for d in self.index[fn].get('depends', [])]
KeyError: 'pydqed-1.0.1-py27_0.tar.bz2'
I think the intended purpose of RMG-tests is similar to unit tests, where each one runs independently. Currently, if one example fails, none of the examples after it are run, which can hide multiple errors. Ideally, RMG-tests should report an error message and move on to the next example instead of quitting.
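For illustration, a minimal sketch of a driver that runs every example and reports all failures at the end; the example list and run.sh invocation are taken from the Travis logs above:

```python
# Minimal sketch: run each example independently, collect failures, and only
# fail the overall build at the end, so one broken example cannot hide others.
import subprocess

EXAMPLES = ["eg1", "eg3", "eg5", "eg6", "eg7", "MCH", "NC", "solvent_hexane", "methane"]

failures = []
for example in EXAMPLES:
    result = subprocess.run(["./run.sh", "examples/{}".format(example), "no"])
    if result.returncode != 0:
        print("Example {} failed with exit code {}".format(example, result.returncode))
        failures.append(example)

if failures:
    raise SystemExit("{} example(s) failed: {}".format(len(failures), ", ".join(failures)))
```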
It looks like the Travis service was finally shut down.
https://www.travis-ci.org/github/ReactionMechanismGenerator/RMG-tests
We have had no regression tests in three weeks.
It is time to rewrite them on GitHub Actions.
Migrate RMG-tests into RMG-Py and deprecate this repo.
(1) if CI passes on RMG-Py, run the regression testing there by extending the GitHub Actions CI.yml
(2) create our 'benchmark' model on a nightly basis and use GitHub artifacts to reuse it as the comparison throughout the day, without having to recreate it every time we want to run regression
(3) speed up the build by only building the environment once
(4) enable forked repos to work
The unit testing has been moved into the GitHub actions in the RMG-Py repository, but the regression testing remains here. We should completely migrate this to also be part of the GitHub actions and deprecate RMG-tests.
The biggest limitation of the current workflow is that the regression testing will always fail on forked repos. This means that if we want anyone outside of our group to contribute, they need to be added to the RMG user group and create a branch on RMG-Py itself. This effectively means we cannot get other people to contribute.
Moving will also decrease the build time. Right now, we (1) build the environment and install RMG to run the unit tests, and then (2) repeat this to run the regression tests (also using the slower conda instead of mamba).
We can achieve all the functionality of this repo in GitHub Actions:
This would be a huge boost to maintainability: all of the testing (unit tests + regression) would be consolidated in one actions file.
I am requesting some input on feasibility as well as my understanding of what this repo does. Please let me know if there is any part of the RMG-tests workflow I have missed or misunderstood. If there's something special about doing the regression testing here rather than in RMG-Py, I would like to discuss it.
Tagging some of you who show up most often on the git blame for the relevant scripts:
@sevyharris
@KEHANG
@Laxzal
@rwest
@nickvandewiele
The methane example has nitrogen as a reactive species, so it ends up creating species like N(=NN=[N])N=NN=NN=[N] and N([N]N=NN=[N])(N=NN=NN=NN=[N])N=[N].
If nitrogen was intentionally set as a reactive species, then we probably need to set a species constraint on the maximum number of nitrogen atoms.
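A hedged sketch of what that constraint could look like in the example's RMG-Py input file; the limit of 2 is a guess, and the rest of the input file is omitted:

```python
# Hedged sketch: cap nitrogen in generated species via the species-constraint
# block of the RMG-Py input file; maximumNitrogenAtoms=2 is an illustrative guess.
generatedSpeciesConstraints(
    maximumNitrogenAtoms=2,
)
```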
I think one lesson from this is that the Travis diff models may need to check (1) time to converge and (2) model core and edge changes between each iteration, extracted from RMG.log into some sort of plot or analysis, to make sure we don't stall like this again.
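A hedged sketch of that log analysis; the exact wording of the RMG.log lines is an assumption and may differ between RMG versions:

```python
# Hedged sketch: track core growth per iteration from RMG.log so a stalled job
# is visible; the regex assumes log lines like
# "The model core has 25 species and 60 reactions" (edge lines are analogous).
import re

CORE_RE = re.compile(r'The model core has (\d+) species and (\d+) reactions')

core_sizes = []
with open('RMG.log') as log:  # hypothetical path to the run's log file
    for line in log:
        match = CORE_RE.search(line)
        if match:
            core_sizes.append((int(match.group(1)), int(match.group(2))))

# A stall shows up as consecutive iterations with no growth in these counts.
for i in range(1, len(core_sizes)):
    if core_sizes[i] == core_sizes[i - 1]:
        print('No core growth between iterations {} and {}'.format(i - 1, i))
```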
Currently we parse the RMG-tests branch name in order to determine the correct RMG-Py/RMG-database branch to test. The parsing method currently splits the name on hyphens, which causes failures if the target branch name itself contains hyphens.
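A minimal sketch of a more robust parse, assuming branch names take the form prefix-target (like the rmgdb-rulestotraining branch visible in the Travis log above); splitting on only the first hyphen leaves any hyphens in the target branch intact:

```python
# Minimal sketch: split the RMG-tests branch name on the FIRST hyphen only,
# so hyphens inside the RMG-Py/RMG-database branch name survive.
def parse_branch(name):
    prefix, _, target = name.partition('-')
    return prefix, target

assert parse_branch('rmgdb-rules-to-training') == ('rmgdb', 'rules-to-training')
```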
It refers to the directory RMG-tests/testing/check/, which doesn't exist.
It looks like the messages from RMG-tests refer to the wrong model. In build 952, where I fix the R_Addition_COm family, the Travis build states that the "original model" has new species and reactions associated with the R_Addition_COm family, instead of the test model.
The CI tests currently fail with the following error message:
Running benchmark job
Traceback (most recent call last):
File "/usr/share/miniconda/envs/benchmark/lib/python3.7/site-packages/julia/pseudo_python_cli.py", line 308, in main
python(**vars(ns))
File "/usr/share/miniconda/envs/benchmark/lib/python3.7/site-packages/julia/pseudo_python_cli.py", line 59, in python
scope = runpy.run_path(script, run_name="__main__")
File "/usr/share/miniconda/envs/benchmark/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/usr/share/miniconda/envs/benchmark/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/share/miniconda/envs/benchmark/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmg.py", line 118, in <module>
main()
File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmg.py", line 112, in main
rmg.execute(**kwargs)
File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmgpy/rmg/main.py", line 707, in execute
self.initialize(**kwargs)
File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmgpy/rmg/main.py", line 519, in initialize
self.reaction_model.add_species_to_edge(spec)
File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmgpy/rmg/model.py", line 1146, in add_species_to_edge
self.edge.phase_system.phases["Default"].add_species(spec)
File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmgpy/rmg/reactors.py", line 272, in add_species
spec = to_rms(spc)
File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmgpy/rmg/reactors.py", line 516, in to_rms
return rms.Species(obj.label, obj.index, "", "", "", thermo, atomnums, bondnum, diff, rad, obj.molecule[0].multiplicity-1, obj.molecular_weight.value_si)
RuntimeError: <PyCall.jlwrap (in a Julia function called from Python)
JULIA: MethodError: no method matching ReactionMechanismSimulator.Species(::String, ::Int64, ::String, ::String, ::String, ::ReactionMechanismSimulator.NASA{ReactionMechanismSimulator.EmptyThermoUncertainty}, ::Dict{Any, Any}, ::Int64, ::ReactionMechanismSimulator.StokesDiffusivity{Float64}, ::Float64, ::Int64, ::Float64)
Closest candidates are:
ReactionMechanismSimulator.Species(::Any, ::Any, ::Any, ::Any, ::Any, ::T, ::Any, ::Any, ::N, ::Any, ::Any, ::Any, !Matched::N1, !Matched::N2) where {T<:ReactionMechanismSimulator.AbstractThermo, N<:ReactionMechanismSimulator.AbstractDiffusivity, N1<:ReactionMechanismSimulator.AbstractHenryLawConstant, N2<:ReactionMechanismSimulator.AbstractLiquidVolumetricMassTransferCoefficient} at /usr/share/miniconda/envs/benchmark/share/julia/site/packages/Parameters/MK0O4/src/Parameters.jl:526
Stacktrace:
[1] invokelatest(::Any, ::Any, ::Vararg{Any, N} where N; kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ Base ./essentials.jl:708
[2] invokelatest(::Any, ::Any, ::Vararg{Any, N} where N)
@ Base ./essentials.jl:706
[3] _pyjlwrap_call(f::Type, args_::Ptr{PyCall.PyObject_struct}, kw_::Ptr{PyCall.PyObject_struct})
@ PyCall /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/callback.jl:28
[4] pyjlwrap_call(self_::Ptr{PyCall.PyObject_struct}, args_::Ptr{PyCall.PyObject_struct}, kw_::Ptr{PyCall.PyObject_struct})
@ PyCall /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/callback.jl:44
[5] macro expansion
@ /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/exception.jl:95 [inlined]
[6] #107
@ /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/pyfncall.jl:43 [inlined]
[7] disable_sigint
@ ./c.jl:458 [inlined]
[8] __pycall!
@ /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/pyfncall.jl:42 [inlined]
[9] _pycall!(ret::PyObject, o::PyObject, args::Tuple{Vector{String}}, nargs::Int64, kw::Ptr{Nothing})
@ PyCall /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/pyfncall.jl:29
[10] _pycall!
@ /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/pyfncall.jl:11 [inlined]
[11] #_#114
@ /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/pyfncall.jl:86 [inlined]
[12] (::PyObject)(args::Vector{String})
@ PyCall /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/pyfncall.jl:86
[13] top-level scope
@ none:4
[14] eval
@ ./boot.jl:360 [inlined]
[15] exec_options(opts::Base.JLOptions)
@ Base ./client.jl:261
[16] _start()
@ Base ./client.jl:485>
Benchmark job timed out
This occurs for every test case (aromatics, nitrogen, superminimal, etc.). Judging from the closest-candidate signature in the traceback, the rms.Species call in reactors.py is missing the two trailing arguments (a Henry's law constant and a liquid volumetric mass transfer coefficient), which suggests a version mismatch between RMG-Py and ReactionMechanismSimulator.
Recently, make eg6 identified the following differences between the benchmark model (RMG-Py v1.0.1 with RMG-database v1.0.2) and RMG-Py v1.0.3 for the edge reactions:
Non-identical kinetics!
tested: C=O(17) + C=O(17) => C1COO1(42)
original: C=O(17) + C=O(17) => C1COO1(42)
k(1bar)|300K |400K |500K |600K |800K |1000K |1500K |2000K
k(T): | -81.03| -59.91| -47.17| -38.62| -27.86| -21.32| -12.43| -7.87
k(T): | -43.29| -31.26| -24.02| -19.17| -13.03| -9.30| -4.27| -1.74
kinetics:
kinetics:
Non-identical kinetics!
tested: C=O(17) + C=O(17) => C1COO1(42)
original: C=O(17) + C=O(17) => C1COO1(42)
k(1bar)|300K |400K |500K |600K |800K |1000K |1500K |2000K
k(T): | -43.29| -31.26| -24.02| -19.17| -13.03| -9.30| -4.27| -1.74
k(T): | -81.03| -59.91| -47.17| -38.62| -27.86| -21.32| -12.43| -7.87
kinetics:
kinetics:
Looking back at the chemkin model, the two reactions are identified as duplicate reactions:
! Reaction index: Chemkin #96; RMG #152
! PDep reaction: PDepNetwork #25
CH2O(17)+CH2O(17)(+M)=>C1COO1(42)(+M) 1.000e+00 0.000 0.000
TCHEB/ 300.000 3000.000 /
PCHEB/ 0.001 98.692 /
CHEB/ 6 4/
CHEB/ -1.543e+01 -1.034e-02 -7.707e-03 -4.698e-03 /
CHEB/ 2.202e+01 8.282e-03 6.162e-03 3.744e-03 /
CHEB/ 1.758e-01 2.189e-04 1.716e-04 1.125e-04 /
CHEB/ 4.127e-02 1.905e-04 1.424e-04 8.714e-05 /
CHEB/ -4.568e-03 1.202e-04 8.996e-05 5.515e-05 /
CHEB/ -1.255e-02 6.705e-05 5.020e-05 3.079e-05 /
DUPLICATE
! Reaction index: Chemkin #100; RMG #158
! PDep reaction: PDepNetwork #26
CH2O(17)+CH2O(17)(+M)=>C1COO1(42)(+M) 1.000e+00 0.000 0.000
TCHEB/ 300.000 3000.000 /
PCHEB/ 0.001 98.692 /
CHEB/ 6 4/
CHEB/ -3.656e+01 -9.303e-03 -6.942e-03 -4.235e-03 /
CHEB/ 3.877e+01 6.608e-03 4.921e-03 2.993e-03 /
CHEB/ 3.886e-01 3.283e-04 2.503e-04 1.577e-04 /
CHEB/ 1.302e-01 2.733e-04 2.043e-04 1.250e-04 /
CHEB/ 3.979e-02 1.548e-04 1.159e-04 7.106e-05 /
CHEB/ 1.223e-02 8.269e-05 6.191e-05 3.796e-05 /
DUPLICATE
Hence, the identified differences are simply the result of the two duplicate reactions being compared in swapped order, not of a real discrepancy in the chemistry.
We should improve the way differences are detected, to avoid false positives like this one.
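For illustration, a minimal sketch of order-insensitive matching, assuming each reaction can be keyed by its equation string; kinetics_equal is a hypothetical comparison function standing in for whatever checkModels.py already uses:

```python
# Minimal sketch: instead of pairing the k-th duplicate in one model with the
# k-th in the other by position, compare the *sets* of kinetics per equation.
from collections import defaultdict

def group_by_equation(reactions):
    groups = defaultdict(list)
    for rxn in reactions:
        groups[str(rxn)].append(rxn)  # str(rxn) assumed to yield the equation
    return groups

def compare(test_rxns, orig_rxns, kinetics_equal):
    orig_groups = group_by_equation(orig_rxns)
    for eq, test_group in group_by_equation(test_rxns).items():
        unmatched = list(orig_groups.get(eq, []))
        for test_rxn in test_group:
            match = next((o for o in unmatched if kinetics_equal(test_rxn, o)), None)
            if match is not None:
                unmatched.remove(match)  # duplicates may match in any order
            else:
                print('Non-identical kinetics for {}'.format(eq))
```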
https://github.com/ReactionMechanismGenerator/DataDrivenEstimator.git
When I tried to reproduce the code here, I ran into a problem that looked like an authentication file (in config.cfg format) was missing for accessing the database. The error message is as follows:
pymongo.errors.ServerSelectionTimeoutError: rmg.mit.edu:27018: [Errno -2] Name or service not known, Timeout: 30s, Topology Description: <TopologyDescription id: 6388a7abbf21b29a7bc4e82e, topology_type: Single, servers: [<ServerDescription ('rmg.mit.edu', 27018) server_type: Unknown, rtt: None, error=AutoReconnect('rmg.mit.edu:27018: [Errno -2] Name or service not known')>]>