
Equinor's collection of subsurface reservoir modelling scripts

Home Page: https://equinor.github.io/subscript/

License: GNU General Public License v3.0


subscript's Introduction

subscript


subscript is Equinor's collection of scripts used for subsurface reservoir modelling.


Quick Reference


Installation

subscript can be installed via pip

pip install subscript

Usage

As a collection of utilities, subscript has many different invocations and use cases. A complete overview can be found in the documentation.

Note that some of these utilities may depend upon commercial third-party software.

Developing & Contributing

All contributions are welcome. Please see the Contributing document for more details and instructions for getting started.

Documentation

The documentation can be found at https://equinor.github.io/subscript

subscript's People

Contributors

alifbe, andreas-el, asnyv, audunsektnannr, berland, bkhegstad, dansava, eivindjahren, eivindsm, frodehk, jcrivenaes, jondequinor, larsevj, lilbe66, maninfez, mareco701, markusdregi, mferrera, mvasdan, oysteoh, oyvindeide, pinkwah, rnyb, sondreso, thomaram, tnatt, tralsos, vkip, wouterjdb, xjules


subscript's Issues

Support for endpoints in interp_relperm

The current version interpolates kr per saturation value, which gives weird and unwanted effects towards the endpoints if they differ between base/low/high. Implement rigorous endpoint scaling. The functionality should be implemented in pyscal and applied in interp_relperm.
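
A minimal sketch of what rigorous endpoint scaling could look like, assuming two curves given as plain numpy arrays (all names are illustrative; the real implementation would belong in pyscal): interpolate the saturation endpoints and the normalized curves separately, instead of interpolating kr at fixed saturation values.

import numpy as np

def interpolate_with_endpoint_scaling(sw1, kr1, sw2, kr2, t, n=50):
    # Hypothetical helper: interpolate between two relperm curves with a
    # parameter t in [0, 1], using endpoint scaling so that differing
    # endpoints (swl, 1 - sorw) do not smear out the interpolated curve.
    # sw1/sw2 are increasing numpy arrays, kr1/kr2 the corresponding kr values.
    swl1, swu1 = sw1[0], sw1[-1]
    swl2, swu2 = sw2[0], sw2[-1]

    # Interpolate the endpoints themselves
    swl = (1 - t) * swl1 + t * swl2
    swu = (1 - t) * swu1 + t * swu2

    # Evaluate both curves on a common normalized saturation axis
    sn = np.linspace(0, 1, n)
    krn1 = np.interp(sn, (sw1 - swl1) / (swu1 - swl1), kr1)
    krn2 = np.interp(sn, (sw2 - swl2) / (swu2 - swl2), kr2)

    # Interpolate kr in normalized space and map back to saturation
    krn = (1 - t) * krn1 + t * krn2
    sw = swl + sn * (swu - swl)
    return sw, krn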

[csv_stack] Merge resulting columns

If a user stacks more than one "type" of column (with the keyword 'all', for example), it has been requested that all the resulting value columns be squashed into one.

If the current output from csv_stack looks like this:

In [30]: df.head()
Out[30]: 
  Identifier        Date  RPR  Realization  WOPT  poro
0          1  2015-01-01  3.0          1.0   NaN   6.0
1          2  2015-01-01  4.0          1.0   NaN   6.0
2         A1  2015-01-01  NaN          1.0   1.0   6.0
3         A2  2015-01-01  NaN          1.0   2.0   6.0
4          1  2015-02-01  4.0          1.0   NaN   7.0

the columns can be squashed into this:

In [28]: df.head()
Out[28]: 
  Identifier        Date  Realization  poro Attribute  value
0          1  2015-01-01          1.0   6.0       RPR    3.0
1          2  2015-01-01          1.0   6.0       RPR    4.0
2         A1  2015-01-01          1.0   6.0      WOPT    1.0
3         A2  2015-01-01          1.0   6.0      WOPT    2.0
4          1  2015-02-01          1.0   7.0       RPR    4.0

ad-hoc code:

df = pd.read_csv('stacked.csv')
# Tag each row with the name of the column that holds its value
df.loc[~df.RPR.isna(), 'Attribute'] = 'RPR'
df.loc[~df.WOPT.isna(), 'Attribute'] = 'WOPT'
# Collapse the two value columns into one (sum() skips the NaN entries)
df['value'] = df[['RPR', 'WOPT']].sum(axis=1)
del df['RPR']
del df['WOPT']
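
A more general alternative to the ad-hoc snippet above is pandas.melt, which handles any number of stacked value columns (the identifier columns are taken from the example output):

import pandas as pd

df = pd.read_csv("stacked.csv")

# Columns identifying a row; every other column is a stacked value column
id_cols = ["Identifier", "Date", "Realization", "poro"]
value_cols = [col for col in df.columns if col not in id_cols]

# Melt the value columns into (Attribute, value) pairs and drop the NaN
# entries that stem from stacking different column "types"
squashed = (
    df.melt(id_vars=id_cols, value_vars=value_cols,
            var_name="Attribute", value_name="value")
    .dropna(subset=["value"])
    .reset_index(drop=True)
)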

[ert plugin hook manager] Add tests to ensure executable bit

The executable bit must be set on files that are to be used as ERT forward models through the ERT plugin system; otherwise one gets errors like this:

** You do not have execute rights to:/prog/res/komodo/2020.06.rc1-py36/root/lib/python3.6/site-packages/subscript/sunsch/sunsch.py - job will not be available.
** Warning: job: 'SUNSCH' not available ... 

Write a test to check this for all jobs to be installed.
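
A possible pytest sketch, assuming the forward model scripts follow the layout seen in the error message above (subscript/<tool>/<tool>.py inside the installed package); the discovery logic is a guess and would need adjusting to how jobs are actually registered:

import stat
from pathlib import Path

import subscript


def test_forward_model_scripts_are_executable():
    # ERT refuses to install a forward model job if the script it points
    # to lacks the executable bit, so catch that at test time.
    package_dir = Path(subscript.__file__).parent
    scripts = [
        tooldir / (tooldir.name + ".py")
        for tooldir in package_dir.iterdir()
        if tooldir.is_dir() and (tooldir / (tooldir.name + ".py")).exists()
    ]
    assert scripts, "No forward model scripts found"
    for script in scripts:
        assert script.stat().st_mode & stat.S_IXUSR, (
            str(script) + " is not executable"
        )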

Write documentation for each tool

Write a markdown page for each tool, now that sphinx-setup is in place.

Update README with updated list of tools (or refer to an autogenerated list).

[summaryplot] Crash when reloading plot

When 'r' is pressed in the terminal window while a plot is open, summaryplot is supposed to reload summary data from disk and replot, but this happens:

$ summaryplot TCPU *DATA
Menu: 'q' = quit, 'r' = reload plots
Reloading plot...
Traceback (most recent call last):
  File "/prog/res/komodo/2020.06.rc3-py36/root/bin/summaryplot", line 11, in <module>
    sys.exit(main())
  File "/prog/res/komodo/2020.06.rc3-py36/root/lib/python3.6/site-packages/subscript/summaryplot/summaryplot.py", line 635, in main
    plotprocess = Process(target=summaryplotter, args=args)
  File "/prog/res/komodo/2020.06.rc3-py36/root/lib/python3.6/multiprocessing/process.py", line 80, in __init__
    self._args = tuple(args)
TypeError: 'Namespace' object is not iterable

Error message: Program received signal:15

See file: /tmp/ert_abort_dump.havb.20200604-130932.log for more details of the crash.
Setting the environment variable "ERT_SHOW_BACKTRACE" will show the backtrace on stderr.
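
The traceback points at multiprocessing.Process being handed the argparse Namespace directly; Process expects args to be an iterable of positional arguments. A self-contained illustration of the likely one-line fix (the Namespace attributes used here are made up):

from argparse import Namespace
from multiprocessing import Process


def summaryplotter(args):
    # Stand-in for the real plotting worker in summaryplot.py
    print("would plot", args)


if __name__ == "__main__":
    args = Namespace(vectors=["TCPU"], datafiles=["GAR-0.DATA"])
    # Failing call: Process(target=summaryplotter, args=args)
    # args= must be an iterable of positional arguments, so wrap the
    # Namespace in a one-element tuple to pass it through unchanged:
    plotprocess = Process(target=summaryplotter, args=(args,))
    plotprocess.start()
    plotprocess.join()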

Port resscript.header

Assess whether to port the resscript header, and if so, do the port and merge it into the scripts that have already been ported.

[sunsch] Comments are stripped when merging Eclipse schedule files

Comments like this in a schedule file that is merged using sunsch

DATES
  1 'NOV' 2022 /
/
-- Choke the water injector
WCONINJE
  'INJ-1' WAT OPEN 300 /
/

will be lost in the output. Preserving comments is an important feature request.

The problem is that the merging is done through the OPM-common parser, which strips all comments. Fixing this is non-trivial: it is not even well-defined where the comments belong in the final output, and in some circumstances not even the date is well-defined. Additionally, comments may appear at the end of lines, where they belong to a specific DeckRecord.

Bash-scripts in subscript

Should we move bash scripts from /project/res/bin to subscript? If so, how?

/project/res/bin/duf:

#!/bin/bash
#
# Calculates disk usage in current directory
# 
# Features:
#  * Sorts by size, largest file at bottom
#  * Converts to human readable format (megabytes, gigabytes, etc.)
#  * Includes hidden directories/files
#  * Prints total size
#
# Håvard Berland, OSE PTC RP, 2014

tmpfile=.duf$$

echo "Calculating disk usage (may take some time)..."
du -sk -- * .??* 2>/dev/null | sort -n > $tmpfile

cat $tmpfile | while read size fname; 
  do for unit in k M G T P E Z Y; 
    do if [ $size -lt 1024 ]; 
        then echo -e "${size}${unit}\t${fname}"; 
        break; 
    fi; 
    size=$(($size/1024)); 
  done; 
done | grep -v $tmpfile


echo -n "Total size: " 
cat $tmpfile | awk '{SUM+=$1} END {print SUM}' | while read size; 
  do for unit in k M G T P E Z Y; 
    do if [ $size -lt 1024 ]; 
        then echo -e "${size}${unit}"; 
        break; 
    fi; 
    size=$(($size/1024)); 
  done; 
done
rm -f $tmpfile

[merge_schedule] Not reproducing resscript merge_schedule

merge_schedule (in subscript) does not support having statements before the first DATES keyword. This was supported by the resscript version.

Sunsch supports this, but it is a little tricky to fix in merge_schedule, as the configuration sent to sunsch must depend on whether the first file to be merged actually has statements at the front.

If it does have such statements, the configuration sent to sunsch should look like this:

sunsch_config = {
    "startdate": datetime.date(1900, 1, 1),
    "init": args.inputfiles[0],
    "merge": args.inputfiles[1:],
}

but if args.inputfiles[0] starts right away with DATES, then init should be dropped and merge should contain all inputfiles.
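
A sketch of the needed branching, assuming a small helper that peeks at the first non-comment keyword of the first input file (the helper is hypothetical; the dictionary keys are the ones already used above):

import datetime


def starts_with_dates(filename):
    # Hypothetical helper: True if the first Eclipse keyword in the file
    # is DATES, ignoring blank lines and '--' comments
    with open(filename) as fhandle:
        for line in fhandle:
            stripped = line.split("--")[0].strip()
            if stripped:
                return stripped.upper().startswith("DATES")
    return False


# args as parsed in merge_schedule's main()
if starts_with_dates(args.inputfiles[0]):
    # Nothing before the first DATES: everything can go through "merge"
    sunsch_config = {
        "startdate": datetime.date(1900, 1, 1),
        "merge": args.inputfiles,
    }
else:
    # Statements at the top of the first file must go through "init"
    sunsch_config = {
        "startdate": datetime.date(1900, 1, 1),
        "init": args.inputfiles[0],
        "merge": args.inputfiles[1:],
    }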

[restartthinner] Rewrite to use libecl Python API

restartthinner can probably be rewritten to use the libecl Python API instead of the binary artifacts from libecl, ecl_pack.x and ecl_unpack.x.

This will allow using the pip-installed libecl in CI.
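
A hedged sketch of what inspecting the report steps could look like through the Python bindings; the filename is made up and the EclFile keyword lookup is assumed to behave as in current libecl releases:

from ecl.eclfile import EclFile

# Open a unified restart file and list the report steps it contains;
# each report step carries one SEQNUM keyword.
rst = EclFile("MYCASE.UNRST")
seqnums = [kw[0] for kw in rst["SEQNUM"]]
print("Report steps in file:", seqnums)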

credit: @dotfloat

Port nosim

nosim.py from resscript is very short, and can possibly be rewritten to use sunbeam to manipulate the deck.

csv_merge_ensembles gets progressively slower

csv_merge_ensembles' default mode of operation is to be memory conservative, but this makes it progressively slower on "small" datasets (1 MB * 400 realizations will take minutes).

[restartthinner] error handling

restartthinner should exit cleanly when ecl_unpack is not available. Currently it will delete the UNRST file and then give an error message.

The default value for "-n" (None) will give a TypeError.

If accidentally called on the DATA file instead of the UNRST file, it will core-dump.
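
A sketch of defensive checks covering all three points, using only the standard library (flag names and messages are illustrative, not restartthinner's actual interface):

import argparse
import shutil
import sys


def get_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("UNRST", help="Unified restart file to thin")
    # Making -n required means the None default can never reach the
    # thinning code and trigger a TypeError
    parser.add_argument("-n", type=int, required=True,
                        help="Number of restart reports to keep")
    return parser


def main():
    args = get_parser().parse_args()
    if shutil.which("ecl_unpack.x") is None:
        sys.exit("ERROR: ecl_unpack.x not found in PATH, aborting before touching the UNRST file")
    if not args.UNRST.upper().endswith(".UNRST"):
        sys.exit("ERROR: " + args.UNRST + " does not look like a unified restart file")
    # ... continue, and only delete the original UNRST after a successful unpack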

[eclcompress] Fails on VFP include file

Eclcompress happily compresses a VFP.INC file starting with

RPTRST
  BASIC=4 FREQ=5 SFIP /

RPTSCHED
  FIP=0 WELLS=2 WELSPECS /

VFPPROD                            
  10 2021.3 LIQ WCT GOR THP GRAT METRIC BHP /
  50 150 300 500 1000 1500 2000 3000 4000 5000 6500 8000 10000 /
  50 100 150 200 250 300 400 500 /
  0 0.1 0.2 0.3 0.4 0.5 0.65 0.8 0.95 /
  300 332 350 400 500 1000 2000 5000 10000 30000 /
  0 /
  1 1 1 1
  50.35 50.32 50.34 50.36 50.8 52.03 53.91 58.73 64.01 69.69 78.1 86.34 97.46 /
  1 1 2 1
  50.34 50.31 50.34 50.36 50.85 52.38 54.45 59.58 65.46 71.53 80.43 89.28 101.43 /
  1 1 3 1
...

with a compression ratio barely above 1, but worse is that E100 gives errors when parsing the "compressed" file:

 @--  ERROR  AT TIME        0.0   DAYS    (31-OCT-2015):
 @           NOT ENOUGH      FLOW      VALUES ENTERED
 @           IN PRODUCTION WELL VFP TABLE 10.

 @--  ERROR  AT TIME        0.0   DAYS    (31-OCT-2015):
 @           TOO MANY     T.H.P.     VALUES READ
 @           IN PRODUCTION WELL VFP TABLE 10.
 @           UP TO    8 VALUES EXPECTED.

 @--  ERROR  AT TIME        0.0   DAYS    (31-OCT-2015):
 @           TOO MANY  LIFT QUANTITY VALUES READ
 @           IN PRODUCTION WELL VFP TABLE 10.
 @           UP TO    4 VALUES EXPECTED.

 @--  ERROR  AT TIME        0.0   DAYS    (31-OCT-2015):
 @           INTEGER OUT OF RANGE IN BHP VALUE LINE
 @           IN PRODUCTION WELL VFP TABLE  10.
 @           4 INTEGERS AND   0 BHP VALUES EXPECTED.

 @--  ERROR  AT TIME        0.0   DAYS    (31-OCT-2015):
 @           TOO MANY BHP VALUES READ IN LINE BEGINNING   1   1   1   1
 @           IN PRODUCTION WELL VFP TABLE  10.
 @           4 INTEGERS AND   0 BHP VALUES EXPECTED.
....

[summaryplot] Erroneous warning

This command works perfectly, except for the erroneous warning. It does indeed find vectors matching this pattern, and they are plotted.

$ summaryplot -s WBHP:* GAR-0.DATA
WARNING:root:No summary or restart vectors matched WBHP:*
Menu: 'q' = quit, 'r' = reload plots

Logger root name

Log output from the typical subscript script could read as follows:

INFO:subscript.eclcompress.eclcompress:Compressing foo.grdecl ...
WARNING:subscript.eclcompress.eclcompress:Skipped foo.grdecl, compressed already

The logger name is here subscript.eclcompress.eclcompress, which is overly verbose, and comes from

logger = logging.getLogger(__name__)

which is present in many of the subscript tools.

To what format should it be changed, and how?

INFO:eclcompress:Compressing foo.grdecl ...

or

INFO:subscript.eclcompress:Compressing foo.grdecl ...

and should it be done through code like this?

logger = logging.getLogger(__name__.split(".")[-1])

or something prettier?
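
One possible middle ground is a small helper placed in the subscript package itself, so every tool gets the same shortened name without repeating the string surgery (the helper is a suggestion, not existing API):

import logging


def getLogger(module_name="subscript"):
    # Suggested helper for subscript/__init__.py: collapse
    # 'subscript.eclcompress.eclcompress' into 'subscript.eclcompress'
    # by dropping a trailing module name that repeats its package name.
    parts = module_name.split(".")
    if len(parts) > 2 and parts[-1] == parts[-2]:
        parts = parts[:-1]
    return logging.getLogger(".".join(parts))


# In each tool, instead of logging.getLogger(__name__):
# logger = subscript.getLogger(__name__)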

__main__.py does not work

__main__.py tries to import the cli module which has been purged(?)

Should this be deleted, or should we do something sensible in __main__?

(this module gets executed if you do python -m subscript)

[sunsch] Configuration file format

The current configuration of the inserts reads like this in the example:

 insert:
   - foo1.sch: # filename is read from this line unless filename is supplied
       date: 2020-01-01
   - randomidentifier:
       filename: foo1.sch
       date: 2021-01-01
   - foo1.sch:
       days: 100
   - randomid:
       days: 40
       string: "WCONHIST\n  A-4 OPEN ORAT 5000 /\n/"
   - substitutetest:
       days: 2
       filename: footemplate.sch
       substitute:  { ORAT: 3000, GRAT: 400000}
   - footemplate.sch:
       days: 10
       substitute:
         ORAT: 30000
         GRAT: 100

A problem with this is the keys on the first level of the list of dictionaries: foo1.sch, randomidentifier, randomid etc. These are only used if they refer to filenames, and are nothing but noise in the case of randomid. This is likely to confuse users. The indentation required is also likely to lead to user errors.

Official documentation is at: https://wiki.equinor.com/wiki/index.php/ResScript:Python:Scripts:sunsch

Documentation porting

Assess what to do with the per-script Wiki documentation. Should it stay there or be ported to reST/Markdown?

If it should stay, how should we link to it (in a way that automatically generated docs include the link)?

[eclcompress] Slash preservation at end-of-line

eclcompress will change this:

SPECGRID
  214  669  49   1  F  /

GDORIENT
INC INC INC DOWN RIGHT /

into

SPECGRID
214 669 49 1 F
/
GDORIENT
INC
INC INC DOWN RIGHT
/

where at least the line-break before the line-ending / is erroneous and will cause an Eclipse warning. Potentially the line-break within the GDORIENT deck record is also wrong, and may cause the last 4 values to be defaulted.

This problem stems from eclcompress ignoring the fact that the position of a slash before or after a line-break is part of the syntax (which also makes it impossible to compress keyword data with multiple records, like VFP data).

Eclipse gives the following warning on the eclcompressed output:

    45 READING SPECGRID
    46 READING GDORIENT

 @--WARNING  AT TIME        0.0   DAYS    ( 1-JAN-1977):
 @           SPURIOUS DATA BEFORE COORD    KEYWORD
 @           /
    47 READING COORD
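
A sketch of what record-aware compression could look like for a single deck record, assuming the record (everything up to and including its terminating slash) has already been isolated; whitespace inside the record is collapsed, but the slash stays on the same line as the last data item:

def compress_record(record, maxwidth=128):
    # Hypothetical helper: collapse whitespace in one deck record while
    # keeping the terminating slash attached to the last line of data.
    assert record.rstrip().endswith("/")
    tokens = record.rstrip().rstrip("/").split()
    lines = []
    current = ""
    for token in tokens:
        if current and len(current) + len(token) + 1 > maxwidth:
            lines.append(current)
            current = token
        else:
            current = (current + " " + token).strip()
    lines.append((current + " /").strip())
    return "\n".join(lines)


# compress_record("  214  669  49   1  F  /") returns "214 669 49 1 F /"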

Port run_eclipse

run_eclipse is a wrapper around runeclipse (a Perl script in /project/res/bin).

It is heavily used, so we need to do something about it. It might be sensible to replace the underlying Perl script as well.
