meteokid / python-rpn

Python RPN (RPNpy) is a collection of Python modules and scripts developed at RPN for scientific use.
License: GNU Lesser General Public License v2.1
While attempting to import rpnpy.librmn.all as rmn, I get the following traceback:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/git/python-rpn/lib/rpnpy/librmn/__init__.py", line 31, in <module>
from rpnpy.version import *
ImportError: No module named version
Is this a missing package or have I set something up wrong?
For the scripts I'm working on, I only need the data ('d') output from fstluk, not any of the record parameters from the header. Profiling indicates the call to fstprm takes 10-20% of the read time in my usage, a non-negligible amount. It would be useful (at least in my case) to have an optional argument to fstluk that bypasses the fstprm call, e.g.:
rec = fstluk(funit, key, fstprm=False)  # Skips fstprm; result only has 'd' and 'shape' entries.
Is this something that could feasibly be added to the package?
As an aside: is this issue page the best place to put feature requests, or is the internal bugzilla the preferred medium?
To install python-rpn on a computer without access to the CMC intranet, I modified the installation steps. First, I executed http://scaweb.sca.uqam.ca/armnlib/repository/Linux_BOOTSTRAP-8.11-rmnlib15.2. Then I executed:
export SSM_DOMAIN_HOME=~/ssm-domains-base/setup/v_001
. ~/ECssm/etc/ssm.d/profile
. s.ssmuse.dot rmnlib-dev
export RPNPY_RMN_LIBPATH=~/ssm-domains-base/libs/rmnlib-dev/multi/lib/${EC_ARCH}/gfortran/
. ./.setenv.dot --dev
This allowed me to use fstopenall, fstinl and fstluk on an RPN Standard File. Should these steps be merged into https://github.com/meteokid/python-rpn/blob/master/README?
I also deleted these lines:
https://github.com/meteokid/python-rpn/blob/master/bin/.env_setup.dot#L68
https://github.com/meteokid/python-rpn/blob/master/ssmusedep-dev.bndl#L1
After installing the bootstrap suggested in a closed issue, I get the following errors.
Global tag values:
ERROR: NO SHORTCUT FOUND FOR /ssm/net/hpcs/201402/02/base
ERROR: NO SHORTCUT FOUND FOR /ssm/net/hpcs/201402/02/intel13sp1u2
ERROR: NO SHORTCUT FOUND FOR /ssm/net/rpn/utils/15.2
ERROR: NO SHORTCUT FOUND FOR /ssm/net/rpn/libs/15.2
ERROR: NO SHORTCUT FOUND FOR /ssm/net/cmdn/vgrid/6.1.1/intel13sp1u2
ERROR: NO SHORTCUT FOUND FOR /ssm/net/hpcs/exp/aspgjdm/perftools
ERROR: NO SHORTCUT FOUND FOR /ssm/net/cmoi/base/20160901/
ERROR: NO SHORTCUT FOUND FOR ENV/rde/1.0.0
. .setenv.defaults.dot
./.setenv.dot: line 27: rdevar: command not found
When I write two records in sequence, and these records have the same metadata except for the date, the second write overwrites the first one.
The assert fails on ppp2 with 2.1.b3
import os
import numpy as np
import rpnpy.librmn.all as rmn

TARGET_FILE = '/tmp/test.fst'

if __name__ == '__main__':
    if os.path.isfile(TARGET_FILE):
        os.unlink(TARGET_FILE)
    phony_data = np.zeros((10, 10, 1), order='F')
    grid = rmn.encodeGrid({
        'grtyp': 'L',
        'ni': phony_data.shape[0],
        'nj': phony_data.shape[1],
        'lat0': 0.0,
        'lon0': 0.0,
        'dlat': 1.0,
        'dlon': 1.0,
    })
    deet = 3600
    npas = 2
    record = grid
    record.update({
        'd': phony_data,
        'etiket': 'TEST',
        'nomvar': 'XX',
        'datev': rmn.newdate(rmn.NEWDATE_PRINT2STAMP, 20180302, 20000000),
        'dateo': rmn.newdate(rmn.NEWDATE_PRINT2STAMP, 20180302, 18000000),
        'deet': deet,
        'npas': npas,
    })
    target_fst = rmn.fstopenall(TARGET_FILE, rmn.FST_RW)
    # First write
    rmn.fstecr(target_fst, record)
    # Second write: identical metadata except for the dates
    record['datev'] = rmn.newdate(rmn.NEWDATE_PRINT2STAMP, 20190302, 20000000)
    record['dateo'] = rmn.newdate(rmn.NEWDATE_PRINT2STAMP, 20190302, 18000000)
    rmn.fstecr(target_fst, record)
    n_records = rmn.fstnbr(target_fst)
    rmn.fstcloseall(target_fst)
    assert n_records == 2
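As a side note on the metadata in the script: the deet/npas values and the two dates are self-consistent, assuming the usual standard-file convention that the valid time is the origin time plus deet * npas seconds. A quick arithmetic check (plain Python, no librmn needed):

```python
deet = 3600   # timestep length in seconds, as in the script above
npas = 2      # number of timesteps

# Forecast length implied by the record header, in hours.
forecast_hours = deet * npas // 3600

# Origin hour 18 comes from 18000000 in hhmmsshh form.
hh_valid = 18 + forecast_hours
print(hh_valid)  # 20, matching the 20000000 (hhmmsshh) valid time in the script
```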
When using fstopenall with an array of standard files, records beyond the first file are not accessible. I wrote a little reproduction script. Can you reproduce? I found this issue on PPP2 and my local machine.
import pathlib
import rpnpy.librmn.all as rmn

FST_PATH = pathlib.Path('/space/hall2/sitestore/eccc/cmod/prod/hubs/gridpt/dbase/caldas/hrdps_national/analysis/final/diag')

if __name__ == '__main__':
    fsts = [str(x) for x in FST_PATH.iterdir() if rmn.isFST(str(x))][0:2]
    print('Running test on two standard files: {}'.format(fsts))
    first_fst = rmn.fstopenall(fsts[0])
    first_fst_n_records = rmn.fstnbr(first_fst)
    rmn.fstcloseall(first_fst)
    second_fst = rmn.fstopenall(fsts[1])
    second_fst_n_records = rmn.fstnbr(second_fst)
    rmn.fstcloseall(second_fst)
    merged_fst = rmn.fstopenall(fsts)
    merged_fst_n_records = rmn.fstnbr(merged_fst)
    rmn.fstcloseall(merged_fst)
    # The following assert fails with rpnpy 2.1.b3 on ppp2
    assert merged_fst_n_records == first_fst_n_records + second_fst_n_records
See: ECCC-ASTD-MRD/librmn@686e512
If Python-RPN is used with librmn 016.3, it immediately fails with an AttributeError from ctypes, due to a missing crc32 symbol.
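A missing symbol like this can be detected up front by probing the shared library with ctypes before any call is made. A minimal sketch of the pattern — zlib is used here only as a stand-in library that is known to export crc32; for python-rpn one would pass the path to the librmnshared library instead:

```python
import ctypes
import ctypes.util

def has_symbol(lib_path, name):
    """Return True if the shared library at lib_path exports the given symbol."""
    lib = ctypes.CDLL(lib_path)
    # ctypes looks the symbol up lazily; a missing one raises AttributeError.
    return hasattr(lib, name)

# find_library may return None on some systems; fall back to the common soname.
zlib_path = ctypes.util.find_library('z') or 'libz.so.1'
print(has_symbol(zlib_path, 'crc32'))        # True: zlib exports crc32
print(has_symbol(zlib_path, 'no_such_sym'))  # False
```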
Hi,
In the scripts
bin/rpy.nml_set
bin/rpy.nml_get
there is a call to "cleanname", but cleanname is not declared anywhere, which leads to an error.
In the case of bin/rpy.nml_set, wouldn't it be enough to comment out that line (it looks like an oversight)? In any case, the script works again once the line is commented out.
Thanks
I have some optimized routines from the fstd2nc tool which I've found helpful when scanning through many (hundreds of) files on a routine basis, where even a small overhead can add up to a noticeable delay in the script. Maybe some of them might be useful within python-rpn as utility functions? Below is a brief description of them.
The all_params method extracts all the record parameters at once and returns a vectorized dictionary of the result. It scrapes the information directly out of the librmn data structures and avoids the overhead of repeatedly calling fstprm. Example:
In [1]: import os, sys
In [2]: import rpnpy.librmn.all as rmn
In [3]: from fstd2nc.extra import all_params
In [4]: ATM_MODEL_DFILES = os.getenv('ATM_MODEL_DFILES').strip()
In [5]: fileId = rmn.fstopenall(ATM_MODEL_DFILES+'/bcmk', rmn.FST_RO)
In [6]: p = all_params(fileId)
In [7]: print p
{'deet': array([900, 900, 900, ..., 900, 900, 900], dtype=int32), 'typvar': array(['P ', 'P ', 'P ', ..., 'P ', 'P ', 'X '], dtype='|S2'), 'lng': array([3762, 3762, 3762, ..., 3762, 3762, 13], dtype=int32), 'ni': array([200, 200, 200, ..., 200, 200, 1], dtype=int32), 'nj': array([100, 100, 100, ..., 100, 100, 1], dtype=int32), 'nbits': array([12, 12, 12, ..., 12, 12, 32], dtype=int8), 'swa': array([ 2335, 6097, 9859, ..., 4040669, 4044431, 4048193],
dtype=uint32), 'datyp': array([1, 1, 1, ..., 1, 1, 5], dtype=uint8), 'xtra1': array([354514400, 354514400, 354514400, ..., 354525200, 354525200,
354525200], dtype=uint32), 'xtra2': array([0, 0, 0, ..., 0, 0, 0], dtype=uint32), 'xtra3': array([0, 0, 0, ..., 0, 0, 0], dtype=uint32), 'ip2': array([ 0, 0, 0, ..., 12, 12, 0], dtype=int32), 'ip3': array([0, 0, 0, ..., 0, 0, 0], dtype=int32), 'ip1': array([ 0, 97642568, 97738568, ..., 93423264, 93423264, 44140192],
dtype=int32), 'key': array([ 1, 1025, 2049, ..., 2145288, 2146312, 2147336],
dtype=int32), 'ubc': array([0, 0, 0, ..., 0, 0, 0], dtype=uint16), 'npas': array([ 0, 0, 0, ..., 48, 48, 48], dtype=int32), 'nk': array([1, 1, 1, ..., 1, 1, 1], dtype=int32), 'ig4': array([0, 0, 0, ..., 0, 0, 0], dtype=int32), 'ig3': array([0, 0, 0, ..., 0, 0, 0], dtype=int32), 'ig2': array([ 0, 0, 0, ..., 0, 0, 1600], dtype=int32), 'ig1': array([ 0, 0, 0, ..., 0, 0, 800], dtype=int32), 'nomvar': array(['P0 ', 'TT ', 'TT ', ..., 'UU ', 'VV ', 'HY '], dtype='|S4'), 'datev': array([354514400, 354514400, 354514400, ..., 354525200, 354525200,
354525200], dtype=int32), 'dateo': array([354514400, 354514400, 354514400, ..., 354514400, 354514400,
354514400], dtype=int32), 'etiket': array(['G133K80P ', 'G133K80P ', 'G133K80P ', ...,
'G133K80P ', 'G133K80P ', 'G133K80P '], dtype='|S12'), 'grtyp': array(['G', 'G', 'G', ..., 'G', 'G', 'X'], dtype='|S1'), 'dltf': array([0, 0, 0, ..., 0, 0, 0], dtype=uint8)}
This can be combined with pandas to get a convenient table of parameters:
In [8]: import pandas as pd
In [9]: p = pd.DataFrame(p)
In [10]: print p
dateo datev datyp deet dltf ... typvar ubc xtra1 xtra2 xtra3
0 354514400 354514400 1 900 0 ... P 0 354514400 0 0
1 354514400 354514400 1 900 0 ... P 0 354514400 0 0
2 354514400 354514400 1 900 0 ... P 0 354514400 0 0
3 354514400 354514400 1 900 0 ... P 0 354514400 0 0
4 354514400 354514400 1 900 0 ... P 0 354514400 0 0
... ... ... ... ... ... ... ... .. ... ... ...
2783 354514400 354525200 1 900 0 ... P 0 354525200 0 0
2784 354514400 354525200 1 900 0 ... P 0 354525200 0 0
2785 354514400 354525200 1 900 0 ... P 0 354525200 0 0
2786 354514400 354525200 1 900 0 ... P 0 354525200 0 0
2787 354514400 354525200 5 900 0 ... X 0 354525200 0 0
[2788 rows x 28 columns]
Having the parameters in this pandas.DataFrame structure provides a more powerful tool for analysing the data. For instance, the pivot method can be used to quickly organize the records into multidimensional time/level structures.
The method is also about 20x faster than looping over fstprm:
In [11]: %timeit map(rmn.fstprm, rmn.fstinl(fileId))
10 loops, best of 3: 84.3 ms per loop
In [12]: %timeit pd.DataFrame(all_params(fileId))
100 loops, best of 3: 4.46 ms per loop
80ms isn't much, but it adds up if you're scanning over hundreds (or thousands) of files.
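The gap is mostly per-record Python and ctypes call overhead versus one pass over the file index. A toy model of the same effect, with no librmn involved (the record count matches the session above, but the field values are made up for illustration):

```python
import timeit

import numpy as np

n = 2788  # number of records, as in the session above

def per_record():
    # One small Python object per record, analogous to calling fstprm in a loop.
    return [{'ni': 200, 'nj': 100, 'deet': 900} for _ in range(n)]

def vectorized():
    # One array per parameter covering all records, analogous to all_params.
    return {'ni': np.full(n, 200), 'nj': np.full(n, 100), 'deet': np.full(n, 900)}

t_loop = timeit.timeit(per_record, number=100)
t_vec = timeit.timeit(vectorized, number=100)
print(t_loop > t_vec)  # building per-record objects is the slower path
```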
The maybeFST function is a more compact version of isFST which avoids any librmn calls such as c_wkoffit. That library function can incur some overhead since it tests for many different formats, not just FST.
Combined, the maybeFST and all_params functions allow a user to very quickly scan over many hundreds of files and get a snapshot of all the records inside.
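For reference, a minimal sketch of this kind of cheap check. It assumes the XDF/FST signature bytes 'STDR' sit at offset 12 of the file header (which is how fstd2nc's maybeFST identifies candidates); it is not part of the rpnpy API:

```python
import os
import tempfile

def maybe_fst(filename):
    """Cheap check for an RPN standard (XDF) file: look for the assumed
    'STDR' signature at byte offset 12, without calling into librmn."""
    with open(filename, 'rb') as f:
        magic = f.read(16)
    return len(magic) >= 16 and magic[12:16] == b'STDR'

# Demonstrate on a synthetic header with the assumed signature layout.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(b'\x00' * 12 + b'STDR')
print(maybe_fst(path))  # True
os.remove(path)
```

Being a heuristic, it can return True for a non-FST file that happens to carry those bytes, hence the "maybe" in the name.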
The stamp2datetime function converts an array of RPN date stamps into datetime objects. It is useful in conjunction with all_params to get date information quickly, e.g.:
In [13]: from fstd2nc.mixins.dates import stamp2datetime
In [14]: print stamp2datetime(p['datev'])
['2009-04-27T00:00:00.000000000' '2009-04-27T00:00:00.000000000'
'2009-04-27T00:00:00.000000000' ... '2009-04-27T12:00:00.000000000'
'2009-04-27T12:00:00.000000000' '2009-04-27T12:00:00.000000000']
In [15]: p['date'] = stamp2datetime(p['datev'])
In [16]: print p
dateo datev datyp deet ... xtra1 xtra2 xtra3 date
0 354514400 354514400 1 900 ... 354514400 0 0 2009-04-27 00:00:00
1 354514400 354514400 1 900 ... 354514400 0 0 2009-04-27 00:00:00
2 354514400 354514400 1 900 ... 354514400 0 0 2009-04-27 00:00:00
3 354514400 354514400 1 900 ... 354514400 0 0 2009-04-27 00:00:00
4 354514400 354514400 1 900 ... 354514400 0 0 2009-04-27 00:00:00
... ... ... ... ... ... ... ... ... ...
2783 354514400 354525200 1 900 ... 354525200 0 0 2009-04-27 12:00:00
2784 354514400 354525200 1 900 ... 354525200 0 0 2009-04-27 12:00:00
2785 354514400 354525200 1 900 ... 354525200 0 0 2009-04-27 12:00:00
2786 354514400 354525200 1 900 ... 354525200 0 0 2009-04-27 12:00:00
2787 354514400 354525200 5 900 ... 354525200 0 0 2009-04-27 12:00:00
[2788 rows x 29 columns]
The decode_ip1 function quickly decodes the levels from an array of ip1 values. For example:
In [18]: import numpy as np
In [19]: from fstd2nc.mixins.vcoords import decode_ip1
In [20]: print decode_ip1(p['ip1'])
[array([(2, 0.)], dtype=[('kind', '<i4'), ('level', '<f4')])
array([(5, 0.000125)], dtype=[('kind', '<i4'), ('level', '<f4')])
array([(5, 0.000221)], dtype=[('kind', '<i4'), ('level', '<f4')]) ...
array([(5, 1.)], dtype=[('kind', '<i4'), ('level', '<f4')])
array([(5, 1.)], dtype=[('kind', '<i4'), ('level', '<f4')])
array([(2, 0.1)], dtype=[('kind', '<i4'), ('level', '<f4')])]
In [21]: levels = np.concatenate(decode_ip1(p['ip1']))
In [22]: print levels
[(2, 0.00e+00) (5, 1.25e-04) (5, 2.21e-04) ... (5, 1.00e+00) (5, 1.00e+00)
(2, 1.00e-01)]
In [23]: print levels.dtype
[('kind', '<i4'), ('level', '<f4')]
In [24]: p['level'] = levels['level']
In [25]: p['kind'] = levels['kind']
In [26]: print p
dateo datev datyp deet dltf ... xtra2 xtra3 date level kind
0 354514400 354514400 1 900 0 ... 0 0 2009-04-27 00:00:00 0.000000 2
1 354514400 354514400 1 900 0 ... 0 0 2009-04-27 00:00:00 0.000125 5
2 354514400 354514400 1 900 0 ... 0 0 2009-04-27 00:00:00 0.000221 5
3 354514400 354514400 1 900 0 ... 0 0 2009-04-27 00:00:00 0.000382 5
4 354514400 354514400 1 900 0 ... 0 0 2009-04-27 00:00:00 0.000635 5
... ... ... ... ... ... ... ... ... ... ... ...
2783 354514400 354525200 1 900 0 ... 0 0 2009-04-27 12:00:00 0.995000 5
2784 354514400 354525200 1 900 0 ... 0 0 2009-04-27 12:00:00 0.995000 5
2785 354514400 354525200 1 900 0 ... 0 0 2009-04-27 12:00:00 1.000000 5
2786 354514400 354525200 1 900 0 ... 0 0 2009-04-27 12:00:00 1.000000 5
2787 354514400 354525200 5 900 0 ... 0 0 2009-04-27 12:00:00 0.100000 2
[2788 rows x 31 columns]
For completion, here's an example using the information from all the above steps to get multidimensional structures for a field. First, get the list of variables:
In [32]: print pd.unique(p.nomvar)
['P0 ' 'TT ' 'ES ' 'HU ' 'MX ' 'LA ' 'LO ' 'ME ' 'WE ' 'GZ '
'HR ' 'WW ' 'PN ' 'TD ' 'QC ' 'FC ' 'FQ ' 'IO ' 'OL ' 'UE '
'EN ' 'GL ' 'LH ' 'MF ' 'MG ' 'TM ' 'VG ' 'RC ' 'AL ' 'I0 '
'I1 ' 'SD ' 'I6 ' 'I7 ' 'I8 ' 'I9 ' 'AB ' 'AG ' 'AH ' 'AI '
'AP ' 'AR ' 'AS ' 'AU ' 'AV ' 'AW ' 'EV ' 'FS ' 'FV ' 'IC '
'IE ' 'IH ' 'IV ' 'NF ' 'SI ' 'I2 ' 'I3 ' 'I4 ' 'I5 ' 'DN '
'NR ' 'HF ' 'FL ' 'N0 ' 'O1 ' 'RT ' 'RY ' 'PR ' 'PE ' 'FR '
'RN ' 'SN ' 'PC ' 'FB ' 'N4 ' 'CX ' 'FI ' 'AD ' 'FN ' 'EI '
'H ' 'J9 ' 'TG ' 'Z0 ' 'NT ' 'K6 ' 'K4 ' 'U9 ' 'AE ' 'RZ '
'RR ' 'U4 ' 'L7 ' 'L8 ' 'IY ' 'UU ' 'VV ' 'HY ' '>> ' '^^ '
'VF ' 'GA ' 'J1 ' 'J2 ' 'Y7 ' 'Y8 ' 'Y9 ' 'ZP ' 'META' 'TS '
'TP ' 'HS ' 'LG ' 'ICEL' 'EMIB']
Pick UU:
In [46]: p = p.loc[p.dltf==0] # Ignore deleted records
In [47]: uu = p.loc[p.nomvar=='UU '] # Select UU records
In [48]: print uu
dateo datev datyp deet dltf ... xtra2 xtra3 date level kind
922 354514400 354514400 1 900 0 ... 0 0 2009-04-27 00:00:00 0.000125 5
924 354514400 354514400 1 900 0 ... 0 0 2009-04-27 00:00:00 0.000221 5
926 354514400 354514400 1 900 0 ... 0 0 2009-04-27 00:00:00 0.000382 5
928 354514400 354514400 1 900 0 ... 0 0 2009-04-27 00:00:00 0.000635 5
930 354514400 354514400 1 900 0 ... 0 0 2009-04-27 00:00:00 0.001010 5
... ... ... ... ... ... ... ... ... ... ... ...
2777 354514400 354525200 1 900 0 ... 0 0 2009-04-27 12:00:00 0.961000 5
2779 354514400 354525200 1 900 0 ... 0 0 2009-04-27 12:00:00 0.974000 5
2781 354514400 354525200 1 900 0 ... 0 0 2009-04-27 12:00:00 0.985000 5
2783 354514400 354525200 1 900 0 ... 0 0 2009-04-27 12:00:00 0.995000 5
2785 354514400 354525200 1 900 0 ... 0 0 2009-04-27 12:00:00 1.000000 5
[160 rows x 31 columns]
Organize the records by date/level:
In [49]: uu = uu.pivot(index='date',columns='level') # Organize by date/level
In [51]: print uu['ip1']
level 0.000125 0.000221 0.000382 ... 0.985000 0.995000 1.000000
date ...
2009-04-27 00:00:00 97642568 97738568 97899568 ... 95356840 95366840 93423264
2009-04-27 12:00:00 97642568 97738568 97899568 ... 95356840 95366840 93423264
[2 rows x 80 columns]
In [52]: print uu['ip2']
level 0.000125 0.000221 0.000382 ... 0.985000 0.995000 1.000000
date ...
2009-04-27 00:00:00 0 0 0 ... 0 0 0
2009-04-27 12:00:00 12 12 12 ... 12 12 12
[2 rows x 80 columns]
In [53]: print uu['key'] # The keys/handles for these records
level 0.000125 0.000221 0.000382 ... 0.985000 0.995000 1.000000
date ...
2009-04-27 00:00:00 1730561 1732609 1734657 ... 2150401 2152449 2154497
2009-04-27 12:00:00 1721352 1723400 1725448 ... 2141192 2143240 2145288
[2 rows x 80 columns]
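The session above depends on the CMC data files. The same table-building pattern can be sketched self-contained with synthetic values (the numbers below are made up for illustration, not real FST parameters):

```python
import numpy as np
import pandas as pd

# A vectorized dictionary of record parameters, shaped like what all_params
# returns, but with synthetic values: 2 dates x 3 levels = 6 records.
params = {
    'nomvar': np.array(['UU  '] * 6),
    'datev':  np.array([354514400] * 3 + [354525200] * 3),
    'level':  np.array([0.000125, 0.5, 1.0] * 2, dtype='float32'),
    'key':    np.arange(6),
}
p = pd.DataFrame(params)

# Organize the record keys into a date x level table, as in the session above.
uu = p.loc[p.nomvar == 'UU  '].pivot(index='datev', columns='level', values='key')
print(uu)
```

From here, each cell of the pivoted table holds the record key for one (date, level) pair, which is exactly what a read loop needs to assemble a multidimensional field.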
I am trying to use Python-RPN outside the CMC intranet again. To produce librmnshared_015.2.so, I download and execute http://scaweb.sca.uqam.ca/armnlib/repository/Linux_BOOTSTRAP-8.11-rmnlib15.4.
When I try import rpnpy.librmn.all as rmn, Python prints:
OSError: /home/konu/ssm-domains-base/libs/rmnlib-dev/rmnlib-git_1.0_multi/lib/Linux_x86-64/gfortran/librmnshared_015.2.so: undefined symbol: get_gossip_dir
The complete output of the Linux_BOOTSTRAP-8.11-rmnlib15.4 script is here: https://gist.github.com/jeixav/ca93310d9b23d4c20a05ce5cdb1d3c97. While the script appears to run to completion, there are a few errors. Does anyone have suggestions for troubleshooting? The operating system is Debian Stretch.
It appears that some types of BURP elements aren't being decoded in rpnpy.burpc. Instead, the e_rval array holds some random values (from uninitialized memory?).
Example of triggering this problem:
import os
import rpnpy.librmn.all as rmn
import rpnpy.burpc.all as brp

mypath = os.path.join(os.environ['ATM_MODEL_DFILES'], 'bcmk_burp', '2007021900.brp')

with brp.BurpcFile(mypath) as bfile:
    rpt = bfile[0]
    blk = rpt[1]
    for i, ele in enumerate(blk):
        print 'ELE', i
        print ele
When I run this test on the science network, the "ELE 6" e_rval changes every time I run the script. The only difference I can see with this element is that it has e_cvt set to 0 (no conversion?). This issue is present in both 2.1.b3 and the latest master branch.
This was causing sporadic failures for me in the test_brp_blk_getele_iter unit test: if the garbage values happen to include a nan, the == comparison fails. This only happened once in a while, so it took some time to figure out why tests that had been working suddenly failed... fun times.
If I have a standard file named XYZ, I am unable to open it with fstopenall. Sample code:
>>> f = rmn.fstopenall('XYZ', verbose=True)
(fstopenall) Not a RPNSTD file: XYZ
(fstopenall) Problem Opening: XYZ
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "rpnpy/librmn/fstd98.py", line 304, in fstopenall
.format(str(paths)))
rpnpy.librmn.fstd98.FSTDError: fstopenall: unable to open any file in path ['XYZ']
Tested with 2.1.b2.
So I'm back to trying to install python-rpn on a computer outside CMC, following the steps outlined in Issue #2 with some changes. When running the tests, I encounter the following issues:
The data directories are not accessible externally, so the tests are not portable. The following three tests fail independently (I think) of the missing files.
======================================================================
ERROR: test_14 (__main__.rpnpyCookbook)
----------------------------------------------------------------------
Traceback (most recent call last):
File "share/tests/test_cookbook.py", line 342, in test_14
import rpnpy.vgd.all as vgd
File "/home/doyle/git/rpnpy/lib/rpnpy/vgd/__init__.py", line 102, in <module>
(VGD_VERSION, VGD_LIBPATH, libvgd) = loadVGDlib()
File "/home/doyle/git/rpnpy/lib/rpnpy/vgd/__init__.py", line 90, in loadVGDlib
raise IOError(-1, 'Failed to find libdescrip.so: ', vgd_libfile)
IOError: [Errno -1] Failed to find libdescrip.so: : 'libdescripshared_6.0.0.so'
======================================================================
ERROR: test_22 (__main__.rpnpyCookbook)
----------------------------------------------------------------------
Traceback (most recent call last):
File "share/tests/test_cookbook.py", line 608, in test_22
sys.stderr.write("Problem opening the files: %s, %s\n" % (fileNameIn, fileNameOut))
NameError: global name 'sys' is not defined
======================================================================
ERROR: testfnomfclosKnownValues (__main__.LibrmnFilesKnownValues)
fnomfclos should give known result with known input
----------------------------------------------------------------------
Traceback (most recent call last):
File "share/tests/test_librmn_base.py", line 33, in testfnomfclosKnownValues
iout = rmn.fnom(mypath,rmn.FST_RO)
File "/home/doyle/git/rpnpy/lib/rpnpy/librmn/base.py", line 147, in fnom
raise RMNBaseError()
RMNBaseError