llnl / uedge

2D fluid simulation of plasma and neutrals in magnetic fusion devices

License: GNU Lesser General Public License v2.1

Languages: Makefile 0.02%, Verilog 8.18%, M 20.13%, C 0.49%, Mathematica 44.93%, MATLAB 7.21%, Fortran 7.77%, Python 4.28%, Shell 0.28%, Jupyter Notebook 5.57%, HTML 0.15%, Objective-C 0.01%, FreeBasic 0.05%, Rich Text Format 0.75%, Roff 0.19%, C++ 0.01%
Topics: simulation, math-physics

uedge's People

Contributors

bendudson, ganjinzero1, holm10, jguterl, llnl-fesp, orso82, trognlien, umansky


uedge's Issues

Plasma influx from the outer wall boundary due to ExB drifts

Is your feature request related to a problem? Please describe.

When drifts are turned on and the ion grad-B drift is directed into the divertor, a strong plasma influx appears on the outer wall near the outer target plate due to the ExB drift. The net effect of this influx is to increase the recycled neutrals in the inner divertor.

This ExB-driven influx appears to be quite substantial in the KSTAR case I have been running. I haven't tested it with reversed drifts, so I am not sure whether the effect is similar to the forward-B case. The effect does not seem to be large in the DIII-D cases I have run, but that remains to be confirmed.

Describe the solution you'd like

One possible solution would be to introduce a flux limiter that caps the plasma influx caused by drifts, even though such a limiter is somewhat arbitrary and artificial.

Running multiple simulations in a single Python script

I am trying to run multiple PyUEDGE simulations in a loop in a Jupyter notebook. Here is the pseudocode:

from uedge import *
...
for i in range(n):
    # update some parameters in the bbb package
    bbb.exmain()
    # save some measurements

I am noticing some puzzling results. Once a certain threshold (on the order of 10) for n is exceeded, the majority of the later simulations do not run the full 30 iterations; they converge very quickly with iterm = 1 and very low fnrm values. This would not be an issue in itself, but when I run a certain initial configuration in an environment where 100 simulations have already been run in the manner above, the measurements for te, ti, etc. differ from what I get when I run the same configuration in a fresh environment where no prior simulations were run. As a control, when I run the same initial configuration in two separate, fresh environments, I get identical measurements, so this indicates that something is going wrong.

To me, it seems like some backend variables are not being properly reset between the sequential executions of bbb.exmain(). I am new to UEDGE, so any help on this issue would be great.
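
Until the root cause is found, one workaround consistent with the observations above is to give every simulation a fresh interpreter. Below is a minimal sketch, assuming the leaked state lives in the in-process Fortran layer; run_one and its parameter dict are illustrative, not part of the UEDGE API:

import multiprocessing as mp

def run_one(params, queue):
    from uedge import bbb           # imported in the child: fresh state
    for name, value in params.items():
        setattr(bbb, name, value)   # update some parameters in bbb
    bbb.exmain()
    queue.put((int(bbb.iterm), bbb.te.copy()))  # send back measurements

if __name__ == "__main__":
    ctx = mp.get_context("spawn")   # "spawn" = clean interpreter per run
    for i in range(10):
        q = ctx.Queue()
        p = ctx.Process(target=run_one, args=({}, q))
        p.start()
        iterm, te = q.get()
        p.join()

Using "spawn" rather than "fork" matters here: a forked child would inherit whatever state the parent has already accumulated.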

Removing obsolete(?) python scripts

The following scripts in pyscripts/ are possibly obsolete, and should be removed from the codebase unless needed by the physics solver:

  • paws.py
  • rdinitdt.py
  • rdcontd.py
  • uexec.py
  • osfun.py
  • ruthere.py

Can @llnl-fesp confirm whether any of these are required, or whether they are legacy functions?
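
A quick, hedged way to check for remaining references before deleting anything is to grep the tree for imports of the candidate modules:

import pathlib
import re

candidates = ["paws", "rdinitdt", "rdcontd", "uexec", "osfun", "ruthere"]
names = "|".join(candidates)
pattern = re.compile(rf"\b(?:import|from)\s+(?:uedge\.)?(?:{names})\b")

for path in pathlib.Path(".").rglob("*.py"):
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if pattern.search(line):
            print(f"{path}:{lineno}: {line.strip()}")

No hits for a module (outside its own file) would support removing it.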

Moving to python 3

Here are some reasons I think we would benefit from dropping support for python 2:

  • Python 2 relies on dead code: many important libraries (including numpy and matplotlib) have pledged to stop updating their python 2 versions before 2020
  • Uniform codebase: right now we have a mix of routines, some designed for python 2, others for python 3. Sometimes when you want to use a new script, you need to port it yourself.
  • Simplified codebase: remove code that does different things depending on the python version. A smaller and simpler codebase is a more maintainable codebase.
  • Old python 2 code would not be gone forever: it's still in the git history if anyone needs to use it

Drawbacks:

  • In my experience, Totalview can only show the Fortran source when uedge is used with python 2

I think this task would be pretty simple using the 2to3 utility, and I would be happy to make a pull request.
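
For reference, the conversion can be driven from the shell (2to3 -w pyscripts/) or from Python via the stdlib entry point; a minimal sketch (lib2to3 is deprecated in recent Python versions, so this is only illustrative):

from lib2to3.main import main

# Run the standard 2to3 fixers over pyscripts/, writing changes in place (-w).
main("lib2to3.fixes", args=["-w", "pyscripts/"])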

Let me know your thoughts on this, and if there are any drawbacks I'm unaware of.

Xerrab calls in PyUEDGE induce segfaults

Describe the bug
Xerrab calls in the UEDGE source induce segfaults in the Python version.

Expected behavior
Output the error message and return the user to the prompt.

Additional context
Escalates minor bugs into segfaults; see e.g. #48

Remove multiple redefinitions of variables when reading equilibrium files?

Some of the parameters defined in the EFIT a- and g-files are redefined multiple times while the files are read. This causes confusion when there is an issue with any such parameter.

E.g. simagx, rmagx, and zmagx are defined in multiple places in both the a- and the g-file:

read (iunit,1040) rmagx,zmagx,simagx,taumhd
read(iunit,2020) rmagx,zmagx,simagx,sibdry,bcentr
read(iunit,2020) cpasma,simagx,xdum,rmagx,xdum
read(iunit,2020) zmagx,xdum,sibdry,xdum,xdum

Proposed solution:
Define each variable only once while reading the files, and read the other duplicate occurrences into dummy variables. Currently the last instance of each variable in the g-file is used, although EFIT files sometimes only define the variables in the first row. I therefore suggest that only the first occurrence of each variable in the g-file be used, with the remainder read into dummy variables. This way, the dependency on the a-file is minimized.

Alternatively, manual definitions could be used, with simple Python scripts to pick out variables such as the magnetic axis and X-point locations.

Currently, only the strike-point locations appear to be uniquely defined in the a-file, and they are the only values required from it. An internal routine (using e.g. Python) could be implemented to remove the need for an a-file entirely; a sketch of such a reader is given below.
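
As a starting point for such a routine, here is a rough sketch that keeps only the first occurrence of each quantity, assuming the standard g-file header layout implied by the read statements quoted above (a title line followed by lines of five 16-character floating-point fields):

def read_gfile_header(path):
    with open(path) as f:
        title = f.readline()                 # runid string + idum, nw, nh
        nw, nh = (int(v) for v in title.split()[-2:])

        def floats(line):
            return [float(line[i:i + 16]) for i in range(0, 80, 16)]

        rdim, zdim, rcentr, rleft, zmid = floats(f.readline())
        rmagx, zmagx, simagx, sibdry, bcentr = floats(f.readline())
        dup1 = floats(f.readline())   # cpasma, simagx(dup), xdum, rmagx(dup), xdum
        dup2 = floats(f.readline())   # zmagx(dup), xdum, sibdry(dup), xdum, xdum
        cpasma = dup1[0]
    return dict(nw=nw, nh=nh, rmagx=rmagx, zmagx=zmagx, simagx=simagx,
                sibdry=sibdry, bcentr=bcentr, cpasma=cpasma)

The duplicate definitions on the later lines are read and discarded, mirroring the proposed dummy-variable approach on the Fortran side.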

Segmentation fault in pyexamples/d3dHsm

On multiple systems, running python runcase.py in pyexamples/d3dHsm produces a segmentation fault when bbb.exmain() is run at this line. Here are some inconclusive debugging results, in case they make more sense to someone else.

Output of script

$ python runcase.py
Forthon edition
 UEDGE $Name: V7_08_03 $
 Wrote file "gridue" with runid:    EFITD    09/07/90      # 66832 ,2384ms

 ***** Grid generation has been completed
  Updating Jacobian, npe =                      1
 iter=    0 fnrm=      9.216531402973144     nfe=      1


 nksol ---  iterm = 1.
            maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
            the maximum norm function.  u is probably an
            approximate root of f.
 Interpolants created; mype =                   -1
 Wrote file "gridue" with runid:    EFITD    09/07/90      # 66832 ,2384ms

 ***** Grid generation has been completed
  Updating Jacobian, npe =                      1
 iter=    0 fnrm=      9.216531400941010     nfe=      1
  Updating Jacobian, npe =                      2
Fatal Python error: Segmentation fault

Current thread 0x000000010d8f75c0 (most recent call first):
  File "runcase.py", line 46 in <module>
[1]    92229 segmentation fault  python runcase.py

I modified the script, so ignore the line numbers in the output. The segmentation fault happens at the line linked in the first paragraph.

lldb output

$ lldb /usr/local/bin/python
(lldb) target create "/usr/local/bin/python"
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/usr/local/Cellar/python@2/2.7.16/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 52, in <module>
    import weakref
  File "/usr/local/Cellar/python@2/2.7.16/Frameworks/Python.framework/Versions/2.7/lib/python2.7/weakref.py", line 14, in <module>
    from _weakref import (
ImportError: cannot import name _remove_dead_weakref
Current executable set to '/usr/local/bin/python' (x86_64).
(lldb) target stop-hook add
Enter your stop hook command(s).  Type 'DONE' to end.
> bt
> disassemble --pc
Stop hook #1 added.
(lldb) run runcase.py
* thread #1, stop reason = signal SIGSTOP
  * frame #0: 0x0000000100005000 dyld`_dyld_start

dyld`_dyld_start:
->  0x100005000 <+0>: popq   %rdi
    0x100005001 <+1>: pushq  $0x0
    0x100005003 <+3>: movq   %rsp, %rbp
    0x100005006 <+6>: andq   $-0x10, %rsp

Process 89747 launched: '/usr/local/bin/python' (x86_64)
* thread #2, stop reason = exec
  * frame #0: 0x0000000100005000 dyld`_dyld_start

dyld`_dyld_start:
->  0x100005000 <+0>: popq   %rdi
    0x100005001 <+1>: pushq  $0x0
    0x100005003 <+3>: movq   %rsp, %rbp
    0x100005006 <+6>: andq   $-0x10, %rsp

Process 89747 stopped
* thread #2, stop reason = exec
    frame #0: 0x0000000100005000 dyld`_dyld_start
dyld`_dyld_start:
->  0x100005000 <+0>: popq   %rdi
    0x100005001 <+1>: pushq  $0x0
    0x100005003 <+3>: movq   %rsp, %rbp
    0x100005006 <+6>: andq   $-0x10, %rsp
Target 0: (Python) stopped.
(lldb) c
Process 89747 resuming
Forthon edition
 UEDGE $Name: V7_08_03 $
 Wrote file "gridue" with runid:    EFITD    09/07/90      # 66832 ,2384ms

 ***** Grid generation has been completed
  Updating Jacobian, npe =                      1
 iter=    0 fnrm=      9.216531402973144     nfe=      1


 nksol ---  iterm = 1.
            maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
            the maximum norm function.  u is probably an
            approximate root of f.
 Interpolants created; mype =                   -1
 Wrote file "gridue" with runid:    EFITD    09/07/90      # 66832 ,2384ms

 ***** Grid generation has been completed
  Updating Jacobian, npe =                      1
 iter=    0 fnrm=      9.216531400941010     nfe=      1
  Updating Jacobian, npe =                      2
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x114b09010)
  * frame #0: 0x000000010d84a7ba uedgeC.so`daxpy_ + 298
    frame #1: 0x000000010d84b6dc uedgeC.so`dgbfa_ + 1036
    frame #2: 0x000000010d84cd3f uedgeC.so`dgbco_ + 383
    frame #3: 0x000000010d6812e3 uedgeC.so`jac_lu_decomp_ + 467
    frame #4: 0x000000010d707909 uedgeC.so`psetnk_ + 2153
    frame #5: 0x000000010d8219ec uedgeC.so`model_ + 1580
    frame #6: 0x000000010d822c9b uedgeC.so`nksol_ + 4091
    frame #7: 0x000000010d75107c uedgeC.so`uedriv_ + 13932
    frame #8: 0x000000010d742685 uedgeC.so`exmain_ + 405
    frame #9: 0x000000010d57d437 uedgeC.so`bbb_exmain + 55
    frame #10: 0x0000000100155f66 Python`PyEval_EvalFrameEx + 19200
    frame #11: 0x000000010015124c Python`PyEval_EvalCodeEx + 1540
    frame #12: 0x0000000100150c42 Python`PyEval_EvalCode + 32
    frame #13: 0x0000000100172633 Python`run_mod + 49
    frame #14: 0x00000001001726da Python`PyRun_FileExFlags + 130
    frame #15: 0x0000000100172259 Python`PyRun_SimpleFileExFlags + 719
    frame #16: 0x0000000100183cd4 Python`Py_Main + 3136
    frame #17: 0x00007fff6d9fa3d5 libdyld.dylib`start + 1
    frame #18: 0x00007fff6d9fa3d5 libdyld.dylib`start + 1

uedgeC.so`daxpy_:
->  0x10d84a7ba <+298>: movsd  0x10(%rcx,%rdx), %xmm2    ; xmm2 = mem[0],zero
    0x10d84a7c0 <+304>: movhpd 0x18(%rsi,%rdx), %xmm0    ; xmm0 = xmm0[0],mem[0]
    0x10d84a7c6 <+310>: movsd  (%rcx,%rdx), %xmm3        ; xmm3 = mem[0],zero
    0x10d84a7cb <+315>: mulpd  %xmm1, %xmm0

Process 89747 stopped
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x114b09010)
    frame #0: 0x000000010d84a7ba uedgeC.so`daxpy_ + 298
uedgeC.so`daxpy_:
->  0x10d84a7ba <+298>: movsd  0x10(%rcx,%rdx), %xmm2    ; xmm2 = mem[0],zero
    0x10d84a7c0 <+304>: movhpd 0x18(%rsi,%rdx), %xmm0    ; xmm0 = xmm0[0],mem[0]
    0x10d84a7c6 <+310>: movsd  (%rcx,%rdx), %xmm3        ; xmm3 = mem[0],zero
    0x10d84a7cb <+315>: mulpd  %xmm1, %xmm0
Target 0: (Python) stopped.

pudb output

Segfault happens after line shown in _Forthon.py.

[pudb screenshot]

Reproducibility

Mac

  • macOS 10.14
  • Apple LLVM version 10.0.1 (clang-1001.0.46.4)
  • GNU Fortran (GCC) 6.3.0

MIT Engaging cluster

  • centOS 7
  • GNU Fortran (GCC) 6.2.0

Red Hat virtual machine

  • RHEL 7.6
  • GNU Fortran (GCC) 4.8.5

I tried this because Maxim sees no segfault on Red Hat 7.7 with gfortran 4.8.5. The segfault does appear on the Red Hat VM I tested, so this bug doesn't seem to depend on the compiler version or operating system. Maybe it depends on certain environment variables being set?

Anaconda python

I've been using the regular python with pip install --user, but I also tried Anaconda python and encountered the same issue.

Grid generation error in v8.0 when com.nxxpt>0

The error is not present in v7; in v8.0 the code crashes when trying to generate grids with additional X-points added.

A minimal working (crashing) example is attached; sample output is below. The error is not always reproducible (sometimes the case executes fine), which may point to a memory allocation issue.

In [1]: import griderror

Uedge version 8.0.0, an update is available to 8.0.0.1

# UEDGE version: 8.0.0
***** Grid generation has been completed
smoothx: no intersec'n of j= 31 with str line for iy= 1
zsh: segmentation fault ipython

griderror.tar.gz

Any interest in adding the OMP/MPI features?


Manual install fails when source path contains space

Installing uedge with python setup.py install fails when the uedge source directory is located in a path where one or more of the directories contain a space. Tested on macOS and Linux. In the generated build/Makefile.com, the full path of uedge/com is not escaped properly.

This is the output of python setup.py install:

running install
running build
(export PYTHONPATH=.:./pyscripts; python2 convertor)
directory list =['com', 'aph', 'api', 'bbb', 'dce', 'flx', 'grd', 'svr', 'wdf', 'pfb', 'idf', 'psi', 'std', 'dst']
Output directory already exists; proceeding
Entering directory com
processing for file type M2F
processing directorycomto directory./com
converting file com/comutil.m to ./com/comutil.F
converting file com/brent.m to ./com/brent.F
converting file com/blasext.m to ./com/blasext.F
converting file com/mnbrak.m to ./com/mnbrak.F
converting file com/misc.m to ./com/misc.F
Entering directory aph
processing for file type M2F
processing directoryaphto directory./aph
converting file aph/aphrates.m to ./aph/aphrates.F
converting file aph/aphread.m to ./aph/aphread.F
Entering directory api
processing for file type M2F
processing directoryapito directory./api
converting file api/apisorc.m to ./api/apisorc.F
converting file api/apip93.m to ./api/apip93.F
converting file api/fmombal.m to ./api/fmombal.F
converting file api/fimp.m to ./api/fimp.F
converting file api/inelrates.m to ./api/inelrates.F
converting file api/sputt.m to ./api/sputt.F
converting file api/apifcn.m to ./api/apifcn.F
Entering directory bbb
processing for file type M2F
processing directorybbbto directory./bbb
converting file bbb/griddubl.m to ./bbb/griddubl.F
converting file bbb/convert.m to ./bbb/convert.F
converting file bbb/odesolve.m to ./bbb/odesolve.F
converting file bbb/geometry.m to ./bbb/geometry.F
converting file bbb/odesetup.m to ./bbb/odesetup.F
converting file bbb/ext_neutrals.m to ./bbb/ext_neutrals.F
converting file bbb/potencur.m to ./bbb/potencur.F
converting file bbb/boundary.m to ./bbb/boundary.F
converting file bbb/s.m to ./bbb/s.F
converting file bbb/oderhs.m to ./bbb/oderhs.F
converting file bbb/turbulence.m to ./bbb/turbulence.F
Entering directory dce
processing for file type M2F
processing directorydceto directory./dce
Entering directory flx
processing for file type M2F
processing directoryflxto directory./flx
converting file flx/flxwrit.m to ./flx/flxwrit.F
converting file flx/flxcomp.m to ./flx/flxcomp.F
converting file flx/flxdriv.m to ./flx/flxdriv.F
converting file flx/dvshdf5.m to ./flx/dvshdf5.F
converting file flx/flxread.m to ./flx/flxread.F
Entering directory grd
processing for file type M2F
processing directorygrdto directory./grd
converting file grd/grdwrit.m to ./grd/grdwrit.F
converting file grd/grdcomp.m to ./grd/grdcomp.F
converting file grd/grddriv.m to ./grd/grddriv.F
converting file grd/grdinit.m to ./grd/grdinit.F
converting file grd/grdread.m to ./grd/grdread.F
Entering directory svr
processing for file type M2F
processing directorysvrto directory./svr
converting file svr/svrut4.m to ./svr/svrut4.F
converting file svr/svrut3.m to ./svr/svrut3.F
converting file svr/uoa.m to ./svr/uoa.F
converting file svr/daspk.m to ./svr/daspk.F
converting file svr/svrut2.m to ./svr/svrut2.F
converting file svr/nksol.m to ./svr/nksol.F
converting file svr/vodpk.m to ./svr/vodpk.F
converting file svr/svrut1.m to ./svr/svrut1.F
Entering directory wdf
processing for file type M2F
processing directorywdfto directory./wdf
converting file wdf/wdf.m to ./wdf/wdf.F
Entering directory pfb
processing for file type M2F
processing directorypfbto directory./pfb
Entering directory idf
processing for file type M2F
processing directoryidfto directory./idf
Entering directory psi
processing for file type M2F
processing directorypsito directory./psi
Entering directory std
processing for file type M2F
processing directorystdto directory./std
Entering directory dst
processing for file type M2F
processing directorydstto directory./dst
(cd com;Forthon --builddir ../build -a  -v --fargs "-Ofast -DFORTHON -cpp -finit-local-zero"  -f blasext.F com brent.F comutil.F misc.F mnbrak.F dsum.f dummy_py.f error.f getmsg.f ssum.f )
Building package com
make[1]: *** No rule to make target `/Users/sean/Desktop/test', needed by `compymodule.c'.  Stop.
make: *** [compy.so] Error 2
running build_py
creating build/lib.macosx-10.15-x86_64-2.7
creating build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/convert1.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uefacets.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/ueplotdata.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uewall.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uedgeutils.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/testpfb.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uedgeplots.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uereadgrid.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/rdcontdt.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/ueplot.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/convert.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/rundt.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/pdb_restore.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/pdb_save.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/__init__.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uefcifc.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/__version__.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/hdf5.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/pdb2h5.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/bas2py_rules.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/convertvsh5.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uexec.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/sources.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uegenlineout.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/ueparallel.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/PRpyt.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/cdf4.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/double.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/plotEdgeStd.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/PWpyt.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/parallel.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/rdinitdt.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/ruthere.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uedge_lists.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/filelists.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uedge.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/ue2vsh5.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
creating build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/create_database.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/reconv.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/ue_plot.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/read_gridue.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/__init__.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/ue_animate.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/utils.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/conv_step.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
running build_ext
building 'uedge.uedgeC' extension
creating build/temp.macosx-10.15-x86_64-2.7
creating build/temp.macosx-10.15-x86_64-2.7/build
creating build/temp.macosx-10.15-x86_64-2.7/com
clang -fno-strict-aliasing -fno-common -dynamic -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DWITH_NUMERIC=0 -DFORTHON_PKGNAME="uedgeC" -Ibuild -I/usr/local/lib/python2.7/site-packages/numpy/core/include -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/usr/local/Cellar/python@2/2.7.17/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c uedgeC_Forthon.c -o build/temp.macosx-10.15-x86_64-2.7/uedgeC_Forthon.o
clang -fno-strict-aliasing -fno-common -dynamic -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DWITH_NUMERIC=0 -DFORTHON_PKGNAME="uedgeC" -Ibuild -I/usr/local/lib/python2.7/site-packages/numpy/core/include -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/usr/local/Cellar/python@2/2.7.17/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c build/Forthon.c -o build/temp.macosx-10.15-x86_64-2.7/build/Forthon.o
clang: error: no such file or directory: 'build/Forthon.c'
clang: error: no input files
error: command 'clang' failed with exit status 1
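
A possible direction for a fix, sketched under the assumption that the build scripts compose the Makefile in Python. Note that make needs different escaping in prerequisite lists (backslash-escaped spaces) than in recipe shell lines (shell quoting); neither call appears in the UEDGE build today, so this only illustrates the distinction:

import shlex

src = "/Users/sean/Desktop/test dir/uedge/com"   # hypothetical path with a space

prereq = src.replace(" ", "\\ ")   # escaping for target/prerequisite lists
recipe = shlex.quote(src)          # escaping for shell commands in recipes

print(f"compymodule.c: {prereq}")
print(f"\tcp {recipe}/compymodule.c .")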

Possible typo in grid interpolation

There is a suspicious line in the polintp poloidal interpolation function:
https://github.com/LLNL/UEDGE/blob/master/bbb/griddubl.m#L1169

        do 30 ixo = ixos, ixof
            if(xo(ixo,iy).gt.xn(ix,iy) .or. ixo.eq.ixof) goto 25
            ixl = ixo
  20    continue
  25    continue

The 30 label is shared with an outer loop, and the 20 label is not used in this function.

The corresponding lines in the radintp radial interpolation function
https://github.com/LLNL/UEDGE/blob/master/bbb/griddubl.m#L1108
are:

            do 20 iyo = iyos, iyof
               if(yo(ix,iyo).gt.yn(ix,iy) .or. iyo.eq.iyof) goto 25
               iyl = iyo
  20        continue
  25        continue

I suspect that the radintp version is correct, and that the polintp do loop should either use label 20 rather than 30 or be rewritten as:

do ixo = ixos, ixof
   if(xo(ixo,iy).gt.xn(ix,iy) .or. ixo.eq.ixof) exit
   ixl = ixo
end do

Note: I think this bug, if it is a bug, causes extra work to be done but doesn't affect the result: the interpolation is performed for every value of ixo (rather than just one), but the last result computed uses the correct value of ixl.

bbb.uedge_ver out-of-sync with uedge.__version__

Describe the bug
bbb.uedge_ver returns a different version than the current (actual) version returned by uedge.__version__:
https://github.com/LLNL/UEDGE/blob/e98e807ede1062ed83788a74ca6a5c9c7c31c0aa/bbb/bbb.v#L3861C1-L3861C1

To Reproduce
import uedge
uedge.bbb.uedge_ver
uedge.__version__

Expected behavior
Both return the same version of the code

Additional context
Is there a way to ensure that uedge_ver stays up-to-date with uedge.__version__ without updating the source code at every new version release?
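
Absent a build-time fix, a hedged runtime check (using only the attributes named in this issue) could at least make any drift visible:

import warnings

import uedge
from uedge import bbb

fortran_ver = str(bbb.uedge_ver).strip()
if fortran_ver != uedge.__version__:
    warnings.warn(f"bbb.uedge_ver ({fortran_ver!r}) does not match "
                  f"uedge.__version__ ({uedge.__version__!r})")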

Tricky recursive glitch in sfsetnk

The routine sfsetnk calls jac_calc (the routine that constructs the Jacobian) with the argument sf, which is also the dummy output argument of sfsetnk used to assign sfscal in the calls to sfsetnk (lines 266, 289, 297 in odesolve.m).

The argument sf in the call to jac_calc, however, corresponds to the dummy argument wk, which is a work array in jac_calc and is thus modified there. Meanwhile, the value of sfscal is also used in jac_calc to build the Jacobian, leading to a recursive assignment of sfscal during the call. A numerical glitch may thus appear depending on the compiler and the compilation options (e.g. -fno-automatic).

A simple fix is to define a temporary work array in sfsetnk to be used when calling jac_calc:

real wk(neq)
...
wk(1:neq) = 0  ! just as a precaution
call jac_calc (neq, tp, yl, yldot0, lbw, ubw, wk,
     .         nnzmx, jac, jacj, jaci)

bbb.iterm = 7?

I am currently running multiple UEDGE simulations in Colab with isbohmcalc = 0, i.e. spatially varying transport coefficients. Here is the pseudocode:

from uedge import *
...
for i in range(n):
    # update some parameters in the bbb package
    bbb.exmain()
    # save some measurements

Recently, I have been running into the following issue with some simulations:

nksol ---  iterm = 7.
            there was a breakdown in the krylov
            iteration.  this will likely only occur when
            the jacobian matrix j or j*(p-inverse) is ill-
            conditioned.  if this error return occurs with
            mf=2, try either mf=1 or mf=3 instead.

The current simulation ends, but the issue prevents any further simulations from running successfully: the code essentially stalls during the next simulation, and the for loop never terminates. I was wondering whether there is any way to prevent this issue from occurring, or to keep UEDGE from stalling so that it can move on to the next simulation.

Also, the error message seems to be outdated, as the only valid options for bbb.mf are 21, 22, 24, and 25.
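
One hedged way to keep the loop moving is to run each exmain in a child process with a wall-clock budget and kill it if it stalls; the helper below is illustrative, not UEDGE API, and parameters would need to be passed into the child as well:

import multiprocessing as mp

def _run(queue):
    from uedge import bbb
    bbb.exmain()
    queue.put(int(bbb.iterm))

def exmain_with_timeout(seconds):
    ctx = mp.get_context("spawn")
    q = ctx.Queue()
    p = ctx.Process(target=_run, args=(q,))
    p.start()
    p.join(seconds)
    if p.is_alive():          # solver stalled: terminate and report failure
        p.terminate()
        p.join()
        return None
    return q.get()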

compile a version of UEDGE with OMFIT support on GA IRIS cluster

@llnl-fesp @umansky I would like to compile a version of UEDGE with OMFIT support (e4c491f) on the GA IRIS cluster. I was given access to the /fusion/projects/codes/uedge folder on IRIS, but I wanted to coordinate with you before making a new directory and starting to maintain a separate version of UEDGE just for use via OMFIT.

Since @llnl-fesp was very prompt in merging e4c491f into master, I wonder if you could just update UEDGE on IRIS to the latest version :)

Please let me know how you'd like to proceed. Thanks!

Segmentation fault in UEDGE with python3 on macOS

On macOS Catalina 10.15.2 with Python 3.7.6 and Forthon 0.8.41.

Set of instructions leading to the segmentation fault (reproducible):

  • git clone https://github.com/LLNL/UEDGE.git
  • cd UEDGE
  • python setup.py clean
  • python setup.py build
  • python setup.py install
  • cd pyexamples/box2
  • add the following function to runcase.py (since execfile was removed in python 3):
def execfile(filepath, globals=None, locals=None):
    if globals is None:
        globals = {}
    globals.update({
        "__file__": filepath,
        "__name__": "__main__",
    })
    with open(filepath, 'rb') as file:
        exec(compile(file.read(), filepath, 'exec'), globals, locals)

  • python runcase.py

output:

[screenshot]

Result of backtrace with lldb:

[screenshot]

Possible Glitch oderhs.m

Line 7934 in oderhs.m:
if(isngonxy(ix,iy,ifld) == 1) then
should read
if(isngonxy(ix,iy,igsp) == 1) then
?

Built UEDGE version segfault on OPEN statements

On Singe, the Anaconda 3 installation includes libgfortran.so.4. The problem occurs when another compiler is selected that uses libgfortran.so.5. Here is the output of ldd on the flx package shared object:

└──$ ldd flxpy.cpython-37m-x86_64-linux-gnu.so
linux-vdso.so.1 => (0x00007ffdd9134000)
libgfortran.so.4 => /usr/local/anaconda/anaconda3/lib/libgfortran.so.4 (0x00007ff294e79000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ff294bae000)
libc.so.6 => /lib64/libc.so.6 (0x00007ff2947e0000)
libquadmath.so.0 => /usr/local/anaconda/anaconda3/lib/libquadmath.so.0 (0x00007ff294e13000)
libm.so.6 => /lib64/libm.so.6 (0x00007ff2944de000)
libgcc_s.so.1 => /usr/local/anaconda/anaconda3/lib/libgcc_s.so.1 (0x00007ff294dff000)
/lib64/ld-linux-x86-64.so.2 (0x00007ff294dca000)

Note that libgfortran.so.4 is used in spite of a version 5 compiler. This is not compatible and can easily cause a crash. I am working on a solution; for now the workaround is to compile with gfortran version 7 or older.

Originally posted by @llnl-fesp in #25 (comment)

Bug in oderhs.m gas thermal conduction calculation for running with diffusive neutrals + impurities

Describe the bug

In oderhs.m, lines 3102-3147, the code that calculates the gas thermal conduction coefficients (populating hcxg/hcyg, apparently an approximation for molecules) uses terms hard-coded to fluid species ifld=2 (e.g. ni(ix,iy,2) for the variables naavex/y), intended to represent neutral atoms (e.g. lines 3118 and 3123).

This is correct when the inertial neutrals model is active, but incorrect for diffusive neutrals (where atoms do not populate the ni variable). This has been accounted for in the code by the condition "if (nisp >= 2)" on line 3102.

However, this fix only works when there are no impurity species (or additional plasma species in general). For example, when running a deuterium plasma with diffusive neutrals + a Ne impurity, ni(ix,iy,2) does not correspond to neutral atoms but to the first neon charge state. The condition nisp >= 2 is nevertheless satisfied, so the gas thermal conduction code runs and populates hcxg/hcyg using the neon density. The resulting error is mostly small (not approaching the MW scale), but it is hard to say whether it always will be.

To Reproduce
Steps to reproduce the behavior:

  1. Run a case with multiple ion fluid species and diffusive neutrals
  2. Check hcxg/hcyg to show they are non-zero

Suggested fix
Change line 3102 from "if (nisp >= 2)" to "if (isupgon(1) .eq. 1)", i.e. run this block only when the inertial neutral model is active, rather than whenever two or more ion species are present.

Infinite loop in nksol.m subroutine model

I've been encountering a problem where certain cases "hang". The text output stops, the code continues running, but nothing else happens (for example, the intermediate hdf5 savefiles do not get updated). Here is an example of the output (I'm using rundt.py):

$ python2 runcase.py
 Read file "gridue  " with runid:  FREEGS     09/01/2020        # 0  0ms

File attributes:
('     written on: ', 'Mon Apr 13 17:51:10 2020')
('        by code: ', 'UEDGE')
('    physics tag: ', array(['$Name:  $'], dtype='|S80'))
 UEDGE $Name:  $
*** For isimpon=2, set afracs, not afrac ***
 Read file "gridue  " with runid:  FREEGS     09/01/2020        # 0  0ms

  Updating Jacobian, npe =                      1
 iter=    0 fnrm=     0.2571840422417168     nfe=      1


 nksol ---  iterm = 1.
            maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
            the maximum norm function.  u is probably an
            approximate root of f.
*---------------------------------------------------------*
Need to take initial step with Jacobian; trying to do here
*---------------------------------------------------------*
*** For isimpon=2, set afracs, not afrac ***
 Read file "gridue  " with runid:  FREEGS     09/01/2020        # 0  0ms

  Updating Jacobian, npe =                      1
 iter=    0 fnrm=     0.2571840422417166     nfe=      1


 nksol ---  iterm = 1.
            maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
            the maximum norm function.  u is probably an
            approximate root of f.
initial fnrm =2.5718E-01
--------------------------------------------------------------------
--------------------------------------------------------------------

*** Number time-step changes = 1 New time-step = 1.0000E-10
rundt time elapsed: 0:00:08
--------------------------------------------------------------------
*** For isimpon=2, set afracs, not afrac ***
 iter=    0 fnrm=     0.2571840422417166     nfe=      1
(stays like this forever)
^Z

It is also possible to get the code to continue by compiling without the -Ofast flag in Makefile.Forthon. In this case, you start seeing fnrm= nan but the output continues until dtreal goes below dtkill and the code quits by itself.

I examined the case with -Ofast which hangs. I found that the pandf subroutine was being called many times. By adding some print statements and going up the call stack, I found this infinite loop inside subroutine model in nksol.m:

 10   continue
      ipcur = 0
      write(STDOUT,*) 'sbb if pthrsh=',pthrsh,'gt onept5=',onept5
      write(STDOUT,*) '       and ipflg=',ipflg,'ne 0'
      if ( (pthrsh .gt. onept5) .and. (ipflg .ne. 0) ) then
        ier = 0
        write(STDOUT,*) 'sbb call psetnk'
        call pset (n, u, savf, su, sf, x, f, wm(locwmp), iwm(locimp),
     *             ier)
        npe = npe + 1
        ipcur = 1
        nnipset = nni
        if (ier .ne. 0) then
          iersl = 8
          return
          endif
        endif
c-----------------------------------------------------------------------
c     load x with -f(u).
c-----------------------------------------------------------------------
      do 100 i = 1,n
 100    x(i) = -savf(i)
c-----------------------------------------------------------------------
c     call solpk to solve j*x = -f using the appropriate krylov
c     algorithm.
c-----------------------------------------------------------------------
      call solpk (n,wm,lenwm,iwm,leniwm,u,savf,x,su,sf,f,jac,psol)
      write(STDOUT,*) 'sbb after solpk iersl=', iersl,'ipflg=',ipflg,
     * 'ipcur=',ipcur 
      if (iersl .lt. 0) then
c nonrecoverable error from psol.  set iersl and return.
         iersl = 9
         return
         endif
      if ( (iersl .gt. 0) .and. (ipflg .ne. 0) ) then
        if (ipcur .eq. 0) go to 10
        endif

I also put print statements around the pandf1 calls in oderhs.m. The pandf1 calls tagged with line numbers (offset by the extra lines I inserted into the file) correspond to the pandf1 calls in the jac_calc subroutine.

c ... Beginning of execution for call rhsdpk (by daspk), check constraints
      entry rhsdpk (neq, t, yl, yldot, ifail)
      
      if (icflag .gt. 0 .and. t .gt. 0.) then     
         if (icflag .eq. 2) rlxl = rlx
         do 6 i = 1, neq     
            ylchng(i) = yl(i) - ylprevc(i)
 6       continue
         call cnstrt (neq,ylprevc,ylchng,icnstr,tau,rlxl,ifail,ivar) 
         if (ifail .ne. 0) then
            call remark ('***Constraint failure in DASPK, dt reduced***')
            write (*,*) 'variable index = ',ivar,'   time = ',t
            goto 20
         endif
      else
         ifail = 0
      endif
      call scopy (neq, yl, 1, ylprevc, 1)  #put yl into ylprevc 

 8    tloc = t
      write(STDOUT,*) 'sbb pandf1 -1 -1 goto'
      go to 10

c ... Beginning of execution for call rhsnk (by nksol).
      entry rhsnk (neq, yl, yldot)
      tloc = 0.

c ... Calculate right-hand sides for interior and boundary points.
ccc 10   call convsr_vo (-1,-1, yl)  # test new convsr placement
ccc      call convsr_aux (-1,-1, yl) # test new convsr placement
      write(STDOUT,*) 'sbb pandf1 -1 -1 sequential'
 10   call pandf1 (-1, -1, 0, neq, tloc, yl, yldot)

 20   continue
      return
      end

This produces the following output:

$ python2 runcase.py
 Read file "gridue  " with runid:  FREEGS     09/01/2020        # 0  0ms                       

File attributes:
('     written on: ', 'Mon Apr 13 17:51:10 2020')
('        by code: ', 'UEDGE')
('    physics tag: ', array(['$Name:  $'], dtype='|S80'))
 UEDGE $Name:  $                                                                       
*** For isimpon=2, set afracs, not afrac ***
 Read file "gridue  " with runid:  FREEGS     09/01/2020        # 0  0ms                       

 sbb pandf1 -1 -1 sequential
  Updating Jacobian, npe =                      1
 sbb pandf1 8449
 sbb pandf1 8518
 sbb pandf1 8449
 sbb pandf1 8518
 sbb pandf1 8449
 sbb pandf1 8518
(more pandf1 messages...)
 sbb pandf1 8449
 sbb pandf1 8518
 sbb pandf1 8524
 sbb pandf1 -1 -1 sequential
 sbb nksol
 sbb icntnu=                    0
 sbb pandf1 -1 -1 sequential
 iter=    0 fnrm=     0.2571840410398497     nfe=      1


 nksol ---  iterm = 1.
            maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
            the maximum norm function.  u is probably an
            approximate root of f.
 sbb ffun
 sbb pandf1 -1 -1 goto
*---------------------------------------------------------*
Need to take initial step with Jacobian; trying to do here
*---------------------------------------------------------*
*** For isimpon=2, set afracs, not afrac ***
 Read file "gridue  " with runid:  FREEGS     09/01/2020        # 0  0ms                       

 sbb pandf1 -1 -1 sequential
  Updating Jacobian, npe =                      1
 sbb pandf1 8449
 sbb pandf1 8518
 sbb pandf1 8449
 sbb pandf1 8518
 sbb pandf1 8449
 sbb pandf1 8518
(more pandf1 messages...)
 sbb pandf1 8518
 sbb pandf1 8449
 sbb pandf1 8518
 sbb pandf1 8449
 sbb pandf1 8518
 sbb pandf1 8449
 sbb pandf1 8518
 sbb pandf1 8524
 sbb pandf1 -1 -1 sequential
 sbb nksol
 sbb icntnu=                    0
 sbb pandf1 -1 -1 sequential
 iter=    0 fnrm=     0.2571840410398498     nfe=      1


 nksol ---  iterm = 1.
            maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
            the maximum norm function.  u is probably an
            approximate root of f.
 sbb ffun
 sbb pandf1 -1 -1 goto
initial fnrm =2.5718E-01
--------------------------------------------------------------------
--------------------------------------------------------------------
 
*** Number time-step changes = 1 New time-step = 1.0000E-10
rundt time elapsed: 0:00:10
--------------------------------------------------------------------
*** For isimpon=2, set afracs, not afrac ***
 sbb pandf1 -1 -1 sequential
 sbb nksol
 sbb icntnu=                    1
 sbb pandf1 -1 -1 sequential
 iter=    0 fnrm=     0.2571840410398498     nfe=      1
 sbb call model                     1
 sbb model
 sbb if pthrsh=   0.0000000000000000      gt onept5=   1.5000000000000000     
        and ipflg=                    1 ne 0
 sbb solpk
 sbb spimgr
 sbb atv
 sbb call f
 sbb pandf1 -1 -1 sequential
 sbb after solpk iersl=                    1 ipflg=                    1 ipcur=                    0
 sbb if pthrsh=   0.0000000000000000      gt onept5=   1.5000000000000000     
        and ipflg=                    1 ne 0
 sbb solpk
 sbb spimgr
 sbb atv
 sbb call f
 sbb pandf1 -1 -1 sequential
 sbb after solpk iersl=                    1 ipflg=                    1 ipcur=                    0
 sbb if pthrsh=   0.0000000000000000      gt onept5=   1.5000000000000000     
        and ipflg=                    1 ne 0

(and so on until ctrl-Z)

Because pthrsh is always 0, ipcur never gets set to anything other than 0, which is required in order to stop looping.

pthrsh is set to 0 if icntnu != 0. icntnu indicates if this is a continuation call to nksol that makes use of old values.
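
To make the failure mode concrete, here is a toy Python model of the quoted control flow (not UEDGE code): with pthrsh stuck at 0, the preconditioner branch that sets ipcur = 1 is never taken, and a persistent solpk failure loops forever.

pthrsh, onept5, ipflg = 0.0, 1.5, 1

def solpk():
    return 1                  # stand-in for a Krylov breakdown: iersl > 0

iterations = 0
while True:                   # corresponds to label 10
    ipcur = 0
    if pthrsh > onept5 and ipflg != 0:
        ipcur = 1             # pset would run here; unreachable when pthrsh == 0
    iersl = solpk()
    if iersl > 0 and ipflg != 0 and ipcur == 0:
        iterations += 1
        if iterations > 5:    # cap the demo instead of hanging
            print("would loop forever")
            break
        continue              # corresponds to `go to 10`
    break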

Another interesting thing is that the arguments supplied to pandf1 change around the time of the hang:

 pandf1 xc=49  yc=33
 pandf1 xc=49  yc=33
 pandf1 xc=49  yc=33
 pandf1 xc=49  yc=33
 pandf1 xc=49  yc=33
 pandf1 xc=49  yc=33
 pandf1 xc=49  yc=33
 pandf1 xc=-1  yc=-1
 pandf1 xc=-1  yc=-1
 iter=    0 fnrm=     0.2571840410398498     nfe=      1


 nksol ---  iterm = 1.
            maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
            the maximum norm function.  u is probably an
            approximate root of f.
 pandf1 xc=-1  yc=-1
 pandf1 xc=-1  yc=-1
initial fnrm =2.5718E-01
--------------------------------------------------------------------
--------------------------------------------------------------------
 
*** Number time-step changes = 1 New time-step = 1.0000E-10
rundt time elapsed: 0:00:09
--------------------------------------------------------------------
*** For isimpon=2, set afracs, not afrac ***
 pandf1 xc=-1  yc=-1
 pandf1 xc=-1  yc=-1
 iter=    0 fnrm=     0.2571840410398498     nfe=      1
 pandf1 xc=-1  yc=-1
 pandf1 xc=-1  yc=-1
 pandf1 xc=-1  yc=-1
 pandf1 xc=-1  yc=-1
 pandf1 xc=-1  yc=-1
 pandf1 xc=-1  yc=-1
 pandf1 xc=-1  yc=-1
 pandf1 xc=-1  yc=-1
 pandf1 xc=-1  yc=-1
(and so on until ctrl-Z)

-1 is not an invalid argument; it apparently means "full RHS evaluation" rather than "poloidal/radial index of the perturbed variable for the Jacobian calculation".

I also tested a case which does not hang (even with -Ofast) and found that it too has a long stretch of pandf(-1, -1, ...) calls, in which the following pattern repeats many times before the code eventually moves on:

 sbb atv
 sbb call f
 sbb pandf1 -1 -1 sequential
 sbb pandf1 xc=                   -1  yc=                   -1

From the absence of certain print statements (compare to end of the 4th code block in this post), we can see that these calls are being made from a different loop, supporting the claim that the loop identified above is the one that needs fixing.

I also ran the hanging case for several minutes to confirm that it was entirely pandf(-1, -1, ...) calls and never started doing anything else.

I also checked out Jerome Guterl's pandf issue, but that one seems to be a problem inside pandf, whereas ours is outside it.

Jupyter notebook iphibcc = 0

I am a newbie trying out the example jupyter notebook in Google Colab (pip install forthon mppl uedge directly in the notebook). I ran into an issue right away: under "Run a Case" I get the following error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-27-3253ef9ba79b> in <module>()
      1 from uedge import *
----> 2 bbb.exmain()   #
      3 print("Usual iteration output should be in terminal window running Jupyter Notebook")
      4 print("nksol ---  iterm = ",bbb.iterm)

RuntimeError: **INPUT ERROR: only iphibcc=1,2,3 available

From the boundary.m code, this seems to be true: iphibcc==0 is not allowed. I changed it by hand:

bbb.iphibcc = 1

And it works fine. I am just wondering whether I have something wrong with my installation, or is this simply an outdated input file?

Segfault for icntnunk=1 when case is converged

Describe the bug
When a case is converged (fnrm < ftol) and icntnunk is set to 1, the following exmain call causes a segfault

To Reproduce

  1. Navigate to a case that is converged
  2. Ensure the case is well converged by executing exmain until fnrm<ftol
  3. Set bbb.icntnunk=1
  4. Call exmain
  5. Segfault occurs

For the Slab_geometry test case in the pytests, the following commands will reproduce the bug:
from rd_slabH_in_w_h5 import *;bbb.exmain();bbb.exmain();bbb.icntnunk=1;bbb.exmain()

Expected behavior
Execute exmain, print the initial fnrm, set iterm=1, and return to the prompt

Additional context
This has been encountered by several users at various points of using the code, most commonly in time-dependent runs where icntnunk is actively switched on and off. The bug does not appear to occur in the BASIS versions.

Inconsistency of thermal force coefficient alpha_e implementation in code vs equation in manual + references

Describe the issue
Lines 5341-5344 in oderhs.m perform the calculation of the alpha_e thermal force coefficients with impurity species.

In the code, all terms in this equation use zi or zeffv:

alfe(ifld) = cfgte*2.2*zi(ifld)**2*(1+.52*zeffv) /
     .                      ( (1+2.65*zeffv)*(1+.285*zeffv)*zeffv )

But in the UEDGE manual (see Appendix A), the equation is given differently as:

alfe(ifld) = cfgte*2.2*zi(ifld)**2*(1+.52*z0) /
     .                      ( (1+2.65*zeffv)*(1+.285*zeffv)*zeffv )

using a z0 term in the numerator, where z0 = ne*zeff/nh − 1.
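
For a side-by-side comparison, the two forms transcribed as equations (notation as in the code above):

% Code (oderhs.m, lines 5341-5344):
\alpha_e = c_{\mathrm{fgte}} \, \frac{2.2\,Z_i^2\,(1 + 0.52\,Z_{\mathrm{eff}})}{(1 + 2.65\,Z_{\mathrm{eff}})(1 + 0.285\,Z_{\mathrm{eff}})\,Z_{\mathrm{eff}}}

% Manual (Appendix A):
\alpha_e = c_{\mathrm{fgte}} \, \frac{2.2\,Z_i^2\,(1 + 0.52\,z_0)}{(1 + 2.65\,Z_{\mathrm{eff}})(1 + 0.285\,Z_{\mathrm{eff}})\,Z_{\mathrm{eff}}}, \qquad z_0 = \frac{n_e Z_{\mathrm{eff}}}{n_h} - 1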

Both also appear inconsistent with the two references cited in the manual for these coefficients. E.g. in Igitkhanov, all terms in alpha_e use zeff (including where the zi term is used above).
I have no idea what the correct form should be, but this looks like an inconsistency that warrants some attention.

Improper resetting of pandf during jacobian calculation

There are two variables in the pandf subroutine which are not reinitialized between calls:

  • the variable resmo, in the following block, for neutral species (zi(ifld)=0). I suggest adding an "else" branch that sets resmo to zero when zi=0 (see the cJG: suggested correction below). The momentum residual for the gas species is otherwise wrong when nusp>1, which is the case when nhsp>1.

      do 99 iy = j2, j5
         do 98 ix = i2, i5
            ix2 = ixp1(ix,iy)
            if (zi(ifld) .ne. 0) then  # additions only for charged ions
               dp1 =  cngmom(ifld)*(1/fac2sp)*
                        ( ng(ix2,iy,1)*tg(ix2,iy,1)-
                          ng(ix ,iy,1)*tg(ix ,iy,1) )
               resmo(ix,iy,ifld) = 0.
               resmo(ix,iy,ifld) =
                    smoc(ix,iy,ifld)
                  + smov(ix,iy,ifld) * up(ix,iy,ifld)
                  - cfneut * cfneutsor_mi * sx(ix,iy) * rrv(ix,iy) * dp1
                  - cfneut * cfneutsor_mi * cmwall(ifld)*0.5*(ng(ix,iy,1)+ng(ix2,iy,1))
                        * mi(ifld)*up(ix,iy,ifld)*0.5
                        *(nucx(ix,iy,1)+nucx(ix2,iy,1))*volv(ix,iy)
                  + cmneut * cmneutsor_mi * uesor_up(ix,iy,ifld)
                  + cfmsor*(msor(ix,iy,ifld) + msorxr(ix,iy,ifld)) #### IJ 2017: needs *cfneut for multi-charge state ions & MC neutrals?
                  + volmsor(ix,iy,ifld)
                  + cfvisxneov*visvol_v(ix,iy,ifld)
                  + cfvisxneoq*visvol_q(ix,iy,ifld)
    c  Add drag with cold, stationary impurity neutrals
               if (ifld > nhsp) then
                 resmo(ix,iy,ifld) = resmo(ix,iy,ifld) - cfupimpg*
                      0.25*mi(ifld)*(ni(ix,iy,ifld)+ni(ix2,iy,ifld))*
                       ( nucxi(ix,iy,ifld)+nucxi(ix2,iy,ifld)+
                        nueli(ix,iy,ifld)+nueli(ix2,iy,ifld) )*
                         up(ix,iy,ifld)*volv(ix,iy)
                endif
     cJG: suggest correction
            else
               resmo(ix,iy,ifld) = 0
            endif
    
  • the electron power background variable pwrebkg is not reset properly when xinc!=0 or yinc!=0. That should not be a problem in practice, since it only serves to prevent small values of pwrebkg, but it should be fixed for the purpose of code validation.

Missing python files in latest version

The latest version seems to be missing some python files. They are listed in the MANIFEST, and other files try to import them, but can't. This breaks many scripts, such as those which try to save or load HDF5 files.
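
A hedged sketch for confirming which modules are actually missing from an installed package; the names below are examples drawn from pyscripts/, not a definitive list:

import importlib

expected = ["hdf5", "rundt", "uedgeplots", "uedge_lists"]   # illustrative names
for name in expected:
    try:
        importlib.import_module(f"uedge.{name}")
    except ImportError as err:
        print(f"missing: uedge.{name} ({err})")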
