llnl / uedge
2D fluid simulation of plasma and neutrals in magnetic fusion devices
License: GNU Lesser General Public License v2.1
Is your feature request related to a problem? Please describe.
When drifts are turned on and the ion grad-B drift is into the divertor, there is a strong ExB-driven plasma influx on the outer wall near the outer target plate. The net effect of this influx is to increase the recycled neutrals in the inner divertor.
This ExB-driven influx appears to be quite substantial in the KSTAR case I have been running. I have not tested reversed drifts, so I am not sure whether the effect is similar to the forward-B case. The effect does not seem to be large in any of the DIII-D cases I have run, but that remains to be confirmed.
Describe the solution you'd like
One possible solution could be to introduce a flux limiter to limit the plasma influx caused by drifts even though it is somewhat random or artificial.
I am trying to run multiple PyUEDGE simulations in a loop using the Jupyter notebook. Here is the pseudocode:
from uedge import *
...
for i in range(n):
    # update some parameters in the bbb package
    bbb.exmain()
    # save some measurements
I am noticing some very interesting results. Once a certain threshold (on the order of 10) for n is exceeded, the majority of the later simulations do not run to the full 30 iterations, instead converging very quickly with iterm = 1 and very low fnrm values. This wouldn't be an issue in and of itself, but when I run a certain initial configuration in an environment where 100 simulations have already been run in the manner above, the measurements for te, ti, etc. differ from what I get when I run the same configuration in a fresh environment where no prior simulations were run. As a control, when I run the same initial configuration in two separate fresh environments, I get identical measurements, so something is going wrong.
To me, it seems like some backend variables are not being properly reset between the sequential executions of bbb.exmain(). I am new to UEDGE, so any help on this issue would be great.
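One way to narrow this down is to diff the package state between runs. The sketch below is generic, not a UEDGE API: the helpers snapshot/changed and the choice of variable names are hypothetical, and it only assumes that package attributes (e.g. bbb.te) behave like NumPy arrays.

```python
import numpy as np

# Hypothetical helpers: copy selected package arrays before a run and
# report which ones differ afterwards, to spot state leaking between
# sequential exmain() calls.
def snapshot(pkg, names):
    return {n: np.array(getattr(pkg, n), copy=True) for n in names}

def changed(before, pkg):
    return sorted(n for n, v in before.items()
                  if not np.array_equal(v, np.asarray(getattr(pkg, n))))
```

Usage would look like before = snapshot(bbb, ['te', 'ti', 'ni']) ahead of the loop, then printing changed(before, bbb) after each bbb.exmain() to see which arrays carry state across runs.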
The following scripts in pyscripts/ are possibly obsolete, and should be removed from the codebase unless needed by the physics solver:
Can @llnl-fesp confirm whether any of these are required functions or legacy functions?
Here are some reasons I think we would benefit from dropping support for python 2:
Drawbacks:
I think this task would be pretty simple using the 2to3 utility, and I would be happy to make a pull request.
Let me know your thoughts on this, and if there are any drawbacks I'm unaware of.
Describe the bug
Calls to xerrab in the UEDGE source induce segfaults in the Python version.
Expected behavior
Output error message, return user to prompt.
Additional context
Exacerbates minor bugs to Segfaults, see e.g. #48
Some of the parameters defined in the EFIT a- and g-files are re-defined multiple times when reading the files. This causes confusion in case there are issues with any such parameter.
E.g. simagx, rmagx, and zmagx are defined in multiple places in both the a- and g-files:
Line 333 in d4926df
Line 440 in d4926df
Line 441 in d4926df
Line 442 in d4926df
Proposed solution:
Define each variable once while reading the files, and read the other duplicate definitions into dummy variables. Currently the last instance of each variable in the g-file is used, although EFIT files sometimes only define the variables in the first record. I therefore suggest using only the first occurrence of each variable in the g-file, with the remainder read into dummy variables. This also minimizes the dependency on the a-file.
Alternatively, manual definitions could be used, with simple Python scripts to pick out variables such as the magnetic-axis and X-point locations.
Currently, only the strike-point locations seem to be uniquely defined in the a-file, and they are the only values required from it. An internal routine (e.g. in Python) could be implemented to remove the need for an a-file entirely.
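As an illustration of the first-occurrence policy, here is a sketch assuming the conventional geqdsk layout, where the four scalar records after the header repeat quantities like simagx/rmagx/zmagx; read_g_scalars, the KEYS list, and the dummy-slot names are hypothetical, not UEDGE code.

```python
# Read the 20 scalars from the four 5E16.9 records of a g-file, keeping
# only the first occurrence of each quantity; repeats land in dummy slots.
KEYS = ["rdim", "zdim", "rcentr", "rleft", "zmid",
        "rmagx", "zmagx", "simagx", "sibdry", "bcentr",
        "cpasma", "_dup", "_dum", "_dup", "_dum",
        "_dup", "_dum", "_dup", "_dum", "_dum"]

def read_g_scalars(lines):
    vals = []
    for line in lines:  # the four 80-column records after the header line
        vals += [float(line[i:i + 16]) for i in range(0, 80, 16)]
    out = {}
    for key, val in zip(KEYS, vals):
        if not key.startswith("_") and key not in out:
            out[key] = val  # first occurrence wins; duplicates discarded
    return out
```

Since the duplicate slots are simply discarded, a disagreement between the first and repeated values can also be turned into a warning rather than silently picking one.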
On multiple systems, running python runcase.py in pyexamples/d3dHsm produces a segmentation fault when bbb.exmain() is run at this line. Here are some inconclusive results of debugging it, in case they make more sense to someone else.
$ python runcase.py
Forthon edition
UEDGE $Name: V7_08_03 $
Wrote file "gridue" with runid: EFITD 09/07/90 # 66832 ,2384ms
***** Grid generation has been completed
Updating Jacobian, npe = 1
iter= 0 fnrm= 9.216531402973144 nfe= 1
nksol --- iterm = 1.
maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
the maximum norm function. u is probably an
approximate root of f.
Interpolants created; mype = -1
Wrote file "gridue" with runid: EFITD 09/07/90 # 66832 ,2384ms
***** Grid generation has been completed
Updating Jacobian, npe = 1
iter= 0 fnrm= 9.216531400941010 nfe= 1
Updating Jacobian, npe = 2
Fatal Python error: Segmentation fault
Current thread 0x000000010d8f75c0 (most recent call first):
File "runcase.py", line 46 in <module>
[1] 92229 segmentation fault python runcase.py
I modified the script, so ignore the line numbers in the output. The segmentation fault happens at the line linked in the first paragraph.
$ lldb /usr/local/bin/python
(lldb) target create "/usr/local/bin/python"
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/usr/local/Cellar/python@2/2.7.16/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 52, in <module>
import weakref
File "/usr/local/Cellar/python@2/2.7.16/Frameworks/Python.framework/Versions/2.7/lib/python2.7/weakref.py", line 14, in <module>
from _weakref import (
ImportError: cannot import name _remove_dead_weakref
Current executable set to '/usr/local/bin/python' (x86_64).
(lldb) target stop-hook add
Enter your stop hook command(s). Type 'DONE' to end.
> bt
> disassemble --pc
Stop hook #1 added.
(lldb) run runcase.py
* thread #1, stop reason = signal SIGSTOP
* frame #0: 0x0000000100005000 dyld`_dyld_start
dyld`_dyld_start:
-> 0x100005000 <+0>: popq %rdi
0x100005001 <+1>: pushq $0x0
0x100005003 <+3>: movq %rsp, %rbp
0x100005006 <+6>: andq $-0x10, %rsp
Process 89747 launched: '/usr/local/bin/python' (x86_64)
* thread #2, stop reason = exec
* frame #0: 0x0000000100005000 dyld`_dyld_start
dyld`_dyld_start:
-> 0x100005000 <+0>: popq %rdi
0x100005001 <+1>: pushq $0x0
0x100005003 <+3>: movq %rsp, %rbp
0x100005006 <+6>: andq $-0x10, %rsp
Process 89747 stopped
* thread #2, stop reason = exec
frame #0: 0x0000000100005000 dyld`_dyld_start
dyld`_dyld_start:
-> 0x100005000 <+0>: popq %rdi
0x100005001 <+1>: pushq $0x0
0x100005003 <+3>: movq %rsp, %rbp
0x100005006 <+6>: andq $-0x10, %rsp
Target 0: (Python) stopped.
(lldb) c
Process 89747 resuming
Forthon edition
UEDGE $Name: V7_08_03 $
Wrote file "gridue" with runid: EFITD 09/07/90 # 66832 ,2384ms
***** Grid generation has been completed
Updating Jacobian, npe = 1
iter= 0 fnrm= 9.216531402973144 nfe= 1
nksol --- iterm = 1.
maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
the maximum norm function. u is probably an
approximate root of f.
Interpolants created; mype = -1
Wrote file "gridue" with runid: EFITD 09/07/90 # 66832 ,2384ms
***** Grid generation has been completed
Updating Jacobian, npe = 1
iter= 0 fnrm= 9.216531400941010 nfe= 1
Updating Jacobian, npe = 2
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x114b09010)
* frame #0: 0x000000010d84a7ba uedgeC.so`daxpy_ + 298
frame #1: 0x000000010d84b6dc uedgeC.so`dgbfa_ + 1036
frame #2: 0x000000010d84cd3f uedgeC.so`dgbco_ + 383
frame #3: 0x000000010d6812e3 uedgeC.so`jac_lu_decomp_ + 467
frame #4: 0x000000010d707909 uedgeC.so`psetnk_ + 2153
frame #5: 0x000000010d8219ec uedgeC.so`model_ + 1580
frame #6: 0x000000010d822c9b uedgeC.so`nksol_ + 4091
frame #7: 0x000000010d75107c uedgeC.so`uedriv_ + 13932
frame #8: 0x000000010d742685 uedgeC.so`exmain_ + 405
frame #9: 0x000000010d57d437 uedgeC.so`bbb_exmain + 55
frame #10: 0x0000000100155f66 Python`PyEval_EvalFrameEx + 19200
frame #11: 0x000000010015124c Python`PyEval_EvalCodeEx + 1540
frame #12: 0x0000000100150c42 Python`PyEval_EvalCode + 32
frame #13: 0x0000000100172633 Python`run_mod + 49
frame #14: 0x00000001001726da Python`PyRun_FileExFlags + 130
frame #15: 0x0000000100172259 Python`PyRun_SimpleFileExFlags + 719
frame #16: 0x0000000100183cd4 Python`Py_Main + 3136
frame #17: 0x00007fff6d9fa3d5 libdyld.dylib`start + 1
frame #18: 0x00007fff6d9fa3d5 libdyld.dylib`start + 1
uedgeC.so`daxpy_:
-> 0x10d84a7ba <+298>: movsd 0x10(%rcx,%rdx), %xmm2 ; xmm2 = mem[0],zero
0x10d84a7c0 <+304>: movhpd 0x18(%rsi,%rdx), %xmm0 ; xmm0 = xmm0[0],mem[0]
0x10d84a7c6 <+310>: movsd (%rcx,%rdx), %xmm3 ; xmm3 = mem[0],zero
0x10d84a7cb <+315>: mulpd %xmm1, %xmm0
Process 89747 stopped
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x114b09010)
frame #0: 0x000000010d84a7ba uedgeC.so`daxpy_ + 298
uedgeC.so`daxpy_:
-> 0x10d84a7ba <+298>: movsd 0x10(%rcx,%rdx), %xmm2 ; xmm2 = mem[0],zero
0x10d84a7c0 <+304>: movhpd 0x18(%rsi,%rdx), %xmm0 ; xmm0 = xmm0[0],mem[0]
0x10d84a7c6 <+310>: movsd (%rcx,%rdx), %xmm3 ; xmm3 = mem[0],zero
0x10d84a7cb <+315>: mulpd %xmm1, %xmm0
Target 0: (Python) stopped.
The segfault happens after the line shown in _Forthon.py.
I tried this because there is no segfault for Maxim on Red Hat 7.7 with gfortran 4.8.5. The segfault does appear on the Red Hat VM I tested, so the bug doesn't seem to depend on the compiler version or operating system. Maybe it depends on certain environment variables being set?
I've been using regular python with pip --user to install things, but I also tried Anaconda python and encountered the same issue.
Error not present in v7, where code crashes when trying to generate grids with additional X-points added.
Minimal working (crashing) example attached; sample output below. The error is not always reproducible (it sometimes executes fine --> memory allocation issue?)
In [1]: import griderror
Uedge version 8.0.0, an update is available to 8.0.0.1
# UEDGE version: 8.0.0
***** Grid generation has been completed
smoothx: no intersec'n of j= 31 with str line for iy= 1
zsh: segmentation fault ipython
Installing uedge with python setup.py install fails when the uedge source directory is located in a path where one or more directories contain a space. Tested on macOS and Linux. In the generated build/Makefile.com, the full path of uedge/com is not escaped properly.
This is the output of python setup.py install:
running install
running build
(export PYTHONPATH=.:./pyscripts; python2 convertor)
directory list =['com', 'aph', 'api', 'bbb', 'dce', 'flx', 'grd', 'svr', 'wdf', 'pfb', 'idf', 'psi', 'std', 'dst']
Output directory already exists; proceeding
Entering directory com
processing for file type M2F
processing directorycomto directory./com
converting file com/comutil.m to ./com/comutil.F
converting file com/brent.m to ./com/brent.F
converting file com/blasext.m to ./com/blasext.F
converting file com/mnbrak.m to ./com/mnbrak.F
converting file com/misc.m to ./com/misc.F
Entering directory aph
processing for file type M2F
processing directoryaphto directory./aph
converting file aph/aphrates.m to ./aph/aphrates.F
converting file aph/aphread.m to ./aph/aphread.F
Entering directory api
processing for file type M2F
processing directoryapito directory./api
converting file api/apisorc.m to ./api/apisorc.F
converting file api/apip93.m to ./api/apip93.F
converting file api/fmombal.m to ./api/fmombal.F
converting file api/fimp.m to ./api/fimp.F
converting file api/inelrates.m to ./api/inelrates.F
converting file api/sputt.m to ./api/sputt.F
converting file api/apifcn.m to ./api/apifcn.F
Entering directory bbb
processing for file type M2F
processing directorybbbto directory./bbb
converting file bbb/griddubl.m to ./bbb/griddubl.F
converting file bbb/convert.m to ./bbb/convert.F
converting file bbb/odesolve.m to ./bbb/odesolve.F
converting file bbb/geometry.m to ./bbb/geometry.F
converting file bbb/odesetup.m to ./bbb/odesetup.F
converting file bbb/ext_neutrals.m to ./bbb/ext_neutrals.F
converting file bbb/potencur.m to ./bbb/potencur.F
converting file bbb/boundary.m to ./bbb/boundary.F
converting file bbb/s.m to ./bbb/s.F
converting file bbb/oderhs.m to ./bbb/oderhs.F
converting file bbb/turbulence.m to ./bbb/turbulence.F
Entering directory dce
processing for file type M2F
processing directorydceto directory./dce
Entering directory flx
processing for file type M2F
processing directoryflxto directory./flx
converting file flx/flxwrit.m to ./flx/flxwrit.F
converting file flx/flxcomp.m to ./flx/flxcomp.F
converting file flx/flxdriv.m to ./flx/flxdriv.F
converting file flx/dvshdf5.m to ./flx/dvshdf5.F
converting file flx/flxread.m to ./flx/flxread.F
Entering directory grd
processing for file type M2F
processing directorygrdto directory./grd
converting file grd/grdwrit.m to ./grd/grdwrit.F
converting file grd/grdcomp.m to ./grd/grdcomp.F
converting file grd/grddriv.m to ./grd/grddriv.F
converting file grd/grdinit.m to ./grd/grdinit.F
converting file grd/grdread.m to ./grd/grdread.F
Entering directory svr
processing for file type M2F
processing directorysvrto directory./svr
converting file svr/svrut4.m to ./svr/svrut4.F
converting file svr/svrut3.m to ./svr/svrut3.F
converting file svr/uoa.m to ./svr/uoa.F
converting file svr/daspk.m to ./svr/daspk.F
converting file svr/svrut2.m to ./svr/svrut2.F
converting file svr/nksol.m to ./svr/nksol.F
converting file svr/vodpk.m to ./svr/vodpk.F
converting file svr/svrut1.m to ./svr/svrut1.F
Entering directory wdf
processing for file type M2F
processing directorywdfto directory./wdf
converting file wdf/wdf.m to ./wdf/wdf.F
Entering directory pfb
processing for file type M2F
processing directorypfbto directory./pfb
Entering directory idf
processing for file type M2F
processing directoryidfto directory./idf
Entering directory psi
processing for file type M2F
processing directorypsito directory./psi
Entering directory std
processing for file type M2F
processing directorystdto directory./std
Entering directory dst
processing for file type M2F
processing directorydstto directory./dst
(cd com;Forthon --builddir ../build -a -v --fargs "-Ofast -DFORTHON -cpp -finit-local-zero" -f blasext.F com brent.F comutil.F misc.F mnbrak.F dsum.f dummy_py.f error.f getmsg.f ssum.f )
Building package com
make[1]: *** No rule to make target `/Users/sean/Desktop/test', needed by `compymodule.c'. Stop.
make: *** [compy.so] Error 2
running build_py
creating build/lib.macosx-10.15-x86_64-2.7
creating build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/convert1.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uefacets.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/ueplotdata.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uewall.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uedgeutils.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/testpfb.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uedgeplots.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uereadgrid.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/rdcontdt.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/ueplot.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/convert.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/rundt.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/pdb_restore.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/pdb_save.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/__init__.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uefcifc.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/__version__.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/hdf5.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/pdb2h5.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/bas2py_rules.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/convertvsh5.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uexec.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/sources.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uegenlineout.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/ueparallel.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/PRpyt.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/cdf4.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/double.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/plotEdgeStd.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/PWpyt.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/parallel.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/rdinitdt.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/ruthere.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uedge_lists.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/filelists.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/uedge.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
copying pyscripts/ue2vsh5.py -> build/lib.macosx-10.15-x86_64-2.7/uedge
creating build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/create_database.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/reconv.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/ue_plot.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/read_gridue.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/__init__.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/ue_animate.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/utils.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
copying pyscripts/contrib/conv_step.py -> build/lib.macosx-10.15-x86_64-2.7/uedge/contrib
running build_ext
building 'uedge.uedgeC' extension
creating build/temp.macosx-10.15-x86_64-2.7
creating build/temp.macosx-10.15-x86_64-2.7/build
creating build/temp.macosx-10.15-x86_64-2.7/com
clang -fno-strict-aliasing -fno-common -dynamic -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DWITH_NUMERIC=0 -DFORTHON_PKGNAME="uedgeC" -Ibuild -I/usr/local/lib/python2.7/site-packages/numpy/core/include -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/usr/local/Cellar/python@2/2.7.17/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c uedgeC_Forthon.c -o build/temp.macosx-10.15-x86_64-2.7/uedgeC_Forthon.o
clang -fno-strict-aliasing -fno-common -dynamic -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DWITH_NUMERIC=0 -DFORTHON_PKGNAME="uedgeC" -Ibuild -I/usr/local/lib/python2.7/site-packages/numpy/core/include -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/usr/local/Cellar/python@2/2.7.17/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c build/Forthon.c -o build/temp.macosx-10.15-x86_64-2.7/build/Forthon.o
clang: error: no such file or directory: 'build/Forthon.c'
clang: error: no input files
error: command 'clang' failed with exit status 1
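The "No rule to make target" message shows the path being split at the first space. A minimal sketch of the kind of escaping the makefile generator would need is below; make_escape is a hypothetical helper, GNU make has no general quoting mechanism, and backslash-escaping spaces only covers simple target/prerequisite lists.

```python
def make_escape(path):
    # Backslash-escape spaces so make treats the path as a single
    # target/prerequisite instead of splitting it at whitespace.
    return path.replace(" ", "\\ ")
```

For example, a hypothetical directory named "test dir" would be written as test\ dir in the generated prerequisite list instead of being cut off at the space.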
There is a suspicious line in the polintp poloidal interpolation function:
https://github.com/LLNL/UEDGE/blob/master/bbb/griddubl.m#L1169
do 30 ixo = ixos, ixof
if(xo(ixo,iy).gt.xn(ix,iy) .or. ixo.eq.ixof) goto 25
ixl = ixo
20 continue
25 continue
The 30 label is shared with an outer loop, and the 20 label is not used in this function.
The corresponding lines in the radintp radial interpolation function
(https://github.com/LLNL/UEDGE/blob/master/bbb/griddubl.m#L1108) are:
do 20 iyo = iyos, iyof
if(yo(ix,iyo).gt.yn(ix,iy) .or. iyo.eq.iyof) goto 25
iyl = iyo
20 continue
25 continue
I suspect that the radintp version is correct, and that the polintp do loop should use 20 rather than 30, or be rewritten as:
do ixo = ixos, ixof
if(xo(ixo,iy).gt.xn(ix,iy) .or. ixo.eq.ixof) exit
ixl = ixo
end do
Note: I think this bug, if it is a bug, causes extra work to be done but doesn't affect the result: The interpolation is performed for every value of ixo (rather than just one), but the last result computed has the correct value of ixl.
Describe the bug
bbb.uedge_ver returns a different version than the current (actual) version returned by uedge.version:
https://github.com/LLNL/UEDGE/blob/e98e807ede1062ed83788a74ca6a5c9c7c31c0aa/bbb/bbb.v#L3861C1-L3861C1
To Reproduce
import uedge
uedge.bbb.uedge_ver
uedge.version
Expected behavior
Both return the same version of the code
Additional context
Is there a way to ensure that uedge_ver stays up-to-date with uedge.version without updating the source code at every new version release?
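One option is a release-time step that rewrites the default value in bbb.v from the package version. This is only a sketch: sync_version is hypothetical, and the Forthon .v declaration syntax it assumes (uedge_ver character*80 /"..."/) is an assumption about the format, not verified against bbb.v.

```python
import re

def sync_version(v_text, version):
    """Rewrite the default of uedge_ver in a Forthon-style .v declaration,
    assumed to look like: uedge_ver character*80 /"7.08.04"/"""
    return re.sub(r'(uedge_ver\s+character\*\d+\s+/)[^/]*(/)',
                  r'\g<1>"%s"\g<2>' % version, v_text)
```

Run as part of the release tagging (alongside the bump of uedge.__version__), this would keep the Fortran-side string from drifting out of date.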
Describe the bug
These two lines look suspicious (caught by compiler warnings).
Do we really want to cast them to int?
The routine sfsetnk calls jac_calc, the routine that constructs the Jacobian, with the argument sf, which is also the dummy output argument of sfsetnk used to assign sfscal in the calls to sfsetnk (lines 266, 289, and 297 in odesolve.m).
In the call to jac_calc, however, sf corresponds to the dummy argument wk, a work array that jac_calc modifies. Meanwhile, the value of sfscal is also used inside jac_calc to build the Jacobian, leading to a recursive assignment of sfscal during the call to jac_calc. A numerical glitch may thus appear depending on the compiler and the compilation options (e.g. -fno-automatic).
A simple fix is to define a temporary work array in sfsetnk to be used when calling jac_calc:
real wk(neq)
...
wk(1:neq)=0 ! just for precaution
call jac_calc (neq, tp, yl, yldot0, lbw, ubw, wk,
. nnzmx, jac, jacj, jaci)
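The hazard can be shown in toy form (plain Python, not the UEDGE routines): when the same array is passed both as a value that will be read and as a scratch buffer, clobbering the scratch silently corrupts the input.

```python
def jac_calc_toy(scale, work):
    # 'work' is scratch and gets overwritten first, as wk is in jac_calc...
    work[:] = [0.0] * len(work)
    # ...but 'scale' (standing in for sfscal) is read afterwards.
    return [2.0 * s for s in scale]

clean = jac_calc_toy([1.0, 2.0], [0.0, 0.0])  # distinct buffers: correct
buf = [1.0, 2.0]
aliased = jac_calc_toy(buf, buf)              # same array passed twice
```

With distinct buffers the result is [2.0, 4.0]; in the aliased call the input is zeroed before it is read, which is exactly why a separate wk in sfsetnk fixes the problem.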
I am currently running multiple UEDGE simulations in Colab with isbohmcalc = 0, i.e. spatially varying transport coefficients. Here is the pseudocode:
from uedge import *
...
for i in range(n):
    # update some parameters in the bbb package
    bbb.exmain()
    # save some measurements
Recently, I have been running into the following issue with some simulations:
nksol --- iterm = 7.
there was a breakdown in the krylov
iteration. this will likely only occur when
the jacobian matrix j or j*(p-inverse) is ill-
conditioned. if this error return occurs with
mf=2, try either mf=1 or mf=3 instead.
The current simulation ends, but this issue prevents any further simulations from running successfully. The code essentially stalls during the next simulation, with the for-loop process never terminating. I was wondering if there is any way to prevent this issue from occurring, or to stop UEDGE from stalling so it can move on to the next simulation.
Also, the error message seems to be outdated, as the only options for bbb.mf are 21, 22, 24, and 25.
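As a stopgap against the stall, each case can be run in its own child interpreter with a wall-clock limit, so a hung run cannot block the rest of the loop. This is a generic sketch, not a UEDGE feature: run_with_timeout is hypothetical, and each child would need to set up its own case script; as a side effect, per-process isolation also avoids the state carryover between sequential exmain() calls.

```python
import subprocess
import sys

def run_with_timeout(code, timeout_s):
    """Run a Python snippet in a child interpreter; False if it timed out."""
    try:
        subprocess.run([sys.executable, "-c", code],
                       timeout=timeout_s, check=True)
        return True
    except subprocess.TimeoutExpired:
        return False  # the hung child is killed and the loop continues
```

In the loop this would look like run_with_timeout(open("runcase.py").read(), 3600), with results passed back via save files rather than in-memory variables.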
@llnl-fesp @umansky I would like to compile a version of UEDGE with the OMFIT support (e4c491f) on the GA IRIS cluster. I was given access to the /fusion/projects/codes/uedge folder on IRIS, but I wanted to coordinate with you before making a new directory and starting to maintain a separate version of UEDGE just for usage via OMFIT.
Since @llnl-fesp was very prompt in merging e4c491f into master, I wonder if you could just update UEDGE on IRIS to the latest version :)
Please let me know how you'd like to proceed. Thanks!
On MacOS Catalina 10.15.2 with Python 3.7.6 and Forthon 0.8.41
Set of instructions leading to the segmentation fault (reproducible):
def execfile(filepath, globals=None, locals=None):
if globals is None:
globals = {}
globals.update({
"__file__": filepath,
"__name__": "__main__",
})
with open(filepath, 'rb') as file:
exec(compile(file.read(), filepath, 'exec'), globals, locals)
python runcase.py
output:
(screenshot of the output attached: Screen Shot 2020-01-22 at 9 08 25 AM)
Result of backtrace with lldb:
Line 7934 in oderhs.m:
if(isngonxy(ix,iy,ifld) == 1) then
should read
if(isngonxy(ix,iy,igsp) == 1) then
instead?
On Singe, Anaconda 3 includes libgfortran.so.4. The problem occurs when another compiler is selected that uses libgfortran.so.5. Here is the output of ldd on the flx package shared object:
└──$ ldd flxpy.cpython-37m-x86_64-linux-gnu.so
linux-vdso.so.1 => (0x00007ffdd9134000)
libgfortran.so.4 => /usr/local/anaconda/anaconda3/lib/libgfortran.so.4 (0x00007ff294e79000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ff294bae000)
libc.so.6 => /lib64/libc.so.6 (0x00007ff2947e0000)
libquadmath.so.0 => /usr/local/anaconda/anaconda3/lib/libquadmath.so.0 (0x00007ff294e13000)
libm.so.6 => /lib64/libm.so.6 (0x00007ff2944de000)
libgcc_s.so.1 => /usr/local/anaconda/anaconda3/lib/libgcc_s.so.1 (0x00007ff294dff000)
/lib64/ld-linux-x86-64.so.2 (0x00007ff294dca000)
Note that libgfortran.so.4 is used in spite of a version 5 compiler. This is not compatible and can easily cause a crash. Working on a solution. For now the workaround is to compile with version 7 or older.
Originally posted by @llnl-fesp in #25 (comment)
Describe the bug
In oderhs.m, the code at lines 3102-3147 that calculates the gas thermal-conduction coefficients (populating hcxg/hcyg, apparently an approximation for molecules) uses terms hard-coded to fluid species ifld=2 (e.g. ni(ix,iy,2) for the variables naavex/y), intended to represent neutral atoms (e.g. lines 3118 and 3123).
This is correct when the inertial neutrals model is active, but incorrect for diffusive neutrals (when atoms do not populate the ni variable). The code accounts for this with the condition "if (nisp >= 2)" at line 3102.
However, this fix only works when there are no impurity species (or additional plasma species in general). For example, when running a deuterium plasma with diffusive neutrals + a Ne impurity, ni(ix,iy,2) does not correspond to neutral atoms but to the first neon charge state. The condition nisp >= 2 is nonetheless satisfied, so the gas thermal-conduction code runs and populates hcxg/hcyg using the neon density. The resulting error is mostly small (not approaching the MW scale), but it is hard to say whether it always will be.
Suggested fix
Change line 3102 from "if (nisp >= 2)" to "if (isupgon(1) .eq. 1)", i.e. run the block only when the inertial neutral model is active, rather than whenever two or more ion species are present.
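The failure mode of the current guard can be seen in toy form. The predicate names mirror the UEDGE variables, but the functions themselves are hypothetical illustrations, not code from oderhs.m.

```python
def runs_gas_conduction_old(nisp, isupgon1):
    return nisp >= 2          # current guard at line 3102

def runs_gas_conduction_fix(nisp, isupgon1):
    return isupgon1 == 1      # proposed guard: inertial neutrals only

# Deuterium with diffusive neutrals (isupgon(1)=0) plus a Ne impurity:
# nisp counts the neon charge states, so the old guard still fires even
# though ni(:,:,2) now holds neon rather than neutral atoms.
```

With inertial neutrals (isupgon(1)=1 and atoms in ni), both guards fire, so the proposed change only removes the spurious case.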
I've been encountering a problem where certain cases "hang": the text output stops and the code continues running, but nothing else happens (for example, the intermediate hdf5 save files do not get updated). Here is an example of the output (I'm using rundt.py):
$ python2 runcase.py
Read file "gridue " with runid: FREEGS 09/01/2020 # 0 0ms
File attributes:
(' written on: ', 'Mon Apr 13 17:51:10 2020')
(' by code: ', 'UEDGE')
(' physics tag: ', array(['$Name: $'], dtype='|S80'))
UEDGE $Name: $
*** For isimpon=2, set afracs, not afrac ***
Read file "gridue " with runid: FREEGS 09/01/2020 # 0 0ms
Updating Jacobian, npe = 1
iter= 0 fnrm= 0.2571840422417168 nfe= 1
nksol --- iterm = 1.
maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
the maximum norm function. u is probably an
approximate root of f.
*---------------------------------------------------------*
Need to take initial step with Jacobian; trying to do here
*---------------------------------------------------------*
*** For isimpon=2, set afracs, not afrac ***
Read file "gridue " with runid: FREEGS 09/01/2020 # 0 0ms
Updating Jacobian, npe = 1
iter= 0 fnrm= 0.2571840422417166 nfe= 1
nksol --- iterm = 1.
maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
the maximum norm function. u is probably an
approximate root of f.
initial fnrm =2.5718E-01
--------------------------------------------------------------------
--------------------------------------------------------------------
*** Number time-step changes = 1 New time-step = 1.0000E-10
rundt time elapsed: 0:00:08
--------------------------------------------------------------------
*** For isimpon=2, set afracs, not afrac ***
iter= 0 fnrm= 0.2571840422417166 nfe= 1
(stays like this forever)
^Z
It is also possible to get the code to continue by compiling without the -Ofast flag in Makefile.Forthon. In this case, you start seeing fnrm= nan, but the output continues until dtreal goes below dtkill and the code quits by itself.
I examined the case with -Ofast, which hangs. I found that the pandf subroutine was being called many times. By adding some print statements and going up the call stack, I found this infinite loop inside subroutine model in nksol.m:
10 continue
ipcur = 0
write(STDOUT,*) 'sbb if pthrsh=',pthrsh,'gt onept5=',onept5
write(STDOUT,*) ' and ipflg=',ipflg,'ne 0'
if ( (pthrsh .gt. onept5) .and. (ipflg .ne. 0) ) then
ier = 0
write(STDOUT,*) 'sbb call psetnk'
call pset (n, u, savf, su, sf, x, f, wm(locwmp), iwm(locimp),
* ier)
npe = npe + 1
ipcur = 1
nnipset = nni
if (ier .ne. 0) then
iersl = 8
return
endif
endif
c-----------------------------------------------------------------------
c load x with -f(u).
c-----------------------------------------------------------------------
do 100 i = 1,n
100 x(i) = -savf(i)
c-----------------------------------------------------------------------
c call solpk to solve j*x = -f using the appropriate krylov
c algorithm.
c-----------------------------------------------------------------------
call solpk (n,wm,lenwm,iwm,leniwm,u,savf,x,su,sf,f,jac,psol)
write(STDOUT,*) 'sbb after solpk iersl=', iersl,'ipflg=',ipflg,
* 'ipcur=',ipcur
if (iersl .lt. 0) then
c nonrecoverable error from psol. set iersl and return.
iersl = 9
return
endif
if ( (iersl .gt. 0) .and. (ipflg .ne. 0) ) then
if (ipcur .eq. 0) go to 10
endif
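In toy form, the cycle looks like this. This is a Python model of the control flow above, not the actual Fortran: model_step and the bounded retry counter are hypothetical, and the real routine has more exit conditions than are modeled here.

```python
def model_step(pthrsh, solve_ok, max_retries=5):
    """Toy model of the label-10 retry loop in subroutine model."""
    retries = 0
    while True:                 # label 10
        ipcur = 0
        if pthrsh > 1.5:        # the pthrsh/ipflg test
            ipcur = 1           # pset called: preconditioner is current
        if solve_ok(ipcur):     # solpk succeeds
            return "converged"
        if ipcur == 0:          # failed solve with a stale preconditioner
            retries += 1
            if retries > max_retries:
                return "stuck"  # the real code has no such bound: it
            continue            # goes back to label 10 forever
        return "failed"         # preconditioner was current; give up
```

When the pthrsh test keeps pset from being called, ipcur stays 0 on every pass, so a failing solpk sends control back to label 10 with nothing changed, which matches the endless pandf calls observed above; a retry bound (or forcing a preconditioner rebuild on re-entry) would break the cycle.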
I also put print statements around the pandf1 calls in oderhs.m. The pandf1 calls followed by line numbers (messed up by the additional lines I inserted into the file) correspond to other pandf1 calls in the jac_calc subroutine.
c ... Beginning of execution for call rhsdpk (by daspk), check constraints
entry rhsdpk (neq, t, yl, yldot, ifail)
if (icflag .gt. 0 .and. t .gt. 0.) then
if (icflag .eq. 2) rlxl = rlx
do 6 i = 1, neq
ylchng(i) = yl(i) - ylprevc(i)
6 continue
call cnstrt (neq,ylprevc,ylchng,icnstr,tau,rlxl,ifail,ivar)
if (ifail .ne. 0) then
call remark ('***Constraint failure in DASPK, dt reduced***')
write (*,*) 'variable index = ',ivar,' time = ',t
goto 20
endif
else
ifail = 0
endif
call scopy (neq, yl, 1, ylprevc, 1) #put yl into ylprevc
8 tloc = t
write(STDOUT,*) 'sbb pandf1 -1 -1 goto'
go to 10
c ... Beginning of execution for call rhsnk (by nksol).
entry rhsnk (neq, yl, yldot)
tloc = 0.
c ... Calculate right-hand sides for interior and boundary points.
ccc 10 call convsr_vo (-1,-1, yl) # test new convsr placement
ccc call convsr_aux (-1,-1, yl) # test new convsr placement
write(STDOUT,*) 'sbb pandf1 -1 -1 sequential'
10 call pandf1 (-1, -1, 0, neq, tloc, yl, yldot)
20 continue
return
end
This produces the following output:
$ python2 runcase.py
Read file "gridue " with runid: FREEGS 09/01/2020 # 0 0ms
File attributes:
(' written on: ', 'Mon Apr 13 17:51:10 2020')
(' by code: ', 'UEDGE')
(' physics tag: ', array(['$Name: $'], dtype='|S80'))
UEDGE $Name: $
*** For isimpon=2, set afracs, not afrac ***
Read file "gridue " with runid: FREEGS 09/01/2020 # 0 0ms
sbb pandf1 -1 -1 sequential
Updating Jacobian, npe = 1
sbb pandf1 8449
sbb pandf1 8518
sbb pandf1 8449
sbb pandf1 8518
sbb pandf1 8449
sbb pandf1 8518
(more pandf1 messages...)
sbb pandf1 8449
sbb pandf1 8518
sbb pandf1 8524
sbb pandf1 -1 -1 sequential
sbb nksol
sbb icntnu= 0
sbb pandf1 -1 -1 sequential
iter= 0 fnrm= 0.2571840410398497 nfe= 1
nksol --- iterm = 1.
maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
the maximum norm function. u is probably an
approximate root of f.
sbb ffun
sbb pandf1 -1 -1 goto
*---------------------------------------------------------*
Need to take initial step with Jacobian; trying to do here
*---------------------------------------------------------*
*** For isimpon=2, set afracs, not afrac ***
Read file "gridue " with runid: FREEGS 09/01/2020 # 0 0ms
sbb pandf1 -1 -1 sequential
Updating Jacobian, npe = 1
sbb pandf1 8449
sbb pandf1 8518
sbb pandf1 8449
sbb pandf1 8518
sbb pandf1 8449
sbb pandf1 8518
(more pandf1 messages...)
sbb pandf1 8518
sbb pandf1 8449
sbb pandf1 8518
sbb pandf1 8449
sbb pandf1 8518
sbb pandf1 8449
sbb pandf1 8518
sbb pandf1 8524
sbb pandf1 -1 -1 sequential
sbb nksol
sbb icntnu= 0
sbb pandf1 -1 -1 sequential
iter= 0 fnrm= 0.2571840410398498 nfe= 1
nksol --- iterm = 1.
maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
the maximum norm function. u is probably an
approximate root of f.
sbb ffun
sbb pandf1 -1 -1 goto
initial fnrm =2.5718E-01
--------------------------------------------------------------------
--------------------------------------------------------------------
*** Number time-step changes = 1 New time-step = 1.0000E-10
rundt time elapsed: 0:00:10
--------------------------------------------------------------------
*** For isimpon=2, set afracs, not afrac ***
sbb pandf1 -1 -1 sequential
sbb nksol
sbb icntnu= 1
sbb pandf1 -1 -1 sequential
iter= 0 fnrm= 0.2571840410398498 nfe= 1
sbb call model 1
sbb model
sbb if pthrsh= 0.0000000000000000 gt onept5= 1.5000000000000000
and ipflg= 1 ne 0
sbb solpk
sbb spimgr
sbb atv
sbb call f
sbb pandf1 -1 -1 sequential
sbb after solpk iersl= 1 ipflg= 1 ipcur= 0
sbb if pthrsh= 0.0000000000000000 gt onept5= 1.5000000000000000
and ipflg= 1 ne 0
sbb solpk
sbb spimgr
sbb atv
sbb call f
sbb pandf1 -1 -1 sequential
sbb after solpk iersl= 1 ipflg= 1 ipcur= 0
sbb if pthrsh= 0.0000000000000000 gt onept5= 1.5000000000000000
and ipflg= 1 ne 0
(and so on until ctrl-Z)
Because pthrsh is always 0, ipcur never gets set to anything other than 0, which is required in order to stop the looping. pthrsh is set to 0 when icntnu != 0; icntnu indicates that this is a continuation call to nksol that reuses old values.
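Stripped of the Fortran, the non-terminating control flow can be sketched in Python. The names pthrsh, ipflg, ipcur, iersl and the onept5 threshold are taken from the listing above; solpk's outcome is faked by a stand-in, so this is an illustration of the loop logic, not the real solver:

```python
def model_step(pthrsh, ipflg, ipcur, max_retries=100):
    """Sketch of the solve/retry loop in nksol's model routine.

    pset (which sets ipcur = 1) only runs when pthrsh > onept5, so if
    pthrsh is stuck at 0 and solpk keeps reporting a recoverable
    failure (iersl > 0) with a stale preconditioner (ipcur == 0), the
    'go to 10' retry never terminates.
    """
    onept5 = 1.5
    for attempt in range(max_retries):
        if pthrsh > onept5 and ipflg != 0:
            ipcur = 1                      # pset recomputes the preconditioner
        iersl = 0 if ipcur == 1 else 1     # stand-in for solpk's outcome
        if iersl == 0:
            return attempt                 # solve succeeded
        if iersl > 0 and ipflg != 0 and ipcur == 0:
            continue                       # 'go to 10': retry with unchanged state
        break                              # any other failure is terminal
    return None                            # models the observed hang
```

With pthrsh = 0 (the continuation case) this returns None, mirroring the infinite loop in the log; with pthrsh above the threshold it succeeds on the first attempt.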
Another interesting thing is that the arguments supplied to pandf1 change around the time of the hang:
pandf1 xc=49 yc=33
pandf1 xc=49 yc=33
pandf1 xc=49 yc=33
pandf1 xc=49 yc=33
pandf1 xc=49 yc=33
pandf1 xc=49 yc=33
pandf1 xc=49 yc=33
pandf1 xc=-1 yc=-1
pandf1 xc=-1 yc=-1
iter= 0 fnrm= 0.2571840410398498 nfe= 1
nksol --- iterm = 1.
maxnorm(sf*f(u)) .le. ftol, where maxnorm() is
the maximum norm function. u is probably an
approximate root of f.
pandf1 xc=-1 yc=-1
pandf1 xc=-1 yc=-1
initial fnrm =2.5718E-01
--------------------------------------------------------------------
--------------------------------------------------------------------
*** Number time-step changes = 1 New time-step = 1.0000E-10
rundt time elapsed: 0:00:09
--------------------------------------------------------------------
*** For isimpon=2, set afracs, not afrac ***
pandf1 xc=-1 yc=-1
pandf1 xc=-1 yc=-1
iter= 0 fnrm= 0.2571840410398498 nfe= 1
pandf1 xc=-1 yc=-1
pandf1 xc=-1 yc=-1
pandf1 xc=-1 yc=-1
pandf1 xc=-1 yc=-1
pandf1 xc=-1 yc=-1
pandf1 xc=-1 yc=-1
pandf1 xc=-1 yc=-1
pandf1 xc=-1 yc=-1
pandf1 xc=-1 yc=-1
(and so on until ctrl-Z)
-1 is not an invalid argument; it apparently means "full RHS evaluation" rather than "poloidal/radial index of the perturbed variable for the Jacobian calculation".
I also tested a case which does not hang (even with -Ofast) and found that it too has a long stretch of pandf1(-1, -1, ...) calls, where the following form repeats many times before the code eventually moves on:
sbb atv
sbb call f
sbb pandf1 -1 -1 sequential
sbb pandf1 xc= -1 yc= -1
From the absence of certain print statements (compare to end of the 4th code block in this post), we can see that these calls are being made from a different loop, supporting the claim that the loop identified above is the one that needs fixing.
I also ran the hanging case for several minutes to confirm that it consisted entirely of pandf1(-1, -1, ...) calls and never moved on to anything else.
I also checked out Jerome Guterl's pandf issue, but that appears to be a problem inside pandf, not outside it as we have here.
I'm a newbie trying out the Jupyter example notebook on Google Colab (pip install forthon mppl uedge directly in the notebook). I ran into an issue right away: under "Run a Case" I get the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-27-3253ef9ba79b> in <module>()
1 from uedge import *
----> 2 bbb.exmain() #
3 print("Usual iteration output should be in terminal window running Jupyter Notebook")
4 print("nksol --- iterm = ",bbb.iterm)
RuntimeError: **INPUT ERROR: only iphibcc=1,2,3 available
From the boundary.m code, this appears to be true: iphibcc==0 is not allowed. I changed it by hand:
bbb.iphibcc = 1
and it works fine. Just wondering: do I have something wrong with my version installation, or is this simply an outdated input file?
Describe the bug
When a case is converged (fnrm < ftol) and icntnunk is set to 1, the following exmain call causes a segfault.
To Reproduce
For the Slab_geometry test case in the pytests, the following commands will reproduce the bug:
from rd_slabH_in_w_h5 import *;bbb.exmain();bbb.exmain();bbb.icntnunk=1;bbb.exmain()
Expected behavior
Execute exmain, print initial fnrm, set iterm=1, return to prompt
Additional context
This has been encountered by several users at various points of using the code, most commonly in time-dependent runs where icntnunk is actively switched on and off. The bug does not appear to occur for Basis versions.
Describe the issue
Lines 5341-5344 in oderhs.m perform the calculation of the alpha_e thermal force coefficients with impurity species.
In the code, all terms in this equation use zi or zeffv:
alfe(ifld) = cfgte*2.2*zi(ifld)**2*(1+.52*zeffv) /
. ( (1+2.65*zeffv)*(1+.285*zeffv)*zeffv )
But in the UEDGE manual (see Appendix A), the equation is given differently as:
alfe(ifld) = cfgte*2.2*zi(ifld)**2*(1+.52*z0) /
. ( (1+2.65*zeffv)*(1+.285*zeffv)*zeffv )
using a z0 term in the numerator, where z0 = ne*zeff/nh − 1.
And both appear inconsistent with the two references cited in the manual for the coefficients. E.g. in Igitkhanov, all terms in alpha_e use zeff (including where zi term is used above).
No idea what the correct form should be, but looks like an inconsistency that would warrant some attention.
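To see how much the two forms can disagree, here is a quick numeric check in plain Python. The formulas are transcribed from the code and the manual as quoted above; cfgte = 1 and the plasma parameters are made-up illustrative values, not from any UEDGE case:

```python
def alfe_code(zi, zeffv, cfgte=1.0):
    """alpha_e as currently written in oderhs.m (zeffv in the numerator)."""
    return cfgte * 2.2 * zi**2 * (1 + 0.52 * zeffv) / (
        (1 + 2.65 * zeffv) * (1 + 0.285 * zeffv) * zeffv)

def alfe_manual(zi, zeffv, z0, cfgte=1.0):
    """alpha_e as given in the manual (z0 = ne*zeff/nh - 1 in the numerator)."""
    return cfgte * 2.2 * zi**2 * (1 + 0.52 * z0) / (
        (1 + 2.65 * zeffv) * (1 + 0.285 * zeffv) * zeffv)

# Made-up illustrative values: fully stripped carbon, zeff = 2, ne/nh = 1.2
zi, zeff = 6.0, 2.0
z0 = 1.2 * zeff - 1.0             # ne*zeff/nh - 1 with ne/nh = 1.2
print(alfe_code(zi, zeff))        # code form
print(alfe_manual(zi, zeff, z0))  # manual form
```

The two forms coincide only when z0 happens to equal zeffv, so for a general impurity mix they give different coefficients; which one matches the cited references is exactly the open question.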
Test to see who receives the email. Let me ([email protected]) know if you do.
I was not able to compile UEDGE from the latest commit; I had to go back to a previous commit that @adams-ga knew was working.
Specifically, I was able to compile UEDGE on OSX from this branch (based on the latest master version) https://github.com/orso82/UEDGE/tree/omfit_pyuedge_compiles but not this one https://github.com/orso82/UEDGE/tree/omfit_pyuedge (based on 3f48970)
There are two variables in the pandf subroutine which are not reinitialized between calls:
The variable resmo, in the following block, for neutral species (zi(ifld)=0). I suggest adding an "else" statement that sets resmo to zero when zi=0 (see cJG: suggested correction). As it stands, the momentum residual for the gas species is wrong when nusp>1, which is the case when nhsp>1.
do 99 iy = j2, j5
do 98 ix = i2, i5
ix2 = ixp1(ix,iy)
if (zi(ifld) .ne. 0) then # additions only for charged ions
dp1 = cngmom(ifld)*(1/fac2sp)*
( ng(ix2,iy,1)*tg(ix2,iy,1)-
ng(ix ,iy,1)*tg(ix ,iy,1) )
resmo(ix,iy,ifld) = 0.
resmo(ix,iy,ifld) =
smoc(ix,iy,ifld)
+ smov(ix,iy,ifld) * up(ix,iy,ifld)
- cfneut * cfneutsor_mi * sx(ix,iy) * rrv(ix,iy) * dp1
- cfneut * cfneutsor_mi * cmwall(ifld)*0.5*(ng(ix,iy,1)+ng(ix2,iy,1))
* mi(ifld)*up(ix,iy,ifld)*0.5
*(nucx(ix,iy,1)+nucx(ix2,iy,1))*volv(ix,iy)
+ cmneut * cmneutsor_mi * uesor_up(ix,iy,ifld)
+ cfmsor*(msor(ix,iy,ifld) + msorxr(ix,iy,ifld)) #### IJ 2017: needs *cfneut for multi-charge state ions & MC neutrals?
+ volmsor(ix,iy,ifld)
+ cfvisxneov*visvol_v(ix,iy,ifld)
+ cfvisxneoq*visvol_q(ix,iy,ifld)
c Add drag with cold, stationary impurity neutrals
if (ifld > nhsp) then
resmo(ix,iy,ifld) = resmo(ix,iy,ifld) - cfupimpg*
0.25*mi(ifld)*(ni(ix,iy,ifld)+ni(ix2,iy,ifld))*
( nucxi(ix,iy,ifld)+nucxi(ix2,iy,ifld)+
nueli(ix,iy,ifld)+nueli(ix2,iy,ifld) )*
up(ix,iy,ifld)*volv(ix,iy)
endif
cJG: suggest correction
else
resmo(ix,iy,ifld) = 0
endif
The electron power background variable pwrebkg is not reset properly when xinc!=0 or yinc!=0. That should not matter much in practice, since it exists only to prevent small values of pwrebkg, but it should be fixed for the purpose of code validation.
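The first of the two items is an instance of a general pattern: an output array that persists between calls but is only written on one branch. A minimal Python illustration, with names mirroring the Fortran and arbitrary numbers:

```python
def momentum_residuals(zi_list, new_values, resmo):
    """Mimic the reported pandf behavior: resmo persists across calls
    and is only overwritten for charged species (zi != 0), so neutral
    entries keep whatever the previous call left behind."""
    for i, zi in enumerate(zi_list):
        if zi != 0:
            resmo[i] = new_values[i]
        else:
            # cJG's suggested correction would set resmo[i] = 0.0 here;
            # without it, the stale value from the previous call survives
            pass
    return resmo

stale = [99.0, 99.0]                      # left over from a previous call
momentum_residuals([1, 0], [1.5, 2.5], stale)
# stale is now [1.5, 99.0]: the neutral species kept the old residual
```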
The latest version seems to be missing some python files. They are listed in the MANIFEST, and other files try to import them, but can't. This breaks many scripts, such as those which try to save or load HDF5 files.
pyscripts/ruthere.py
listed here: https://github.com/LLNL/UEDGE/blob/master/MANIFEST#L203
imported here: https://github.com/LLNL/UEDGE/blob/master/pyscripts/rdcontdt.py#L9
pyscripts/uexec.py
listed here: https://github.com/LLNL/UEDGE/blob/master/MANIFEST#L220
imported here: https://github.com/LLNL/UEDGE/blob/master/pyscripts/rdcontdt.py#L10
pyscripts/uedge_lists.py
listed here: https://github.com/LLNL/UEDGE/blob/master/MANIFEST#L209
imported here: https://github.com/LLNL/UEDGE/blob/master/pyscripts/hdf5.py#L6
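A quick stdlib check can confirm which of these modules actually resolve in a given installation. Whether they end up top-level or inside the uedge package depends on how pyscripts was installed, so the names passed in below are a guess:

```python
import importlib.util

def missing_modules(names):
    """Return the module names that cannot be found on sys.path.
    find_spec only looks the module up; it does not import it."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Names as imported by rdcontdt.py and hdf5.py (location is a guess)
print(missing_modules(["ruthere", "uexec", "uedge_lists"]))
```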