sherpa / sherpa
Fit models to your data in Python with Sherpa.
Home Page: https://sherpa.readthedocs.io
License: GNU General Public License v3.0
Apologies for bringing up a trivial and annoying issue.
In my editor (PyCharm) some Sherpa code looks mis-indented:
This is how it should look:
https://github.com/sherpa/sherpa/blob/c9f412a3c3d0a193ba5a65027f7630f058fe41a5/sherpa/models/parameter.py#L243
The PEP8 recommendation is to use 4 spaces and never tabs.
Are you willing to change this?
Would you prefer that I make a pull request, or would you rather do it yourselves?
(It's easy to do in batch mode, e.g. with the autopep8 tool or a Python editor, and then only review the diff.)
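For purely tab-indented code, the batch conversion is mechanical; a minimal sketch of what a tool like autopep8 does (assuming one tab equals one indentation level, which autopep8 does not have to assume):

```python
def retab(line, spaces_per_level=4):
    """Replace each leading tab with a 4-space indentation level,
    leaving the rest of the line untouched."""
    stripped = line.lstrip("\t")
    levels = len(line) - len(stripped)
    return " " * (spaces_per_level * levels) + stripped

# Two tabs become eight spaces:
retab("\t\treturn x")
```

Running this over every line of a file, then reviewing the diff, is the batch-mode workflow suggested above.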
Feature request: please port Sherpa to support Python 3.
I know this is a big project that will take time and resources.
But I wanted to file the issue to make it clear that this is important to us (Gammapy, used to analyse data from HESS, CTA and possibly other telescopes).
Our motivation is that we want to build Gammapy on Sherpa, and it's the only package in our stack that doesn't support Python 3 (see our dependency stack here). If Gammapy doesn't work on Python 3, it can't in good faith be put forward as a possible analysis package for new projects like CTA. E.g. the ctapipe package that's being proposed as the official low-level CTA processing pipeline (it doesn't use Sherpa) will be Python 3 only. So getting there within the next year or so is important to us.
I'm willing to help with this in the coming months, but when it comes to the Fortran / C extensions, my skills and familiarity with the Sherpa code base (same for the other Gammapy developers as far as I know) are probably not sufficient to make this happen.
When I work around issue #13, python2.7 setup.py build_ext --inplace succeeds, but on import I get this error:
$ python2.7 -c 'import sherpa.all'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "sherpa/all.py", line 35, in <module>
from sherpa.data import *
File "sherpa/data.py", line 29, in <module>
from sherpa.utils.err import DataErr, NotImplementedErr
File "sherpa/utils/__init__.py", line 37, in <module>
from sherpa.utils._psf import extract_kernel, normalize, set_origin, \
ImportError: No module named _psf
Most extension modules are created OK, but for some reason _psf.so is not:
$ find . -name '*.so'
./sherpa/astro/models/_modelfcts.so
./sherpa/astro/utils/_pileup.so
./sherpa/astro/utils/_utils.so
./sherpa/estmethods/_est_funcs.so
./sherpa/models/_modelfcts.so
./sherpa/optmethods/_minim.so
./sherpa/optmethods/_minpack.so
./sherpa/optmethods/_saoopt.so
./sherpa/optmethods/_tstoptfct.so
./sherpa/stats/_statfcts.so
./sherpa/utils/_utils.so
./sherpa/utils/integration.so
Maybe there's an issue with how the extensions to build are defined here?
https://github.com/sherpa/sherpa/blob/master/helpers/extensions/__init__.py
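One way to narrow this down (a diagnostic sketch, not Sherpa code; the expected module list here is illustrative) is to compare the extension modules that should exist against the .so files actually produced:

```python
import glob
import os

def find_missing(expected, built_paths):
    """Return the expected extension modules that have no built .so file."""
    built = {os.path.splitext(p)[0] for p in built_paths}
    return sorted(set(expected) - built)

# Illustrative usage from the top-level source directory:
missing = find_missing(
    {"sherpa/utils/_psf", "sherpa/utils/_utils"},
    glob.glob("sherpa/**/*.so", recursive=True),
)
```

If _psf shows up as missing while its siblings do not, the problem is almost certainly in how that one Extension is declared in the helpers.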
As mentioned in the 4.7 release notes, the new datastack module fails to load when matplotlib is not installed. This is a side effect of the decision to have the IO and plotting backends as soft dependencies, while in CIAO there is always a backend installed.
Moreover, if matplotlib is installed but pyfits is not, the datastack module will indeed be loaded, but the tests will fail because pyfits is not available.
The datastack module should check for the presence of the backends and act accordingly.
Installing matplotlib and pyfits works around the issue.
This is an import of bug #13898 from an old bug-tracking system.
Currently Sherpa plots of counts vs channel are given by
set_analysis(1,"channel","counts", factor=0)
plot_data()
However, in this case the Y axis shows counts/channel, which is not what one expects: the counts are divided by the number of channels in each bin (one channel, or the number of channels in a group), so for groups containing many channels the values appear smaller. What we want is to plot the counts in each channel.
I can generate such a plot for a PHA data set using
add_curve(get_data().channel, get_data().counts)
In the case of grouped data this would need to be slightly different.
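For the grouped case, a sketch of what "counts per group" could look like, assuming the OGIP convention that a grouping flag of 1 starts a group and -1 continues it:

```python
import numpy as np

def group_counts(channels, counts, grouping):
    """Sum counts within each PHA group; return the first channel of each
    group alongside the summed counts (not divided by group size)."""
    channels = np.asarray(channels)
    counts = np.asarray(counts)
    starts = np.flatnonzero(np.asarray(grouping) == 1)
    return channels[starts], np.add.reduceat(counts, starts)
```

The summed values (rather than sums divided by group width) are what this issue asks to see on the Y axis.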
A segmentation fault occurs when group() and ignore() leave exactly 2 channels, with exactly 2 degrees of freedom in the model. That should be OK -- but if it is not, then we should throw an exception rather than crash.
This appeared fixed on Linux, but still shows up on Mac OS 10.9 as reported by Jamie.
The 12.9.0 release of Xspec includes this change:
- Special characters have been removed from all model parameter names
and (generally) replaced with underscores. This is to make their name
strings more easily accessible in PyXspec.
Historically we have manually changed the parameter names so that they can be used as Python field names, but there does not appear to have been a single set of rules or guidelines for the translation. One possibility is to adopt the new Xspec names, perhaps with temporary support for the existing names as aliases (to avoid breaking backward compatibility), marking the old names as deprecated if possible.
A diff of the model.dat file between 12.8.2a and 12.9.0 shows some of the changes (note that there are also changes to parameter limits, and a new model, but those are out of scope for this issue):
% diff heasoft-6.16/Xspec/src/manager/model.dat heasoft-6.17/Xspec/src/manager/model.dat
254c254
< tau-y " " 1.0 0.05 0.05 3.0 3.0 0.1
---
> tau_y " " 1.0 0.05 0.05 3.0 3.0 0.1
256c256
< H/R_cyl " " 1.0 0.5 0.5 2.0 2.0 -1.
---
> HovR_cyl " " 1.0 0.5 0.5 2.0 2.0 -1.
295c295
< Ab>He " " 1.0 0.1 0.1 10. 10.0 -0.01
---
> Ab_met " " 1.0 0.1 0.1 10. 10.0 -0.01
345,346c345,346
< Lc/Ld " " 0.1 0. 0. 10. 10. 1.e-2
< fin " " 1.e-1 0.0 0.0 1. 1. -1e-6
---
> LcovrLd " " 0.1 0. 0. 10. 10. 1.e-2
> fin " " 1.e-1 0.0 0.0 1. 1. -1e-6
386c386
< l_h/l_s " " 1. 1e-6 1e-6 1.e6 1.e6 0.01
---
> l_hovl_s " " 1. 1e-6 1e-6 1.e6 1.e6 0.01
389c389
< l_nt/l_h " " 0.5 0. 0. 0.9999 0.9999 0.01
---
> l_ntol_h " " 0.5 0. 0. 0.9999 0.9999 0.01
399c399
< Ab>He " " 1.0 0.1 0.1 10. 10.0 -0.01
---
> Ab_met " " 1.0 0.1 0.1 10. 10.0 -0.01
408c408
< l_h/l_s " " 1. 1e-6 1e-6 1.e6 1.e6 0.01
---
> l_hovl_s " " 1. 1e-6 1e-6 1.e6 1.e6 0.01
411c411
< l_nt/l_h " " 0.5 0. 0. 0.9999 0.9999 0.01
---
> l_ntol_h " " 0.5 0. 0. 0.9999 0.9999 0.01
421c421
< Ab>He " " 1.0 0.1 0.1 10. 10.0 -0.01
---
> Ab_met " " 1.0 0.1 0.1 10. 10.0 -0.01
456c456
< <kT> keV 1.0 0.0808 0.0808 79.9 79.9 0.01
---
> meankT keV 1.0 0.0808 0.0808 79.9 79.9 0.01
460,465c460,465
< D kpc 10.0 0.0 0.0 10000. 10000. -1
< i deg 0.0 0.0 0.0 90.0 90.0 -1.
< Mass solar 1.0 0.0 0.0 100.0 100.0 0.01
< Mdot 1e18 1.0 0.0 0.0 100.0 100.0 0.01
< Tcl/Tef " " 1.7 1.0 1.0 10.0 10.0 -1.
< refflag " " 1.0 -1.0 -1.0 1.0 1.0 -1.
---
> D kpc 10.0 0.0 0.0 10000. 10000. -1
> i deg 0.0 0.0 0.0 90.0 90.0 -1.
> Mass solar 1.0 0.0 0.0 100.0 100.0 0.01
> Mdot 1e18 1.0 0.0 0.0 100.0 100.0 0.01
> TclovTef " " 1.7 1.0 1.0 10.0 10.0 -1.
> refflag " " 1.0 -1.0 -1.0 1.0 1.0 -1.
485c485
< Tcol/Teff " " 1.5 1.0 1.0 2.0 2.0 -1.
---
> TcoloTeff " " 1.5 1.0 1.0 2.0 2.0 -1.
493c493
< lineE keV 6.4 -0.1 0.1 0.1 100.0 100.0
---
> lineE keV 6.4 0.1 0.1 100.0 100.0 -0.1
497c497
< a " " 0.998 0.0 0.0 0.998 0.998 0.1
---
> a " " 0.998 0.01 0.01 0.998 0.998 0.1
554a555,559
> nlapec 3 0. 1.e20 C_nlapec add 0
> kT keV 1. 0.008 0.008 64.0 64.0 .01
> Abundanc " " 1. 0. 0. 5. 5. -0.001
> Redshift " " 0. -0.999 -0.999 10. 10. -0.01
>
582c587
< redshift " " 0.1 1.0 1.0 1.5 2.0 0.1
---
> redshift " " 0.1 1.0e-5 1.0e-5 1.5 2.0 0.1
626c631
< logL/LEdd " " -1. -10. -10. 2 2 0.01
---
> logLoLEdd " " -1. -10. -10. 2 2 0.01
641c646
< logL/LEdd " " -1. -10. -10. 2 2 0.01
---
> logLoLEdd " " -1. -10. -10. 2 2 0.01
691c696
< HighECut keV 95. 0.01 1. 50. 200. -0.01
---
> HighECut keV 95. 0.01 1. 100. 200. -0.01
818c823
< He/H " " 1.0 0. 0. 100. 100. 0.01
---
> HeovrH " " 1.0 0. 0. 100. 100. 0.01
873c878
< <kT> keV 1.0 0.0808 0.0808 79.9 79.9 0.01
---
> meankT keV 1.0 0.0808 0.0808 79.9 79.9 0.01
1119c1124
< <kT> keV 1.0 0.0808 0.0808 79.9 79.9 0.01
---
> meankT keV 1.0 0.0808 0.0808 79.9 79.9 0.01
1426c1431
< E_B-V " " 0.05 0. 0. 10. 10. 0.001
---
> E_BmV " " 0.05 0. 0. 10. 10. 0.001
1512c1517
< E_B-V " " 0.05 0. 0. 10. 10. 0.001
---
> E_BmV " " 0.05 0. 0. 10. 10. 0.001
1563c1568
< lx/ld " " 0.3 0.02 0.02 100 100 0.02
---
> lxovrld " " 0.3 0.02 0.02 100 100 0.02
1580c1585
< "z" " " 0.0 0.0 0.0 1.e5 1.e6 1.e-6
---
> z " " 0.0 0.0 0.0 1.e5 1.e6 1.e-6
1584c1589
< E_B-V " " 0.1 0.0 0.0 100. 100. 0.01
---
> E_BmV " " 0.1 0.0 0.0 100. 100. 0.01
1623c1628
< E_B-V " " 0.05 0. 0. 10. 10. 0.001
---
> E_BmV " " 0.05 0. 0. 10. 10. 0.001
1628c1633
< E_B-V " " 0.1 0.0 0.0 100. 100. 0.01
---
> E_BmV " " 0.1 0.0 0.0 100. 100. 0.01
1712c1717
< Sig@6keV keV 1.00 0.0 0.0 10. 20. .05
---
> Sig_6keV keV 1.00 0.0 0.0 10. 20. .05
1748c1753
< Sig@6keV keV 1.00 0.0 0.0 10. 20. .05
---
> Sig_6keV keV 1.00 0.0 0.0 10. 20. .05
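The alias idea above could be sketched like this (function and mapping are hypothetical, covering only a few of the renames shown in the diff):

```python
import warnings

# Hypothetical subset of the 12.8.2a -> 12.9.0 renames.
_ALIASES = {"tau-y": "tau_y", "Ab>He": "Ab_met", "E_B-V": "E_BmV"}

def resolve_parameter_name(name):
    """Map an old Xspec parameter name to its new form, warning when the
    deprecated spelling is used."""
    if name in _ALIASES:
        new = _ALIASES[name]
        warnings.warn("parameter name %r is deprecated; use %r" % (name, new),
                      DeprecationWarning)
        return new
    return name
```

This keeps existing scripts working while steering users toward the new names.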
Some docstrings for the optimization methods seem to be out of sync with the actual code. Most likely the code was changed at some point but the documentation was not, or was only partially updated, resulting in the mismatch.
In particular, the docstrings claim that the default values for some parameters are the square root of the double-precision epsilon, although this does not seem to be true, at least not for all of the optimization methods. For instance, the LevMar docstring says that the default for ftol is sqrt(DBL_EPSILON) ~ 1.19209289551e-07. There are several issues with this:
1. 1.192..e-07 is not the double-precision (64-bit floating point) epsilon, but the float32 one.
2. 1.192..e-07 is the epsilon itself, not its square root.
3. The LevMar implementation seems to be using the float32 epsilon as its default, rather than its square root. The same may be true of other methods.
See https://github.com/sherpa/sherpa/blob/master/sherpa/optmethods/__init__.py#L351
vs
https://github.com/sherpa/sherpa/blob/master/sherpa/optmethods/optfcts.py#L37 and https://github.com/sherpa/sherpa/blob/master/sherpa/optmethods/optfcts.py#L368
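The three numbers involved can be checked directly:

```python
import numpy as np

eps64 = np.finfo(np.float64).eps  # ~2.22e-16: the actual double-precision epsilon
eps32 = np.finfo(np.float32).eps  # ~1.19e-07: the value the docstring quotes
root64 = np.sqrt(eps64)           # ~1.49e-08: what sqrt(DBL_EPSILON) really is
```

So the quoted 1.192..e-07 matches the float32 epsilon, not sqrt of the float64 one.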
This is an extension of Issue #41.
cstat requires a background model to account for the background. When there is no background model, the background contribution can be accounted for with the 'wstat' statistic defined in the XSPEC documentation.
https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/XSappendixStatistics.html
wstat could be used for low-count Chandra spectra, so its addition is desirable.
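For reference, the statistic that wstat extends is the C statistic; a minimal sketch (source model only, no background term, with the d = 0 limit handled) looks like this — the wstat generalization additionally profiles out a per-channel background rate, as described in the XSPEC appendix linked above:

```python
import numpy as np

def cstat(data, model):
    """C statistic: 2 * sum(m - d + d*ln(d/m)), using 2*m where d == 0."""
    data = np.asarray(data, dtype=float)
    model = np.asarray(model, dtype=float)
    safe = np.where(data > 0, data, 1.0)   # avoid log(0); masked out below
    term = np.where(data > 0, data * np.log(safe / model) - data, 0.0)
    return 2.0 * float(np.sum(model + term))
```

The statistic is zero when the model matches the data exactly, which is a handy sanity check.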
The ui.unpack_pha and ui.save_data functions fail when:
a) the PyFITS backend is in use
b) the PHA file uses the LONGSTRN OGIP convention, as CIAO does
The error raised is ValueError: Illegal value: ....
_EDIT_: it's not the LONGSTRN keyword that is the problem; it is the presence of "commentary" cards (i.e. key = HISTORY, COMMENT, or blank) that causes it. Chandra files have both COMMENT and HISTORY cards, and the failure reported above is due to the COMMENT cards that follow the LONGSTRN keyword in the header.
There are deprecation warnings, but these are mentioned in #27 so are not considered here.
I have installed pyfits using conda (using the conda.binstar.org/sherpa channel), so that I have
% conda list pyfits
# packages in environment at /export/local/anaconda/envs/sherpa-master:
#
pyfits 3.3.0 np18py27_1
With the latest master branch of Sherpa, I get, when run from its top-level directory:
% ipython
Python 2.7.10 |Continuum Analytics, Inc.| (default, May 28 2015, 17:04:42)
Type "copyright", "credits" or "license" for more information.
IPython 3.1.0 -- An enhanced Interactive Python.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: import sherpa
In [2]: sherpa.__version__
Out[2]: '4.7+427.g57e2d92'
In [3]: from sherpa.astro import ui
WARNING: imaging routines will not be available,
failed to import sherpa.image.ds9_backend due to
'RuntimeErr: DS9Win unusable: Could not find ds9 on your PATH'
WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available
In [4]: ui.load_pha('sherpa/astro/datastack/tests/data/3c273.pi')
WARNING: systematic errors were not found in file 'sherpa/astro/datastack/tests/data/3c273.pi'
statistical errors were found in file 'sherpa/astro/datastack/tests/data/3c273.pi'
but not used; to use them, re-read with use_errors=True
read ARF file sherpa/astro/datastack/tests/data/3c273.arf
read RMF file sherpa/astro/datastack/tests/data/3c273.rmf
WARNING: systematic errors were not found in file 'sherpa/astro/datastack/tests/data/3c273_bg.pi'
statistical errors were found in file 'sherpa/astro/datastack/tests/data/3c273_bg.pi'
but not used; to use them, re-read with use_errors=True
read background file sherpa/astro/datastack/tests/data/3c273_bg.pi
So far, so good. If I call pack_pha on this dataset, I get PyFITS deprecation warnings (see #27), but also an error. (Note that you get essentially the same warnings and error if you call ui.save_data('/tmp/src.fits', ascii=False, clobber=True) here, which isn't too surprising, because I think save_data is implemented in terms of pack_pha.)
In [5]: out = ui.pack_pha(1)
/export/local/anaconda/envs/sherpa-master/lib/python2.7/site-packages/pyfits/card.py:70: PyfitsDeprecationWarning: The CardList class has been deprecated; all its former functionality has been subsumed by the Header class, so CardList objects should not be directly created. See the PyFITS 3.1.0 CHANGELOG for more details.
PyfitsDeprecationWarning)
sherpa/astro/io/pyfits_backend.py:896: PyfitsDeprecationWarning: The append function is deprecated as of version 3.1 and may be removed in a future version.
Use :meth:`Header.append` instead.
hdrlist.append(pyfits.Card( str(key.upper()), header[key] ))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-5-bb035208f5de> in <module>()
----> 1 out = ui.pack_pha(1)
<string> in pack_pha(id)
/export/sherpa/sherpa-master/sherpa/astro/ui/utils.pyc in pack_pha(self, id)
4716
4717 """
-> 4718 return sherpa.astro.io.pack_pha(self._get_pha_data(id))
4719
4720 ### DOC-TODO: labelling as AstroPy HDUList; i.e. assuming conversion
/export/sherpa/sherpa-master/sherpa/astro/io/__init__.pyc in pack_pha(dataset)
410 def pack_pha(dataset):
411 data, col_names, hdr = _pack_pha( dataset )
--> 412 return backend.set_pha_data('', data, col_names, hdr, packup=True)
/export/sherpa/sherpa-master/sherpa/astro/io/pyfits_backend.pyc in set_pha_data(filename, data, col_names, header, ascii, clobber, packup)
894 if header[key] is None:
895 continue
--> 896 hdrlist.append(pyfits.Card( str(key.upper()), header[key] ))
897
898 collist = []
/export/local/anaconda/envs/sherpa-master/lib/python2.7/site-packages/pyfits/card.pyc in __init__(self, keyword, value, comment, **kwargs)
445 self.keyword = keyword
446 if value is not None:
--> 447 self.value = value
448
449 if comment is not None:
/export/local/anaconda/envs/sherpa-master/lib/python2.7/site-packages/pyfits/card.pyc in value(self, value)
567 (float, complex, bool, Undefined, np.floating,
568 np.integer, np.complexfloating, np.bool_)):
--> 569 raise ValueError('Illegal value: %r.' % value)
570
571 if isinstance(value, float) and (np.isnan(value) or np.isinf(value)):
ValueError: Illegal value: This FITS file may contain long string keyword values that are
continued over multiple keywords. The HEASARC convention uses the &
character at the end of each substring which is then continued
on the next keyword which has the name CONTINUE..
It looks like either PyFITS doesn't support the LONGSTRN OGIP convention that CIAO uses in its headers, or we need to turn on the necessary flags/switches to enable it:
% dmlist sherpa/astro/datastack/tests/data/3c273.pi header,clean,raw
...
LONGSTRN = OGIP 1.0 / The HEASARC Long String Convention may be used.
COMMENT = This FITS file may contain long string keyword values that are /
COMMENT = continued over multiple keywords. The HEASARC convention uses the & /
COMMENT = character at the end of each substring which is then continued /
COMMENT = on the next keyword which has the name CONTINUE. /
...
Given that PyFITS read this in (see below), there may already be support for it (or there's some issue with how COMMENT keys are being processed):
In [1]: from sherpa.astro import ui
WARNING: imaging routines will not be available,
failed to import sherpa.image.ds9_backend due to
'RuntimeErr: DS9Win unusable: Could not find ds9 on your PATH'
WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available
In [2]: ui.load_pha('sherpa/astro/datastack/tests/data/3c273.pi')
WARNING: systematic errors were not found in file 'sherpa/astro/datastack/tests/data/3c273.pi'
statistical errors were found in file 'sherpa/astro/datastack/tests/data/3c273.pi'
but not used; to use them, re-read with use_errors=True
read ARF file sherpa/astro/datastack/tests/data/3c273.arf
read RMF file sherpa/astro/datastack/tests/data/3c273.rmf
WARNING: systematic errors were not found in file 'sherpa/astro/datastack/tests/data/3c273_bg.pi'
statistical errors were found in file 'sherpa/astro/datastack/tests/data/3c273_bg.pi'
but not used; to use them, re-read with use_errors=True
read background file sherpa/astro/datastack/tests/data/3c273_bg.pi
In [3]: d = ui.get_data()
In [4]: d.header['LONGSTRN']
Out[4]: 'OGIP 1.0'
In [5]: d.header['COMMENT']
Out[5]:
This FITS file may contain long string keyword values that are
continued over multiple keywords. The HEASARC convention uses the &
character at the end of each substring which is then continued
on the next keyword which has the name CONTINUE.
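A possible direction for a fix (purely a sketch; the function name is made up) is to skip commentary cards when copying a header into pyfits Card objects, since COMMENT/HISTORY/blank keys need dedicated handling in pyfits rather than a plain Card constructor call:

```python
COMMENTARY_KEYS = ("COMMENT", "HISTORY", "")

def copyable_items(header):
    """Yield (key, value) pairs that can become ordinary FITS cards,
    skipping None values and commentary keywords."""
    for key, value in header.items():
        key = str(key).upper()
        if key in COMMENTARY_KEYS or value is None:
            continue
        yield key, value
```

The skipped commentary cards could then be appended separately via the header's own add_comment/add_history-style interface.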
Using numpy 1.10.0, the test_histo test will fail with
E TypeError: histogram() got an unexpected keyword argument 'new'
This is in sherpa/astro/tests/test_astro.py.
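The fix on the Sherpa side should just be to drop the keyword; numpy has long used the 'new' semantics by default, so a plain call gives the same behaviour (sketch):

```python
import numpy as np

# The `new` keyword is gone in numpy 1.10; the default behaviour already
# matches what new=True used to select.
counts, edges = np.histogram([1, 2, 2, 3], bins=3)
```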
The numpy 1.9 release notes (currently at http://docs.scipy.org/doc/numpy/release.html, as I couldn't find a versioned URI) have this in the 'Future changes' section.
The Sherpa code has been somewhat lax about numpy warnings: some modules override the current value "globally" (i.e. without restricting it to a particular function), which can lead to odd behaviour depending on the order in which modules are imported. I have not checked whether we have any code that ignores the warning mentioned above, but it should probably be done now that numpy 1.10.0 has been released: https://mail.scipy.org/pipermail/scipy-user/2015-October/036709.html
A run of the test suite with numpy 1.10.0 doesn't seem to show any problem related to this particular change.
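A scoped alternative to the global overrides (a sketch of the pattern, not existing Sherpa code):

```python
import numpy as np

def safe_sqrt(values):
    """Suppress the invalid-value warning for this one computation only,
    instead of calling numpy.seterr at module import time."""
    with np.errstate(invalid="ignore"):
        return np.sqrt(values)
```

Because np.errstate is a context manager, the previous error state is restored on exit, so import order no longer matters.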
A segmentation fault with fake_pha on some systems was reported.
The original bug and the data are located here:
/data/lenin2/Bugs/Sherpa/Nustar/HD_15934
and the script 'test2.sph' fails on Mac.
This bug is specific to X-ray spectral analysis.
In the CIAO version, the screen output when running on the command line shows the following:
% sherpa
-----------------------------------------------------
Welcome to Sherpa: CXC's Modeling and Fitting Package
-----------------------------------------------------
CIAO 4.6 Sherpa version 2 Wednesday, January 22, 2014
sherpa-1> set_source("faked", "xsapec.A")
sherpa-2> fake_pha("faked", arf="tvcol_A.arf", rmf="tvcol_A.rmf", exposure=1.e8,
grouped=False)
/soft/ciao-4.6/binexe/sherpa: line 199: 28184 Segmentation fault (core
dumped) ipython --profile sherpa
This is an import of bug #13623 from the old bug-tracking system.
The PSF origin is related to the centroid of a kernel. Here is a summary of the current limitations that need to be revisited.
Here is the text from the bug report:
"The thread http://cxc.harvard.edu/sherpa/threads/2dpsf/ has text on the use of the "origin" parameter of the PSF model. By default, Sherpa assumes the pixel with the most counts is the center of the PSF image. If this is not true, the user can reset the kernel origin by adding the origin parameter to the PSF model. As said in the thread: For example, if DS9 was used to find that the brightest pixel is at (122, 131), psf.origin should be set equal to '(121, 130)'. This is necessary because Sherpa internally stores data, PSF, and kernel information as NumPy arrays - which start counting from element 0 - whereas DS9 uses pixel coordinates, with the first pixel having a value of (1,1). This is true, and correct advice as far as it goes.
But in recent Level 3 testing, we have found a further limitation: the axes of the PSF must both be even numbers for this to work properly. For example, if the axes of the PSF image are [84, 72], this advice works because the length of each axis is an even number. But for axes of [83, 72], [84, 71] or [83, 71], this advice will not work -- one or both axes have lengths that are odd numbers. "
I have not confirmed the limitation that the PSF axes must have an even number of pixels.
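The DS9-to-NumPy conversion described in the thread text is just an off-by-one shift:

```python
def ds9_to_array_origin(x, y):
    """Convert a 1-based DS9 pixel position to 0-based array indices."""
    return (x - 1, y - 1)

# The example from the thread: brightest pixel at DS9 (122, 131).
ds9_to_array_origin(122, 131)  # -> (121, 130)
```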
Using the rc1 version of matplotlib, I am unable to plot, since the pylab backend is reported as not having loaded, even though it does work (i.e. the plot from line 1 is displayed), as shown below. Note that the test suite (python setup.py test) passes, so it's obviously not exercising the plotting backends particularly well.
% ipython --pylab
Python 2.7.10 |Continuum Analytics, Inc.| (default, Sep 15 2015, 14:50:01)
Type "copyright", "credits" or "license" for more information.
IPython 4.0.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
Using matplotlib backend: Qt4Agg
In [1]: plt.plot([1,2,5,8])
Out[1]: [<matplotlib.lines.Line2D at 0x7f20b44c8d90>]
In [2]: from sherpa.astro import ui
WARNING: failed to import sherpa.plot.pylab_backend; plotting routines will not be available
The problem (or at least the first one; there could be more) is that the way we access default options no longer works in 1.5.
In [1]: import sherpa.plot.pylab_backend
WARNING: failed to import sherpa.plot.pylab_backend; plotting routines will not be available
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-1-06f45ada1491> in <module>()
----> 1 import sherpa.plot.pylab_backend
/home/naridge/data/my-sherpa/sherpa-matplotlib/sherpa/plot/pylab_backend.py in <module>()
95 overplot=False, clearwindow=True,
96 yerrorbars=False,
---> 97 ecolor=_errorbar_defaults['ecolor'],
98 capsize=_errorbar_defaults['capsize'],
99 barsabove=_errorbar_defaults['barsabove'],
KeyError: 'ecolor'
From earlier in the file we have
_errorbar_defaults = get_keyword_defaults(pylab.Axes.errorbar)
Using matplotlib 1.5rc1 I get
In [2]: import pylab
In [3]: import sherpa.utils
In [4]: sherpa.utils.get_keyword_defaults(pylab.Axes.errorbar)
Out[4]: {}
whereas with 1.4.3 I get
In [1]: import pylab
In [2]: import sherpa.utils
In [3]: sherpa.utils.get_keyword_defaults(pylab.Axes.errorbar)
Out[3]:
{'barsabove': False,
'capsize': 3,
'capthick': None,
'ecolor': None,
'elinewidth': None,
'errorevery': 1,
'fmt': u'',
'lolims': False,
'uplims': False,
'xerr': None,
'xlolims': False,
'xuplims': False,
'yerr': None}
I installed 1.5rc1 via
% conda create -c tacaswell python=2.7 astropy ipython matplotlib
and the issue is probably related to the 'Axes and artist' point at http://matplotlib.org/devdocs/api/api_changes.html#changes-in-1-5-0
I'm hoping that it's just a case of finding where to extract this information from (and that it's a backwards-compatible change).
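One possibly backwards-compatible approach (an untested sketch; I have not verified it against matplotlib 1.5): use the signature machinery, which follows __wrapped__ chains, so defaults survive even when errorbar is wrapped by a decorator. This is Python 3's inspect.signature; the funcsigs package backports it to Python 2.

```python
import inspect

def get_keyword_defaults_sig(func):
    """Collect keyword defaults via inspect.signature rather than raw
    func.__defaults__ inspection, so decorated functions still report them."""
    sig = inspect.signature(func)
    return {name: par.default for name, par in sig.parameters.items()
            if par.default is not inspect.Parameter.empty}
```

If matplotlib 1.5's decorator uses functools.wraps (which sets __wrapped__), this should recover the same dictionary that 1.4.3 returned directly.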
The XSpec models come in additive, multiplicative, convolution, pileup, and mixing-model types. Sherpa already supports additive and multiplicative. It does not support the one pileup model provided by XSpec, but does have its own version of this functionality. This leaves convolution and mixing-model types. This request is about the convolution models only, which are documented at http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/manual/Convolution.html
The _xspec.cc file already contains interfaces to the XSpec convolution models. To support their use in Sherpa we need Python classes.
I have an initial set of classes in my branch at https://github.com/DougBurke/sherpa/tree/feature/xspec-convolution-models - which is based on the support I wrote for these models in the CIAO contributed-script package. It is not ready for a PR, as I want to experiment a bit more.
Note that to properly use these models from Sherpa will require a rethink of how model evaluation is done (I will write up thoughts on this in another issue at some point in the near future). That is, some XSpec convolution models can be used as is, but some aren't going to be useful until further changes are made in Sherpa itself.
The gist at https://gist.github.com/DougBurke/65dd0716afc061a03d63 contains the output of a script (and the script itself) that shows the cflux and cpflux models being used - see http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/manual/XSmodelCflux.html and http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/manual/XSmodelCpflux.html - using DougBurke@ccd2ccf
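Structurally, a convolution model differs from the additive and multiplicative kinds in that it transforms another model's evaluated flux rather than combining with it; a bare sketch (class and argument names are illustrative, not the branch's actual classes):

```python
class ConvolvedModel:
    """Wrap a model with a convolution kernel: evaluate the wrapped model
    on the energy grid, then let the kernel transform the resulting flux."""

    def __init__(self, kernel, model):
        self.kernel = kernel    # callable(pars, elo, ehi, flux) -> flux
        self.model = model      # callable(elo, ehi) -> flux

    def __call__(self, pars, elo, ehi):
        flux = self.model(elo, ehi)
        return self.kernel(pars, elo, ehi, flux)
```

This is also why model evaluation needs a rethink: the kernel must see the full evaluated flux array, not just per-bin values, before any filtering is applied.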
It appears that, if you have numpy 1.9.2 installed, sherpa_test will produce the following on-screen warning (originally discussed as part of PR #29, but pulled out into a separate issue). I don't think anything is wrong here, but:
.................................ssssssssssssssssssssssssssss.......sss..ssssssssssss./home/naridge/data/my-sherpa/sherpa-omar/sherpa/estmethods/__init__.py:369: RuntimeWarning: invalid value encountered in sqrt
diag = (sigma * numpy.sqrt(inv_info)).diagonal()
....sssssssssss..........................................................ssssss...........................sssss.sssssss.......................................................
----------------------------------------------------------------------
This does not happen if numpy 1.8.1 is installed.
Running sherpa_test -v 3 shows that the message is from test_covar:
test_covar (sherpa.estmethods.tests.test_estmethods.test_estmethods)
/home/naridge/data/my-sherpa/sherpa-omar/sherpa/estmethods/__init__.py:369: RuntimeWarning: invalid value encountered in sqrt
diag = (sigma * numpy.sqrt(inv_info)).diagonal()
... ok
However, I get this warning from both versions of numpy when running sqrt directly, as shown below, so it's not clear to me what is causing this message to appear. A quick look at test_estmethods.py doesn't suggest that the warning is being "turned off" in any way - I haven't checked whether numpy.seterr allows this - but maybe some change in behavior in one of the earlier tests means that with numpy 1.9 the message hasn't already been emitted, whereas with numpy 1.8 it has (in a place where the message is "hidden"); this is all speculation.
% python
Python 2.7.9 |Continuum Analytics, Inc.| (default, Apr 14 2015, 12:54:25)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://binstar.org
>>> import numpy as np
>>> np.version.version
'1.9.2'
>>> np.sqrt([1,2,3])
array([ 1. , 1.41421356, 1.73205081])
>>> np.sqrt([1,-2,3])
__main__:1: RuntimeWarning: invalid value encountered in sqrt
array([ 1. , nan, 1.73205081])
>>> np.sqrt([1,-2,3])
array([ 1. , nan, 1.73205081])
>>>
and
% python
Python 2.7.6 (default, Apr 18 2014, 17:48:56)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.version.version
'1.8.1'
>>> np.sqrt([1,2,3])
array([ 1. , 1.41421356, 1.73205081])
>>> np.sqrt([1,-2,3])
__main__:1: RuntimeWarning: invalid value encountered in sqrt
array([ 1. , nan, 1.73205081])
>>> np.sqrt([1,-2,3])
array([ 1. , nan, 1.73205081])
>>>
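One plausible explanation (a sketch to demonstrate the mechanism, not a confirmed diagnosis): numpy routes these messages through Python's warnings module, whose default filter shows each warning only once per location, which is also why the repeated np.sqrt call in the transcripts above is silent:

```python
import warnings
import numpy as np

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")   # defeat the show-once behaviour
    np.sqrt(np.array([-1.0]))
    np.sqrt(np.array([-1.0]))
# Under "always" both calls warn; under the default filter only the first does.
```

So whether the test run prints the warning can depend on whether an earlier test already triggered it from the same location.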
I started importing the existing bugs from the older bug system.
This bug, #13899, is related to XSPEC - a segmentation fault in the CIAO 4.6 version.
I include the discussion below. @dtnguyen2 should have the test files for testing this bug.
I suggest checking whether this bug has been fixed by the recent work on the XSPEC build described in PR #34.
Using the files that you provide above, I can replicate the segv that you
mention using the ciao4.6 version of sherpa (/soft/ciao- linux 64 bit
installation). As you state, it is not consistent, but I've had it happen
several times.
From a quick peek into a generated core file- it looks like the segv is
occurring in the XSpec code:
Core was generated by `/soft/ciao-4.6/ots/bin/python
/soft/ciao-4.6/ots/bin/ipython --profile sherpa -'.
Program terminated with signal 11, Segmentation fault.
#0 0x00007f37ea72a54a in bisearch (ne=,
ear=0x4fa5b80,
energy=@0x7fff63be200c) at xsgaul.f:314
314 IF ( energy .GT. ear(bisearch) ) THEN
We'll continue to investigate the issue and let you know when we have more
details.
Hi Dan,
I've edited:
/pool14/nlee/helpdesk/NEI_sim.py
to force it to point to the APEC files I've placed in:
/pool7/nlee/apec
which should ensure that the script executes.
~Nick
On Thu, Jan 16, 2014 at 10:20 AM, Nguyen, Dan [email protected]:
I'm not seeing the segv that you are reporting; in fact I never quite make it there:
dnguyen_dev@newdevel12-10160: python -i NEI_sim.py
Simulating 1 of 2000
Traceback (most recent call last):
File "NEI_sim.py", line 121, in
load_pha(filename)
File "", line 1, in load_pha
File
"/vobs/ASC_BUILD/lib/python2.7/site-packages/sherpa/astro/ui/utils.py",
line 1096, in load_pha
phasets = self.unpack_pha(arg, use_errors)
File
"/vobs/ASC_BUILD/lib/python2.7/site-packages/sherpa/astro/ui/utils.py",
line 1031, in unpack_pha
return sherpa.astro.io.read_pha(arg, use_errors)
File
"/vobs/ASC_BUILD/lib/python2.7/site-packages/sherpa/astro/io/init.py",
line 151, in read_pha
datasets, filename =
backend.get_pha_data(arg,use_background=use_background)
File
"/vobs/ASC_BUILD/lib/python2.7/site-packages/sherpa/astro/io/crates_backend.py",
line 964, in get_pha_data
raise IOErr('reqcol', 'CHANNEL', filename)
sherpa.utils.err.IOErr: Required column 'CHANNEL' not found in temp_1.pha
On Thu, Jan 16, 2014 at 10:07 AM, McLaughlin, Warren <[email protected]> wrote:
Hi Dan-
I believe they are available for download from the table at:
http://cxc.harvard.edu/caldb/prop_plan/imaging/index.html
You can also access copies of them that I downloaded into /pool7/sherpa.
-Warren
On Thu, Jan 16, 2014 at 9:54 AM, Nguyen, Dan [email protected]:
Nick---
Can you make the files
arfname="aciss_aimpt_cy16.arf"
rmfname="aciss_aimpt_cy16.rmf"
available?
dnguyen_dev@newdevel12-10151: ls -l /pool14/nlee/helpdesk
total 12
-rw-r--r--. 1 nlee nlee 11733 Jan 15 17:02 NEI_sim.py
Thanks in advance.
dan
dnguyen_dev@newdevel12-10148: python -i
/pool14/nlee/helpdesk/NEI_sim.py
Traceback (most recent call last):
File "/pool14/nlee/helpdesk/NEI_sim.py", line 72, in
arf1=unpack_arf(arfname)
File "", line 1, in unpack_arf
File
"/vobs/ASC_BUILD/lib/python2.7/site-packages/sherpa/astro/ui/utils.py",
line 3411, in unpack_arf
return sherpa.astro.instrument.ARF1D(sherpa.astro.io.read_arf(arg))
File
"/vobs/ASC_BUILD/lib/python2.7/site-packages/sherpa/astro/io/init.py",
line 131, in read_arf
data, filename = backend.get_arf_data(arg)
File
"/vobs/ASC_BUILD/lib/python2.7/site-packages/sherpa/astro/io/crates_backend.py",
line 673, in get_arf_data
arf = open_crate(arg)
File
"/vobs/ASC_BUILD/lib/python2.7/site-packages/sherpa/astro/io/crates_backend.py",
line 44, in open_crate
dataset = open_crate_dataset(filename, crateType, mode)
File
"/vobs/ASC_BUILD/lib/python2.7/site-packages/sherpa/astro/io/crates_backend.py",
line 36, in open_crate_dataset
dataset = crateType(filename, mode=mode)
File
"/vobs/ASC_BUILD/lib/python2.7/site-packages/pycrates/cratedataset.py",
line 28, in __init__
raise IOError("File " + input + " does not exist.")
IOError: File aciss_aimpt_cy16.arf does not exist.
On Wed, Jan 15, 2014 at 5:16 PM, Nicholas Lee [email protected] wrote:
Hi sherpadev,
I've been working on a helpdesk ticket for a user that is generating a
large number of synthetic spectra (2000 in this case) and repeatedly fits
them to a xsvnei model, and it seems there is a problem when the
NEIAPECROOT is set to a specific path and the NEIVERS called is AtomDB 2.
I've placed a modified version of the user's scripts in:
/pool14/nlee/helpdesk/NEI_sim.py
I've pointed the script to use the spectral model data found in the HEAD installation of HEASoft:
set_xsxset("APECROOT","/soft/heasoft/spectral/modelData/apec_v2.0.2")
set_xsxset("NEIAPECROOT","/soft/heasoft/spectral/modelData/APEC_nei_v11")
While using AtomDB 1.3.1 the script runs to completion; if set to use 2.0, 2.0.1, or 2.0.2, the script will fail after a random number of iterations - it can be as few as 1 or as many as 1000 - with a segv core dump. Quite possibly the Python destructor is not closing a file? It seems like a bug in how Sherpa is handling AtomDB 2.
Thanks,
~Nick
This is an issue from the CIAO 4.7 ECR related to the test failures.
"We have noticed some sporadic Xspec failures when running the sherpa regression tests. For example, when some models are called multiple times or with certain array sizes, they may return arrays of zeros or nans. The issue also exists in CIAO 4.6 using Xspec 12.8.0. "
The status of the tests should be checked after PR #34 is merged.
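A simple guard in the test harness could at least flag this failure mode when it occurs. This is a hypothetical helper, not part of the Sherpa test suite; it only encodes the symptom described above (all-zero or NaN-containing model output):

```python
import math

def looks_corrupt(vals):
    """Flag the failure mode described above: a model evaluation that
    comes back as all zeros or contains NaNs (heuristic only)."""
    vals = list(vals)
    if not vals:
        return False
    return all(v == 0 for v in vals) or any(math.isnan(v) for v in vals)
```

A regression test could re-evaluate the model and fail loudly when `looks_corrupt` returns True, rather than silently comparing garbage against a baseline.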
Hi,
I just read about Sherpa on the astrostatistics Facebook timeline, and I wonder if you would like to create a Debian/Ubuntu package?
Ideally, this should be done within the Debian Astronomy working group. You could do it yourself, or anyone else can volunteer. As far as I could see, the package is quite standard, so the packaging should not be too difficult.
Please contact me directly, or via the mailing list, if you need further information. We can guide you through the full process, and also finally upload your package to Debian. Once it is in Debian, it will automatically migrate to distributions like Ubuntu or Mint.
Best regards
Ole
This Friday I'll join the Debian Astro packaging tutorial by @olebole and I'd like to try and create a Debian package for Sherpa 4.7.
The maintainer for the package will be the "Debian Astronomy Team" as explained here. Anyone interested in following this or helping out is welcome to join their mailing list.
OK? Or are there any reasons not to do it or work in progress to add it already?
And would it be OK if I use this issue to ask for advice if any issues come up during packaging?
This is an issue sent by a user to the CXC helpdesk pointing out the limitations of the user-statistic interface (calc_stat_func).
This issue limits the user_stat functionality. There is no workaround.
"So I'm trying to define a w-statistic function for use in fitting some data, and I'm trying to follow the examples and documentation at http: //cxc.harvard.edu/sherpa/faq/user_stat.html and
http: //cxc.harvard.edu/sherpa/ahelp/load_user_stat.html, and the definition of the w-statistic at
http: //heasarc.nasa.gov/xanadu/xspec/manual/XSappendixStatistics.html (at
the end of the second cstat section).
My question is, how does a statistic-calculating function gain access to the
backgrounds from the pha file? According to the documentation the function
definition should look like
def calc_stat_func(data, model, staterror=None, syserror=None, weight=None):
and that doesn't have a "background" argument."
XSpec models that call the SUMDEM routine - e.g. xsapec, but many others - can cause a crash if called with only one bin, as shown below ("XSpec model crash" section). This has been reported to the XSpec team, but is unlikely to be a major problem for XSpec. It is a more serious issue for Sherpa than for XSpec because of the way the wrapper code that interfaces to the XSpec model library is written; in particular, how it handles being given a non-contiguous grid ("Sherpa model crash" and "What causes the crash (in Sherpa)?" sections).
It is not clear to me how many of the existing issues are affected by this: #55 and #56 may be but I'm not convinced that #42 is, because the test code isn't written to test out this behavior (and so doesn't show it).
I am going to update the xspec test code to make sure it compares the results from model evaluations run with a single (contiguous) grid and with xlo,xhi grids (made to be non-contiguous), and compare the overlap. I've already done this in off-to-the-side work and it shows the crash behavior for some models, as well as other differences, which I believe come down to the fact that many models interpolate between grids, so running with different grids can give different results.
Once the test suite has been improved I plan to change the wrapper code so that it deals with non-contiguous grids by only calling the XSpec function once, with extra bins added to make it contiguous, and then remove them on output. Unless someone wants to do this before me ;-)
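The planned change can be sketched in Python: merge the non-contiguous bins into one contiguous grid, call the model once, then drop the padding bins on output. This is an illustrative outline, not the actual wrapper code (which lives in C++):

```python
def eval_contiguous(model_func, xlo, xhi):
    """Evaluate an XSPEC-style model on a possibly non-contiguous grid.
    model_func takes (lo_edges, hi_edges) and returns one value per bin."""
    edges = [xlo[0]]
    keep = []
    for lo, hi in zip(xlo, xhi):
        if lo != edges[-1]:
            edges.append(lo)         # insert a padding bin covering the gap
        keep.append(len(edges) - 1)  # index of this real bin in merged grid
        edges.append(hi)
    # a single call on the contiguous grid: no 1-bin evaluations needed
    flux = model_func(edges[:-1], edges[1:])
    return [flux[i] for i in keep]   # drop the padding bins
```

For example, with xlo=[0.1, 0.3, 0.4] and xhi=[0.2, 0.4, 0.5] the model is called once on the contiguous edges [0.1, 0.2, 0.3, 0.4, 0.5] and the padding bin [0.2, 0.3] is discarded from the output.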
% uname -a
Linux madeupname 3.19.0-21-generic #21-Ubuntu SMP Sun Jun 14 18:31:11 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
% xspec
XSPEC version: 12.8.2q
Build Date/Time: Wed Jun 24 12:42:59 2015
XSPEC12>energies 0.1 0.2 1
Models will now use energy array created from:
0.1 - 0.2 1 linear bins
XSPEC12>mo apec
Input parameter value, delta, min, bot, top, and max values for ...
1 0.01( 0.01) 0.008 0.008 64 64
1:apec:kT>/*
Segmentation fault (core dumped)
% python
Python 2.7.10 |Continuum Analytics, Inc.| (default, May 28 2015, 17:02:03)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://binstar.org
>>> import sherpa
>>> sherpa.__version__
'4.7+475.g0ee48e5.dirty'
>>> from sherpa.astro import xspec
>>> xspec.get_xsversion()
'12.8.2q'
>>> mdl = xspec.XSapec()
>>> mdl([0.1,0.2,0.3,0.4,0.5])
array([ 2.01070237, 0.31376797, 0.21693133, 0.12136491, 0. ], dtype=float32)
>>> mdl([0.1,0.2,0.3,0.4], [0.2,0.3,0.4,0.5])
array([ 2.01070237, 0.31376797, 0.21693133, 0.12136491], dtype=float32)
>>> mdl([0.1,0.3,0.4], [0.2,0.4,0.5])
Segmentation fault (core dumped)
So, when given xlo and xhi grids which are not contiguous, we see a crash.
The Sherpa behavior is down to sherpa/include/sherpa/astro/xspec_extension.hh, which has templates that include some variant of
// Suppose the data were filtered, such that there is a gap
// in the middle of the energy array. In that case *only*,
// we will find that xlo[i+1] != xhi[i]. However, XSPEC models
// expect that xlo[i+1] == xhi[i].
//
// So, if we pass in filtered data and xlo[i+1] != xhi[i],
// then at energy bin i we will end up calculating an energy
// flux that is far too great. We will correct that by gathering
// information to allow us to recalculate individual bins, with
// boundaries xlo[i], xhi[i], to correct for cases where
// boundaries xlo[i], xlo[i+1] results in a bin that is too big.
//
// We will gather the locations of the gaps here, and calculate
// actual widths based on xhi[i] - xlo[i] downstream.
//
// If we are working in wavelength space we will also correct for that.
// SMD 11/21/12.
for (int i = 0; i < nelem-1; i++) {
double cmp;
if ( is_wave ) {
cmp = sao_fcmp(xlo[i], xhi[i+1], DBL_EPSILON);
} else {
cmp = sao_fcmp(xhi[i], xlo[i+1], DBL_EPSILON);
}
if (0 != cmp) {
gaps.push_back(i);
double width = fabs(xhi[i] - xlo[i]);
if( is_wave ) {
width = hc / width;
}
gap_widths.push_back(width);
}
}
and then, after evaluating the model
// If there were gaps in the energy array, because of locations
// where xlo[i+1] != xhi[i], then this is place where we recalculate
// energy fluxes for those bins *only*.
//
// For each such location in the energy grid, construct a new
// 2-bin energy array, such that the 2-bin array is [xlo[i],
// xhi[i]]. This is accomplished by:
//
// ear2[0] = ear[location of gap]
// ear2[1] = ear[location of gap] + abs(xhi[location of gap] -
// xlo[location of gap])
// The locations of the gaps, and the actual widths of the energy
// bins at those locations, were calculated above. So use the
// gaps and gap_widths vectors here to recalculate energy fluxes
// at affected bins *only*. SMD 11/21/12
while(!gaps.empty()) {
std::vector<FloatArrayType> ear2(2);
int bin_number = gaps.back();
ear2[0] = ear[bin_number];
ear2[1] = ear2[0] + gap_widths.back();
int ear2_nelem = 1;
XSpecFunc( &ear2[0], &ear2_nelem, &pars[0], &ifl, &result[bin_number],
&error[bin_number]);
gaps.pop_back();
gap_widths.pop_back();
}
where you can see that there are calls to evaluate the model for grids with 1 bin, which triggers the crash.
I found this when updating the documentation to match the CIAO version, and it happens in both CIAO and the released version.
The warning message that delete_model_component should create when the input model component is part of a source expression does not appear; instead an error is thrown: KeyError: 1. The 1 comes from list_model_ids, and is used to find the source expressions to check. Something in the internals has changed, which makes this check fail and throw an unrelated, unhelpful-to-the-user error. As discussed below, it also leaves the system in a slightly-confused state.
In [1]: from sherpa.astro import ui
... hidden warning messages ...
In [2]: ui.load_arrays(1, [1,2,3], [4,5,6])
In [3]: ui.set_model(ui.const1d.mdl)
In [4]: print(mdl)
const1d.mdl
Param Type Value Min Max Units
----- ---- ----- --- --- -----
mdl.c0 thawed 1 -3.40282e+38 3.40282e+38
In [5]: ui.delete_model_component('mdl')
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-5-3ef4f808567f> in <module>()
----> 1 ui.delete_model_component('mdl')
<string> in delete_model_component(name)
/home/naridge/data/my-sherpa/ltest/lib/python2.7/site-packages/sherpa/ui/utils.pyc in delete_model_component(self, name)
4318 if mod is not None:
4319 for key in self.list_model_ids():
-> 4320 if mod in self._models[key] or mod in self._sources[key]:
4321 warning("the model component '%s' is found in model %s" %
4322 (mod.name, str(key) + " and cannot be deleted" ))
KeyError: 1
The line numbers will not match up since this comes from my own documentation fork.
In [6]: ui.list_model_ids()
Out[6]: [1]
Looking at the code you can see that the "removal" of the component is done before the warning message is displayed/model restored, which - in this case - leads to the component actually disappearing. Perhaps the logic should be changed to only do the deletion after the check?
In [7]: ui.list_model_components()
Out[7]: []
I expected this to still return ['mdl'].
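The suggested ordering can be sketched with illustrative stand-ins for the sherpa.ui.utils internals (these are not the actual attribute names): do the reference check first, using .get() so a stale id cannot raise KeyError, and only delete once the component is known to be unreferenced.

```python
def delete_component(name, components, models, sources):
    """Sketch only. components: name -> model object;
    models/sources: dataset id -> list of model objects in use."""
    mod = components.get(name)
    if mod is None:
        return
    for key in set(models) | set(sources):
        # .get() avoids the KeyError seen when an id is only in one dict
        if mod in models.get(key, ()) or mod in sources.get(key, ()):
            print("warning: the model component %r is found in model %s "
                  "and cannot be deleted" % (name, key))
            return
    # delete only after the check, so an in-use component survives intact
    del components[name]
```

With this ordering, list_model_components() would still report the component after a refused deletion, instead of the component silently disappearing.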
Using CIAO 4.7 (OS-X), with Sherpa started up in the top-level of the code distribution (for easy access to a PHA test file):
% sherpa
-----------------------------------------------------
Welcome to Sherpa: CXC's Modeling and Fitting Package
-----------------------------------------------------
CIAO 4.7 Sherpa version 1 Thursday, December 4, 2014
sherpa-1> import sys
sherpa-2> sys.tracebacklimit = 10
sherpa-3> load_pha('sherpa/astro/datastack/tests/data/3c273.pi')
WARNING: systematic errors were not found in file 'sherpa/astro/datastack/tests/data/3c273.pi'
statistical errors were found in file 'sherpa/astro/datastack/tests/data/3c273.pi'
but not used; to use them, re-read with use_errors=True
read ARF file sherpa/astro/datastack/tests/data/3c273.arf
read RMF file sherpa/astro/datastack/tests/data/3c273.rmf
WARNING: systematic errors were not found in file 'sherpa/astro/datastack/tests/data/3c273_bg.pi'
statistical errors were found in file 'sherpa/astro/datastack/tests/data/3c273_bg.pi'
but not used; to use them, re-read with use_errors=True
read background file sherpa/astro/datastack/tests/data/3c273_bg.pi
sherpa-4> save_table(1, '/tmp/tbl.fits', ascii=False, clobber=True)
File "/export/local/ciao-4.7/ots/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2883, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-4-6a29ee71142f>", line 1, in <module>
save_table(1, '/tmp/tbl.fits', ascii=False, clobber=True)
File "<string>", line 1, in save_table
File "/export/local/ciao-4.7/lib/python2.7/site-packages/sherpa/astro/ui/utils.py", line 3162, in save_table
ascii, clobber)
File "/export/local/ciao-4.7/lib/python2.7/site-packages/sherpa/astro/io/__init__.py", line 391, in write_table
backend.set_table_data(filename, data, names, ascii=ascii, clobber=clobber)
File "/export/local/ciao-4.7/lib/python2.7/site-packages/sherpa/astro/io/crates_backend.py", line 1230, in set_table_data
tbl.write(filename, clobber=True)
File "/export/local/ciao-4.7/lib/python2.7/site-packages/pycrates/tablecrate.py", line 299, in write
backend.write( self, outfile=outfile )
File "/export/local/ciao-4.7/lib/python2.7/site-packages/pycrates/io/dm_backend.py", line 652, in write
self.__write_block( crate, close_flag )
File "/export/local/ciao-4.7/lib/python2.7/site-packages/pycrates/io/dm_backend.py", line 695, in __write_block
self.__write_data(block, crate)
File "/export/local/ciao-4.7/lib/python2.7/site-packages/pycrates/io/dm_backend.py", line 805, in __write_data
col_dd = self.__create_column_descriptor( block, col )
ValueError: Unable to create column HEADER: unsupported datatype ('numpy.object_').
sherpa-5>
This was originally sent to sherpadev by a user.
Sherpa version 4.7 (commit 2d819ad).
Ubuntu 12.04
Numpy 1.6
Python 2.7
When building Sherpa 4.7 from sources on Ubuntu 12.04 with the system's Python and numpy, one gets the following error:
building 'sherpa.estmethods._est_funcs' extension
compiling C++ sources
C compiler: g++ -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -fPIC
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/sherpa
creating build/temp.linux-x86_64-2.7/sherpa/estmethods
creating build/temp.linux-x86_64-2.7/sherpa/estmethods/src
compile options: '-Isherpa/include -Isherpa/utils/src -Isherpa/utils/src/gsl -I/usr/lib/python2.7/dist-packages/numpy/core/include -I/usr/include/python2.7 -c'
g++: sherpa/estmethods/src/estwrappers.cc
In file included from sherpa/include/sherpa/extension.hh:23:0,
from sherpa/estmethods/src/estwrappers.cc:20:
sherpa/include/sherpa/array.hh: In member function 'int sherpa::Array<CType, ArrayType>::create(int, const npy_intp*, CType*)':
sherpa/include/sherpa/array.hh:53:21: error: 'NPY_ARRAY_CARRAY' was not declared in this scope
sherpa/include/sherpa/array.hh: In member function 'int sherpa::Array<CType, ArrayType>::from_obj(PyObject*, bool)':
sherpa/include/sherpa/array.hh:138:9: error: 'NPY_ARRAY_CARRAY' was not declared in this scope
sherpa/include/sherpa/array.hh:138:9: error: 'NPY_ARRAY_BEHAVED' was not declared in this scope
sherpa/estmethods/src/estwrappers.cc: In function 'PyObject* _wrap_info_matrix(PyObject*, PyObject*)':
sherpa/estmethods/src/estwrappers.cc:346:34: error: 'NPY_ARRAY_CARRAY' was not declared in this scope
sherpa/include/sherpa/array.hh: In member function 'int sherpa::Array<CType, ArrayType>::create(int, const npy_intp*, CType*) [with CType = double, int ArrayType = 12, npy_intp = long int]':
sherpa/include/sherpa/array.hh:54:5: warning: control reaches end of non-void function [-Wreturn-type]
sherpa/include/sherpa/array.hh: In member function 'int sherpa::Array<CType, ArrayType>::create(int, const npy_intp*, CType*) [with CType = int, int ArrayType = 5, npy_intp = long int]':
sherpa/include/sherpa/array.hh:54:5: warning: control reaches end of non-void function [-Wreturn-type]
sherpa/estmethods/src/estutils.hh: At global scope:
sherpa/estmethods/src/estutils.hh:48:15: warning: 'double get_stat(double*, double*, int, const double*, const double*, const double*, int, double (*)(double*, int))' declared 'static' but never defined [-Wunused-function]
region_util.c: In function โregComposeAllocShapeโ:
region_util.c:514:7: warning: format not a string literal and no format arguments [-Wformat-security]
region_util.c: In function โregPrintShapeโ:
region_util.c:651:7: warning: format not a string literal and no format arguments [-Wformat-security]
stack.c: In function 'stk_test':
stack.c:1116:9: warning: ignoring return value of 'fscanf', declared with attribute warn_unused_result [-Wunused-result]
stack.c: In function 'fgets_trim':
stack.c:997:7: warning: ignoring return value of 'fgets', declared with attribute warn_unused_result [-Wunused-result]
pystk.c: In function '_stk_build':
pystk.c:88:7: warning: ignoring return value of 'pipe', declared with attribute warn_unused_result [-Wunused-result]
wcssubs-3.8.7/imhfile.c: In function 'irafwhead':
wcssubs-3.8.7/imhfile.c:913:5: warning: ignoring return value of 'ftruncate', declared with attribute warn_unused_result [-Wunused-result]
In file included from /usr/include/string.h:642:0,
from wcssubs-3.8.7/imhfile.c:83:
In function 'strncat',
inlined from 'same_path' at wcssubs-3.8.7/imhfile.c:1078:2:
/usr/include/x86_64-linux-gnu/bits/string3.h:152:3: warning: call to __builtin___strncat_chk might overflow destination buffer [enabled by default]
wcssubs-3.8.7/fileutil.c: In function 'first_token':
wcssubs-3.8.7/fileutil.c:346:6: warning: ignoring return value of 'fgets', declared with attribute warn_unused_result [-Wunused-result]
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "max" with "max_bn".
error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -fPIC -Isherpa/include -Isherpa/utils/src -Isherpa/utils/src/gsl -I/usr/lib/python2.7/dist-packages/numpy/core/include -I/usr/include/python2.7 -c sherpa/estmethods/src/estwrappers.cc -o build/temp.linux-x86_64-2.7/sherpa/estmethods/src/estwrappers.o" failed with exit status 1
Most likely this has to do with the version of Numpy.
By the way, Numpy 1.6 was never supported by CIAO. We went from 1.5 (CIAO 4.5) to 1.7 (CIAO 4.6).
We should confirm that the Numpy version is the cause of the build failure, and then decide whether we want to support older versions of Numpy, or we want to explicitly mention what versions of Numpy are supported. Maybe our code should check the Numpy version and fail if it is not supported. And we should probably update our Travis CI configuration to build and test Sherpa with the Numpy versions we support.
We should probably keep this issue open until we confirm the cause of the failure and we decide how to proceed. Then maybe we should open different issues/pull requests with whatever actions we decide to take.
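An import-time check along these lines would give a clear error instead of an opaque compile failure. The (1, 7) minimum is an assumption based on the NPY_ARRAY_CARRAY / NPY_ARRAY_BEHAVED constants having been introduced with the NumPy 1.7 C API; the helper name is illustrative:

```python
def numpy_supported(version_string, minimum=(1, 7)):
    """Return True if a numpy.__version__ string meets the assumed
    minimum (major, minor) requirement."""
    parts = []
    for piece in version_string.split(".")[:2]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits or 0))
    return tuple(parts) >= minimum

# At import time one could then do, roughly:
#   if not numpy_supported(numpy.__version__):
#       raise ImportError("sherpa requires NumPy >= 1.7")
```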
I have a fix for the following (and a few other minor issues with save_all), but I'm waiting for #91 to be merged before submitting, to avoid a conflict. EDIT See PR #98
The save_all command does not always save the source-model expression. It will in simple cases - e.g. (I've removed irrelevant text output from the command here)
In [1]: from sherpa.astro import ui
/home/naridge/local/anaconda/envs/sherpa-fits-cleanup/lib/python2.7/site-packages/IPython/kernel/__init__.py:13: ShimWarning: The `IPython.kernel` package has been deprecated. You should import from ipykernel or jupyter_client instead.
"You should import from ipykernel or jupyter_client instead.", ShimWarning)
In [2]: ui.load_arrays(1, [1,2,3], [5,6,7])
In [3]: ui.set_source(ui.const1d.m1)
In [4]: ui.save_all()
...
######### Set Source, Pileup and Background Models
set_full_model(1, const1d.m1)
######### XSPEC Module Settings
...
However, with PHA data it doesn't record a source model:
In [1]: from sherpa.astro import ui
/home/naridge/local/anaconda/envs/sherpa-fits-cleanup/lib/python2.7/site-packages/IPython/kernel/__init__.py:13: ShimWarning: The `IPython.kernel` package has been deprecated. You should import from ipykernel or jupyter_client instead.
"You should import from ipykernel or jupyter_client instead.", ShimWarning)
In [2]: ui.load_arrays(1, [1,2,3], [5,6,7])
In [3]: ui.set_source(ui.const1d.m1)
In [4]: ui.save_all()
...
######### Set Source, Pileup and Background Models
######### XSPEC Module Settings
...
If you are using the master branch, and have IPython 4.0.0 and astropy 1.0.4 installed, then you will see this message on importing sherpa.astro.ui:
% ipython
Python 2.7.10 |Continuum Analytics, Inc.| (default, May 28 2015, 17:02:03)
Type "copyright", "credits" or "license" for more information.
IPython 4.0.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: from sherpa.astro import ui
/home/user/local/anaconda/envs/sherpa/lib/python2.7/site-packages/IPython/kernel/__init__.py:13: ShimWarning: The `IPython.kernel` package has been deprecated. You should import from ipykernel or jupyter_client instead.
"You should import from ipykernel or jupyter_client instead.", ShimWarning)
WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available
This comes from astropy, not sherpa. It looks to have been recently fixed upstream (see astropy/astropy#4078).
I'm running an IPython notebook using the Sherpa development version 4.7+426.g6879d8d.
I get an additional INFO line after certain commands when I use the notebook. This does not happen when I import Sherpa into my IPython session. Is this something specific to the IPython notebook?
Here is an example of the output in the notebook; the command line gives only the standard one-line information shown in the second line:
load_data("acis.pi")
read ARF file acis.arf
INFO:sherpa.astro.io:read ARF file acis.arf
In the case of fit() I see the doubled output:
WARNING: data set 1 has associated backgrounds, but they have not been subtracted, nor have background models been set
WARNING:sherpa.astro.ui.utils:data set 1 has associated backgrounds, but they have not been subtracted, nor have background models been set
Dataset = 1
Method = neldermead
Statistic = cash
Initial fit statistic = 6.67964e+11
Final fit statistic = -89714 at function evaluation 788
Data points = 444
Degrees of freedom = 442
Change in statistic = 6.67964e+11
p1.ampl 9.36365e-10
p1.index -0.989984
INFO:sherpa.astro.ui.utils:Dataset = 1
Method = neldermead
Statistic = cash
Initial fit statistic = 6.67964e+11
Final fit statistic = -89714 at function evaluation 788
Data points = 444
Degrees of freedom = 442
Change in statistic = 6.67964e+11
p1.ampl 9.36365e-10
p1.index -0.989984
Next get the covariance matrix with covar(). The matrix is later used in the MCMC run.
It would be good to have the ability to store the output of individual Sherpa regression test results.
CIAO tool regression tests compare a baseline file (the expected output) against the actual output of a test. Right now, the Sherpa regression tests are compared against a baseline file of the screen output from running sherpa.test(datadir=datadir). In other words, we compare the expected screen output against the resultant screen output of the whole test suite, e.g.:
Found 7/7 tests for sherpa.astro.tests.test_datastack
<more tests found>
...
....<more passes>....
. Solar Abundance Vector set to angr: Anders E. & Grevesse N. Geochimica et Cosmochimica Acta 53, 197 (1989)
Cross Section Table set to bcmc: Balucinska-Church and McCammon, 1998
....<more passes>....
----------------------------------------------------------------------
Ran 287 tests in 191.836s
OK (skipped=1)
What would work better is to compare the expected results of each individual regression test against its actual output. Therefore, the Sherpa regression tests should at least provide the capability to store test output to file, so that Sherpa can be tested in CIAO in the same way as other CIAO tools.
For example, sherpa/astro/ui/tests/test_astro_ui.py::test_table() could store a file for each loaded file; a baseline would be compared against the output test file.
Another example would be to output the fit results from sherpa-test-data/sherpatest/ciao4.3/load_template_with_interpolation/fit.py.
Note that we don't need this for unit tests.
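The per-test baseline comparison could be as simple as a helper like this; the paths and names are illustrative, not an existing Sherpa or CIAO facility:

```python
import filecmp
import os

def check_against_baseline(produced_path, baseline_dir, name):
    """Compare a file written by a single regression test against its
    stored baseline; returns True on a byte-for-byte match."""
    baseline = os.path.join(baseline_dir, name)
    if not os.path.exists(baseline):
        raise IOError("no baseline recorded for %s" % name)
    return filecmp.cmp(produced_path, baseline, shallow=False)
```

Each test would then write its products to a scratch directory and call the helper once per output file, so a failure pinpoints the exact test and file rather than a diff of the whole suite's screen output.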
Sherpa version: 4.8.0+16.gecac9ec
The URL at the top of the sherpa github repo is http://cxc.cfa.harvard.edu/contrib/sherpa47/ but should probably be http://cxc.cfa.harvard.edu/contrib/sherpa.
This is a bug imported from an older bug tracking system.
The bug #13545 shows up in the current version in both CIAO and standalone.
Grouping twice gives an IndexError.
Here is the current output in my local version.
(sherpa-test)13545 $ ipython
Python 2.7.10 |Continuum Analytics, Inc.| (default, May 28 2015, 17:04:42)
Type "copyright", "credits" or "license" for more information.
IPython 3.1.0 -- An enhanced Interactive Python.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: from sherpa.astro.ui import *
WARNING: imaging routines will not be available,
failed to import sherpa.image.ds9_backend due to
'RuntimeErr: DS9Win unusable: Could not find ds9 on your PATH'
WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available
In [2]: load_data("xte.pi")
read ARF file xte.warf
read RMF file xte.wrmf
read ARF (background) file xte_bkg.warf
read RMF (background) file xte_bkg.wrmf
read background file xte_bkg.pi
In [3]: notice(0.3,2)
In [4]: show_data()
Data Set: 1
Filter: 0.2993-1.9929 Energy (keV)
Bkg Scale: 0.332619
Noticed Channels: 21-137
name = xte.pi
channel = Float64[1024]
counts = Float64[1024]
staterror = None
syserror = None
bin_lo = None
bin_hi = None
grouping = None
quality = None
exposure = 18513.2782715
backscal = 2.18391968294e-07
areascal = 1.0
grouped = False
subtracted = False
units = energy
rate = True
plot_fac = 0
response_ids = [1]
background_ids = [1]
RMF Data Set: 1:1
name = xte.wrmf
detchans = 1024
energ_lo = Float64[1070]
energ_hi = Float64[1070]
n_grp = UInt64[1070]
f_chan = UInt64[1354]
n_chan = UInt64[1354]
matrix = Float64[382802]
offset = 1
e_min = Float64[1024]
e_max = Float64[1024]
In [5]: group_counts(30)
In [6]: import matplotlib
In [7]: plot_data()
In [8]: show_data()
Data Set: 1
Filter: 0.3322-2.1754 Energy (keV)
Bkg Scale: 0.332619
Noticed Channels: 1-165
name = xte.pi
channel = Float64[1024]
counts = Float64[1024]
staterror = None
syserror = None
bin_lo = None
bin_hi = None
grouping = Float64[1024]
quality = Float64[1024]
exposure = 18513.2782715
backscal = 2.18391968294e-07
areascal = 1.0
grouped = True
subtracted = False
units = energy
rate = True
plot_fac = 0
response_ids = [1]
background_ids = [1]
RMF Data Set: 1:1
name = xte.wrmf
detchans = 1024
energ_lo = Float64[1070]
energ_hi = Float64[1070]
n_grp = UInt64[1070]
f_chan = UInt64[1354]
n_chan = UInt64[1354]
matrix = Float64[382802]
offset = 1
e_min = Float64[1024]
e_max = Float64[1024]
ARF Data Set: 1:1
name = xte.warf
energ_lo = Float64[1070]
energ_hi = Float64[1070]
specresp = Float64[1070]
bin_lo = None
bin_hi = None
exposure = 18513.2782715
Background Data Set: 1:1
Filter: 7.4789 Energy (keV)
Noticed Channels: 1-1024
name = xte_bkg.pi
channel = Float64[1024]
counts = Float64[1024]
staterror = None
syserror = None
bin_lo = None
bin_hi = None
grouping = Float64[1024]
quality = Float64[1024]
exposure = 18513.2782715
backscal = 6.56582415104e-07
areascal = 1.0
grouped = True
subtracted = False
units = energy
rate = True
plot_fac = 0
response_ids = [1]
background_ids = []
Background RMF Data Set: 1:1
name = xte_bkg.wrmf
detchans = 1024
In [9]: group_counts(15)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-9-f2f20366e574> in <module>()
----> 1 group_counts(15)
<string> in group_counts(id, num, bkg_id, maxLength, tabStops)
/Users/aneta/git/sherpa/sherpa/astro/ui/utils.pyc in group_counts(self, id, num, bkg_id, maxLength, tabStops)
7552 if bkg_id is not None:
7553 data = self.get_bkg(id, bkg_id)
-> 7554 data.group_counts(num, maxLength, tabStops)
7555
7556 ### DOC-TODO: check the Poisson stats claim; I'm guessing it means
/Users/aneta/git/sherpa/sherpa/astro/data.pyc in group_counts(self, num, maxLength, tabStops)
1005 bkg = self.get_background(bkg_id)
1006 if (hasattr(bkg, "group_counts")):
-> 1007 bkg.group_counts(num, maxLength=maxLength, tabStops=tabStops)
1008
1009 ### DOC-TODO: see discussion in astro.ui.utils regarding errorCol
/Users/aneta/git/sherpa/sherpa/astro/data.pyc in group_counts(self, num, maxLength, tabStops)
1001 raise ImportErr('importfailed', 'group', 'dynamic grouping')
1002 self._dynamic_group(pygroup.grpNumCounts, self.counts, num,
-> 1003 maxLength=maxLength, tabStops=tabStops)
1004 for bkg_id in self.background_ids:
1005 bkg = self.get_background(bkg_id)
/Users/aneta/git/sherpa/sherpa/astro/data.pyc in _dynamic_group(self, group_func, *args, **kwargs)
837 kwargs.pop(key)
838
--> 839 old_filter = self.get_filter(group=False)
840 do_notice = numpy.iterable(self.mask)
841
/Users/aneta/git/sherpa/sherpa/astro/data.pyc in get_filter(self, group, format, delim)
1596 x = x[::-1]
1597 mask = mask[::-1]
-> 1598 return create_expr(x, mask, format, delim)
1599
1600 def get_filter_expr(self):
/Users/aneta/git/sherpa/sherpa/utils/__init__.pyc in create_expr(vals, mask, format, delim)
571 expr.append(format % vals[ii])
572 expr.append(',')
--> 573 if expr[-1] in (',',delim):
574 expr.append(format % vals[-1])
575
IndexError: list index out of range
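The failing line in sherpa/utils/__init__.py indexes expr[-1] without checking that anything has been appended. A minimal guard for that comparison could look like the sketch below (illustrative only, not a full fix for create_expr):

```python
def tail_needs_endpoint(expr, delim=":"):
    """Guarded version of the failing comparison in create_expr:
    an empty expr list must not be indexed with [-1]."""
    return bool(expr) and expr[-1] in (",", delim)
```

Whether an empty expr at that point also indicates a deeper problem with the re-grouped filter mask (group_counts is being applied a second time here) is worth investigating separately.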
A warning message is shown if someone tries to open a read-only file. Running the sherpa regression tests with the data directory specified in a CIAO environment will give a warning:
% source /proj/xena/ciaod/Linux64/ciao_1021/bin/ciao.csh
CIAO configuration is complete...
CIAO 4.8 Wednesday, October 21, 2015
bindir : /proj/xena/ciaod/Linux64/ciao_1021/bin
CALDB : 4.6.9
% python -c "import sherpa; sherpa.test(datadir='/data/regression_test/master/in/sherpa')"
Found 7/7 tests for sherpa.astro.tests.test_datastack
Found 1/1 tests for sherpa.astro.models.tests.test_models
...
................................./proj/xena/ciaod/Linux64/ciao_1021/lib/python2.7/site-packages/pycrates/io/dm_backend.py:140: UserWarning: File '/data/regression_test/master/in/sherpa/aref_Cedge.fits' does not have write permission. Changing to read-only mode.
warnings.warn("File '" + fname + "' does not have write permission. Changing to read-only mode.")
/proj/xena/ciaod/Linux64/ciao_1021/lib/python2.7/site-packages/pycrates/io/dm_backend.py:140: UserWarning: File '/data/regression_test/master/in/sherpa/aref_sample.fits' does not have write permission. Changing to read-only mode.
warnings.warn("File '" + fname + "' does not have write permission. Changing to read-only mode.").....
The files the test is trying to read in do not need to be edited, so this read-only mode shouldn't affect the results of the regression tests. I have write permissions to the files in /data/regression_test/master/in/sherpa
and can confirm that files haven't been modified after running the tests.
The test in which these warnings pop up is sherpa/astro/sim/tests/test_astro_sim.py. Specifically, the test calls the pragbayes.py:ARFSIMFacory.read() method, which uses the read_table_blocks() method of the I/O backend. This function does not accept a mode string in either the pycrates or the pyfits implementation.
This issue was brought up last year but never fixed. @olaurino did some investigation into it last year, which is summarized here. The pycrates backend uses the default Crates file open mode. Crates changed the default file mode from "read-only" to "read-write" before CIAO 4.7. If a user chooses pycrates for I/O and they open a read-only file in Sherpa, they will see this warning. The pyfits backend uses "read-only" mode for all of its functions.
Here are two options for fixing the issue: either change the permissions of sherpa-test-data/sherpatest/aref_Cedge.fits and aref_sample.fits, or change the pycrates backend so that read_table_blocks() always opens files in read-only mode.
Sherpa version: 4.8.0+16.gecac9ec
More info (from @olaurino):
The pragbayes.py:ARFSIMFacory.read() method uses the read_table_blocks() method of the I/O backend. This function does not accept a mode string in either the pycrates or the pyfits implementation.
One option would be to change the signature of the method to accept a mode string, but this should be done in both the crates and pyfits backends. However, the mode string options are not the same for pyfits and pycrates, so the API would be somewhat different for the two backends, although the signature would be the same. One might translate the common subset of options and homogenize the API, but this looks like too much of a change for just getting rid of a warning message that, by itself, is innocent, right?
It is interesting that the pyfits.open() method used by pyfits_backend has a readonly default mode. So, I was wondering whether I should just make 'r' the default in the crates backend as well. Of course the risk is that we break somebody's scripts if they use sherpa to open files they write to using the crates backend.
As we are not doing this because something is broken, but just to get rid of a new warning, I would suggest we just change the flags of the offending files without touching any code. Is there a chance something will break because of the changes in the dm code? It depends on the changes. As far as I understand, they just change the opening mode to 'r' when the file is not writable, and they issue a warning. If a user relied on Sherpa to write files using the pycrates backend, and the file they wanted to write was read-only, then nothing should change: they weren't able to write the file before and they are not able to do it now.
In conclusion I would suggest that we either leave the code untouched, and we change the file permissions to avoid the warning, or we change the read_table_blocks() method in the pycrates backend to always use the 'r' mode, hoping that nobody is relying on this method opening files in "rw" mode.
If we really wanted to change the read_table_blocks() signature we should do it in both backend implementations, possibly translating the common mode options to have a consistent API, and then change the calls to read_table_blocks().
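One possible shape for the homogenized API discussed above, sketched purely as an illustration: the mode names, the translation tables, and the helper function below are all invented for this example, not Sherpa's actual backend code.

```python
# Hypothetical translation of a backend-neutral mode string to each
# backend's native spelling; the common subset is just read vs update.
_CRATES_MODES = {"readonly": "r", "update": "rw"}
_PYFITS_MODES = {"readonly": "readonly", "update": "update"}

def native_mode(mode, backend):
    """Map a backend-neutral mode to the given backend's spelling."""
    table = _CRATES_MODES if backend == "pycrates" else _PYFITS_MODES
    if mode not in table:
        raise ValueError("unsupported mode: %r" % mode)
    return table[mode]

# A read_table_blocks(filename, mode=...) in each backend would call
# native_mode() and pass the result to the underlying open routine.
print(native_mode("readonly", "pycrates"))  # r
print(native_mode("readonly", "pyfits"))    # readonly
```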
The pyfits back end uses deprecated functions and classes - it includes new_table, CardList, and append, but I have not done an exhaustive check. Once #6 is resolved we should address these issues.
The error whereby the Chandra PHA file cannot be written out is discussed in #46.
The following test script:
% cat test_fits.py
# does writing out a FITS file cause warnings?
import sherpa
from sherpa.astro import ui
print("*** Sherpa version: {}".format(sherpa.__version__))
ui.load_arrays(1, [1,2,3], [4,5,6])
ui.save_data(1, 'test_fits.dat', ascii=True, clobber=True)
print("*** created test_fits.dat ***")
ui.save_data(1, 'test_fits.fits', ascii=False, clobber=True)
print("*** created test_fits.fits ***")
causes the following messages when writing the FITS version of the data.
I meant to use the master branch, but used PR #26 by accident. As the code changes there are not functional, it is not the cause of the error (although the line numbering may be out slightly).
% conda list pyfits
# packages in environment at /home/naridge/local/anaconda/envs/sherpa-pep8:
#
You are using pip version 6.1.1, however version 7.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
pyfits 3.3.0 np18py27_1
% python test_fits.py
WARNING: imaging routines will not be available,
failed to import sherpa.image.ds9_backend due to
'RuntimeErr: DS9Win unusable: Could not find ds9 on your PATH'
WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available
*** Sherpa version: 4.7+375.g1693f34
*** created test_fits.dat ***
/home/naridge/local/anaconda/envs/sherpa-pep8/lib/python2.7/site-packages/sherpa/astro/io/pyfits_backend.py:877: PyfitsDeprecationWarning: The new_table function is deprecated as of version 3.3 and may be removed in a future version.
Use :meth:`BinTableHDU.from_columns` for new BINARY tables or :meth:`TableHDU.from_columns` for new ASCII tables instead.
tbl = pyfits.new_table(pyfits.ColDefs(collist))
*** created test_fits.fits ***
This is using PR #6
% conda list astropy
# packages in environment at /home/naridge/local/anaconda/envs/sherpa-astropy:
#
You are using pip version 6.1.1, however version 7.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
astropy 1.0.2 np19py27_0
% python test_fits.py
WARNING: imaging routines will not be available,
failed to import sherpa.image.ds9_backend due to
'RuntimeErr: DS9Win unusable: Could not find ds9 on your PATH'
WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available
*** Sherpa version: 4.7+2.gffe559b
*** created test_fits.dat ***
WARNING: AstropyDeprecationWarning: The new_table function is deprecated and may be removed in a future version.
Use :meth:`BinTableHDU.from_columns` for new BINARY tables or :meth:`TableHDU.from_columns` for new ASCII tables instead. [astropy.utils.decorators]
*** created test_fits.fits ***
Note that I've seen other warnings from the pyfits back end. For instance, if I load in a CIAO PHA file, then I get the following (this is from the astropy branch, but you see essentially the same from pyfits):
In [1]: from sherpa.astro import ui
WARNING: imaging routines will not be available,
failed to import sherpa.image.ds9_backend due to
'RuntimeErr: DS9Win unusable: Could not find ds9 on your PATH'
WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available
In [2]: ui.load_pha(1, 'grp.pi')
WARNING: [Errno 2] No such file or directory: 'srcs_src1.corr.arf'
WARNING: [Errno 2] No such file or directory: 'srcs_src1.rmf'
In [3]: ui.save_data('test.fits', ascii=False, clobber=True)
WARNING: AstropyDeprecationWarning: The CardList class has been deprecated; all its former functionality has been subsumed by the Header class, so CardList objects should not be directly created. See the PyFITS 3.1.0 CHANGELOG for more details. [astropy.io.fits.card]
WARNING: AstropyDeprecationWarning: The append function is deprecated and may be removed in a future version.
Use :meth:`Header.append` instead. [astropy.utils.decorators]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-0a6e7bbb21bb> in <module>()
----> 1 ui.save_data('test.fits', ascii=False, clobber=True)
<string> in save_data(id, filename, bkg_id, ascii, clobber)
/home/naridge/local/anaconda/envs/sherpa-astropy/lib/python2.7/site-packages/sherpa/astro/ui/utils.pyc in save_data(self, id, filename, bkg_id, ascii, clobber)
3215
3216 try:
-> 3217 sherpa.astro.io.write_pha(filename, d, ascii, clobber)
3218 except IOErr:
3219 try:
/home/naridge/local/anaconda/envs/sherpa-astropy/lib/python2.7/site-packages/sherpa/astro/io/__init__.pyc in write_pha(filename, dataset, ascii, clobber)
398 data, col_names, hdr = _pack_pha( dataset )
399 backend.set_pha_data(filename, data, col_names, hdr, ascii=ascii,
--> 400 clobber=clobber)
401
402 def pack_table(dataset):
/home/naridge/local/anaconda/envs/sherpa-astropy/lib/python2.7/site-packages/sherpa/astro/io/pyfits_backend.pyc in set_pha_data(filename, data, col_names, header, ascii, clobber, packup)
898 if header[key] is None:
899 continue
--> 900 hdrlist.append(pyfits.Card( str(key.upper()), header[key] ))
901
902 collist = []
/home/naridge/local/anaconda/envs/sherpa-astropy/lib/python2.7/site-packages/astropy/io/fits/card.pyc in __init__(self, keyword, value, comment, **kwargs)
446 self.keyword = keyword
447 if value is not None:
--> 448 self.value = value
449
450 if comment is not None:
/home/naridge/local/anaconda/envs/sherpa-astropy/lib/python2.7/site-packages/astropy/io/fits/card.pyc in value(self, value)
568 (float, complex, bool, Undefined, np.floating,
569 np.integer, np.complexfloating, np.bool_)):
--> 570 raise ValueError('Illegal value: %r.' % value)
571
572 if isinstance(value, float) and (np.isnan(value) or np.isinf(value)):
ValueError: Illegal value: This FITS file may contain long string keyword values that are
continued over multiple keywords. The HEASARC convention uses the &
character at the end of each substring which is then continued
on the next keyword which has the name CONTINUE.
specextract version 18 September 2014.
There is a restriction on the bin size of the PSF image: it needs to match the bin size of the data image. This leads to many issues in handling the PSF in Sherpa. There is no a priori need for this restriction.
This ticket replaces one of the issues reported in #15.
When Python 3 is detected, the configure script should fail gracefully, informing the user that Python 3 is not supported.
However, this does not happen (see https://gist.github.com/cdeil/39e9404e91954cb67270#file-gistfile1-txt-L276).
The AX_PYTHON_DEVEL call in configure.ac should be changed to AX_PYTHON_DEVEL([<'3.0']) and autoreconf should be run. We need to coordinate this with the maintainers of the group library.
I believe that a lot of the XSPEC module code - i.e. most of _xspec.cc and __init__.py - should be auto-generated based on the contents of the model.dat file in the XSPEC distribution. The main advantage of this approach is that it makes it possible for users to compile against the XSPEC version that they are using. That is, handle changes to the model.dat file that include:
At present, users are forced to use an interface that assumes a single patch level - at present it's 12.8.2e - which should work for any 12.8.2 build, but leaves the user in a slightly-strange place if used against a different patch level (i.e. what bugs have/have not been fixed). If there's a 12.8.3 with new models then they will not pick them up until the code has been manually edited (which we have found to be problematic), and then there may be problems compiling the module against an older version of XSPEC.
There are some down sides to auto-generating most of the code - such as documentation, complicating the build process, and tracking down bugs (although I'm less bothered about the last point since the code that needs to be generated is quite simple).
I'd also like to expand access to the "utility" functions in XSPEC (in particular the keyword-related ones that might allow us to support some of the "mixing" models), which is related (some of them can be auto-generated, if there's an easy way to check for the functionality being present), but isn't an essential part of this request.
I plan to work on this, and have existing code that I can use as a starting point, but that's going to take a bit of time to put together.
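As a sketch of the auto-generation idea (the template, class names, and model entry below are invented for illustration; a real generator would parse model.dat and emit code matching Sherpa's actual model classes):

```python
# Given a parsed model description (name, C wrapper, parameter names),
# emit the source of a Python wrapper class. Parsing model.dat itself
# is elided here.
TEMPLATE = '''class XS{cls}(XSAdditiveModel):
    _calc = _xspec.{func}

    def __init__(self, name="{name}"):
        XSAdditiveModel.__init__(self, name, ({pars}))
'''

def make_class(name, func, pars):
    return TEMPLATE.format(cls=name.capitalize(), func=func, name=name,
                           pars=", ".join("self.%s" % p for p in pars))

print(make_class("powerlaw", "C_powerlaw", ["PhoIndex", "norm"]))
```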
I get an error that GIT is not defined on this line when running python setup.py build_ext --inplace:
https://github.com/sherpa/sherpa/blame/master/sherpa/_version.py#L129
Can someone reproduce the issue? (My git is at /opt/local/bin/git in case it matters.)
Since the Numpy 1.9 upgrade, users have been reporting warning messages like the following:
FutureWarning: comparison to None will result in an elementwise object comparison in the future.
Generally speaking, it is good practice to do None checks with if var is None rather than if var == None, because the equality check can (at least in principle) be overridden, which is what some of the NumPy structures do or are going to do in the future.
The issue is usually easy to fix, but getting a list of these warnings is not easy. We did fix some of them in the past (see 61fbfcd) but more have been reported, in particular:
sherpa/astro/io/crates_backend.py:1224
sherpa/sim/sample.py:184
sherpa/astro/ui/utils.py:8763
sherpa/astro/ui/utils.py:8768
For what is worth, a workaround exists:
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
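The underlying difference between the two checks can be seen directly; on modern NumPy (1.13 and later) the elementwise behavior that the FutureWarning announced has become the default:

```python
import numpy as np

arr = np.array([1.0, 2.0, 3.0])

# Identity check: always a plain bool, no matter how the object
# overrides the == operator.
print(arr is None)  # False

# Equality check: NumPy overrides ==, so the comparison is elementwise
# and yields an array, not a bool. Using that array in an `if` then
# raises "truth value of an array ... is ambiguous".
result = arr == None  # noqa: E711 -- the problematic pattern
print(type(result))
```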
After doing a python setup.py install I see the following:
$ ls /Users/aldcroft/anaconda/envs/sherpa/lib/python2.7/site-packages/*.so
/Users/aldcroft/anaconda/envs/sherpa/lib/python2.7/site-packages/group.so*
/Users/aldcroft/anaconda/envs/sherpa/lib/python2.7/site-packages/stk.so*
Installing modules outside of the package namespace is not typical behavior.
There are places in the code (usually rather old code) where mutable objects are assigned to default arguments (see for instance #87 (comment)). While this is a clear mistake and should be fixed, we are deferring this fix because:
In some cases the simple fix that has been applied to, e.g., PR #87, would introduce even less idiomatic python code, as the original source was not idiomatic to begin with. In these cases, a proper fix would go beyond just removing the mutable objects, and should deal with the issues in the original code.
As this does not seem to be a priority, we are going to re-evaluate this issue after the 4.8 CIAO release.
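The pitfall and its idiomatic fix, in a minimal self-contained example (not taken from the Sherpa code base):

```python
# The classic pitfall: the default list is created once, when the
# function is defined, and is then shared across every call.
def buggy_append(item, items=[]):
    items.append(item)
    return items

buggy_append(1)
print(buggy_append(2))  # [1, 2] -- the "empty" default remembers state

# The idiomatic fix: use None as a sentinel and build the list per call.
def fixed_append(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

fixed_append(1)
print(fixed_append(2))  # [2] -- each call gets a fresh list
```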
See comment #91 (comment)
When no FITS backend is installed, currently the code just prints the bare ImportError from Python, but that's misleading since users can install both astropy and pyfits.
I am setting this to the 4.8 milestone as it does not impact CIAO users.
The file system structure I work on is unique, and we have our own equivalent of pip. Installing Sherpa, I discovered that the extensions defined in helpers/extensions/__init__.py, although later added to distribution.ext_modules through config, are not built during the build_ext procedure.
The current packaging strategy in Sherpa - specifically, the overridden build command (the subclassed build()) - calls sherpa_config.build_configure(), which appends several extensions defined in helpers/extensions/__init__.py to self.distribution.ext_modules. But since ext_modules is processed by the build_ext procedure, which runs before the build procedure, the late additions to ext_modules never get processed, and those extensions are not built. This ultimately results in a sherpa.test() failure, with an ImportError looking for the _psf module while importing sherpa.utils.
While trying to get this to work for my purposes, I "rescheduled" this step to happen at the very beginning of build_ext, and it worked.
I'm getting this configure error for grplib when I try python2.7 setup.py install for Sherpa:
https://gist.github.com/cdeil/39e9404e91954cb67270#file-gistfile1-txt-L284
It tries to use python3.4, which is the python on my PATH. Instead it should use the Python with which I call setup.py.
The data and axis labels are not the same for plot_source() compared to plot('source'), and for plot_model() compared to plot('model'), when using PHA data sets. It's not clear whether it's the same issue for the two, but we need to look at what the plot command is doing.
The following is seen with the ChIPS backend (CIAO 4.7) and with the master branch. I've seen it with multiple PHA data sets.
% ipython --pylab
Python 2.7.10 |Continuum Analytics, Inc.| (default, Sep 15 2015, 14:50:01)
Type "copyright", "credits" or "license" for more information.
IPython 4.0.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
Using matplotlib backend: Qt4Agg
In [1]: from sherpa.astro import ui
/home/naridge/local/anaconda/envs/sherpa-astropy-fits-gzip/lib/python2.7/site-packages/IPython/kernel/__init__.py:13: ShimWarning: The `IPython.kernel` package has been deprecated. You should import from ipykernel or jupyter_client instead.
"You should import from ipykernel or jupyter_client instead.", ShimWarning)
WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available
In [2]: ui.load_pha(1, 'sherpa-test-data/sherpatest/threads/pha_intro/3c273.pi')
WARNING: systematic errors were not found in file 'sherpa-test-data/sherpatest/threads/pha_intro/3c273.pi'
statistical errors were found in file 'sherpa-test-data/sherpatest/threads/pha_intro/3c273.pi'
but not used; to use them, re-read with use_errors=True
read ARF file sherpa-test-data/sherpatest/threads/pha_intro/3c273.arf
read RMF file sherpa-test-data/sherpatest/threads/pha_intro/3c273.rmf
WARNING: systematic errors were not found in file 'sherpa-test-data/sherpatest/threads/pha_intro/3c273_bg.pi'
statistical errors were found in file 'sherpa-test-data/sherpatest/threads/pha_intro/3c273_bg.pi'
but not used; to use them, re-read with use_errors=True
read background file sherpa-test-data/sherpatest/threads/pha_intro/3c273_bg.pi
In [3]: ui.load_pha(2, 'sherpa-test-data/sherpatest/threads/pha_intro/3c273.pi')
WARNING: systematic errors were not found in file 'sherpa-test-data/sherpatest/threads/pha_intro/3c273.pi'
statistical errors were found in file 'sherpa-test-data/sherpatest/threads/pha_intro/3c273.pi'
but not used; to use them, re-read with use_errors=True
read ARF file sherpa-test-data/sherpatest/threads/pha_intro/3c273.arf
read RMF file sherpa-test-data/sherpatest/threads/pha_intro/3c273.rmf
WARNING: systematic errors were not found in file 'sherpa-test-data/sherpatest/threads/pha_intro/3c273_bg.pi'
statistical errors were found in file 'sherpa-test-data/sherpatest/threads/pha_intro/3c273_bg.pi'
but not used; to use them, re-read with use_errors=True
read background file sherpa-test-data/sherpatest/threads/pha_intro/3c273_bg.pi
In [4]: ui.set_source(1, ui.powlaw1d.pl1)
In [5]: ui.set_source(2, ui.powlaw1d.pl2)
In [6]: ui.set_analysis(1, 'energy')
In [7]: ui.set_analysis(2, 'wave')
In [8]: ui.plot_source(1)
In [9]: plt.ylim()
Out[9]: (0.0, 10.0)
The X axis is labelled Energy (keV) and the Y axis f(E) Photons/sec/cm^2/keV (for the matplotlib backend this is not converted to a superscript; I'll have a PR about that).
In [10]: ui.plot_source(2)
In [11]: plt.ylim()
Out[11]: (0.0, 0.90000000000000002)
Switching to wavelength gives an X axis label of Wavelength (\AA) and a Y axis label of f(\lambda) Photons/sec/cm^2/\AA.
If I now switch to the plot command, I get different axis labels and ranges displayed on the Y axis:
In [12]: ui.plot('source', 1)
In [13]: plt.ylim()
Out[13]: (0.0, 0.00040000000000000002)
The Y axis label is now Counts/sec/keV - i.e. it looks like it thinks it is plotting the model - but you can see that the model is still a plain power law (i.e. not multiplied by an ARF). I'm not sure what the Y-axis range indicates, but it is significantly different from the plot_source values shown above. You see similar things with the wavelength version:
In [14]: ui.plot('source', 2)
In [15]: plt.ylim()
Out[15]: (0.0, 2.9999999999999997e-05)
The Y axis label now is (if you can see it) Counts/sec/Angstrom and the X axis is Wavelength (Angstrom).
If I compare ui.plot_model(1) to ui.plot('model', 1) then there's much less difference, but they aren't quite the same (it looks like the points the model is evaluated over are different, so the X axis range is slightly different).
The plot for ui.plot_model(2) looks very wrong (the X axis runs from 0 to 5000!), whereas ui.plot('model', 2) covers 0 to 800.
In the following, the guess function gives a reasonable (but not great) result for the amplitude of the powlaw1d model (a Sherpa model), but a completely ridiculous one for the xspowerlaw model (XSPEC). I don't show plots, but I do show the integrated counts and model values, which I would expect to be close together after the guess, but they are not:
data counts = 736
model counts (after guess, powlaw1d) = 1173.3
model counts (after guess, xspowerlaw) = 189115.3
The original case when I noticed this was even worse (more orders of magnitude out).
Not shown here is that repeated calls to guess keep changing the norm parameter of the XSPEC model (I have not checked the Sherpa model).
In [1]: from sherpa.astro import ui
/home/naridge/local/anaconda/envs/sherpa-wstat/lib/python2.7/site-packages/IPython/kernel/__init__.py:13: ShimWarning: The `IPython.kernel` package has been deprecated. You should import from ipykernel or jupyter_client instead.
"You should import from ipykernel or jupyter_client instead.", ShimWarning)
In [2]: import sherpa
In [3]: sherpa.__version__
Out[3]: '4.7+531.g0a0dc6f.dirty'
In [4]: ui.load_pha('sherpa/astro/datastack/tests/data/3c273.pi')
WARNING: systematic errors were not found in file 'sherpa/astro/datastack/tests/data/3c273.pi'
statistical errors were found in file 'sherpa/astro/datastack/tests/data/3c273.pi'
but not used; to use them, re-read with use_errors=True
read ARF file sherpa/astro/datastack/tests/data/3c273.arf
read RMF file sherpa/astro/datastack/tests/data/3c273.rmf
WARNING: systematic errors were not found in file 'sherpa/astro/datastack/tests/data/3c273_bg.pi'
statistical errors were found in file 'sherpa/astro/datastack/tests/data/3c273_bg.pi'
but not used; to use them, re-read with use_errors=True
read background file sherpa/astro/datastack/tests/data/3c273_bg.pi
In [5]: ui.set_source(powlaw1d.spl)
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-5-7ac16d0585b6> in <module>()
----> 1 ui.set_source(powlaw1d.spl)
NameError: name 'powlaw1d' is not defined
In [6]: ui.set_source(ui.powlaw1d.spl)
In [7]: print(spl)
powlaw1d.spl
Param Type Value Min Max Units
----- ---- ----- --- --- -----
spl.gamma thawed 1 -10 10
spl.ref frozen 1 -3.40282e+38 3.40282e+38
spl.ampl thawed 1 0 3.40282e+38
In [8]: ui.guess(spl)
In [9]: print(spl)
powlaw1d.spl
Param Type Value Min Max Units
----- ---- ----- --- --- -----
spl.gamma thawed 1 -10 10
spl.ref frozen 1 -3.40282e+38 3.40282e+38
spl.ampl thawed 0.000150152 1.50152e-07 0.150152
In [10]: ui.calc_model_sum()
Out[10]: 1173.3072192683503
In [11]: ui.calc_data_sum()
Out[11]: 736.0
In [12]: ui.set_source(ui.xspowerlaw.xpl)
In [13]: print(xpl)
xspowerlaw.xpl
Param Type Value Min Max Units
----- ---- ----- --- --- -----
xpl.PhoIndex thawed 1 -2 9
xpl.norm thawed 1 0 1e+24
In [14]: ui.guess(xpl)
In [15]: print(xpl)
xspowerlaw.xpl
Param Type Value Min Max Units
----- ---- ----- --- --- -----
xpl.PhoIndex thawed 1 -2 9
xpl.norm thawed 0.0242017 2.42017e-05 24.2017
In [16]: ui.calc_model_sum()
Out[16]: 189115.25789620852
In [17]: ui.calc_data_sum()
Out[17]: 736.0
Just to check we are comparing like with like: if I set the xpl normalization to that of the spl component, the calc_model_sum values match (so it's not because the models are calculating different things):
In [18]: xpl.norm = spl.ampl.val
In [19]: ui.calc_model_sum()
Out[19]: 1173.3072192683503
Our default option in plot_pvalue is not to use the response files. The interface for multiple data sets, with a response file for each of them, is confusing. I'm still not sure how to indicate the convolution model for other ids.
I'd forgotten about this, but we should add a link between the sherpa github account and zenodo (for getting a DOI for citation).
An example of this integration is https://github.com/dfm/triangle.py - if you go to the bottom of the page there's an Attribution section with a doi badge - this takes you to the zenodo page (there's one for each release): https://zenodo.org/record/11020#.VYgkWZRHB2Q
I tried following the guide at https://guides.github.com/activities/citable-code/ but I think @olaurino would have to do this, as owner of the account.
I recently built a dev cut of the conda binary (for supporting our gammapy colleagues), and I found some minor issues with sherpa_test while testing the package.
Since this does not impact CIAO users I am setting this ticket to the 4.8 release milestone.
Here are the issues:
- pip install -r test_requirements.txt, which is fine when you are working with the source code, but not when you are working with binaries. The suggestion should be improved for dealing with both cases.
- python setup.py install, only when using pip install . This should be either fixed or documented.
- sherpa_test -d (for overriding the discovery of the test data folder) did not seem to work. I did not have time for looking into it, as there is a workaround (install sherpa-test-data as a python package), but I need to double-check.
The current install docs don't call out the requirement for gfortran. I initially tried g77 on Mac Mavericks and this failed.
This ticket replaces one of the issues reported in #15.
The configure script of external Python extensions coming from CIAO libraries tries to infer the Python version and picks up the first python available in the PATH. However, when users explicitly use a different version of Python (e.g. python2.7 in an environment where python points to Python 3, as reported in #15), the configure script should not try to infer what version of Python to run, but just use the one specified by the user.
A workaround exists: export PYTHON=python2.7 (BASH) tells configure which Python executable the user wants to run.