
The Multi-Mission Maximum Likelihood framework (3ML)

License: BSD 3-Clause "New" or "Revised" License


threeml's Introduction



The Multi-Mission Maximum Likelihood framework (3ML)

A framework for multi-wavelength/multi-messenger analysis for astronomy/astrophysics.


Astrophysical sources are observed by different instruments at different wavelengths with unprecedented quality. Putting all these data together to form a coherent view, however, is a very difficult task. Indeed, each instrument and data type has its own ad-hoc software and handling procedures, which present steep learning curves and do not talk to each other.

The Multi-Mission Maximum Likelihood framework (3ML) provides a common high-level interface and model definition, which allows for easy, coherent, and intuitive modeling of sources using all the available data, no matter their origin. At the same time, thanks to its plugin-based architecture, 3ML uses under the hood the official software of each instrument, the only software certified and maintained by the collaboration that built the instrument itself. This guarantees that 3ML is always using the best possible methodology to deal with the data of each instrument.


Though Maximum Likelihood is in the name for historical reasons, 3ML is an interface to several Bayesian inference algorithms, such as MCMC and nested sampling, as well as likelihood optimization algorithms. Users can switch seamlessly between approaches, trying different ones quickly and without having to rewrite their model or data interfaces.

Like your XSPEC models? You can use them in 3ML, as well as our growing selection of 1-, 2-, and 3-D models from our fast and customizable modeling language astromodels.

Installation

Installing with pip or conda is easy. However, if you want to include models from XSPEC, the process can be more involved, and we recommend following the more detailed instructions:

pip install astromodels threeml
conda install astromodels threeml -c threeml -c conda-forge

Please refer to the Installation instructions for more details and troubleshooting.

Press

Who is using 3ML?

Here are highlights of teams and their publications using 3ML.

A full list of publications using 3ML is here.

Citing

If you find this package useful in your analysis, or use its code in your own custom data tools, please cite:

Vianello et al. (2015)

Acknowledgements

3ML makes use of the Spanish Virtual Observatory's Filter Profile service (http://svo2.cab.inta-csic.es/svo/theory/fps3/index.php?mode=browse&gname=NIRT).

If you use these profiles in your research, please consider citing them by using the following suggested sentence in your paper:

"This research has made use of the SVO Filter Profile Service (http://svo2.cab.inta-csic.es/theory/fps/) supported from the Spanish MINECO through grant AyA2014-55216"

and citing the following publications:

The SVO Filter Profile Service. Rodrigo, C., Solano, E., Bayo, A. http://ivoa.net/documents/Notes/SVOFPS/index.html
The Filter Profile Service Access Protocol. Rodrigo, C., Solano, E. http://ivoa.net/documents/Notes/SVOFPSDAL/index.html

ThreeML is supported by the National Science Foundation (NSF).

threeml's People

Contributors

anastasia-tsvetkova, bjoernbiltzinger, bwgref, cbrisboi, cescalara, domeckert, dougburke, felixthebeard, giacomov, grburgess, hamogu, henrikef, hfdavidyu, johannesbuchner, ke-fang, maklinger, ndilalla, omodei, pwyounk, rwiller, sammarinelli, tibaldo, torresramiro350, volodymyrss, xboluna, zhoouhaoo


threeml's Issues

check math?

@drJfunk : what does "check math" mean in bayesian_analysis? :-)

Problem when trying to Bayes sample

samples = bayes.sample(n_walkers=50,burn_in=100, n_samples=1000)

Running burn-in of 100 samples...

emcee: Exception while calling your likelihood function:
params: [ 0.23218572 -0.40348434 196.39287035]
args: []
kwargs: {}
exception:
Traceback (most recent call last):
File "build/bdist.macosx-10.10-x86_64/egg/emcee/ensemble.py", line 505, in __call__
return self.f(x, *self.args, **self.kwargs)
File "/usr/local/lib/python2.7/site-packages/threeML-0.2.0-py2.7.egg/threeML/bayesian/bayesian_analysis.py", line 569, in _get_posterior
prior_value = parameter.prior(trial_values[i])
File "build/bdist.macosx-10.10-x86_64/egg/astromodels/parameter.py", line 546, in _get_prior
raise RuntimeError("There is no defined prior for parameter %s" % self.name)

RuntimeError: There is no defined prior for parameter K

RuntimeError Traceback (most recent call last)
in ()
----> 1 samples = bayes.sample(n_walkers=50,burn_in=100, n_samples=1000)

/usr/local/lib/python2.7/site-packages/threeML-0.2.0-py2.7.egg/threeML/bayesian/bayesian_analysis.pyc in sample(self, n_walkers, burn_in, n_samples)
153 # Prepare the list of likelihood values
154 self._log_like_values = []
--> 155 pos, prob, state = sampling_procedure(p0, sampler, burn_in)
156
157 # Reset sampler

/usr/local/lib/python2.7/site-packages/threeML-0.2.0-py2.7.egg/threeML/bayesian/bayesian_analysis.pyc in sample_with_progress(p0, sampler, n_samples, **kwargs)
35 progress_bar_iter = max(int(n_samples / 100),1)
36
---> 37 for i, result in enumerate(sampler.sample(p0, iterations=n_samples, **kwargs)):
38 # Show progress
39

/usr/local/lib/python2.7/site-packages/emcee-2.1.0-py2.7.egg/emcee/ensemble.pyc in sample(self, p0, lnprob0, rstate0, blobs0, iterations, thin, storechain, mh_proposal)
196 blobs = blobs0
197 if lnprob is None:
--> 198 lnprob, blobs = self._get_lnprob(p)
199
200 # Check to make sure that the probability function didn't return

/usr/local/lib/python2.7/site-packages/emcee-2.1.0-py2.7.egg/emcee/ensemble.pyc in _get_lnprob(self, pos)
380
381 # Run the log-probability calculations (optionally in parallel).
--> 382 results = list(M(self.lnprobfn, [p[i] for i in range(len(p))]))
383
384 try:

/usr/local/lib/python2.7/site-packages/emcee-2.1.0-py2.7.egg/emcee/ensemble.pyc in __call__(self, x)
503 def __call__(self, x):
504 try:
--> 505 return self.f(x, *self.args, **self.kwargs)
506 except:
507 import traceback

/usr/local/lib/python2.7/site-packages/threeML-0.2.0-py2.7.egg/threeML/bayesian/bayesian_analysis.pyc in _get_posterior(self, trial_values)
567 for i, (parameter_name, parameter) in enumerate(self._free_parameters.iteritems()):
568
--> 569 prior_value = parameter.prior(trial_values[i])
570
571 if prior_value == 0:

/usr/local/lib/python2.7/site-packages/astromodels-0.1-py2.7.egg/astromodels/parameter.pyc in _get_prior(self)
544
545 if self._prior is None:
--> 546 raise RuntimeError("There is no defined prior for parameter %s" % self.name)
547
548 return self._prior

RuntimeError: There is no defined prior for parameter K
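This failure mode can be guarded against before launching the sampler. A generic sketch (not the 3ML or astromodels API) of validating that every free parameter carries a prior, so the error surfaces early with a clear message rather than inside emcee:

```python
class Parameter:
    """Minimal stand-in for a model parameter with an optional prior."""
    def __init__(self, name, prior=None):
        self.name = name
        self.prior = prior  # a callable returning the prior density, or None

def check_priors(free_parameters):
    """Raise up front, listing all offending parameters at once,
    instead of failing one at a time inside the sampler loop."""
    missing = [p.name for p in free_parameters if p.prior is None]
    if missing:
        raise RuntimeError(
            "No prior defined for parameter(s): %s" % ", ".join(missing))
```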

get_errors() should return a pandas

Right now get_errors returns a pandas object containing the errors as strings in the value column (which is the table that is displayed).

This is not optimal; it should return a DataFrame instance with the errors as floats.

Cannot find configuration file

This is a note for me:

- change the behavior of 3ML with respect to the configuration file: if there is none in ~/.threeML, then look for a system copy under the Python tree

Response and GBM files

There seems to be an issue with the reading of GBM DRMs (similar to what I have seen with gammapy)


/usr/local/Cellar/python/HEAD/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/threeML-0.3.2-py2.7.egg/threeML/plugins/ogip.pyc in _read_matrix(self, data, n_channels)
    750             for j in range(n_grp[i]):
    751 
--> 752                 rsp[i, f_chan[i][j]: f_chan[i][j] + n_chan[i][j]] = matrix[i][m_start:m_start + n_chan[i][j]]
    753 
    754                 m_start += n_chan[i][j]

ValueError: could not broadcast input array from shape (128) into shape (127)


location of package version info in threeML

Trivial comment: I noticed that access to the threeML package version is via

threeML.version.__version__

But the convention for most packages is packagename.__version__, where the __version__ attribute is defined in the __init__.py file of the package. Because it's in an unconventional place, I had to waste a few minutes looking for the version info. It would be nice to change this, which would also make threeML conform to PEP 396:

https://www.python.org/dev/peps/pep-0396/

Plugin to Support Binned Likelihood Profiles

It would be useful to have a plugin that would be able to ingest binned likelihood profiles and use them to evaluate the likelihood in lieu of running the analysis software of the respective experiment.

Binned likelihood profiles are a way of encoding the likelihood function that preserves more information than a standard SED (i.e. flux points with 1 sigma errors). The basic format would contain an NxM matrix of tabulated likelihood values as a function of N source flux values and M energy bins. The sed method in fermipy is an example of how this can be implemented in practice.

I'm planning to create a new page on gamma-astro-data-formats with a proposal for how we might serialize this information to a FITS file. I'll post the link here when it's ready.
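A sketch of how such a plugin could evaluate the total log-likelihood from a tabulated NxM profile (pure-Python linear interpolation; all names and the in-memory layout are illustrative, not the proposed FITS serialization):

```python
def interp_loglike(flux_grid, loglike_row, flux):
    """Linearly interpolate one energy bin's tabulated log-likelihood
    profile at an arbitrary flux value (clamped to the grid edges)."""
    if flux <= flux_grid[0]:
        return loglike_row[0]
    if flux >= flux_grid[-1]:
        return loglike_row[-1]
    for k in range(len(flux_grid) - 1):
        lo, hi = flux_grid[k], flux_grid[k + 1]
        if lo <= flux <= hi:
            w = (flux - lo) / (hi - lo)
            return (1 - w) * loglike_row[k] + w * loglike_row[k + 1]

def total_loglike(flux_grid, loglike_matrix, model_fluxes):
    """Sum the interpolated log-likelihood over the M energy bins;
    loglike_matrix[j] is the N-point profile for energy bin j."""
    return sum(interp_loglike(flux_grid, row, f)
               for row, f in zip(loglike_matrix, model_fluxes))
```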

Adding ability to return log like (log prob) for a data point given a set of parameters

For some statistical calculations, the quantity p(y_i | x_s), where y_i is the i-th data point and x_s is the parameter vector at the s-th iteration of an MC, is essential. One way to return this (I believe) is to have the log-likelihood return before summing. This is of course OGIP-style specific.

Is it possible to generally do this or force the plugin to have this capability?
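As a sketch of the idea (generic Poisson-binned data, not the actual plugin interface), the per-bin terms are simply returned before the final sum:

```python
import math

def poisson_loglike_per_bin(observed, expected):
    """Per-bin Poisson log-likelihood terms log p(y_i | model),
    returned unsummed so each p(y_i | x_s) is available individually."""
    return [y * math.log(m) - m - math.lgamma(y + 1)
            for y, m in zip(observed, expected)]

def poisson_loglike(observed, expected):
    """Conventional total log-likelihood: just the sum of the terms."""
    return sum(poisson_loglike_per_bin(observed, expected))
```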

Implement pyopt minimizer

Pyopt:

http://www.pyopt.org/

is an awesome package which makes available many minimizers under one interface.

I want to implement a minimizer using it, to improve the robustness of our frequentist methods.

contours failing (possibly related to the get_child issue)

From the TTE example

res = jl.get_contours(band.xp,400,1700,20)

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-16-2128e39ddbca> in <module>()
----> 1 res = jl.get_contours(band.xp,400,1700,20)

/usr/local/Cellar/python/HEAD/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/threeML-0.3.2-py2.7.egg/threeML/classicMLE/joint_likelihood.pyc in get_contours(self, param_1, param_1_minimum, param_1_maximum, param_1_n_steps, param_2, param_2_minimum, param_2_maximum, param_2_n_steps, progress, **options)
    461                                         "is above parameter maximum (%s)" % (param_1, param_1_maximum, max1)
    462 
--> 463         min2, max2 = self.likelihood_model[param_2].bounds
    464 
    465         assert param_2_minimum >= min2, "Requested low range for parameter %s (%s) " \

/usr/local/Cellar/python/HEAD/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/astromodels-0.2.3-py2.7-macosx-10.11-x86_64.egg/astromodels/model.pyc in __getitem__(self, path)
    164         """
    165 
--> 166         return self._get_child_from_path(path)
    167 
    168     def __contains__(self, path):

/usr/local/Cellar/python/HEAD/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/astromodels-0.2.3-py2.7-macosx-10.11-x86_64.egg/astromodels/tree.pyc in _get_child_from_path(self, path)
    120         """
    121 
--> 122         keys = path.split(".")
    123 
    124         this_child = self

AttributeError: 'NoneType' object has no attribute 'split'

Tests failing

Ogip tests are failing:

_____ test_pha_files_in_generic_ogip_constructor_spec_number_in_file_name ______

    def test_pha_files_in_generic_ogip_constructor_spec_number_in_file_name():
        with within_directory(__this_dir__):

            ogip = OGIPLike('test_ogip', pha_file='test.pha{1}')

            pha_info = ogip.get_pha_files()

            for key in ['pha', 'bak']:

                assert isinstance(pha_info[key], PHA)

            assert pha_info['pha'].background_file == 'test_bak.pha{1}'
            assert pha_info['pha'].ancillary_file is None
            assert pha_info['pha'].instrument == 'GBM_NAI_03'
            assert pha_info['pha'].mission == 'GLAST'
            assert pha_info['pha'].is_poisson() == True
>           assert pha_info['pha'].n_channels == ogip.n_data_points
E           assert 128 == 119
E            +  where 128 = <threeML.plugins.OGIP.pha.PHA object at 0x7f4548d782d0>.n_channels
E            +  and   119 = <threeML.plugins.OGIPLike.OGIPLike object at 0x7f4548d85790>.n_data_points

test_ogip.py:56: AssertionError

There are also some warnings about ARF file.

Seaborn Plotting?

Since we will be moving to Pandas for storing tables, etc., should we look at using seaborn for data descriptive plots? For example, seaborn can make the corner plots very nicely, and also allow the user to pass their preferred colormap.

Combining plotting, statistics reporting into a post analysis class

While leaving the ability to do these things separately is nice, it could be potentially cleaner to have a big post analysis class that does the following:

  1. allow for plotting of the fits
  2. do statistics reporting of the fits
  3. flux/fluence calculations.

On point (3): does there exist a way, either through astromodels or in the plugin base, to compute fluxes? This seems like an essential ability. For one, it would be nice to be able to propagate the fit (covariance or chain) when computing these values, rather than only being able to do it for the best-fit value.

I may have missed this in the code.
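On propagating the fit to fluxes: a toy sketch of the idea for a power law (all names are hypothetical; independent Gaussians stand in for the full covariance matrix or MCMC chain):

```python
import math, random

def photon_flux_powerlaw(K, index, e1, e2, piv=1.0):
    """Analytic integral of K*(E/piv)**index from e1 to e2 (index != -1)."""
    a = index + 1.0
    return K * piv * ((e2 / piv) ** a - (e1 / piv) ** a) / a

def propagate_flux(best_fit, variances, e1, e2, n_samples=5000, seed=0):
    """Draw parameter samples and return the corresponding flux samples,
    from which percentile (possibly asymmetric) errors can be taken."""
    rng = random.Random(seed)
    K0, idx0 = best_fit
    sK, sidx = (math.sqrt(v) for v in variances)
    return [photon_flux_powerlaw(rng.gauss(K0, sK), rng.gauss(idx0, sidx), e1, e2)
            for _ in range(n_samples)]
```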

Add README?

Can you please add a README file to the repo?

Markdown .md or RST .rst formats are common and nicely displayed by Github.

The info I'm looking for is how to get started, i.e. install 3ML and run a first example.

Exposure in FermiGBMLikeTTE

Need to implement the proper computation of exposure for Fermi/GBM.

Fermi/GBM has a deadtime for each recorded event. Look at the code in GtBurst, close to where you got the other code, for the function which computes that exposure.
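Schematically, the exposure computation amounts to elapsed time minus the accumulated per-event deadtime. A sketch (GBM's nominal per-event deadtime is 2.6 microseconds; overflow events incur a longer deadtime, which this sketch ignores):

```python
def gbm_exposure(tstart, tstop, n_events, deadtime_per_event=2.6e-6):
    """Live exposure = elapsed time minus the deadtime accrued by each
    recorded event (nominal 2.6 us per event; overflow events ignored)."""
    elapsed = tstop - tstart
    return elapsed - n_events * deadtime_per_event
```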

Reduce size of data for GBM example

The size of the data directory has become too big to be kept in a github repository.

We should use only smaller data files for the examples. I just compressed the livetime cube for the LAT example.

Please leave only one NaI and one BGO as data for the GBM TTE example and modify the example accordingly.

non-profiled background

As we discussed, we will add the ability to directly fit the background (modeled or not) in count space. The main question is the interface.

Since we will use a likelihood model for the background, we need to provide an OGIPBAKLike plugin. This can be created from a plugin (e.g. in the case of Fermi TTE data) or simply from a PHA BAK file. This means that we would most likely need to feed the plugin to the DataList. Regardless of how cumbersome this could be, the main issue is that we will then need special plugins for the observed data, unless we want to provide a switch that knows an OGIPBAKLike is provided for a given dataset... some kind of link.

I think this is not the most practical way to go. Instead, we should have OGIPOBSLike and OGIPBAKLike. The user will have to know what they are doing in any case... otherwise we will have to really think about the backend of the fitting engine.

Thoughts @giacomov ?

Combining XRT and XSPEC examples

To reduce files, I propose to merge the XSPEC and XRT demos to show off 1) XRTLike, 2) XSPEC models, and 3) joint fitting of GBM and XRT data (the beauty of 3ML). I will have to add two GBM files.

LAT Like display

I believe the model-data/model residual plot has the label or the execution of the math reversed. They do not line up :)

OGIP Rebinner and errors

In OGIPLike, the Rebinner is called on background errors when the background is Gaussian-distributed.

However, should it not be the sqrt of the sum of the squares? @giacomov

If so, I'm working on this currently and will update.
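The propagation in question, for reference: when merging bins with independent Gaussian errors, the combined error is the square root of the sum of squares. A generic sketch (not the OGIPLike internals):

```python
import math

def rebin_counts_and_errors(counts, errors, edges):
    """Merge channels [edges[k], edges[k+1]) by summing counts and adding
    Gaussian errors in quadrature: sigma_new = sqrt(sum(sigma_i**2))."""
    new_counts, new_errors = [], []
    for k in range(len(edges) - 1):
        sl = slice(edges[k], edges[k + 1])
        new_counts.append(sum(counts[sl]))
        new_errors.append(math.sqrt(sum(e * e for e in errors[sl])))
    return new_counts, new_errors
```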

Plotting of spectra

We need a routine that is either attached to the model via astromodels or can read a model object and plot the results in a nice form.

I suggest that we do this via the model object in 3ML since it is data aware. I can work on this unless @giacomov has an idea.

Asymmetric MLE flux errors

@giacomov I have been thinking about how to implement this and it is impossible post fit unless you do some kind of profile. Instead, we can tackle it as you said during the fit.

In the JointLikelihood object, there can be a keyword such as "fit_for_flux" that, if specified, will turn the normalization(s) of components into fluxes. This is done for the new Band function in astromodels, but we can do it more generally. The ability to solve for component flux functions is built into the Spectral flux class already, so I could just add the proper methods there.

The user would have to specify the energy range and the units (energy flux, photon flux, etc) and the Spectral flux class would create the appropriate parameter(s) for the components and the norms would be replaced in the function. Of course, all that is happening is that the functions get renormalized.

This approach keeps us from having to change astromodels, while also making the method general for any function, since the machinery is already in the spectral flux class, modulo a few modifications.

Dependencies and Notebook

Is 3ML supposed to be used mainly/only from the notebook?
If not, maybe it would be good to not require the notebook and its dependencies (e.g. zmq).
As it is, 3ML installs properly but crashes at runtime:

[cr@crmac:~]$ ./extended.py 
Configuration read from /Users/cr/.threeML/threeML_config.yml
No module named zmq

Along these lines, can we display the formulas outside of the notebook? Even just displaying the LaTeX code? From the current terminal output:

Spatial model: Point source
Formula:
<IPython.core.display.Latex object>

io module in 3ML may clash with system io module

On OS X Yosemite I can build 3ML and import it in ipython sessions, but oddly "import threeML" fails when running a plain python session. I copy my error below. After googling around it seems there is a possibility this error is caused by the fact that threeML contains an io directory which may be clashing with another io package in the system python; see comments here:

http://stackoverflow.com/questions/6315440/how-to-solve-attributeerror-when-importing-igraph

The recommended fix is to rename this directory -- io3ML is a suggestion. My error messages are copied below.

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/sybenzvi/Library/Python/2.7/lib/python/site-packages/threeML-0.1.0-py2.7-macosx-10.10-intel.egg/threeML/__init__.py", line 46, in <module>
from .models.PointSource import PointSource
File "/Users/sybenzvi/Library/Python/2.7/lib/python/site-packages/threeML-0.1.0-py2.7-macosx-10.10-intel.egg/threeML/models/PointSource.py", line 1, in <module>
from SourceModel import SourceModel
File "/Users/sybenzvi/Library/Python/2.7/lib/python/site-packages/threeML-0.1.0-py2.7-macosx-10.10-intel.egg/threeML/models/SourceModel.py", line 4, in <module>
from threeML.io.Table import Table
File "/Users/sybenzvi/Library/Python/2.7/lib/python/site-packages/threeML-0.1.0-py2.7-macosx-10.10-intel.egg/threeML/io/Table.py", line 6, in <module>
import astropy.table
File "/Users/sybenzvi/Library/Python/2.7/lib/python/site-packages/astropy-1.0.3-py2.7-macosx-10.10-intel.egg/astropy/__init__.py", line 73, in <module>
_check_numpy()
File "/Users/sybenzvi/Library/Python/2.7/lib/python/site-packages/astropy-1.0.3-py2.7-macosx-10.10-intel.egg/astropy/__init__.py", line 61, in _check_numpy
from .utils import minversion
File "/Users/sybenzvi/Library/Python/2.7/lib/python/site-packages/astropy-1.0.3-py2.7-macosx-10.10-intel.egg/astropy/utils/__init__.py", line 20, in <module>
from .compat.odict import OrderedDict
File "/Users/sybenzvi/Library/Python/2.7/lib/python/site-packages/astropy-1.0.3-py2.7-macosx-10.10-intel.egg/astropy/utils/compat/__init__.py", line 14, in <module>
from .numpycompat import *
File "/Users/sybenzvi/Library/Python/2.7/lib/python/site-packages/astropy-1.0.3-py2.7-macosx-10.10-intel.egg/astropy/utils/compat/numpycompat.py", line 18, in <module>
NUMPY_LT_1_6_1 = not minversion('numpy', '1.6.1')
File "/Users/sybenzvi/Library/Python/2.7/lib/python/site-packages/astropy-1.0.3-py2.7-macosx-10.10-intel.egg/astropy/utils/introspection.py", line 143, in minversion
from pkg_resources import parse_version
File "build/bdist.macosx-10.10-x86_64/egg/pkg_resources/__init__.py", line 25, in <module>
# Since we may exit via an exception, close fp explicitly.
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/zipfile.py", line 501, in <module>
class ZipExtFile(io.BufferedIOBase):
AttributeError: 'module' object has no attribute 'BufferedIOBase'

sources/models etc. wrt plotting

I'm having a little trouble understanding the general idea of multiple sources w.r.t. plotting.

Since a joint fit uses the same DataList for all sources, how does one handle the concept of multiple sources when considering point sources? I've assumed that all sources would have the same model type (not the same model instance) when summing, but I think this is a bad assumption. However, I really do not see the point of having a source list with point sources at the moment.

It seems more natural to pass multiple instances of JointLikelihoods each containing different fits.

@giacomov , can you provide some insight?

from threeML import * broken

The import is broken because uniform_prior no longer exists in the same place in astromodels.
I can't seem to find where it is, to update the code.

SpectralModel.formula is dangerous

It is the formula people see (well, if they use a notebook) but not what is used for computation.
That creates a risk of inconsistency. For instance,
LogParabola.formula=r'\begin{equation}f(E) = A E^{\gamma+\beta \log(E)}\end{equation}' does not correspond to the actual function: return norm * (energies/piv)**(gamma+beta*numpy.log10(energies/piv))
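For the example above, a formula string consistent with the implemented expression (writing K for norm and E_piv for piv, and assuming log means log10, as in the code) would be:

```latex
\begin{equation}
f(E) = K \left(\frac{E}{E_{\rm piv}}\right)^{\gamma + \beta \log_{10}\left(E/E_{\rm piv}\right)}
\end{equation}
```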

no FermiGBMLikeTTE?

Hi,

I think the FermiGBMLikeTTE.py is not working?

I just reinstalled the new 3ML version today. When I type "Fermi" and hit tab, only FermiGBMLike comes up, no FermiGBMLikeTTE. Am I doing something wrong?

Thanks,
David

JointLikelihood display

With the migration to pandas, should we abandon the custom tables in the JL display so that all the data (matrix, MLE, etc) are pandas? If so, I would like to make them a little nicer on the eyes with pandas style. Thoughts?

Adding PolyChord

PolyChord is another nested sampling algorithm that can handle multi-modal posteriors. It is an evolution of MULTINEST, but can be quite slow. It handles high-dimensional parameter spaces very well because it uses slice sampling.

It is fairly easy to add on (they actually provide a python interface) and I've nearly completed this.
It will function in the same way as MULTINEST. However, they do not provide an analyzer; I will write a class in Bayesian analysis for this.

Still, there are more advanced things you can do with the output. For now, I suggest we only extract the simple stuff that we are doing for the other samplers, and encourage the user to use other tools for posterior clustering analysis, etc. We could add these in the future.

Thoughts?

Non-convergence?

When running the example:
jl = JointLikelihood( model, data_list, verbose=False )
res = jl.fit()

If in the next cell I run:

res = jl.get_errors()

I get the following:

CannotComputeErrors Traceback (most recent call last)
...
CannotComputeErrors: MIGRAD results not valid, cannot compute errors. Did you run the fit first ?

I'm assuming that maybe it is not converging

Background subtracted data

There are some cases where the background is subtracted off (Swift BAT), and normally the least informative assumption for the variance is a normal distribution. However, it may be possible to substitute a Skellam distribution in this case. This is the distribution of the difference between two Poisson variables. It asymptotically approaches a normal distribution, but has a more peaked structure at low "counts."

It may be worth exploring how well this performs; it would be easy to implement. Thoughts @giacomov ?
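A pure-Python sketch of the Skellam pmf for the difference K = N1 - N2 of two Poisson variables, using the series expansion of the modified Bessel function (a standalone illustration, not plugin code):

```python
import math

def bessel_i(n, x, terms=60):
    """Modified Bessel function I_n(x) for integer n >= 0, by series sum:
    I_n(x) = sum_m (x/2)**(2m+n) / (m! * (m+n)!)."""
    return sum((x / 2.0) ** (2 * m + n)
               / (math.factorial(m) * math.factorial(m + n))
               for m in range(terms))

def skellam_pmf(k, mu1, mu2):
    """P(K = k) for K = N1 - N2 with N1 ~ Poisson(mu1), N2 ~ Poisson(mu2):
    exp(-(mu1+mu2)) * (mu1/mu2)**(k/2) * I_|k|(2*sqrt(mu1*mu2))."""
    return (math.exp(-(mu1 + mu2)) * (mu1 / mu2) ** (k / 2.0)
            * bessel_i(abs(k), 2.0 * math.sqrt(mu1 * mu2)))
```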

Make matplotlib optional

I'm getting this error when I try to import threeML.

In Astropy and Gammapy we put the matplotlib.pyplot import inside the function or methods that need it, so that matplotlib is an optional dependency.

Can you do the same for threeML?

$ python
Python 2.7.8 (default, Aug 20 2015, 11:36:15) 
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.56)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import UnbinnedAnalysis
Plotter is MatPlotlib
>>> import threeML
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "threeML/__init__.py", line 53, in <module>
    from .models.spectralmodel import *
  File "threeML/models/spectralmodel.py", line 11, in <module>
    import matplotlib.pyplot as plt
  File "/Users/deil/software/fermi/latest/lib/python2.7/site-packages/matplotlib/pyplot.py", line 114, in <module>
    _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
  File "/Users/deil/software/fermi/latest/lib/python2.7/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
    globals(),locals(),[backend_name],0)
  File "/Users/deil/software/fermi/latest/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py", line 24, in <module>
    from matplotlib.backends import _macosx
RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are Working with Matplotlib in a virtual enviroment see 'Working with Matplotlib in Virtual environments' in the Matplotlib FAQ
>>> import matplotlib
>>> import threeML
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "threeML/__init__.py", line 53, in <module>
    from .models.spectralmodel import *
  File "threeML/models/spectralmodel.py", line 11, in <module>
    import matplotlib.pyplot as plt
  File "/Users/deil/software/fermi/latest/lib/python2.7/site-packages/matplotlib/pyplot.py", line 114, in <module>
    _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
  File "/Users/deil/software/fermi/latest/lib/python2.7/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
    globals(),locals(),[backend_name],0)
  File "/Users/deil/software/fermi/latest/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py", line 24, in <module>
    from matplotlib.backends import _macosx
RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are Working with Matplotlib in a virtual enviroment see 'Working with Matplotlib in Virtual environments' in the Matplotlib FAQ
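The deferred-import pattern used in Astropy and Gammapy looks like this (a generic sketch, not threeML code): matplotlib is pulled in only when a plotting function is actually called, so importing the package itself never touches a backend.

```python
# Deferred-import pattern: "import mypackage" works even when matplotlib
# is absent or its backend cannot initialize; only plotting calls need it.
def plot_spectrum(energies, fluxes):
    import matplotlib.pyplot as plt  # imported here, not at module level

    fig, ax = plt.subplots()
    ax.loglog(energies, fluxes)
    ax.set_xlabel("Energy")
    ax.set_ylabel("Flux")
    return fig
```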

get_errors() complaining when doing an error estimate

Hi,

I do the fit and then I want to get errors on my free parameters but when I run get_errors function it crashes with this message:

CannotComputeErrors Traceback (most recent call last)
in ()
8 jl2.fit()
9
---> 10 jl2.get_errors()

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/threeML-0.3.2-py2.7.egg/threeML/classicMLE/joint_likelihood.pyc in get_errors(self, quiet)
314 assert self._current_minimum is not None, "You have to run the .fit method before calling errors."
315
--> 316 errors = self._minimizer.get_errors()
317
318 # Set the parameters back to the best fit value

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/threeML-0.3.2-py2.7.egg/threeML/minimizer/minuit_minimizer.pyc in get_errors(self)
350
351 if not self._migrad_has_converged():
--> 352 raise CannotComputeErrors("MIGRAD results not valid, cannot compute errors. Did you run the fit first ?")
353
354 try:

CannotComputeErrors: MIGRAD results not valid, cannot compute errors. Did you run the fit first ?

Priors for nuisance parameters

I noticed in OGIPLike that an uninformative flat prior was being used for the effective area correction.

I would suggest instead that we use a Gaussian or, even better, a Cauchy prior for these:

  1. we believe they should be centered on zero, so we have some information there.
  2. while we would preferably set the variance of the distribution as a free parameter, we aren't going to do any fancy modeling here... so we can set a cauchy prior, centered at zero with beta=2.5

This is slightly less informative around the center than a gaussian but still uses our knowledge that we should not have something too far off in the tails. 2.5 comes from some work by Gelman that says this is uninformative around the mean.

There are no Cauchy priors in astromodels... but maybe there should be :)
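For concreteness, the two densities being compared (a standalone sketch with beta = 2.5, as suggested above): the Cauchy has a lower peak at the center (less informative there) but much heavier tails than a Gaussian of the same scale.

```python
import math

def cauchy_pdf(x, x0=0.0, beta=2.5):
    """Cauchy density with location x0 and scale beta."""
    return 1.0 / (math.pi * beta * (1.0 + ((x - x0) / beta) ** 2))

def gauss_pdf(x, mu=0.0, sigma=2.5):
    """Gaussian density with the same scale, for comparison."""
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2.0 * math.pi)))
```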

Model Comparison and statistics

After doing several fits of different models, the user will want to compare them. It may be nice if we provide a class that does this properly.

For MLE:

The class prints out different statistics: the LRT, AIC, and BIC. It would be nice if we could somehow warn when a model comparison is invalid (Warning: models are not formally nested), but this would be very difficult in general.

Therefore, we could provide profiling Monte Carlo capabilities. I'm afraid this will be difficult because it will require plugins to be able to simulate data... it is not impossible, but a mighty task.

For Bayesian:

Prints out the WAIC and DIC, and, for samplers that support marginal-likelihood integration, Bayes factors. No need for sampling here.

This is a lot of work, but in the end we will have a tool and a tutorial that enforce proper statistics.
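The MLE criteria mentioned are one-liners given the maximum log-likelihood (these are the standard definitions, not 3ML code):

```python
import math

def aic(log_like, n_params):
    """Akaike information criterion: 2k - 2*ln(L). Lower is better."""
    return 2.0 * n_params - 2.0 * log_like

def bic(log_like, n_params, n_data):
    """Bayesian information criterion: k*ln(n) - 2*ln(L). Penalizes
    parameters more strongly than AIC once n > e**2 (about 7.4)."""
    return n_params * math.log(n_data) - 2.0 * log_like
```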

Produced PHA files are not compatible with XSPEC

A .pha file produced with .write_pha of the OGIPLike plugin is not readable by XSPEC, because it is missing a column.

Furthermore, its GROUPING is not recognized.

Here is the XSPEC error:

XSPEC12>data test_out.pha{1}
***Warning: Unrecognized grouping for channel(s). It/they will be reset to 1.

***XSPEC Error:  data file missing required entries: ANCRFILE

Here is the code to reproduce the problem, using the data in attachment:

from threeML import *

dd = '100724029/'

lle = OGIPLike("LLE",dd+"bn100724029_LAT-LLE_srcspectra.pha{1}",
                        dd+"bn100724029_LAT-LLE_bkgspectra.bak{1}",
                        dd+"bn100724029_LAT-LLE_weightedrsp.rsp{1}")
lle.set_active_measurements('3e4-1e7')

lle.rebin_on_background(5.0)

lle.write_pha("test_out.pha", overwrite=True)

A probably unrelated problem is produced by this similar code, using one of the .pha files in the examples:

from threeML import *

# Update this with your path
dd = '/home/giacomov/develop/3ML/examples/'

lle = OGIPLike("LLE",dd+"bn090217206_n9_srcspectra.pha{1}",
                        dd+"bn090217206_n9_bkgspectra.bak{1}",
                        dd+"bn090217206_n9_weightedrsp.rsp{1}")
lle.set_active_measurements('10.0-1000.0')

lle.rebin_on_background(5.0)

lle.write_pha("test_out.pha", overwrite=True)

and this is the XSPEC error:

XSPEC12>data test_out.pha{1}

***XSPEC Error:  data file missing required entries: 
Detector channel numbers missing in extension: SPECTRUM

***XSPEC Error:  data file missing required entries: ../test_out.pha

***XSPEC Error: Error reading data: File ../test_out.pha

data_files.tar.gz

MCMC calculation in examples folder appears to be obsolete

It looks like the MCMC calculation in the notebook examples/090217206.ipynb uses an obsolete version of the JointLikelihood interface. As a result, there is no documentation on the correct way to run the Bayesian calculations in the 3ML library.

Use pandas for results

Using pandas containers for results would make things much easier when performing time-resolved fitting or many fits one after another.

pandas containers can be appended and merged, provide rich display functionality, and can be saved to disk.
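For example, per-interval results could be collected like this (column names are purely illustrative, not a proposed schema):

```python
import pandas as pd

# Collect per-interval fit results; columns are illustrative
results = pd.DataFrame(
    [
        {"interval": 0, "index": -1.2, "index_err": 0.1},
        {"interval": 1, "index": -1.5, "index_err": 0.2},
    ]
).set_index("interval")

# Append results from a later interval
later = pd.DataFrame(
    [{"interval": 2, "index": -1.7, "index_err": 0.2}]
).set_index("interval")
results = pd.concat([results, later])

# Rich display in notebooks comes for free, and saving is one call
results.to_csv("fit_results.csv")
```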

missing function make_index

The function make_index in the _hpd function in bayesian_analysis is undefined.

Also, while you are at it @drJfunk, it would be better to make _hpd and _calc_min_interval static methods of the BayesianAnalysis class. Just add the @staticmethod decorator before the definition, like:

    @staticmethod
    def _hpd(...):
        ...

and you can then still call them as self._hpd.
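For reference, here is a self-contained sketch of what _calc_min_interval typically does as a static method (the body is the standard shortest-interval-from-samples recipe, not the actual 3ML code):

```python
import numpy as np

class BayesianAnalysis:
    @staticmethod
    def _calc_min_interval(samples, alpha):
        # Shortest interval containing a fraction (1 - alpha) of the
        # samples: slide a window of fixed length over the sorted
        # samples and keep the narrowest one.
        x = np.sort(np.asarray(samples))
        n = len(x)
        interval_len = int(np.floor((1.0 - alpha) * n))
        widths = x[interval_len:] - x[: n - interval_len]
        i = int(np.argmin(widths))
        return x[i], x[i + interval_len]
```

With @staticmethod, calls through the instance (self._calc_min_interval(...)) keep working unchanged.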

parameter bound = None causes error


When using get_contours:

res = jl.get_contours(band.xp,400,1700,20)


AssertionError                            Traceback (most recent call last)
<ipython-input-11-2128e39ddbca> in <module>()
----> 1 res = jl.get_contours(band.xp,400,1700,20)

/usr/local/Cellar/python/HEAD/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/threeML-0.3.2-py2.7.egg/threeML/classicMLE/joint_likelihood.pyc in get_contours(self, param_1, param_1_minimum, param_1_maximum, param_1_n_steps, param_2, param_2_minimum, param_2_maximum, param_2_n_steps, progress, **options)
    459 
    460         assert param_1_maximum <= max1, "Requested hi range for parameter %s (%s) " \
--> 461                                         "is above parameter maximum (%s)" % (param_1, param_1_maximum, max1)
    462 
    463         min2, max2 = self.likelihood_model[param_2].bounds

AssertionError: Requested hi range for parameter bn080916009.spectrum.main.Band.xp (1700) is above parameter maximum (None)
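A sketch of a defensive fix (not the actual joint_likelihood.py code): treat a None bound as unbounded instead of comparing against it:

```python
def check_hi_range(requested_max, param_max, param_name):
    # A None maximum means the parameter is unbounded above, so any
    # requested range is acceptable. Function name is hypothetical.
    if param_max is not None and requested_max > param_max:
        raise ValueError(
            "Requested hi range for parameter %s (%s) is above parameter "
            "maximum (%s)" % (param_name, requested_max, param_max)
        )
```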

Event list unbinned MLE fitting

Right now EventList uses scipy to perform polynomial fitting of the time series.

  1. should we switch the optimizer? It seems to work, so perhaps there is no need.

  2. The current implementation fits a preset binned histogram. Since these are events, should we not just switch to an unbinned method? I'm no expert on this approach, so it would be nice to get your help/advice @giacomov
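On point 2, a minimal sketch of the unbinned alternative (illustrative, not the EventList code): for a polynomial rate r(t), the unbinned Poisson log-likelihood over [t0, t1] is sum_i ln r(t_i) minus the integral of r(t) over [t0, t1], which scipy could then maximize directly over the event times:

```python
import numpy as np

def unbinned_poly_loglike(coeffs, event_times, t0, t1):
    # Unbinned Poisson log-likelihood for a polynomial event rate:
    # sum_i ln r(t_i) - integral of r(t) over [t0, t1]
    rate = np.polyval(coeffs, event_times)
    if np.any(rate <= 0):
        return -np.inf  # rates must be positive
    antideriv = np.polyint(coeffs)
    integral = np.polyval(antideriv, t1) - np.polyval(antideriv, t0)
    return np.sum(np.log(rate)) - integral
```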
