threeml / hawc_hal

HAWC Accelerated Likelihood - python-only framework for HAWC data analysis

License: BSD 3-Clause "New" or "Revised" License


hawc_hal's Introduction


The HAWC Accelerated Likelihood (HAL) framework

Installation

hawc_hal depends on astromodels and threeML, as well as some additional packages (numba, uproot).

If you don't have mamba, install it into your base environment following the mamba instructions.

To install hawc_hal in a conda environment, we recommend the following procedure:

mamba create --name new_hal -c conda-forge -c threeml numpy scipy matplotlib ipython numba reproject "astromodels>=2" "threeml>=2" root
conda activate new_hal
pip install git+https://github.com/threeml/hawc_hal.git

For the time being, we recommend updating to the master version of astromodels and threeML from github:

pip install --upgrade git+https://github.com/threeml/astromodels.git
pip install --upgrade git+https://github.com/threeml/threeML.git

The above creates a new Python 3 environment. Note that version conflicts currently seem to prevent installing hawc_hal with the newer (>=2.0) versions of threeML and astromodels.

You can also add hawc_hal to an existing environment. If you have conda installed, it is highly recommended that you install numba through conda like this (simply skip this step if you are not running in a conda environment):

> conda install -c conda-forge numba

You also need ROOT (whether installed through conda or not), threeML/astromodels, and their dependencies.

HAL no longer has any dependency on ROOT or root-numpy; it uses uproot instead. The following packages are installed along with HAL: uproot, awkward, hist, mplhep.
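For instance, you can peek inside a maptree or response file with uproot alone, without any ROOT installation (the file name below is a placeholder):

import uproot

# list the objects stored in a HAWC maptree (file name is a placeholder)
with uproot.open("maptree_1024.root") as f:
    print(f.keys())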

Check installation

Use the following commands to check if your installation was successful. You should be inside your conda environment for this.

  • To test threeML: pytest --pyargs threeML
  • To test astromodels: pytest --pyargs astromodels
  • To test HAL: pytest --pyargs hawc_hal

If you are interested in more detailed output from the tests, learn more about pytest command line options here.
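For example, -v gives per-test output and -k selects tests by name (the pattern below is only an illustration):

pytest --pyargs hawc_hal -v -k "maptree"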

Examples

You can find a worked example relying only on publicly accessible data on the threeML documentation (or download the notebook).

Mrk 421 analysis example

(This example assumes you have access to an all-sky HAWC dataset)

from hawc_hal import HAL, HealpixConeROI
import matplotlib.pyplot as plt
from threeML import *

# Define the ROI
ra_mkn421, dec_mkn421 = 166.113808, 38.208833
data_radius = 3.0
model_radius = 8.0

roi = HealpixConeROI(data_radius=data_radius,
                     model_radius=model_radius,
                     ra=ra_mkn421,
                     dec=dec_mkn421)

# Instance the plugin

maptree = ... # This can be either a ROOT or an HDF5 file
response = ... # This can be either a ROOT or an HDF5 file

hawc = HAL("HAWC",
           maptree,
           response,
           roi,
           n_workers=3) # load ROOT files faster using multiprocessing

# Use from bin 1 to bin 9
hawc.set_active_measurements(1, 9)

# Display information about the data loaded and the ROI
hawc.display()

# Look at the data
fig = hawc.display_stacked_image(smoothing_kernel_sigma=0.17)
# Save to file
fig.savefig("hal_mkn421_stacked_image.png")

# If you want, you can save the data *within this ROI* and the response
# in hd5 files that can be used again with HAL
# (this is useful if you want to publish only the ROI you
# used for a given paper)
hawc.write("my_response.hd5", "my_maptree.hd5")

# Define model as usual
spectrum = Log_parabola()
source = PointSource("mkn421", ra=ra_mkn421, dec=dec_mkn421, spectral_shape=spectrum)

spectrum.piv = 1 * u.TeV
spectrum.piv.fix = True

spectrum.K = 1e-14 / (u.TeV * u.cm ** 2 * u.s)  # norm (in 1/(keV cm2 s))
spectrum.K.bounds = (1e-25, 1e-19)  # without units energies are in keV

spectrum.beta = 0  # log parabolic beta
spectrum.beta.bounds = (-4., 2.)

spectrum.alpha = -2.5  # log parabolic alpha (index)
spectrum.alpha.bounds = (-4., 2.)

model = Model(source)

data = DataList(hawc)

jl = JointLikelihood(model, data, verbose=False)
jl.set_minimizer("ROOT")
param_df, like_df = jl.fit()

# See the model in counts space and the residuals
fig = hawc.display_spectrum()
# Save it to file
fig.savefig("hal_mkn421_residuals.png")

# See the spectrum fit
fig = plot_point_source_spectra(jl.results,
                                ene_min=0.1,
                                ene_max=100,
                                num_ene=50,
                                energy_unit='TeV',
                                flux_unit='TeV/(s cm2)')
fig.savefig("hal_mkn421_fit_spectrum.png")

# Look at the different energy planes (the columns are model, data, residuals)
fig = hawc.display_fit(smoothing_kernel_sigma=0.3)
fig.savefig("hal_mkn421_fit_planes.png")

# Compute TS
jl.compute_TS("mkn421", like_df)

# Compute goodness of fit with Monte Carlo
gf = GoodnessOfFit(jl)
gof, param, likes = gf.by_mc(100)
# print("Prob. of obtaining -log(like) >= observed by chance if null hypothesis is true: %.2f" % gof['HAWC'])
print(f"Prob. of obtaining -log(like) >= observed by chance if null hypothesis is true: {gof['HAWC']:.2f}")

# it is a good idea to inspect the results of the simulations with some plots
# Histogram of likelihood values
fig, sub = plt.subplots()
likes.hist(ax=sub)
# Overplot a vertical dashed line on the observed value
plt.axvline(jl.results.get_statistic_frame().loc['HAWC', '-log(likelihood)'],
            color='black',
            linestyle='--')
fig.savefig("hal_sim_all_likes.png")

# Plot the value of beta for all simulations (for example)
fig, sub = plt.subplots()
param.loc[(slice(None), ['mkn421.spectrum.main.Log_parabola.beta']), 'value'].plot()
fig.savefig("hal_sim_all_beta.png")

# Free the position of the source
source.position.ra.free = True
source.position.dec.free = True

# Set boundaries (no need to go further than this)
source.position.ra.bounds = (ra_mkn421 - 0.5, ra_mkn421 + 0.5)
source.position.dec.bounds = (dec_mkn421 - 0.5, dec_mkn421 + 0.5)

# Fit with position free
param_df, like_df = jl.fit()

# Make localization contour
a, b, cc, fig = jl.get_contours(model.mkn421.position.dec, 38.15, 38.22, 10,
                                model.mkn421.position.ra, 166.08, 166.18, 10, )

plt.plot([ra_mkn421], [dec_mkn421], 'x')
fig.savefig("hal_mkn421_localization.png")

# Of course we can also do a Bayesian analysis the usual way
# NOTE: here the position is still free, so we are going to obtain marginals about that
# as well
# For this quick example, let's use a uniform prior for all parameters
for parameter in model.parameters.values():

    if parameter.fix:
        continue

    if parameter.is_normalization:
        parameter.set_uninformative_prior(Log_uniform_prior)
    else:
        parameter.set_uninformative_prior(Uniform_prior)

# Let's execute our bayes analysis
bs = BayesianAnalysis(model, data)
samples = bs.sample(30, 100, 100)
fig = bs.results.corner_plot()

fig.savefig("hal_corner_plot.png")

Convert ROOT maptree to hdf5 maptree

from hawc_hal.maptree import map_tree_factory
from hawc_hal import HealpixConeROI

root_map_tree = "maptree_1024.root" # path to your ROOT maptree

# Export the entire map tree (full sky)
m = map_tree_factory(root_map_tree, None)
m.write("full_sky_maptree.hd5")

# Export only the ROI. This is a file only a few Mb in size
# that can be provided as dataset to journals, for example
ra_mkn421, dec_mkn421 = 166.113808, 38.208833
data_radius = 3.0
model_radius = 8.0

roi = HealpixConeROI(data_radius=data_radius,
                     model_radius=model_radius,
                     ra=ra_mkn421,
                     dec=dec_mkn421)

m = map_tree_factory(root_map_tree, roi)
m.write("roi_maptree.hd5")

Radial profile examples will come soon.

hawc_hal's People

Contributors

cbrisboi, giacomov, grburgess, hayalaso, henrikef, maloneka, ndilalla, omodei, torresramiro350, xiaojieww

hawc_hal's Issues

Cannot manually specify ntransits

In liff, you could specify the transits to use for the maps during analysis. I just noticed while working on an unrelated task that we cannot do the same for hal.

This should be straightforward to implement and would be a good task for a graduate student wanting to learn. Otherwise, I'll get to it eventually.
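A sketch of what the interface could look like; note that this n_transits keyword is hypothetical and does not exist in HAL yet:

# hypothetical: override the number of transits stored in the map
hawc = HAL("HAWC", maptree, response, roi, n_transits=1500)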

Can't write model/residual maps for template ROIs

Will add a script to reproduce this later on, but this is the error message when calling write_model_map using a template ROI:

Traceback (most recent call last):
  File "/Users/hfleisch/software_base/gitlab_aerie/threeml-analysis-scripts/fitModel/fitModel.py", line 722, in <module>
    like.write_model_map("{1}/{0}_modelMap.{2}".format(name, outdir, suffix))
  File "/Users/hfleisch/miniconda3/envs/st_test3/lib/python2.7/site-packages/hawc_hal/HAL.py", line 942, in write_model_map
    self._write_a_map(file_name, 'model', poisson_fluctuate, test_return_map)
  File "/Users/hfleisch/miniconda3/envs/st_test3/lib/python2.7/site-packages/hawc_hal/HAL.py", line 928, in _write_a_map
    new_map_tree.write(file_name)
  File "/Users/hfleisch/miniconda3/envs/st_test3/lib/python2.7/site-packages/hawc_hal/maptree/map_tree.py", line 191, in write
    serializer.store_pandas_object('/ROI', pd.Series(), **self._roi.to_dict())
  File "/Users/hfleisch/miniconda3/envs/st_test3/lib/python2.7/site-packages/hawc_hal/serialize.py", line 39, in store_pandas_object
    self._store.get_storer(path).attrs.metadata = metadata
  File "/Users/hfleisch/miniconda3/envs/st_test3/lib/python2.7/site-packages/tables/attributeset.py", line 481, in __setattr__
    self._g__setattr(name, value)
  File "/Users/hfleisch/miniconda3/envs/st_test3/lib/python2.7/site-packages/tables/attributeset.py", line 423, in _g__setattr
    self._g_setattr(self._v_node, name, stvalue)
  File "tables/hdf5extension.pyx", line 710, in tables.hdf5extension.AttributeSet._g_setattr
tables.exceptions.HDF5ExtError: HDF5 error back trace

File "H5A.c", line 259, in H5Acreate2
unable to create attribute
File "H5Aint.c", line 280, in H5A_create
unable to create attribute in object header
File "H5Oattribute.c", line 347, in H5O_attr_create
unable to create new attribute in header
File "H5Omessage.c", line 224, in H5O_msg_append_real
unable to create new message
File "H5Omessage.c", line 1945, in H5O_msg_alloc
unable to allocate space for message
File "H5Oalloc.c", line 1142, in H5O_alloc
object header message is too large

End of HDF5 error back trace

Can't set attribute 'metadata' in node:
/ROI (Group) ''.

Citation question

Hello everyone,

I'm preparing a manuscript for publication. Not sure how to cite hawc_hal, since we do not have a publication for hawc_hal specifically. What I did was make a bib entry with the list of authors ordered by number of lines contributed.

How do people feel about this?

@misc{hawchal,
    author = {{Vianello}, Giacomo and {Riviere}, Colas and  {Brisbois}, Chad and {Fleischhack}, Henrike and {Burgess}, J. Michael},
    title = "{HAWC Accelerated Likelihood - python-only framework for HAWC data analysis}",
    url = {https://github.com/threeML/hawc_hal},
    year = "2018"
}

Fit broken

Currently, running a test fit using the public dataset (https://data.hawc-observatory.org/datasets/crab_data/index.php) is broken; after some internal discussion, it appears more complicated scripts are breaking in a similar way.

I set up an environment using these lines:

ENV_NAME=hal_test
# create the environment
conda create --name $ENV_NAME -c conda-forge python=3.7 numpy scipy matplotlib numba

# pip install some things
#pip install --no-binary :all: root_numpy

export OMP_NUM_THREADS=1
export MKL_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1

conda activate $ENV_NAME

# Right now we have numpy 1.17.5, this causes an error with the interpolation package (required in astromodels), so we update numpy
# this gives numpy 1.19.0
pip install numpy 

# When running real example, tables is not optional apparently
#Traceback (most recent call last):
#  File "crab_fit_logparabola.py", line 26, in <module>
#    flat_sky_pixels_size=0.1)
#  File "/Users/chad/miniconda3/envs/hal_test/lib/python3.7/site-packages/hawc_hal/HAL.py", line 59, in __init__
#    self._maptree = map_tree_factory(maptree, roi=roi)
#  File "/Users/chad/miniconda3/envs/hal_test/lib/python3.7/site-packages/hawc_hal/maptree/map_tree.py", line 31, in map_tree_factory
#    return MapTree.from_hdf5(map_tree_file, roi)
#  File "/Users/chad/miniconda3/envs/hal_test/lib/python3.7/site-packages/hawc_hal/maptree/map_tree.py", line 44, in from_hdf5
#    data_analysis_bins = from_hdf5_file(map_tree_file, roi)
#  File "/Users/chad/miniconda3/envs/hal_test/lib/python3.7/site-packages/hawc_hal/maptree/from_hdf5_file.py", line 23, in from_hdf5_file
#    with Serialization(map_tree_file) as serializer:
#  File "/Users/chad/miniconda3/envs/hal_test/lib/python3.7/site-packages/hawc_hal/serialize.py", line 19, in __enter__
#    self._store = HDFStore(self._filename, complib='blosc:lz4', complevel=9, mode=self._mode)
#  File "/Users/chad/miniconda3/envs/hal_test/lib/python3.7/site-packages/pandas/io/pytables.py", line 518, in __init__
#    tables = import_optional_dependency("tables")
#  File "/Users/chad/miniconda3/envs/hal_test/lib/python3.7/site-packages/pandas/compat/_optional.py", line 92, in import_optional_dependency
#    raise ImportError(msg) from None
#ImportError: Missing optional dependency 'tables'.  Use pip or conda to install tables.
pip install tables

pip install git+https://github.com/threeML/astromodels.git

pip install git+https://github.com/threeML/threeML.git

pip install git+https://github.com/threeML/hawc_hal.git

Using the data on the public page linked previously, I stripped down that fit example to the following:
minimal_fail.py

Changing between minimizers (minuit and scipy) does not appear to affect the outcome, as shown in the logs below.
minuit:
failure_minimizer_minuit.log

scipy:
failure_minimizer_scipy.log

Installing the script hal_hdf5_to_fits.py

Feature Request

I notice that the script hal_hdf5_to_fits.py is part of the scripts directory. I think this should be installed when installing the library.

One way to do this, I believe, is to add the following to setup.py:

     scripts = ['scripts/hal_hdf5_to_fits.py'],
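Alternatively, a console_scripts entry point would install the command without the .py extension. This is only a sketch: it assumes the script is moved into the package and refactored to expose a main() function.

     entry_points={
         "console_scripts": [
             "hal_hdf5_to_fits = hawc_hal.scripts.hal_hdf5_to_fits:main",
         ],
     },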

root-numpy installation

During the installation procedure, when root-numpy is being installed after running

mamba create --name new_hal -c conda-forge -c threeml numpy scipy matplotlib ipython numba reproject "astromodels>=2" "threeml>=2" root
conda activate new_hal

the following error is displayed:

Collecting root_numpy
  Using cached root_numpy-4.8.0.tar.gz (520 kB)
  Preparing metadata (setup.py) ... done
Skipping wheel build for root_numpy, due to binaries being disabled for it.
Installing collected packages: root_numpy
  Running setup.py install for root_numpy ... error
  error: subprocess-exited-with-error
 
  × Running setup.py install for root_numpy did not run successfully.
  │ exit code: 1
  ╰─> [137 lines of output]
      running install
      /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib/python3.7/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
        setuptools.SetuptoolsDeprecationWarning,
      
                       _
       _ __ ___   ___ | |_     _ __  _   _ _ __ ___  _ __  _   _
      | '__/ _ \ / _ \| __|   | '_ \| | | | '_ ` _ \| '_ \| | | |
      | | | (_) | (_) | |_    | | | | |_| | | | | | | |_) | |_| |
      |_|  \___/ \___/ \__|___|_| |_|\__,_|_| |_| |_| .__/ \__, |  4.8.0
                         |_____|                    |_|    |___/
      
      writing 'root_numpy/config.json'
      running build
      running build_py
      creating build
      creating build/lib.linux-x86_64-3.7
      creating build/lib.linux-x86_64-3.7/root_numpy
      copying root_numpy/_graph.py -> build/lib.linux-x86_64-3.7/root_numpy
      copying root_numpy/_warnings.py -> build/lib.linux-x86_64-3.7/root_numpy
      copying root_numpy/_tree.py -> build/lib.linux-x86_64-3.7/root_numpy
      copying root_numpy/info.py -> build/lib.linux-x86_64-3.7/root_numpy
      copying root_numpy/_sample.py -> build/lib.linux-x86_64-3.7/root_numpy
      copying root_numpy/__init__.py -> build/lib.linux-x86_64-3.7/root_numpy
      copying root_numpy/_evaluate.py -> build/lib.linux-x86_64-3.7/root_numpy
      copying root_numpy/_hist.py -> build/lib.linux-x86_64-3.7/root_numpy
      copying root_numpy/setup_utils.py -> build/lib.linux-x86_64-3.7/root_numpy
      copying root_numpy/_utils.py -> build/lib.linux-x86_64-3.7/root_numpy
      copying root_numpy/_array.py -> build/lib.linux-x86_64-3.7/root_numpy
      copying root_numpy/_matrix.py -> build/lib.linux-x86_64-3.7/root_numpy
      creating build/lib.linux-x86_64-3.7/root_numpy/tests
      copying root_numpy/tests/test_array.py -> build/lib.linux-x86_64-3.7/root_numpy/tests
      copying root_numpy/tests/test_evaluate.py -> build/lib.linux-x86_64-3.7/root_numpy/tests
      copying root_numpy/tests/__init__.py -> build/lib.linux-x86_64-3.7/root_numpy/tests
      copying root_numpy/tests/test_utils.py -> build/lib.linux-x86_64-3.7/root_numpy/tests
      copying root_numpy/tests/test_tree.py -> build/lib.linux-x86_64-3.7/root_numpy/tests
      copying root_numpy/tests/test_sample.py -> build/lib.linux-x86_64-3.7/root_numpy/tests
      copying root_numpy/tests/test_hist.py -> build/lib.linux-x86_64-3.7/root_numpy/tests
      creating build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/__init__.py -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      creating build/lib.linux-x86_64-3.7/root_numpy/extern
      copying root_numpy/extern/six.py -> build/lib.linux-x86_64-3.7/root_numpy/extern
      copying root_numpy/extern/ordereddict.py -> build/lib.linux-x86_64-3.7/root_numpy/extern
      copying root_numpy/extern/__init__.py -> build/lib.linux-x86_64-3.7/root_numpy/extern
      creating build/lib.linux-x86_64-3.7/root_numpy/tmva
      copying root_numpy/tmva/tests.py -> build/lib.linux-x86_64-3.7/root_numpy/tmva
      copying root_numpy/tmva/__init__.py -> build/lib.linux-x86_64-3.7/root_numpy/tmva
      copying root_numpy/tmva/_evaluate.py -> build/lib.linux-x86_64-3.7/root_numpy/tmva
      copying root_numpy/tmva/_data.py -> build/lib.linux-x86_64-3.7/root_numpy/tmva
      copying root_numpy/testdata/string.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/vary1.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/struct.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/object2.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/fixed2.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/single2.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/directories.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/ntuple.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/test.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/trees.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/vary2.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/fixed1.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/object1.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/vector.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/testdata/single1.root -> build/lib.linux-x86_64-3.7/root_numpy/testdata
      copying root_numpy/config.json -> build/lib.linux-x86_64-3.7/root_numpy
      running build_ext
      building 'root_numpy._librootnumpy' extension
      creating build/temp.linux-x86_64-3.7
      creating build/temp.linux-x86_64-3.7/root_numpy
      creating build/temp.linux-x86_64-3.7/root_numpy/src
      /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/bin/x86_64-conda-linux-gnu-cc -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -Wstrict-prototypes -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -pipe -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -pipe -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include -fPIC -Iroot_numpy/src -I/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include/python3.7m -I/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib/python3.7/site-packages/numpy/core/include -c root_numpy/src/_librootnumpy.cpp -o build/temp.linux-x86_64-3.7/root_numpy/src/_librootnumpy.o -pthread -std=c++17 -m64 -I/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include -Wno-unused-function -Wno-write-strings
      cc1plus: warning: command-line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
      In file included from /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib/python3.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1969,
                       from /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib/python3.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:12,
                       from /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib/python3.7/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                       from root_numpy/src/_librootnumpy.cpp:581:
      /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib/python3.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
         17 | #warning "Using deprecated NumPy API, disable it with " \
            |  ^~~~~~~
      /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/bin/x86_64-conda-linux-gnu-c++ -pthread -shared -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,-rpath,/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib -L/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,-rpath,/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib -L/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,--disable-new-dtags -Wl,--gc-sections -Wl,--allow-shlib-undefined -Wl,-rpath,/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib -Wl,-rpath-link,/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib -L/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include build/temp.linux-x86_64-3.7/root_numpy/src/_librootnumpy.o -o build/lib.linux-x86_64-3.7/root_numpy/_librootnumpy.cpython-37m-x86_64-linux-gnu.so -L/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib -lCore -lImt -lRIO -lNet -lHist -lGraf -lGraf3d -lGpad -lROOTVecOps -lTree -lTreePlayer -lRint -lPostscript -lMatrix -lPhysics -lMathCore -lThread -lMultiProc -lROOTDataFrame -Wl,-rpath,/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib -pthread -lm -ldl -rdynamic -lTreePlayer
      building 'root_numpy.tmva._libtmvanumpy' extension
      creating build/temp.linux-x86_64-3.7/root_numpy/tmva
      creating build/temp.linux-x86_64-3.7/root_numpy/tmva/src
      /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/bin/x86_64-conda-linux-gnu-cc -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -Wstrict-prototypes -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -pipe -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -pipe -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include -fPIC -DNEW_TMVA_API -Iroot_numpy/src -Iroot_numpy/tmva/src -I/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include/python3.7m -I/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib/python3.7/site-packages/numpy/core/include -c root_numpy/tmva/src/_libtmvanumpy.cpp -o build/temp.linux-x86_64-3.7/root_numpy/tmva/src/_libtmvanumpy.o -pthread -std=c++17 -m64 -I/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include -Wno-unused-function -Wno-write-strings
      cc1plus: warning: command-line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
      In file included from /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib/python3.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1969,
                       from /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib/python3.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:12,
                       from /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib/python3.7/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                       from root_numpy/tmva/src/_libtmvanumpy.cpp:581:
      /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/lib/python3.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
         17 | #warning "Using deprecated NumPy API, disable it with " \
            |  ^~~~~~~
      root_numpy/tmva/src/evaluate.pyx: In function 'PyObject* __pyx_f_13_libtmvanumpy_evaluate_twoclass(TMVA::MethodBase*, PyArrayObject*, double)':
      root_numpy/tmva/src/evaluate.pyx:59:20: error: 'const TMVA::Event* TMVA::MethodBase::fTmpEvent' is protected within this context
         59 |     _method.fTmpEvent = event
            |                    ^~~~~~~~~
      In file included from root_numpy/tmva/src/_libtmvanumpy.cpp:598:
      /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include/TMVA/MethodBase.h:445:28: note: declared protected here
        445 |       mutable const Event *fTmpEvent; //! temporary event when testing on a different DataSet than the own one
            |                            ^~~~~~~~~
      root_numpy/tmva/src/evaluate.pyx:69:20: error: 'const TMVA::Event* TMVA::MethodBase::fTmpEvent' is protected within this context
         69 |     _method.fTmpEvent = NULL
            |                    ^~~~~~~~~
      In file included from root_numpy/tmva/src/_libtmvanumpy.cpp:598:
      /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include/TMVA/MethodBase.h:445:28: note: declared protected here
        445 |       mutable const Event *fTmpEvent; //! temporary event when testing on a different DataSet than the own one
            |                            ^~~~~~~~~
      root_numpy/tmva/src/evaluate.pyx: In function 'PyObject* __pyx_f_13_libtmvanumpy_evaluate_multiclass(TMVA::MethodBase*, PyArrayObject*, unsigned int)':
      root_numpy/tmva/src/evaluate.pyx:82:20: error: 'const TMVA::Event* TMVA::MethodBase::fTmpEvent' is protected within this context
         82 |     _method.fTmpEvent = event
            |                    ^~~~~~~~~
      In file included from root_numpy/tmva/src/_libtmvanumpy.cpp:598:
      /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include/TMVA/MethodBase.h:445:28: note: declared protected here
        445 |       mutable const Event *fTmpEvent; //! temporary event when testing on a different DataSet than the own one
            |                            ^~~~~~~~~
      root_numpy/tmva/src/evaluate.pyx:88:20: error: 'const TMVA::Event* TMVA::MethodBase::fTmpEvent' is protected within this context
         88 |     _method.fTmpEvent = NULL
            |                    ^~~~~~~~~
      In file included from root_numpy/tmva/src/_libtmvanumpy.cpp:598:
      /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include/TMVA/MethodBase.h:445:28: note: declared protected here
        445 |       mutable const Event *fTmpEvent; //! temporary event when testing on a different DataSet than the own one
            |                            ^~~~~~~~~
      root_numpy/tmva/src/evaluate.pyx: In function 'PyObject* __pyx_f_13_libtmvanumpy_evaluate_regression(TMVA::MethodBase*, PyArrayObject*, unsigned int)':
      root_numpy/tmva/src/evaluate.pyx:101:20: error: 'const TMVA::Event* TMVA::MethodBase::fTmpEvent' is protected within this context
        101 |     _method.fTmpEvent = event
            |                    ^~~~~~~~~
      In file included from root_numpy/tmva/src/_libtmvanumpy.cpp:598:
      /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include/TMVA/MethodBase.h:445:28: note: declared protected here
        445 |       mutable const Event *fTmpEvent; //! temporary event when testing on a different DataSet than the own one
            |                            ^~~~~~~~~
      root_numpy/tmva/src/evaluate.pyx:107:20: error: 'const TMVA::Event* TMVA::MethodBase::fTmpEvent' is protected within this context
        107 |     _method.fTmpEvent = NULL
            |                    ^~~~~~~~~
      In file included from root_numpy/tmva/src/_libtmvanumpy.cpp:598:
      /data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/include/TMVA/MethodBase.h:445:28: note: declared protected here
        445 |       mutable const Event *fTmpEvent; //! temporary event when testing on a different DataSet than the own one
            |                            ^~~~~~~~~
      error: command '/data/disk01/home/sara/hawc_software/miniconda3/envs/new_hal/bin/x86_64-conda-linux-gnu-cc' failed with exit status 1
      [end of output]
    note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> root_numpy
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.

I then tried to use uproot, but the 3ML code just doesn't recognize it.

Automated tests fail on macOS due to issues with ROOT

A bunch of the Travis CI tests fail on macOS, ultimately due to problems with PyROOT/ROOT.

Tests work fine on Linux now, as well as locally on my laptop. Any ideas? @grburgess @giacomov

>   import libPyROOT as _root
E   ImportError: dlopen(/Users/travis/miniconda/envs/test-environment/lib/python2.7/site-packages/libPyROOT.so, 2): Library not loaded: @rpath/libssl.1.0.0.dylib
E     Referenced from: /Users/travis/miniconda/envs/test-environment/lib/root/libNet.5.so
E     Reason: image not found
../../../miniconda/envs/test-environment/lib/python2.7/site-packages/ROOT.py:103: ImportError

Using asymmetric PSF in HAL

Feature Request

We need to read asymmetric PSF (asymmetric double gaussian function) from detector response file and use it to convolve point and extended sources.
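As a minimal sketch of the functional form (parameter names are illustrative, not HAL's API), an asymmetric double Gaussian can be written as a sum of two elliptical 2D Gaussians with independent widths along the two axes:

import numpy as np

def asym_double_gaussian(x, y, frac, sx1, sy1, sx2, sy2):
    # sum of two elliptical Gaussians; frac is the weight of the narrow component
    g1 = np.exp(-0.5 * ((x / sx1) ** 2 + (y / sy1) ** 2)) / (2 * np.pi * sx1 * sy1)
    g2 = np.exp(-0.5 * ((x / sx2) ** 2 + (y / sy2) ** 2)) / (2 * np.pi * sx2 * sy2)
    return frac * g1 + (1.0 - frac) * g2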

Writing model/residual map when using HealpixMapROI

I'm trying to save the residual and model maps of my analysis. I wasn't able to, due to the following error:

tables.exceptions.HDF5ExtError: HDF5 error back trace

  File "H5A.c", line 259, in H5Acreate2
    unable to create attribute
  File "H5Aint.c", line 280, in H5A_create
    unable to create attribute in object header
  File "H5Oattribute.c", line 347, in H5O_attr_create
    unable to create new attribute in header
  File "H5Omessage.c", line 224, in H5O_msg_append_real
    unable to create new message
  File "H5Omessage.c", line 1945, in H5O_msg_alloc
    unable to allocate space for message
  File "H5Oalloc.c", line 1142, in H5O_alloc
    object header message is too large

End of HDF5 error back trace

Can't set attribute 'metadata' in node:
 /ROI (Group) ''.

It looks like the header of the ROI object is too large.

I did a quick test, using HealpixConeROI, and my program was able to run fine.

I printed the ROI information. When using the HealpixMapROI:

{'roifile': '../../FitsFiles/data_roi_aquila.fits', 'model_radius_deg': 19.0, 'ra': 267.63, 'roimap': array([0., 0., 0., ..., 0., 0., 0.]), 'threshold': 0.5, 'dec': -3.72, 'ROI type': 'HealpixMapROI'}

and when using the HealpixConeROI:

{'model_radius_deg': 19.0, 'dec': -3.72, 'ra': 267.63, 'ROI type': 'HealpixConeROI', 'data_radius_deg': 10.0}

It looks like the serializer is trying to save the roimap array in the header, which might be the cause of the problem.
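If that is the cause, a possible workaround (a sketch, not the actual HAL fix) is to write the large roimap array as its own dataset instead of packing it into the node's attribute metadata, since HDF5 limits the size of an object header:

import pandas as pd

roi_dict = roi.to_dict()  # as printed above; contains the large 'roimap' array

with pd.HDFStore("my_maptree.hd5") as store:
    # store the mask as a regular dataset...
    store.put("/ROI/roimap", pd.Series(roi_dict.pop("roimap")))
    # ...and keep only the small scalars as attribute metadata
    store.put("/ROI", pd.Series(dtype=float))
    store.get_storer("/ROI").attrs.metadata = roi_dict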

Pull request 43 wasn't implemented

Feature Request

The pull request #43 wasn't implemented and became stale. As per its description,

the branch allows for subsets of data to be properly analyzed which may only contain (for example) 300 days of data taken over 1500 days. This allows the user to set the HAL plugin to compute the flux over the full 1500 days rather than the 300 days of data contained in the map itself.

High HDF5 memory usage

Reading maptrees in .hd5 format with HAL consumes large amounts of memory (of order 20 GB), even with fairly small ROIs (~4 degree data radius, 9 degree model radius).

Save InvalidPSF to HDF5 format

We can't convert detector response files of the energy estimators to HDF5 because they contain InvalidPSF instances.
One way to solve it would be to allow writing of the InvalidPSF class. Any opinion @giacomov ?

Speed up Issue with HAL

Feature Request

During the HAWC collaboration meeting in Puerto Vallarta, it was noted that the 3ML analysis was much slower than the corresponding gammapy analysis. Quentin looked into possible bottlenecks in the HAL code. Here is a summary (verbatim from him):

  • The PSF for extended sources is taken at the center of the ROI and not at the position of each source, which could be a problem for large ROIs:
    https://github.com/search?q=repo%3AthreeML/hawc_hal%20point_source_image&type=code
    def get_source_map(self, response_bin_id, tag=None, integrate=False, psf_integration_method='fast'):
  • The PSF convolutor for extended sources uses the default psf_integration_method='exact', while for point sources it is set to 'fast':
    central_response_bins = self._response.get_response_dec_bin(self._roi.ra_dec_center[1])

    hawc_hal/hawc_hal/HAL.py, lines 215 to 217 in ce74038:

        self._psf_convolutors[bin_id] = PSFConvolutor(
            central_response_bins[bin_id].psf, self._flat_sky_projection
        )

    interpolator = PSFInterpolator(psf_wrapper, flat_sky_proj)
    psf_stamp = interpolator.point_source_image(flat_sky_proj.ra_center, flat_sky_proj.dec_center)
  • For extended sources the psf convolution is not cached if the source parameters do not change

    hawc_hal/hawc_hal/HAL.py, lines 999 to 1030 in ce74038:

        for ext_id in range(n_ext_sources):
            this_conv_src = self._convolved_ext_sources[ext_id]
            expectation_per_transit = this_conv_src.get_source_map(energy_bin_id)
            if this_ext_model_map is None:
                # First addition
                this_ext_model_map = expectation_per_transit
            else:
                this_ext_model_map += expectation_per_transit

        # Now convolve with the PSF
        if this_model_map is None:
            # Only extended sources
            this_model_map = (
                self._psf_convolutors[energy_bin_id].extended_source_image(this_ext_model_map)
                * data_analysis_bin.n_transits
            )
        else:
            this_model_map += (
                self._psf_convolutors[energy_bin_id].extended_source_image(this_ext_model_map)
                * data_analysis_bin.n_transits
            )
  • The following should be moved to a function that returns npred for each source, so it can be cached if the source's parameters do not change; it seems that only the PSF interpolation is cached

    hawc_hal/hawc_hal/HAL.py, lines 974 to 992 in ce74038:

        this_conv_src = self._convolved_point_sources[pts_id]
        expectation_per_transit = this_conv_src.get_source_map(
            energy_bin_id,
            tag=None,
            psf_integration_method=self._psf_integration_method,
        )
        expectation_from_this_source = expectation_per_transit * data_analysis_bin.n_transits
        if this_model_map is None:
            # First addition
            this_model_map = expectation_from_this_source
        else:
            this_model_map += expectation_from_this_source
  • many functions use this to compute npred, but there is no caching (see the caching sketch after this list)
    def _get_expectation(self, data_analysis_bin, energy_bin_id, n_point_sources, n_ext_sources):
  • the likelihood could be evaluated on the flat geometry without the need to reproject to HEALPix, which could help if reprojection is slow

    hawc_hal/hawc_hal/HAL.py, lines 1032 to 1045 in ce74038:

        # Now transform from the flat sky projection to HEALPiX
        if this_model_map is not None:
            # First divide for the pixel area because we need to interpolate brightness
            # this_model_map = old_div(this_model_map, self._flat_sky_projection.project_plane_pixel_area)
            this_model_map = this_model_map / self._flat_sky_projection.project_plane_pixel_area
            this_model_map_hpx = self._flat_sky_to_healpix_transform[energy_bin_id](
                this_model_map, fill_value=0.0
            )
            # Now multiply by the pixel area of the new map to go back to flux
            this_model_map_hpx *= hp.nside2pixarea(data_analysis_bin.nside, degrees=True)
  • this loop could be parallelized
    for bin_id in self._active_planes:
  • this loop could be parallelized
    for pts_id in range(n_point_sources):
  • not sure I understand how the parameter-change property is used, but it could differentiate spatial and spectral parameters: if only spectral parameters change, it is not necessary to repeat the PSF convolution; one can rescale the cached value with a different weight in each energy bin.
    def _parameter_change_callback(self, this_parameter):
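As a minimal illustration of the caching idea above (names are hypothetical, this is not HAL's actual code): key each source's expected map on the current values of its free parameters and recompute only when they change.

_npred_cache = {}

def cached_source_map(conv_src, source, energy_bin_id):
    # cache key: source identity, energy bin, and the current free-parameter values
    key = (id(source), energy_bin_id,
           tuple(p.value for p in source.parameters.values() if p.free))
    if key not in _npred_cache:
        _npred_cache[key] = conv_src.get_source_map(energy_bin_id)
    return _npred_cache[key]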

Crashing when computing errors

Describe the bug

I am attempting to perform a 3ML likelihood analysis using the updated Python 3 version of hawc_hal, on the python3_quick branch. I am using the newest iteration of the neural-network energy-reconstructed data map (maptree and response file linked below) and I'm encountering an odd error. After the fit initially converges, I get this error:

Traceback (most recent call last):
  File "./fitModel.py", line 584, in <module>
    errAll=jl.get_errors()
  File "/data/disk01/home/igherzog/hawc_software/miniconda3/envs/new_hal/lib/python3.7/site-packages/threeML/classicMLE/joint_likelihood.py", line 430, in get_errors
    errors = self._minimizer.get_errors()
  File "/data/disk01/home/igherzog/hawc_software/miniconda3/envs/new_hal/lib/python3.7/site-packages/threeML/minimizer/minuit_minimizer.py", line 268, in get_errors
    "MIGRAD results not valid, cannot compute errors."
threeML.minimizer.minimization.CannotComputeErrors: MIGRAD results not valid, cannot compute errors.

It throws this error when performing the error calculation in the fitModel.py script (see the Reproduce section below). I do not get this error if I use an older iteration of the neural network energy estimator or if I use a simpler model. I am not able to attach my model file as a yml file (here it is as a .txt file:
NN_DBE_final_model.txt).
I can fit the Crab (single source, log-parabola) and, moving to my region, each of my sources individually; when I fit them together, however, I get the above error.

To Reproduce
To get the error shown above I run this script on the computing cluster at UMD that HAWC uses. I'm using my hawc_hal environment (path:/data/disk01/home/igherzog/hawc_software/miniconda3/envs/new_hal) and am using a fresh install of the environment.

python /data/disk01/home/igherzog/Grad_stuff/J2031_415/threeml-analysis-scripts/fitModel/fitModel.py --use-bins B2C0Ea B2C0Eb B2C0Ec B2C0Ed B2C0Ee B3C0Eb B3C0Ec B3C0Ed B3C0Ee B3C0Ef B4C0Ec B4C0Ed B4C0Ee B4C0Ef B4C0Eg B5C0Ec B5C0Ed B5C0Ee B5C0Ef B5C0Eg B5C0Eh B6C0Ed B6C0Ee B6C0Ef B6C0Eg B6C0Eh B6C0Ei B7C0Ee B7C0Ef B7C0Eg B7C0Eh B7C0Ei B8C0Eg B8C0Eh B8C0Ei B8C0Ej B9C0Eg B9C0Eh B9C0Ei B9C0Ej B9C0Ek B10C0Eh B10C0Ei B10C0Ej B10C0Ek B10C0El --ROI-radius 5 -o . --map-tree /lustre/hawcz01/scratch/userspace/dezhih/service/nn_estimator/nn_sky_map/skymap/pass5_nn_version4_chunk1031_chunk1143/combined/combined-chunk103-chunk1142/rejiggered-chunk103-chunk1142/Pass5-Version4-NN-maptree-ch103-ch1143.root --model /data/disk01/home/igherzog/Grad_stuff/J2031_415/threeml-analysis-scripts/fitModel/Models/NHIT_systematic_studies/final_models/NN_DBE_final_model.yml --ROI-template /data/disk01/home/igherzog/Grad_stuff/J2031_415/threeml-analysis-scripts/fitModel/fits_files/2_degree_mask.fits --ROI-center 306.456 40.731 --like --Name NN-trial --det-res /lustre/hawcz01/scratch/userspace/dezhih/service/nn_estimator/nn_sky_map/skymap/pass5_nn_version4/combined/rejiggered/maptree/detRes-NN-allDecs-aligned.root --SaveModel --estimator P5_NN_2D --hawcPlugin hal

Expected behavior
I expected the fit to converge normally (see "good" log file attached below) and finish without crashing. It displays two sets of resultant fit parameters separated by the error calculation step my analysis is crashing on.

Log files
successful-fit-result.txt
failed-fit-result.txt

Desktop (please complete the following information):
CentOS 7 on the UMD cluster that HAWC uses for part of its analysis.

Additional context
Please let me know if this is the wrong place for this, HAWC also has a gitlab repository but I think this is an issue with 3ML and not the fitting script used.

installation problem

conda 4.9.2
Mac: 11.2.3 Big Sur
Xcode: 12.3

1) Starting with conda list (see list1.txt).
2) After executing

conda create --name new_hal -c conda-forge -c threeml numpy scipy matplotlib ipython numba reproject "astromodels>=2" "threeml>=2" root

(packages in list2.txt):

  1. scipy has a conflict with another package or packages: "ValueError: source code string cannot contain null bytes"

  2. Also, after a full installation, importing HAL fails due to the scipy problem.

list1.txt
list2.txt

Cheers
Amid

installation problems about python3 environment

Describe the bug

I can't install the environment on the cluster with the recommended command line conda create --name new_hal -c conda-forge -c threeml numpy scipy matplotlib ipython numba reproject "astromodels>=2" "threeml>=2" root; it stays forever at the "solving environment" step. So I used the commands shown below to do the install:

conda create --name new_hal
conda install -c conda-forge numpy scipy matplotlib
conda install -c conda-forge ipython numba reproject
conda install -c conda-forge root
pip install git+https://github.com/threeml/astromodels.git
pip install git+https://github.com/threeml/threeML.git
pip install git+https://github.com/threeml/hawc_hal.git
pip install --no-binary :all: root_numpy

It seems to work, but then I ran into some problems while running the crab example on the python3_quick branch (file path: threeml-analysis-scripts/fitModel/crab_example.sh): it crashes. The same crash also happens with the test example provided by https://github.com/threeML/hawc_hal

I'll put the log files below to help with the troubleshooting procedure.

Log files

install_new_hal.log
pytest_hawc_hal.log
pytest3ML.log
pytestAstromodels.log
testMr421.log
testCrab.log

Misleading error message

On line 254 in HAL.py, the error message

raise ValueError("Bin %s it not contained in this response" % this_bin)

leads the user to believe the problem lies in the response file, when in fact the problem is that the maptree does not contain the bin that is being used. I can fix this next week if no one gets to it before then, possibly at the workshop.
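A possible rewording (just a sketch of the suggested fix):

raise ValueError(
    "Bin %s is not present in the maptree; active bins must exist "
    "in both the maptree and the response" % this_bin
)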

Plotting residuals (seemingly) incompatible with non-int bins

Describe the bug
When using hawc_hal with the 2D bins, if one wants to plot the binned count residuals, one gets an error:

  File "minimal_crab.py", line 71, in <module>
    fig = hawc.display_spectrum()
  File "/data/disk01/home/cbrisboi/hawc_software/miniconda3/envs/hal_prod/lib/python3.7/site-packages/hawc_hal/HAL.py", line 410, in display_spectrum
    return self._plot_spectrum(net_counts, yerr, model_only, residuals, residuals_err)
  File "/data/disk01/home/cbrisboi/hawc_software/miniconda3/envs/hal_prod/lib/python3.7/site-packages/hawc_hal/HAL.py", line 415, in _plot_spectrum
    planes = np.array(self._active_planes, dtype=int)
ValueError: invalid literal for int() with base 10: '1c'

To Reproduce

Here is a minimal example produced from the Markarian example in the README.

from hawc_hal import HAL, HealpixConeROI
import matplotlib.pyplot as plt
from threeML import *

# Define the ROI                                                                                                                             
ra_crab, dec_crab = 83.63, 22.01
data_radius = 3.0
model_radius = 8.0

roi = HealpixConeROI(data_radius=data_radius,
                     model_radius=model_radius,
                     ra=ra_crab,
                     dec=dec_crab)

# Instance the plugin

response = "/data/scratch/userspace/kmalone/sweets-for-systematics/stages/bobs-ednas-thresh0p2/detRes-GP-allDecs.root"
maptree = "/data/scratch/userspace/kmalone/datasets/energy/maps/combined/dataset-ch103-ch603/rejiggered-files/maptree-ch103-ch603.root"

hawc = HAL("HAWC",
           maptree,
           response,
           roi)

# Use the 2D energy-estimator bins
hawc.set_active_measurements(bin_list=['1c', '1d', '1e', '2c', '2d', '2e', '3c', '3d', '3e', '3f', \
                                      '4c', '4d', '4e', '4f', '4g', '5d', '5e', '5f', '5g', '5h', \
                                      '6d', '6e', '6f', '6g', '6h', '7f', '7g', '7h', '7i', '8f', \
                                      '8g', '8h', '8i', '8j', '9g', '9h', '9i', '9j', '9k', '9l'])

# Define model as usual                                                                                                                      
spectrum = Log_parabola()
source = PointSource("crab", ra=ra_crab, dec=dec_crab, spectral_shape=spectrum)

spectrum.piv = 1 * u.TeV
spectrum.piv.fix = True

spectrum.K = 1e-14 / (u.TeV * u.cm ** 2 * u.s)  # norm (in 1/(keV cm2 s))                                                                    
spectrum.K.bounds = (1e-25, 1e-19)  # without units energies are in keV                                                                      

spectrum.beta = 0  # log parabolic beta                                                                                                      
spectrum.beta.bounds = (-4., 2.)

spectrum.alpha = -2.5  # log parabolic alpha (index)                                                                                         
spectrum.alpha.bounds = (-4., 2.)

model = Model(source)

data = DataList(hawc)

jl = JointLikelihood(model, data, verbose=False)
jl.set_minimizer("minuit")
param_df, like_df = jl.fit()

# See the model in counts space and the residuals                                                                                            
fig = hawc.display_spectrum()
# Save it to file                                                                                                                            
fig.savefig("hal_crab_residuals.png")

Expected behavior
I expect a plot to be output, and saved.

Log files

[WARNING ] The GSL library or the pygsl wrapper cannot be loaded. Models that depend on it will not be available.
[WARNING ] The ebltable package is not available. Models that depend on it will not be available
[INFO    ] Starting 3ML!

WARNING RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility. Expected 16 from C header, got 88 from PyObject

[WARNING ] The cthreeML package is not installed. You will not be able to use plugins which require the C/C++ interface (currently HAWC)
[WARNING ] Could not import plugin HAWCLike.py. Do you have the relative instrument software installed and configured?
[WARNING ] Could not import plugin FermiLATLike.py. Do you have the relative instrument software installed and configured?

WARNING DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. 
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

[INFO    ] Creating singleton for /data/scratch/userspace/kmalone/sweets-for-systematics/stages/bobs-ednas-thresh0p2/detRes-GP-allDecs.root

WARNING DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. 
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

[WARNING ] We have set the min_value of crab.spectrum.main.Log_parabola.K to 1e-99 because there was a postive transform
[INFO    ] set the minimizer to minuit
[INFO    ] set the minimizer to MINUIT
[WARNING ] The current value of the parameter K (1.0) was above the new maximum 1e-19.
Best fit values:

                                                         result  \
parameter                                                         
crab.spectrum.main.Log_parabola.K      (3.76 +/- 0.07) x 10^-20   
crab.spectrum.main.Log_parabola.alpha          -2.363 +/- 0.028   
crab.spectrum.main.Log_parabola.beta    (1.08 +/- 0.09) x 10^-1   

                                                  unit  
parameter                                               
crab.spectrum.main.Log_parabola.K      1 / (cm2 keV s)  
crab.spectrum.main.Log_parabola.alpha                   
crab.spectrum.main.Log_parabola.beta                    

Correlation matrix:

 col0  col1  col2
----- ----- -----
 1.00 -0.72 -0.48
-0.72  1.00  0.92
-0.48  0.92  1.00

Values of -log(likelihood) at the minimum:

       -log(likelihood)
HAWC       66270.311832
total      66270.311832

Values of statistical measures:

     statistical measures
AIC         132546.623714
BIC         132579.890109
Traceback (most recent call last):
  File "minimal_crab.py", line 67, in <module>
    fig = hawc.display_spectrum()
  File "/data/disk01/home/cbrisboi/hawc_software/miniconda3/envs/hal_prod/lib/python3.7/site-packages/hawc_hal/HAL.py", line 410, in display_spectrum
    return self._plot_spectrum(net_counts, yerr, model_only, residuals, residuals_err)
  File "/data/disk01/home/cbrisboi/hawc_software/miniconda3/envs/hal_prod/lib/python3.7/site-packages/hawc_hal/HAL.py", line 415, in _plot_spectrum
    planes = np.array(self._active_planes, dtype=int)
ValueError: invalid literal for int() with base 10: '1c'

Desktop (please complete the following information):
(not a desktop, but its the machine I'm running on)

  • OS: CentOS 7
  • Version 7.6.1810

Additional context
I should have some time to investigate this next week.
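A plausible fix (a sketch, assuming the plane indices are only used for the x-axis of the plot): enumerate the active planes positionally in _plot_spectrum instead of casting their names to int.

# instead of: planes = np.array(self._active_planes, dtype=int)
planes = np.arange(len(self._active_planes))  # works for '1c', '2d', ... as well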

Old maptrees unable to open

Previously, we used integers rather than strings to refer to the bins, and they were labeled 'id' rather than 'name' as we have now. I can manually overcome this label switch; however, due to the type difference, a fascinating error message occurs.

This happened while running the example to extract an ROI 3 degrees around the Crab, from the example already in hal (but for Mrk 421).

Traceback (most recent call last):
  File "make_crab_set.py", line 18, in <module>
    m.write("crab_maptree.hd5")
  File "/Users/chad/hawc/chad_hal/hawc_hal/maptree/map_tree.py", line 171, in write
    'Bin name inconsistency: {} != {}'.format(bin_id, analysis_bin.name)
AssertionError: Bin name inconsistency: 0 != 0

unable to fit templates

macOS catalina
conda: 4.9.2
Xcode: 12.0.1

(The same problem happens on UMD)

pytest --pyargs threeML: 148 passed, 18 skipped, 2 xfailed, 4 xpassed, 2096 warnings in 986.32s (0:16:26)
pytest --pyargs astromodels: 104 passed, 1 skipped, 107 warnings in 40.51s
test HAL: 1 passed, 363 warnings in 708.12s (0:11:48)

Problem:
While the code works properly to fit point sources and extended sources, it is unable to fit a template using "SpatialTemplate_2D()".
It is able neither to fit nor to subtract a previously fitted template.
(I tried ROOT and minuit.)

Amid
temp_fit.txt

ROOT file interface

See #43 and #44 : Lots of errors like

tests/test_root_to_hdf.py:72: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/test_root_to_hdf.py:50: in do_one_test_maptree
    m = map_tree_factory(geminga_maptree, roi_)
hawc_hal/maptree/map_tree.py:27: in map_tree_factory
    return MapTree.from_root_file(map_tree_file, roi, n_transits)
hawc_hal/maptree/map_tree.py:58: in from_root_file
    data_analysis_bins = from_root_file(map_tree_file, roi, n_transits)
hawc_hal/maptree/from_root_file.py:73: in from_root_file
    with open_ROOT_file(map_tree_file) as f:
../../../miniconda/envs/test-environment/lib/python3.7/contextlib.py:112: in __enter__
    return next(self.gen)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
filename = PosixPath('/home/travis/build/threeML/hawc_hal/tests/data/geminga_maptree.root')
    @contextlib.contextmanager
    def open_ROOT_file(filename):
        """
        Open a ROOT file in a context. Will close it no matter what, even if there are exceptions
    
        :param filename:
        :return:
        """
    
>       f = ROOT.TFile(filename)
E       TypeError: none of the 2 overloaded methods succeeded. Full details:
E         TFile::TFile() =>
E           TypeError: takes at most 0 arguments (1 given)
E         TFile::TFile(const char* fname, const char* option = "", const char* ftitle = "", int compress = ROOT::RCompressionSetting::EDefaults::kUseCompiledDefault) =>
E           TypeError: could not convert argument 1 (bad argument type for built-in operation)

../../../miniconda/envs/test-environment/lib/python3.7/site-packages/threeML/io/cern_root_utils/io_utils.py:28: TypeError
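The immediate cause is that a pathlib.PosixPath is passed where TFile expects a C string; converting the path first sidesteps the overload mismatch (a minimal sketch):

f = ROOT.TFile(str(filename))  # TFile wants a plain string, not a PosixPath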

pip fails to install hawc_hal due to pytables

I'm getting the following error installing hawc_hal with pip (this is with python 3.7.8, astromodels 2.0.0 and threeML 2.0.0 from conda):

ERROR: Could not find a version that satisfies the requirement pytables (from astromodels->hawc-hal==1.0) (from versions: none)
ERROR: No matching distribution found for pytables (from astromodels->hawc-hal==1.0)

I was able to work around this by using pip install --no-deps to install hawc_hal (and installing all dependencies manually).

However, the same error is now also causing the automated tests to fail.

Any idea where this is coming from? I do have pytables installed, but pip isn't picking up on it:

(st3ml) √ my3ML/hawc_hal $ pip list | grep pytables                                                                                                                    22:42:50
(st3ml) ?1 my3ML/hawc_hal $ conda list pytables                                                                                                                        22:42:55
# packages in environment at /Users/hfleisc1/opt/anaconda3/envs/st3ml:
#
# Name                    Version                   Build  Channel
pytables                  3.6.1            py37he95298d_3    conda-forge

Also, I don't find pytables mentioned anywhere in the hawc_hal dependencies.
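For what it's worth, the PyPI distribution of PyTables is named tables, while pytables is the conda package name, so a pip requirement spelled pytables can never resolve. A sketch of the corrected requirement in setup.py:

     install_requires=[
         "tables",  # PyPI name of PyTables; conda calls it "pytables"
     ],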

root-numpy fails to install

Describe the bug
I tried to install a new environment in my laptop with the following instructions:

mamba create --name new_hal -c conda-forge -c threeml numpy scipy matplotlib ipython numba reproject "astromodels>=2" "threeml>=2" root
conda activate new_hal
pip install --no-binary :all: root_numpy
pip install git+https://github.com/threeml/hawc_hal.git

Everything seems to work up to the point of installing root-numpy, where I obtain the following output.
root_numpy_installation_output.txt

Here are the details about my OS:

VERSION="35 (Workstation Edition)"
ID=fedora
VERSION_ID=35
VERSION_CODENAME=""
PLATFORM_ID="platform:f35"
PRETTY_NAME="Fedora Linux 35 (Workstation Edition)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:35"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f35/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=35
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=35
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Workstation Edition"
VARIANT_ID=workstation

Alternative to root_numpy?

root_numpy is no longer maintained. We should find an alternative package for loading ROOT files (or switch to hdf5 entirely).

From https://github.com/scikit-hep/root_numpy:

root_numpy has not been actively maintained in several years. This is mostly due to the emergence of new alternatives which are both faster and more flexible.

  • uproot provides support for reading and writing ROOT files without the need for an installation of ROOT. See here for details.
  • ROOT now natively supports converting objects into numpy arrays, either directly using TTree or with the newer RDataFrame
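
For illustration, a minimal uproot sketch of the ROOT-free reading path (file and tree names are hypothetical):

import uproot  # pure-Python ROOT I/O; no ROOT installation required

with uproot.open("maptree.root") as f:   # hypothetical file name
    tree = f["BinInfo"]                  # hypothetical tree name
    arrays = tree.arrays(library="np")   # dict of numpy arrays, keyed by branch name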

Test fit results

Unless I'm missing something, test_geminga_paper seems to be the only test that actually compares the best-fit parameters to expected values. The other tests just run through the analysis and pass as long as nothing crashes.
We should implement similar checks on the fit results for the existing tests (e.g. test_complete_analysis), and something similar for the TS values etc., as sketched below.
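
A hedged sketch of the kind of check meant here; the parameter paths and reference values are placeholders, not actual fit results:

import numpy as np

def check_fit_results(param_df, expected, rtol=0.05):
    # param_df: the parameter dataframe returned by JointLikelihood.fit()
    for name, ref in expected.items():
        fitted = param_df.loc[name, "value"]
        assert np.isclose(fitted, ref, rtol=rtol), \
            f"{name}: fitted {fitted}, expected {ref}"

# Placeholder reference values, for illustration only
expected = {
    "mkn421.spectrum.main.Powerlaw.K": 1.0e-11,
    "mkn421.spectrum.main.Powerlaw.index": -2.6,
}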

Trying hawc_hal with py3

I started a py3 environment to also move the HAWC software to python 3. I installed astromodels, threeml and hawc_hal.
I tried this on the UMD cluster (Linux 3.10.0-957.el7.x86_6).

When trying to run hawc_hal, the program aborts. There is no informative error message for a test analysis I try to run on the Crab:

terminate called without an active exception
Aborted (core dumped)

I cloned hawc_hal and installed it so that I could run tests. Running pytest:

Passed:

  • test_serialization.py
  • test_bilinear_interpolation.py
  • test_psf.py
  • test_copy.py
  • test_roi.py
  • test_on_point_source.py

Failed:

  • test_root_to_hdf.py
  • test_plugin_creation.py
  • test_complete_analysis.py
  • test_model_residual_maps.py
  • test_maptree.py
  • test_geminga_paper.py

Here's an example of one such output:

====================== test session starts ======================
platform linux -- Python 3.8.5, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /data/disk01/home/hayalaso/Software/hawc_software_py3/externals/hawc_hal
collected 2 items                                               

test_plugin_creation.py Fatal Python error: Aborted

Thread 0x00007fac8a11a700 (most recent call first):
  File "/data/disk01/home/hayalaso/Software/hawc_software_py3/externals/ape/External/root/6.16.00/lib/root/ROOT.py", line 484 in _processRootEvents
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/threading.py", line 870 in run
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/threading.py", line 932 in _bootstrap_inner
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/threading.py", line 890 in _bootstrap

Current thread 0x00007facd2e2a740 (most recent call first):
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/hawc_hal-1.0-py3.8.egg/hawc_hal/maptree/from_root_file.py", line 208 in _read_partial_tree
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/hawc_hal-1.0-py3.8.egg/hawc_hal/maptree/from_root_file.py", line 140 in from_root_file
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/hawc_hal-1.0-py3.8.egg/hawc_hal/maptree/map_tree.py", line 60 in from_root_file
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/hawc_hal-1.0-py3.8.egg/hawc_hal/maptree/map_tree.py", line 29 in map_tree_factory
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/hawc_hal-1.0-py3.8.egg/hawc_hal/HAL.py", line 62 in __init__
  File "/data/disk01/home/hayalaso/Software/hawc_software_py3/externals/hawc_hal/tests/test_plugin_creation.py", line 17 in test_plugin_from_root
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/python.py", line 183 in pytest_pyfunc_call
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/python.py", line 1641 in runtest
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/runner.py", line 162 in pytest_runtest_call
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/runner.py", line 255 in <lambda>
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/runner.py", line 311 in from_call
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/runner.py", line 254 in call_runtest_hook
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/runner.py", line 215 in call_and_report
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/runner.py", line 126 in runtestprotocol
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/runner.py", line 109 in pytest_runtest_protocol
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/main.py", line 348 in pytest_runtestloop
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/main.py", line 323 in _main
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/main.py", line 269 in wrap_session
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/main.py", line 316 in pytest_cmdline_main
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162 in main
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185 in console_main
  File "/data/disk01/home/hayalaso/Software/miniconda3/envs/aerie_py3/bin/pytest", line 8 in <module>
Aborted (core dumped)
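
Note that the aborting thread sits in ROOT's background event loop (_processRootEvents), a known source of shutdown crashes under legacy PyROOT. A hedged workaround sketch, assuming the configuration flags present in ROOT 6.16's PyROOT:

import ROOT

ROOT.PyConfig.StartGuiThread = False  # do not spawn the background event-processing thread
ROOT.gROOT.SetBatch(True)             # run headless, with no graphics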

multiprocessing problem in from_root_file.py

Hello, I encountered a problem when fitting some private HAWC data on my laptop. I have successfully installed threeML and hawc_hal. When I ran the fit using fitModel.py, it crashed at line 249 in from_root_file.py:

with multiprocessing.Pool(processes=n_workers) as executor:

The problem seems to come from the multiprocessing module, which apparently starts more than one process ("Starting 3ML!" occurred 3 times). If I run the same script on a HAWC server, it works smoothly.

My laptop information:
MacBook Pro
Apple M2 Max
macOS Ventura version 13.5
Xcode Version 15.0.1

miniconda https://repo.anaconda.com/miniconda/Miniconda3-py311_23.11.0-2-MacOS-x86_64.sh (I also tried the arm64 build; it has the same problem)

I also set these environment variables when installing threeML:
export OMP_NUM_THREADS=1
export MKL_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1

19:20:33 INFO Starting 3ML! __init__.py:35
(Some warning )
19:20:44 WARNING No fermitools installed lat_transient_builder.py:44
…………………………

Performing likelihood analysis...

Specifying ntransits currently does not work in hal, this is all @chad's fault.
Loading as MODEL file: new_source.model

……………………………………

Using analyis bins [……………………]
19:20:37 INFO Using transits contained in maptree HAL.py:79
INFO Reading Maptree! from_root_file.py:214
()
…………………………………………

19:20:42 INFO Starting 3ML! __init__.py:35
(Some warning information)……………………
19:20:44 WARNING No fermitools installed lat_transient_builder.py:44

Performing likelihood analysis...

Specifying ntransits currently does not work in hal, this is all @chad's fault.
Loading as MODEL file: new_source.model
(Some warning information)……………………
Using analyis bins [……………………]
19:20:46 INFO Using transits contained in maptree HAL.py:79
INFO Reading Maptree! from_root_file.py:214
Traceback (most recent call last):
File "", line 1, in
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/spawn.py", line 131, in _main
prepare(preparation_data)
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/spawn.py", line 246, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/spawn.py", line 297, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 291, in run_path
File "", line 98, in _run_module_code
File "", line 88, in _run_code
File "/Users/zwang/work/source_ana/new_source/fitModel.py", line 515, in
like = HAL("HAWC", map_tree, det_res, roi, options.pixelwidth) #,n_transits)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/site-packages/hawc_hal/HAL.py", line 88, in init
self._maptree = map_tree_factory(
^^^^^^^^^^^^^^^^^
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/site-packages/hawc_hal/maptree/map_tree.py", line 26, in map_tree_factory
return MapTree.from_root_file(map_tree_file, roi, n_transits, n_workers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/site-packages/hawc_hal/maptree/map_tree.py", line 64, in from_root_file
data_analysis_bins, transits = from_root_file(
^^^^^^^^^^^^^^^
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/site-packages/hawc_hal/maptree/from_root_file.py", line 249, in from_root_file
with multiprocessing.Pool(processes=n_workers) as executor:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/pool.py", line 215, in init
self._repopulate_pool()
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/pool.py", line 329, in _repopulate_pool_static
w.start()
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
^^^^^^^^^^^^^^^^^
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/context.py", line 288, in _Popen
return Popen(process_obj)
^^^^^^^^^^^^^^^^^^
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 32, in init
super().init(process_obj)
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/popen_fork.py", line 19, in init
self._launch(process_obj)
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/spawn.py", line 164, in get_preparation_data
_check_not_importing_main()
File "/Users/zwang/software/hawc_software_test/miniconda3/envs/hal_analysis/lib/python3.11/multiprocessing/spawn.py", line 140, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

    To fix this issue, refer to the "Safe importing of main module"
    section in https://docs.python.org/3/library/multiprocessing.html

19:20:52 INFO Starting 3ML! __init__.py:35
WARNING WARNINGs here are NOT errors __init__.py:36
WARNING but are inform you about optional packages that can be installed __init__.py:37
WARNING to disable these messages, turn off start_warning in your config file __init__.py:40
WARNING Multinest minimizer not available
……………………
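
The RuntimeError above is the standard macOS spawn-start-method failure: each worker re-imports the main script, so the analysis code must be guarded. A minimal sketch in the spirit of the reporter's fitModel.py (names follow the traceback; the data files are placeholders, as in the README example):

from hawc_hal import HAL

def main():
    map_tree = ...  # maptree file, ROOT or hdf5
    det_res = ...   # detector response file
    roi = ...       # HealpixConeROI as usual
    like = HAL("HAWC", map_tree, det_res, roi)
    # ... set active measurements, set the model, run the fit ...

if __name__ == "__main__":
    main()  # workers re-import this module; the guard keeps them from re-running the fit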

HAL get_simulated_dataset does not update name of BKG model

Description
When using get_simulated_dataset, the new HAL instance keeps the original instance's name for the background-normalization nuisance parameter.

MWE and error

For any maptree and detector response:

hawc = HAL("HAWCCR",maptree,detres,roi)
hawc.set_active_measurements(bin_list=analysis_bins)
hawc.set_model(this_likelihood_model)

simhawc = hawc.get_simullated_dataset('SimHAWC')
thisdata = DataList(simhawc)

njl = JointLikelihood(this_likelihood_model,thisdata,verbose=False)
par,stats = njl.fit(quiet=True)

ERROR This is a bug of the plugin for <class 'hawc_hal.HAL.HAL'>: nuisance parameters must contain the instance name    joint_likelihood.py:137

If I check the _nuisance_parameters dict, I get:

simhawc._nuisance_parameters

OrderedDict([('HAWCCR_bkg_renorm',
              Parameter HAWCCR_bkg_renorm = 1.0 []
              (min_value = 0.5, max_value = 1.5, delta = 0.01, free = False))])

Expected behavior

simhawc._nuisance_parameters

OrderedDict([('SimHAWC_bkg_renorm',
              Parameter SimHAWC_bkg_renorm = 1.0 []
              (min_value = 0.5, max_value = 1.5, delta = 0.01, free = False))])

Desktop (UMD-Fiesta)

  • OS: CentOS Linux
  • Version 7 Core
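
A hedged sketch of the fix implied by the report (the helper and attribute layout are illustrative; HAL's internals may differ):

def _rename_nuisance_parameters(clone, new_name):
    # Rebuild the dict keys so they carry the new instance name; the Parameter
    # objects themselves may also need renaming, depending on astromodels internals
    renamed = {}
    for old_key, par in clone._nuisance_parameters.items():
        suffix = old_key.split("_", 1)[1]   # e.g. "bkg_renorm"
        renamed[f"{new_name}_{suffix}"] = par
    clone._nuisance_parameters = renamed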
