
plp's Introduction


IGRINS Pipeline Package

The IGRINS PLP is designed to produce decent-quality processed data from all of the observed data with minimal (or no) human interaction. It was also designed to be adaptable for real-time processing during an observing run.

The key concept of this pipeline is having a "recipe" to process a certain group of data. This approach is used by many other observatories and instruments. In most cases, the recipe needs to be set and recorded in the header (or log) when the observation is executed. Unfortunately, this is not done properly. Therefore, to run the pipeline, you need an input file describing which recipe should be applied to which data sets.

The IGRINS pipeline package is currently in active development. Version 3 is the latest version and is recommended for reducing data from the McDonald 2.7 m, LDT/DCT, and Gemini South telescopes. Version 2 will still work for those who need it. Version 1, which was originally developed by Prof. Soojong Pak's team at Kyung Hee University (KHU), is deprecated and not recommended for use. Versions 2-3 were developed by the pipeline team at KASI (led by Dr. Jae-Joon Lee) in close collaboration with KHU's team. Additional development and testing for v3 has been carried out by Kyle Kaplan and Erica Sawczynec at the University of Texas at Austin Department of Astronomy.

Downloads

Documentation

The Raw & Reduced IGRINS Spectral Archive (RRISA)

IGRINS data are made publicly available through The Raw & Reduced IGRINS Spectral Archive (RRISA). The current version of RRISA (v1) uses the IGRINS PLP v2 for its data reductions. The raw data are also available through RRISA for those who want to perform their own data reduction with the IGRINS PLP.

Publications

The version 1 pipeline is described in the following publication.

plp's People

Contributors

leejjoon, kgullikson88, kfkaplan



plp's Issues

When reducing the standards and science targets, saving .png files causes an error

Original report from Kyle Kaplan.


When reducing the standards and science targets, saving .png files causes an error related to "fontsize=XXX" when defining a legend for a plot such as in
"p1.legend(loc='upper left',bbox_to_anchor=(1,1), fontsize=11)"
According to http://stackoverflow.com/questions/18920712/matplotlib-legend-fontsize:
Setting the font size via kwarg does not work because you are using an antiquated version of matplotlib. The error it is giving you,
TypeError: __init__() got an unexpected keyword argument 'fontsize'
means that fontsize is not a valid keyword argument of the __init__ function.
The solution is to remove "fontsize=11" from the three instances in PL_Display.py, so that...
p1.legend(loc='upper left',bbox_to_anchor=(1,1), fontsize=11)
is replaced with...
p1.legend(loc='upper left',bbox_to_anchor=(1,1))
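
A version-agnostic alternative is to pass the size through the prop keyword, which both old and new matplotlib releases accept; a minimal sketch (not taken from PL_Display.py):

    # 'prop' predates the 'fontsize' kwarg, so this should also work on antiquated matplotlib.
    p1.legend(loc='upper left', bbox_to_anchor=(1, 1), prop={'size': 11})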

PLP not compatible with python 3.12

I have found that the PLP is not compatible with Python 3.12. It should run with Python 3.11 or below, so anyone running into this issue should create a Python 3.11 environment for running the PLP.

This stems from the imp module, which is imported in igrins/igrins_recipes/__init__.py, being deprecated and removed in Python 3.12.

Some sort of solution needs to be implemented, like what is discussed in https://discuss.python.org/t/how-do-i-migrate-from-imp/27885/9
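
A minimal sketch of the kind of replacement discussed in that thread, assuming the PLP uses something like imp.load_source to load a module from a file path (the actual call in igrins/igrins_recipes/__init__.py may differ):

    import importlib.util
    import sys

    def load_source(name, path):
        """Drop-in replacement for imp.load_source(name, path) using importlib."""
        spec = importlib.util.spec_from_file_location(name, path)
        module = importlib.util.module_from_spec(spec)
        sys.modules[name] = module
        spec.loader.exec_module(module)
        return module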

Shift in dispersion direction due to point source target not centered on slit

This is not necessarily a problem to be solved in the PLP per se, but it is an issue we have come across that is worth noting for future reference. When a point-source target (i.e., a star) is not centered on the IGRINS slit, it can induce a shift in the dispersion direction on the detector akin to what we see with shifts from flexure. This shift is not related to flexure, but it creates a similar effect where telluric corrections then result in "P Cygni"-type residuals for deep telluric lines. The shift is normally subpixel. It can be seen in the 2D echellograms as a shift of the telluric absorption lines relative to the sky emission lines, which should not move (barring flexure). It mostly occurs in data from McDonald Observatory, where the slit is widest on the sky, when guiding on the target is poor due to bad seeing, wind, or the target being too dim to track on. The latest PLP version (currently the reimplement_cr_reject branch, but this will be part of a future version 3) outputs text files named telluric_shift_H.csv and telluric_shift_K.csv in the outdata directory for a night. These list the shift in pixels of telluric absorption features, from cross-correlation between individual exposures within a single set of A or B nods on a target.

There is no universal solution within the PLP to fix this; it should probably be addressed on a case-by-case basis as follows:

  • If the shift is from a single exposure in a target with many exposures, the single exposure could be excluded from the data reduction for that target.
  • Portions of the spectrum with bad residuals where there are deep telluric absorption features could be masked out.
  • A target with this shift in all its exposures could possibly be shifted in pixels to match its standard star (see the sketch below).
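
A minimal sketch of such a subpixel shift applied to a 1D spectrum with linear interpolation (the shift value would come from the telluric_shift_*.csv files; the function name here is hypothetical):

    import numpy as np

    def shift_spectrum(flux, shift_pixels):
        """Shift a 1D spectrum by a (sub)pixel amount via linear interpolation."""
        x = np.arange(len(flux))
        # Sampling at x - shift moves spectral features by +shift pixels.
        return np.interp(x - shift_pixels, x, flux, left=np.nan, right=np.nan)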

Add parameter to optionally gzip all output fits files

My preference is to keep all my fits files gzip'd on disk to save disk space.

I created Issue #13 (and pull request #14) to allow for raw fits files in the indata directory to be gzip'd.

It would be great to have a parameter to control the output of plp and optionally gzip all output fits files as well.

The straightforward part of this is adding code to libs/fits.py to write gzip'd FITS files. What I'm less certain of is where the parameter controlling gzip'd vs. not-gzip'd output should go:

  • Somehow into the recipe_log files?
  • As a command line flag? e.g.: python ./igr_pipe.py flat 20140316 --gzip-output

If someone tells me the preferred method for passing a parameter to plp, I'll implement it and submit a pull request.
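
For the writing side, a minimal sketch of what the libs/fits.py addition could look like, assuming astropy.io.fits is used for output (astropy compresses transparently when the filename ends in .gz):

    from astropy.io import fits

    def write_fits(hdulist, path, gzip_output=False, **kwargs):
        """Write an HDUList, appending '.gz' so astropy gzip-compresses the output."""
        if gzip_output and not path.endswith(".gz"):
            path = path + ".gz"
        hdulist.writeto(path, **kwargs)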

Thanks,
-Henry

better extraction of slit profile

Optimal spectral extraction uses a slit profile (the profile of a star along the slit direction). Currently, a single slit profile is used for all orders, which is a summation (then normalized) of the slit profiles extracted from the central part of each order (x pixel range between 800 and 1200).

This may need to be improved, as the current implementation does not account for instrumental flexure.

*) As the effect of flexure will differ between orders, the slit profile will be slightly different from order to order.

*) Also, the slit profile may change (shift) slightly along the wavelength direction (within a given order) due to flexure.
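
For reference, a minimal sketch of the kind of single-profile computation described above (the rectified per-order images, their orientation, and how each order's profile is extracted are assumptions):

    import numpy as np

    def make_slit_profile(order_images, x0=800, x1=1200):
        """Sum the slit profiles of all orders over x in [x0, x1) and normalize.

        order_images: list of 2D arrays (slit position x dispersion pixel),
        all rectified to the same number of rows along the slit.
        """
        profile = np.zeros(order_images[0].shape[0])
        for img in order_images:
            profile += np.nansum(img[:, x0:x1], axis=1)
        return profile / np.nansum(profile)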

The recipe creator code does not work

When I tried to follow the instructions to make a recipe file, I got the following error:

python igr_pipe.py prepare-recipe-logs 20140709
usage: igr_pipe.py [-h]

               {flat,stellar-ab,extended-ab,sky-wvlsol,extended-onoff,a0v-ab,plot-spec,publish-html,thar}
               ...
igr_pipe.py: error: invalid choice: 'prepare-recipe-logs' (choose from 'flat', 'stellar-ab', 'extended-ab', 'sky-wvlsol', 'extended-onoff', 'a0v-ab', 'plot-spec', 'publish-html', 'thar')

igr_pipe does not seem to recognize prepare-recipe-logs as a valid option.

qlook branch does not use correct standard

Hi @leejjoon this is something you might want to be aware of in your reductions!

In the old pipeline GROUP2 in the recipe file corresponds to the filenumber of the standard that you want to use for the reduction. In the new pipeline this filenumber is not always followed.

We were comparing spec_a0v.fits files from the old and new pipelines to look at improvements in the flexure correction, but noticed that sometimes there was a large change in flux between the two pipelines for seemingly random groupings of observations throughout the night. It turns out that in these cases the new pipeline is not listening to the GROUP2 filenumber and instead appears to be picking a standard filenumber based on proximity in the recipe log (maybe?).

Here is an example from 20161013 if you want to verify yourself:

HR  1237 V 6.3, STD, 276, 1, 30.000000, A0V_AB, 276 277 278 279 280 281 282 283, A B B A A B B A
HBC388, TAR, 284, 276, 60.000000, STELLAR_AB, 284 285 286 287 288 289, A B B A A B
IPTau, TAR, 290, 276, 60.000000, STELLAR_AB, 290 291 292 293 294 295, A B B A A B
V819Tau, TAR, 296, 276, 60.000000, STELLAR_AB, 296 297 298 299 300 301, A B B A A B
LkCa14, TAR, 302, 276, 60.000000, STELLAR_AB, 302 303 304 305 306 307, A B B A A B
V836Tau, TAR, 308, 276, 60.000000, STELLAR_AB, 308 309 310 311 312 313, A B B A A B
FTTau, TAR, 314, 276, 60.000000, STELLAR_AB, 314 315 316 317 318 319, A B B A A B
167-317, TAR, 320, 328, 180.000000, STELLAR_AB, 320 321 322 323 324 325 326 327, A B B A A B B A
HD  65158 V 7.2, STD, 328, 1, 30.000000, A0V_AB, 328 329 330 331, A B B A

All of the objects before filenumber 320 should be using the same standard (276), but I find that with the new pipeline filenumbers 302, 308, and 314 actually use standard 328. Here are some plots showing the standard comparisons:

Between standards used in 314 and 320--they should be different but are the same:
[figure]

Between the standards used in 314 and 296--they should be the same but are different:
[figure]

Difference in Continuum Shape of New and Old Pipeline

Here is an example of what the continuum looks like in the new pipeline for a .spec_a0v.fits file

[figure: bad_cont]

This is what the continuum looks like in the old pipeline
[figure: old_pipe]

We found that the new version of the pipeline is flattening the Vega spectrum (but we are unsure why this was implemented). Kyle just pushed a fix for this to the reimplement_cr_reject branch that removes the flattening.

This is what the continuum looks like for this branch of the pipeline with this fix
[figure: new_pipe]

Additionally, Kyle brought up the slight increase in the flux in the new pipeline (Issue #25; < 2%). We posited that, as long as the A0V and the target were reduced with the same pipeline, this flux difference wouldn't matter in the spec_a0v.fits files, since the ratio between the target and standard would stay the same. Just to note that things work out as we expected: here is a single order in the old pipeline (blue) and the new pipeline with the non-flattened Vega (orange), and we can see the flux doesn't have any consistent offset, which is what we had hoped for.
[figure: good_norm]

I'm basically raising this issue to document the behavior in the new pipeline and to ask @leejjoon whether he would object to turning the Vega flattening off in the next release.

Contents of the output files

There isn't much documentation on the output of the pipeline and I'm having a little trouble parsing all the files. There also seems to be a lot of the same data in duplicate locations. I've gathered that my telluric-corrected spectrum is in a file named something like SDCH_20161112_0026.spec_a0v.fits, which has five extensions: telluric-corrected spectrum, wavelength, target spectrum, A0V spectrum, and Vega spectrum. I'm left with two questions.

  1. How do I figure out which A0V spectrum was used for telluric correction for each target?

  2. Is there a quick way to get the variance of the telluric-corrected spectrum?

The 2nd and 3rd extensions of the .spec_a0v.fits file are the same as the 2nd and 1st extensions of the target's .spec.fits file, respectively. I've figured out that the telluric-corrected spectrum is almost exactly equal to target / (A0V / Vega). If I choose the right A0V, I can compute target / (A0V / Vega) myself and get nearly exactly what is in the .spec_a0v.fits file. So I could just propagate the variances myself, but that requires going through and confirming I am using the right A0V in each case. Knowing which A0V the pipeline used would make this easier and not a big deal.
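
A minimal sketch of that check, assuming the extension ordering listed above (the exact extension indices may differ between PLP versions):

    import numpy as np
    from astropy.io import fits

    with fits.open("SDCH_20161112_0026.spec_a0v.fits") as hdul:
        corrected = hdul[0].data  # telluric-corrected spectrum (assumed index)
        target = hdul[2].data     # target spectrum
        a0v = hdul[3].data        # A0V spectrum
        vega = hdul[4].data       # Vega model spectrum

    # Reconstruct the correction by hand and compare with what the pipeline wrote.
    manual = target / (a0v / vega)
    print(np.nanmax(np.abs(manual - corrected)))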

Furthermore, the wavelengths in the .spec_a0v.fits file are the same as in the .spec.fits file. I would have guessed they would be the same as what is in the .wave.fits file for the A0V star because that has gone through a fine-tuning based on a telluric absorption model, correct?

1D extraction in the .spec.fits file for EXTENDED-ONOFF sources appears to assume nodding on slit

I have noticed that for EXTENDED-ONOFF targets, in the qlook (and by extension flexure) branches, the 1D spectrum in the .spec.fits files does not appear to be extracted correctly: the extraction hovers around zero counts. I suspect the 1D extraction is incorrectly assuming ABBA nodding of the target on the slit. For comparison, summing along the slit axis in the .spec2d.fits files results in the correct 1D extraction. See the attached plot of an H band order from an emission-line nebula, where summing the order found in the .spec2d.fits file gives the correct extraction for the extended object while the 1D spectrum found in the .spec.fits file appears to be ABBA extracted.
[screenshot: H band order, .spec2d.fits slit sum vs. .spec.fits 1D extraction]

Targets labeled STELLAR-AB or A0V-AB are correctly extracted in 1D, so there is no issue for point sources nodded on the slit; this only applies to EXTENDED-ONOFF.
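
As a stop-gap, a minimal sketch of that slit-axis sum (the .spec2d.fits filename and the (order, slit, dispersion) array layout are assumptions):

    import numpy as np
    from astropy.io import fits

    with fits.open("SDCH_20230905_0100.spec2d.fits") as hdul:  # hypothetical filename
        spec2d = hdul[0].data  # assumed shape: (order, slit position, dispersion pixel)

    # Collapse along the slit axis to recover one 1D spectrum per order.
    spec1d = np.nansum(spec2d, axis=1)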

Missing library get_destripe_mask?

I find the following issue when trying to run the wavelength-solution parts of the PLP (thar or sky-wvlsol) in the latest "fix numpy" version of the PLP:

File "/Volumes/IGRINS_Data/plp/recipes/libs/image_combine.py", line 14, in make_combined_image_sky
from get_destripe_mask import get_destripe_mask
ImportError: No module named get_destripe_mask

From what I can tell, image_combine.py is trying to import get_destripe_mask, but get_destripe_mask.py does not appear to exist anywhere. I suspect it just needs to be added to the GitHub repository.

-Kyle

UTDATE error

When I try to run the 2.1-alpha.1 version of the PLP, it does not appear to read in the date correctly when running the "thar" or "sky-wvlsol" recipes. It works fine for "flat", but as an example I get the following error when trying to run "thar" on the date 20141023:

grad11kk:plp-2.1-alpha.1 kfkaplan$ python ./igr_pipe.py thar 20141023
[31, 32, 33, 34, 35, 36, 37, 38, 39, 40]
/Volumes/IGRINS_Data/plp-2.1-alpha.1/recipes/libs/igrins_config.py:36: UserWarning: no recipe.config is found. Internal default will be used.
warnings.warn("no recipe.config is found. Internal default will be used.")
loading calib/primary/20140525/FLAT_.centroid_solutions.json
Traceback (most recent call last):
  File "./igr_pipe.py", line 54, in <module>
    argh.dispatch(parser)
  File "./external/argh/argh/dispatching.py", line 125, in dispatch
    for line in lines:
  File "./external/argh/argh/dispatching.py", line 202, in _execute_command
    for line in result:
  File "./external/argh/argh/dispatching.py", line 185, in _call
    result = args.function(*positional, **keywords)
  File "/Volumes/IGRINS_Data/plp-2.1-alpha.1/recipes/recipe_thar.py", line 30, in thar
    starting_obsids, config_file)
  File "/Volumes/IGRINS_Data/plp-2.1-alpha.1/recipes/libs/recipe_base.py", line 40, in __call__
    self.process(utdate, bands, starting_obsids, config_file)
  File "/Volumes/IGRINS_Data/plp-2.1-alpha.1/recipes/libs/recipe_base.py", line 59, in process
    self.run_selected_bands_with_recipe(utdate, selected, bands)
  File "/Volumes/IGRINS_Data/plp-2.1-alpha.1/recipes/libs/recipe_base.py", line 36, in run_selected_bands_with_recipe
    self.run_selected_bands(utdate, selected2, bands)
  File "/Volumes/IGRINS_Data/plp-2.1-alpha.1/recipes/recipe_thar.py", line 24, in run_selected_bands
    self.config)
  File "/Volumes/IGRINS_Data/plp-2.1-alpha.1/recipes/recipe_thar.py", line 361, in process_thar_band
    thar_products = get_thar_products(helper, band, obsids)
  File "/Volumes/IGRINS_Data/plp-2.1-alpha.1/recipes/recipe_thar.py", line 108, in get_thar_products
    ap = get_simple_aperture(helper, band, thar_master_obsid)
  File "/Volumes/IGRINS_Data/plp-2.1-alpha.1/recipes/recipe_thar.py", line 93, in get_simple_aperture
    bottomup_solutions = get_bottom_up_solution(helper, band, thar_master_obsid)
  File "/Volumes/IGRINS_Data/plp-2.1-alpha.1/recipes/recipe_thar.py", line 78, in get_bottom_up_solution
    mastername=flaton_basename)
  File "/Volumes/IGRINS_Data/plp-2.1-alpha.1/recipes/libs/products.py", line 152, in load
    v = self.load_one(fn)
  File "/Volumes/IGRINS_Data/plp-2.1-alpha.1/recipes/libs/products.py", line 239, in load_one
    return json.load(open(fn))
IOError: [Errno 2] No such file or directory: 'calib/primary/20140525/FLAT_.centroid_solutions.json'
grad11kk:plp-2.1-alpha.1 kfkaplan$

[register-sky] Error: No JSON object could be decoded

Hi,

I'm trying to process some of my data using the PLP v2, and when I execute the 'register-sky' recipe I get the message "No JSON object could be decoded". Do you have any idea what could be causing this error?

$ python ./igr_pipe.py register-sky 20210427
loading /home/navarete/plp-master/calib/primary/20210427/FLAT_SDCH_20210427_0001.hotpix_mask.fits
loading /home/navarete/plp-master/calib/primary/20210427/FLAT_SDCH_20210427_0011.deadpix_mask.fits
loading /home/navarete/plp-master/calib/primary/20210427/FLAT_SDCH_20210427_0011.bias_mask.fits
saving /home/navarete/plp-master/outdata/20210427/SDCH_20210427_0035.stacked.fits
WARNING: AstropyDeprecationWarning: "clobber" was deprecated in version 2.0 and will be removed in a future version. Use argument "overwrite" instead. [astropy.utils.decorators]
WARNING:astropy:AstropyDeprecationWarning: "clobber" was deprecated in version 2.0 and will be removed in a future version. Use argument "overwrite" instead.
loading /home/navarete/plp-master/outdata/20210427/SDCH_20210427_0035.stacked.fits
loading /home/navarete/plp-master/calib/primary/20210427/FLAT_SDCH_20210427_0011.centroid_solutions.json
/home/navarete/anaconda3/envs/igr-pipe/lib/python2.7/site-packages/numpy/lib/nanfunctions.py:1076: RuntimeWarning: Mean of empty slice
  return np.nanmean(a, axis, out=out, keepdims=keepdims)
saving /home/navarete/plp-master/calib/primary/20210427/SDCH_20210427_0035.oned_spec.json
Traceback (most recent call last):
  File "./igr_pipe.py", line 64, in <module>
    argh.dispatch(parser, argv=argv)
  File "/home/navarete/plp-master/igrins/external/argh/dispatching.py", line 125, in dispatch
    for line in lines:
  File "/home/navarete/plp-master/igrins/external/argh/dispatching.py", line 202, in _execute_command
    for line in result:
  File "/home/navarete/plp-master/igrins/external/argh/dispatching.py", line 185, in _call
    result = args.function(*positional, **keywords)
  File "/home/navarete/plp-master/igrins/libs/recipe_factory.py", line 43, in _recipe_func
    **kwargs)
  File "/home/navarete/plp-master/igrins/libs/recipe_base.py", line 115, in process
    **kwargs)
  File "/home/navarete/plp-master/igrins/libs/recipe_factory.py", line 22, in run_selected_bands_with_recipe
    self.config, **kwargs)
  File "/home/navarete/plp-master/igrins/recipes/process_wvlsol_v0.py", line 548, in process_band
    identify_orders(obsset)
  File "/home/navarete/plp-master/igrins/recipes/process_wvlsol_v0.py", line 231, in identify_orders
    ref_spec_path, ref_spectra = obsset.fetch_ref_data(kind=ref_spec_key)
  File "/home/navarete/plp-master/igrins/libs/obs_set.py", line 139, in fetch_ref_data
    return self.caldb.fetch_ref_data(self.band, kind)
  File "/home/navarete/plp-master/igrins/libs/cal_db.py", line 326, in fetch_ref_data
    kind=kind)
  File "/home/navarete/plp-master/igrins/libs/master_calib.py", line 111, in fetch_ref_data
    return fn, loader(fn)
  File "/home/navarete/plp-master/igrins/libs/master_calib.py", line 83, in json_loader
    return json.load(open(fn))
  File "/home/navarete/anaconda3/envs/igr-pipe/lib/python2.7/json/__init__.py", line 291, in load
    **kw)
  File "/home/navarete/anaconda3/envs/igr-pipe/lib/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/home/navarete/anaconda3/envs/igr-pipe/lib/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/home/navarete/anaconda3/envs/igr-pipe/lib/python2.7/json/decoder.py", line 382, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded

Bright columns of pixels in extended ON-OFF combined images

I have noticed an issue with the combined frames of extended ON-OFF targets: the columns of pixels near bright lines have higher-than-average counts across the whole detector. This does not happen for ABBA combined frames, only for extended ON-OFF combined frames.

To test whether the issue was the detectors themselves or the PLP, I manually created ON-OFF quads and the issue did not occur, so I suspect it has something to do with how the PLP zeros out detector noise for each column of pixels. Whatever algorithm is being used gets confused when there is a single bright line across a column on the detector. I have tested using the median of each pixel column to zero out the vertical detector noise and that seems to work fine, so that might be the solution.
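
A minimal sketch of that column-median approach (not the PLP's current algorithm; the mask of illuminated pixels is assumed to be available):

    import numpy as np

    def remove_column_pattern(frame, illuminated_mask):
        """Subtract a per-column median estimated from unilluminated pixels only."""
        d = np.where(illuminated_mask, np.nan, frame.astype(float))
        col_bias = np.nanmedian(d, axis=0)  # one value per column
        return frame - col_bias[np.newaxis, :]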

See the screenshots below:
[figures: vy2-2_h-band (H band), vy2-2_k-band (K band)]

Sometimes K band only gets 25 orders extracted

Noticed this bug (or maybe feature) in the pipeline from the qlook branch onward that makes my life a little harder when comparing the old and new pipeline outputs. Sometimes the new pipeline will throw out the top order in K band, so the final pipeline products only contain 25 orders instead of 26. I can't find any consistency in how it occurs (i.e., in the same night some K band spectra will have 25 orders and some will have 26).

I would prefer a consistent number of orders for all the spectra in both bands (whether that means always chopping off the first order of K band or always keeping it).

Do you know where this stems from in the changes you made @leejjoon?

Reimplement recipe_tell_wavsol.py

In v2.2.0 of the IGRINS PLP, there was a recipe called tell-wvsol which created wavelength solutions for the A0V standard stars based on a template telluric absorption spectrum. Would it be possible to reimplement this recipe in the newest qlook version of the IGRINS PLP?

Attempt to correct readout pattern for extended objects with continuum seems to result in a poor readout pattern correction (amp bias correction?)

I am currently working on an extended target that has a lot of continuum and fills the IGRINS slit, and the PLP's readout pattern removal code seems to overcorrect for the readout pattern (row amplifier bias? it is not really clear to me exactly what it is correcting). Below are some images of my target in the H band showing the overcorrection where the continuum is bright. I suspect stray light from the bright continuum hitting the detector beyond the order edges is causing this.

[screenshots: H band frames showing the overcorrection]

I tracked the issue down to the PLP's readout pattern removal code (in the qlook and reimplement_cr_reject branches), specifically the following function in igrins/procedures/readout_pattern_helper.py:

    if mask is None:
        mask = np.zeros(d.shape, dtype=bool)
    else:
        mask = mask.copy()

    # Always exclude the first and last 4 rows from the pattern estimate.
    mask[:4] = True
    mask[-4:] = True

    # Pattern-removal steps applied in order; amp_wise_bias_r2 is the one at issue here.
    p = [pipes[k] for k in ['p64_1st_order',
                            'col_wise_bias_c64',
                            'amp_wise_bias_r2',
                            'col_wise_bias']]

    return apply_pipe(d, p, mask=mask)

where amp_wise_bias_r2 appears to be the cause.

I tracked the "pipe" for amp_wise_bias_r2 to igrins/procedures/readout_pattern.py where it points to the following function

class PatternAmpP2(PatternBase):
    @classmethod
    def get(kls, d, mask=None):
        """
        returns a tuple of two 32 element array. First is per-amp bias values.
        The second is the [1,-1] amplitude for each amp.
        """
        d = np.ma.array(d, mask=mask).filled(np.nan)

        do = d.reshape(32, 32, 2, -1)
        av = np.nanmedian(do, axis=[1, 3])

        amp_bias_mean = np.mean(av, axis=1)
        amp_bias_amp = av[:, 0] - amp_bias_mean

        return amp_bias_mean, amp_bias_amp

    @classmethod
    def broadcast(kls, d, av_p):
        av, p = av_p
        k = p[:, np.newaxis] * np.array([1, -1])
        v = np.zeros((32, 32, 2, 1)) + k[:, np.newaxis, :, np.newaxis]
        avv = av.reshape(32, 1, 1, 1) + v
        return avv.reshape(2048, 1)

To try to resolve the issue, I tried to simply comment out amp_wise_bias_r2 in igrins/procedures/readout_pattern_helper.py like so:

    if mask is None:
        mask = np.zeros(d.shape, dtype=bool)
    else:
        mask = mask.copy()

    mask[:4] = True
    mask[-4:] = True

    p = [pipes[k] for k in ['p64_1st_order',
                            'col_wise_bias_c64',
                            #'amp_wise_bias_r2',
                            'col_wise_bias']]

    return apply_pipe(d, p, mask=mask)

and the result is a stacked H band frame that looks much better here when I blink it with what I had before:

[figure: ds9 blink comparison of the stacked H band frame before and after the change]

It is very tempting to just leave amp_wise_bias_r2 turned off as a solution, but my main worry is that I would be disabling a pattern correction or bias subtraction that should be left on, so I am not sure this is the right solution. My attempts to modify the PatternAmpP2 class did not result in any improvement.

plp - can not produce interactive html page

Hi Guys,

I've struggled with running the plp. It was difficult to figure out which versions of the required packages are compatible. At some stage, I was getting the following error message:

Now generating figures
Traceback (most recent call last):
  File "igr_pipe.py", line 58, in <module>
    argh.dispatch(parser)
  File "./external/argh/argh/dispatching.py", line 125, in dispatch
    for line in lines:
  File "./external/argh/argh/dispatching.py", line 202, in _execute_command
    for line in result:
  File "./external/argh/argh/dispatching.py", line 185, in _call
    result = args.function(*positional, **keywords)
  File "/home/sahin/postdoc_UT/IGRIN/plp-2.1-alpha.2/recipes/recipe_extract.py", line 37, in extract
    subtract_interorder_background=subtract_interorder_background,
  File "/home/sahin/postdoc_UT/IGRIN/plp-2.1-alpha.2/recipes/recipe_extract.py", line 106, in abba_all
    obsids, frametypes,
  File "/home/sahin/postdoc_UT/IGRIN/plp-2.1-alpha.2/recipes/recipe_extract.py", line 492, in process
    wvl)
  File "/home/sahin/postdoc_UT/IGRIN/plp-2.1-alpha.2/recipes/recipe_extract.py", line 782, in get_a0v_flattened
    figout=figout)
  File "/home/sahin/postdoc_UT/IGRIN/plp-2.1-alpha.2/recipes/libs/a0v_flatten.py", line 569, in get_a0v_flattened
    plot_flattend_a0v(spec_flattener, wvl, s_list, orderflat_response, data_list, fout=figout)
  File "/home/sahin/postdoc_UT/IGRIN/plp-2.1-alpha.2/libs/a0v_flatten.py", line 477, in plot_flattend_a0v
    ax1=ax1, ax2=ax2)
  File "/home/sahin/postdoc_UT/IGRIN/plp-2.1-alpha.2/recipes/libs/a0v_flatten.py", line 288, in plot_fitted
    color1 = ax1._get_lines.color_cycle.next()
AttributeError: '_process_plot_var_args' object has no attribute 'color_cycle'

With Greg's and Monika's suggestions, I've tried installing several different versions of the packages to see which would let me run the code without any error messages. I found matplotlib ver. 1.4.0 and the latest version of pandas to be compatible. I then ran the script to produce the interactive webpage; however, it produces an error message (please see the attached PDF for both an example of the plp run output on my laptop and the error message I get at the interactive web page creation stage).

I would be glad if one of you could comment on the content of the log file for the plp run and on the error message from producing the html page. I'm curious to see whether the code runs OK and the output is as expected.

Thank you
Timur Sahin

EXAMPLE_OUTPUT_22_JAN_2016.pdf

Readout pattern removal slightly worse in qlook branch than v2.2.0

I have downloaded and tested the version of the IGRINS PLP that is in the qlook branch. For the most part it works great, but I have noticed a few issues. One of them is that the readout pattern removal seems to be slightly worse in the qlook version when compared to the older version of the PLP (v2.2.0). I have attached a few animated GIFs comparing output from the old and new versions; you can see that some of the columns are not as well removed in the qlook branch version, while in v2.2.0 the pattern removal looks really good. I tried to see what might be going on in destriper.py but I cannot find any differences in the algorithms that might be the cause of this. I wonder if it could have something to do with running in Python 3 vs. Python 2.
[animated GIFs: pattern, pattern_2]

Flux scaling differences between PLP versions

There is a slight difference in the flux output between the old (v2.2.0) and new (branch reimplement_cr_reject) versions of the PLP: the flux/counts in the reduced data from the new version are about 3-4% larger than in the old version. It is not entirely clear why. Below is an image of a .spec.fits file of the same reduced A0V star spectrum, with the new PLP divided by the old PLP. The ratio seems to vary across the detector. It might have something to do with how the flats or the background are treated. We will need to look into this further. Relative flux calibration will negate this effect, so it is not much of a concern science-wise.

[figure: ratio of new PLP to old PLP A0V spectrum]

Allow for indata *.fits files to be compressed with gzip

FITS files typically compress nicely & to save disk space I would like to be able to store my raw IGRINS data in gzip'd form. (I usually do a gzip --best *.fits in my raw data directories.)

When reading in a fits file, ideally igrins-plp should seamlessly read either a gzip'd or non-gzip'd file.

I have a simple fix to enable this behavior that I will submit as a pull request. Basically, I've written a simple wrapper around the fits routines and then replaced a bunch of import statements throughout the rest of the code to import my wrapper rather than astropy.io.fits. I think it's self-explanatory.
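
For reference, a minimal sketch of such a wrapper, assuming astropy.io.fits (which already reads .gz files transparently); the actual pull request may differ:

    import os
    from astropy.io import fits

    def open_fits(path, **kwargs):
        """Open 'path', falling back to 'path.gz' when only the gzip'd file exists."""
        if os.path.exists(path):
            return fits.open(path, **kwargs)
        if os.path.exists(path + ".gz"):
            return fits.open(path + ".gz", **kwargs)
        raise IOError("FITS file not found: %s or %s.gz" % (path, path))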
