
calibrate_spectra's Introduction

Repository to study core-collapse templates created from different objects ... more later

Requirements

  • SNCosmo and its dependencies (including astropy)

Installation

calibrate_spectra's People

Contributors

rbiswas4

calibrate_spectra's Issues

Track down the factor of 1000.

When I do synthetic photometry with the set of mangled spectra, which typically have values of order 10^{-15} erg/cm^2/sec/Ang, I get fluxes in erg/cm^2/sec of order 10^{-12}. The photometry tables in the example dataset have fluxes of order 10^{-15}, which is roughly a factor of 1000 smaller. So, somewhere one of us has introduced a bug (likely forgetting or double counting the measure factor of the integral).

We need to track this down. @nkarp, @rbiswas4
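As a point of reference for the discussion, here is a minimal sketch of where the wavelength measure enters the band-flux integral. The arrays below are made-up stand-ins for a mangled spectrum and a filter curve, not the actual data:

```python
import numpy as np

# Hypothetical stand-ins for a mangled spectrum and a filter curve.
wave = np.linspace(3000.0, 9000.0, 601)        # Angstrom
flux = np.full_like(wave, 1.0e-15)             # erg / cm^2 / s / Ang
band_wave = np.linspace(5000.0, 6000.0, 101)   # Angstrom
band_trans = np.ones_like(band_wave)           # dimensionless, in (0, 1)

# Put the transmission on the spectrum's wavelength grid.
trans = np.interp(wave, band_wave, band_trans, left=0.0, right=0.0)
integrand = flux * trans                       # erg / cm^2 / s / Ang

# Band flux = integral of f_lambda * T(lambda) d lambda, written so the
# d lambda measure is explicit. Dropping it (or applying it twice) rescales
# the result by roughly the grid spacing, which is one way a large
# constant-factor discrepancy can appear.
dlam = np.diff(wave)                                                # Angstrom
band_flux = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dlam)  # erg/cm^2/s
band_flux_missing_measure = np.sum(integrand)  # wrong: units erg/cm^2/s/Ang

print(band_flux, band_flux_missing_measure)
```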

Filter Sets being used.

I am worried that the filters we have been using are not identical. Part of this is driven by the fact that, in the example notebook, I show that the i and r filters of the set I am using do not completely overlap with the spectra. So the synthetic photometry used to match the data could not have been done as is, unless some relaxed threshold for tolerable non-overlap was applied, or something else I don't understand. The most likely explanation is that a different set of filters was used. The photometry comparison we are doing is only meaningful if the filters are correct.

So, I think it would be good to use the same set of filters in this study rather than rely on the set I was using. I think the easiest way of doing this is to add a set of ASCII files (one for each filter). Each ASCII file should have two columns: the first the wavelength (Angstrom would be great, but anything would do) and the second the transmission probability (so the values should be floats in (0, 1)). It is probably best to add them to the example_data directory in transientSources in a PR (or email works fine).
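To make the intended convention concrete, here is a minimal sketch of reading such two-column ASCII files into sncosmo bandpasses and flagging incomplete overlap with a spectrum. The directory name and the spectrum wavelength range are assumptions for illustration:

```python
import glob
import numpy as np
import sncosmo

# Hypothetical wavelength coverage of the spectra being mangled.
spec_wave_min, spec_wave_max = 3500.0, 9000.0        # Angstrom

bandpasses = {}
for path in glob.glob('example_data/filters/*.dat'):  # hypothetical location
    # Column 1: wavelength (Angstrom), column 2: transmission in (0, 1).
    wave, trans = np.loadtxt(path, unpack=True)
    band = sncosmo.Bandpass(wave, trans, name=path)
    bandpasses[path] = band
    # Flag filters that extend beyond the spectrum's coverage, which is
    # what makes synthetic photometry fail (or require a tolerance).
    if band.minwave() < spec_wave_min or band.maxwave() > spec_wave_max:
        print(f'{path}: incomplete overlap with the spectra')
```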

Repeat tests for photometry using the mangled spectra themselves

In the example notebook, we have done synthetic photometry with the sncosmo source built from the set of mangled spectra and pre-calculated phases, and compared this to the photometry table. The sncosmo source object interpolates the SEDs provided. We need to add two tests:

  • Does it reproduce the mangled spectra that were input to the interpolation procedure at the actual input phases?
  • Does photometry done directly on these mangled spectra (without the time interpolation) match the photometry table much better?

Both of these are essentially ways of confirming that the problem seen in the example notebook lies in the time interpolation of the spectra.
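A minimal sketch of the first test, assuming the mangled spectra are available as arrays (the array names and values below are placeholders for however the pre-calculated spectra are actually stored):

```python
import numpy as np
import sncosmo

# Placeholder arrays standing in for the pre-calculated mangled spectra:
# phases (N,), wave (M,), flux (N, M).
phases = np.array([-5.0, 0.0, 5.0, 10.0])
wave = np.linspace(3500.0, 9000.0, 551)
flux = np.abs(np.random.normal(1.0e-15, 1.0e-16, size=(phases.size, wave.size)))

source = sncosmo.TimeSeriesSource(phase=phases, wave=wave, flux=flux)

# At the exact input phases, the interpolated SED should reproduce the
# input mangled spectra; large residuals here would point at the time
# interpolation rather than at the photometry step.
for i, p in enumerate(phases):
    resid = source.flux(p, wave) - flux[i]
    print(p, np.max(np.abs(resid) / np.abs(flux[i])))
```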

In mangling the spectra, is there any guarantee that fluxes will be positive semidefinite?

Since the spectra are noisy and 'wiggly', it is likely that matching the spectra to photometry will cause the SEDs to go negative at particular wavelengths that have troughs, for dim objects. For example, this happens in the SALT model, where very similar things are done. Is there a prior or constraint being used to prevent this? If not, should we think about adding priors / regularization terms, or is there a worry that doing so would bias the results?

Note that in the steps where we do work with spectra, for example in creating images, negative spectra are currently a problem. For the other spectra (which already have this problem), we currently work around it with a hack: forcing the spectra to be positive by changing negative values to a small positive value (and assuming that this does not change anything important). We can certainly do the same with these spectra if they contain negative values. However, as we are thinking of better ways of doing these steps, it is something to keep in mind.
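For concreteness, a minimal sketch of the hack described above; the floor value and array names are assumptions, not what any existing code actually uses:

```python
import numpy as np

def clip_negative_flux(flux, floor=1.0e-20):
    """Return a copy of `flux` with negative values replaced by `floor`."""
    flux = np.asarray(flux, dtype=float)
    return np.where(flux < floor, floor, flux)

# Example: a spectrum with a couple of negative troughs.
spectrum = np.array([2.0e-15, -3.0e-16, 5.0e-16, -1.0e-17])
print(clip_negative_flux(spectrum))
```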

Conventions for Data Files

This partly depends on the scope of what we would like to do. If this is only to set up a small test like the one we have already done, it does not really matter, and we should not think about it at all. But if we would like to use many SNe, run these kinds of tests easily, and so on, it would be useful to define conventions for datasets. A good way to start would be with a very small number of objects, and then bring new objects into that format.

The things I would like to have, if we do this for a number of objects, are (feel free to suggest modifications or additions):

  • Being able to easily track the original raw dataset for a particular SN (e.g. files that we have on disk, or a database that, say, CFA maintains).
  • An intermediate/uniform format that our codes will deal with.
  • Codes to convert the original datasets to this uniform format.

I think it would be very good to have all the information regarding the SN in the uniform data format as some form of metadata that we choose, and not to have to infer things like the MJD / phases of spectra from filenames. If we agree on the need, we can start thinking about how we would like to do this.
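One possible convention, purely as an illustration (all key names, file names, and values below are assumptions): store each spectrum as an astropy ECSV table whose metadata carries the quantities we would otherwise infer from filenames.

```python
import numpy as np
from astropy.table import Table

# Hypothetical spectrum arrays.
wave = np.linspace(3500.0, 9000.0, 551)              # Angstrom
flux = np.full_like(wave, 1.0e-15)                   # erg / cm^2 / s / Ang

spec = Table([wave, flux], names=('wavelength', 'flux'))
spec.meta['object'] = 'SN2013xy'                     # hypothetical SN name
spec.meta['mjd'] = 56500.0                           # epoch of the spectrum
spec.meta['phase'] = 3.2                             # days from peak
spec.meta['source_file'] = 'raw/sn2013xy_spec3.dat'  # provenance, hypothetical

# ECSV keeps both the columns and the metadata in a plain-text file.
spec.write('SN2013xy_mjd56500.ecsv', format='ascii.ecsv', overwrite=True)
```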

Add notes on conventions

Add LaTeX notes to clarify the conventions chosen, so that everyone can either conform to them or at least know what conversion has to be applied to match their code to these conventions.

Can we change the photometry files to have units of counts/cm^2/sec rather than ergs/cm^2/sec

In photometric data, we do not actually measure the energy flux, but rather something connected to the photon flux, so the flux units should be number / cm^2 / sec. It would therefore be nice to have this set of units. Roughly speaking, this amounts to dividing the fluxes by the photon energy at an effective wavelength of the band, but in more detail that effective wavelength depends on the spectrum in question.

If everything were linear and all noise were Gaussian, then all of our results would be invariant to these kinds of changes. However, since those assumptions are not true, I think in principle it might matter what we choose to use as 'data', and it is best to stay as close to the form of the real data as possible. Hopefully, even if there is an effect, it will be small enough.
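A minimal sketch of the rough conversion being described; the band-integrated flux and the effective wavelength below are illustrative numbers, not values from the data:

```python
import numpy as np
from astropy import constants as const

# Photon energy scale: h * c in erg * Angstrom.
hc = (const.h * const.c).to('erg Angstrom').value

energy_flux = 1.0e-15    # erg / cm^2 / s, hypothetical band-integrated flux
lam_eff = 6200.0         # Angstrom, hypothetical effective wavelength

# One photon at lam_eff carries hc / lam_eff erg, so roughly:
photon_flux = energy_flux * lam_eff / hc   # photons / cm^2 / s
print(photon_flux)

# More precisely, the photon flux is the band integral of
# f_lambda(lambda) * T(lambda) * lambda / (h c) d lambda,
# so the effective wavelength implied by the ratio depends on the spectrum.
```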
