
Comments (16)

cdeil commented on May 26, 2024

My understanding is that science results always depend on the product aeff * obs_time * (1 - dead_time_fraction), so whether to store the dead time fraction separately, or to multiply it into aeff or obs_time in DL3, is a matter of taste in how one wants to structure the book-keeping.

If CTA has complicated dead time fractions, it might be easiest to produce DL3 where dead_time_fraction = 0 and the dead times have already been applied to aeff.
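
A minimal sketch of that equivalence, with made-up numbers (nothing here comes from the spec):

```python
import numpy as np

# The science result only depends on the product
# aeff * obs_time * (1 - dead_time_fraction); all numbers are invented.
aeff = np.array([1e5, 5e5, 8e5])  # effective area per energy bin (m^2)
obs_time = 1800.0                 # observation time (s)
dead_time_fraction = 0.05

# Option 1: keep the dead time fraction as a separate DL3 quantity.
exposure_1 = aeff * obs_time * (1 - dead_time_fraction)

# Option 2: fold the dead time into aeff and ship dead_time_fraction = 0.
exposure_2 = (aeff * (1 - dead_time_fraction)) * obs_time

assert np.allclose(exposure_1, exposure_2)  # identical by construction
```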

What is CTA doing at the moment concerning dead time in prod2 and prod3 IRFs?

kosack commented on May 26, 2024

Does the per-telescope deadtime really matter? Isn't the overall deadtime as measured by the SWAT (central trigger software) the more important quantity? It is a sort of "average" or "effective" deadtime for the system, similar to other quantities that we may treat as averages, like the trigger pattern. Of course, if we really decide to split IRFs by trigger pattern (probably not easy due to the number involved), then it may make sense to talk about dead times per effective multiplicity group (section of subarray triggered), etc.

cdeil commented on May 26, 2024

Does the per-telescope deadtime really matter?

No, at DL3 per-telescope quantities don't matter.
The data model is EVENTS and IRFs for sub-arrays.

Everything per-telescope should happen at lower levels as part of the DL3 production.

One thing that is missing in the current DL3 spec is "event types": sub-partitions of the events by quality. These could be split by trigger pattern or other criteria, and each would have its own effective area and deadtime fraction.
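
A rough sketch of what such a partition could look like (names, files and numbers are purely illustrative; nothing like this is in the spec yet):

```python
# Hypothetical bookkeeping: each event type (e.g. a trigger-pattern or
# quality class) carries its own effective area and deadtime fraction.
event_types = {
    "type_A": {"aeff_file": "aeff_type_A.fits", "dead_time_fraction": 0.04},
    "type_B": {"aeff_file": "aeff_type_B.fits", "dead_time_fraction": 0.02},
}

for name, irf in event_types.items():
    print(name, irf["aeff_file"], irf["dead_time_fraction"])
```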

jknodlseder commented on May 26, 2024

I think nothing is done with the deadtime in Prod2 or Prod3; Gernot may confirm.

As I understand so far from discussions with Karl and Gernot, the deadtime depends on the trigger pattern (which cameras triggered, hence on energy) and on time (for example, a rising moon). On VERITAS the deadtime is computed on a minute time scale.

If a camera of a given sub-array is dead while an event occurs, the multiplicity will be reduced. Hence, without deadtime correction, the trigger pattern differs between MC and the real world. Another related question is whether telescopes that have not triggered are actually used in the reconstruction (in the sense that it is verified that all telescopes that should have seen a shower have actually seen it). Then it becomes important to know when exactly a camera has been dead.

I agree that this is all low-level stuff and should not be exposed at the DL3 level. But I think the way the deadtime is computed will actually impact how it is stored in DL3.

GernotMaier commented on May 26, 2024

No deadtime was taken into account in prod2/prod3 (we did some tests on the impact of a 10% dead time for the NectarCam MSTs, to make sure that this doesn't have an impact on the array layout selection).


TarekHC commented on May 26, 2024

Today @moralejo and I discussed this issue, and the only plausible solution we came up with was to include the dead time per telescope directly in the MC simulation. The deadtime/livetime is then not needed anymore, as the effective areas already contain the effect of the different dead times per telescope type. We should then make sure that, within the GTIs, only data periods with the nominal dead time per telescope type are used.

@kosack @GernotMaier Do you agree with this approach? This would make science tools "blind" to the deadtime, which is probably the objective.

GernotMaier commented on May 26, 2024

Maybe I misunderstood the idea, but including the dead time in the MC is something I don't feel is necessary:

  • what we care about is the effective/average deadtime for the whole system (or subarray).
  • this value depends both on the characteristics of the telescopes / cameras and on environmental conditions (mainly NSB).
  • it will be a significant challenge to model the dead time of the system (and it might also depend on factors not included in the MC, e.g. buffer sizes in the readout). I don't see a necessity for this given that we will get precise dead time values from ACTL.

I really think this should be an item clearly separated from the MC.

gbelanger commented on May 26, 2024

For IBIS/ISGRI on INTEGRAL, the value of the dead time keyword in the header is an average over the 8 modules, but the set of keywords relating to ONTIME and EXPOSURE is not consistent with DEADC, as one cannot reconstruct any of them from the others (see the worked numbers after this list):

  • ONTIME is the sum of the GTIs (calculated per module).
  • DEADC is an average, since there is one dead time per MCE (the one used in calculating ONTIME). Therefore, one cannot get EXPOSURE = ONTIME * (1 - DEADC).
  • EXPOSURE is ONTIME * MEANTIMEEFF.
  • MEANTIMEEFF is the average effective ISGRI time fraction, taking into account the GTIs and the dead time per MCE. Therefore, it is not exactly equal to (1 - DEADC).
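
A worked example with invented numbers (only the structure of the keywords is real) makes the inconsistency concrete:

```python
# Invented numbers for illustration.
ontime = 2000.0      # s; ONTIME: sum of per-module GTIs
deadc = 0.12         # DEADC: average dead time fraction over the 8 MCEs
meantimeeff = 0.85   # MEANTIMEEFF: average effective time fraction,
                     # folding in both the per-MCE GTIs and dead times

exposure = ontime * meantimeeff  # what the pipeline writes: 1700.0 s
naive = ontime * (1 - deadc)     # the naive reconstruction: 1760.0 s
print(f"EXPOSURE = {exposure} s, ONTIME * (1 - DEADC) = {naive} s")
# The two disagree because MEANTIMEEFF also folds in the per-MCE GTIs,
# which (1 - DEADC) alone cannot reproduce.
```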

Now, I gathered this information from the current PI, but nobody else knows this in detail, because the coders came and went many years ago. It would be nice if this information were in the FITS files, and not scattered all over the place in different files produced at different stages of the pipeline. So I think it should be included, in some way, in the final product that people will actually use: the image, spectrum or time series file.

moralejo commented on May 26, 2024

Let me explain the issue of "absorbing" the deadtime into the MC-calculated Aeff. It is indeed the dead time per telescope which matters, because there is no well-defined "system dead time", even if you make it energy dependent. In a very large SST array you can even have simultaneous events in distant parts of the array.

Consider this example: you may have an event triggering SSTs 30, 31 and 32, followed shortly afterwards by another one that should have triggered e.g. 32, 33 and 34; but 32 was busy with the previous event, and only 33 and 34 end up in the data stream. You do not lose the event; you have it, but with a smaller multiplicity. Now, you cannot simply correct the real data for such an effect by introducing a "system" dead time fraction. At the very least, you would need to make a toy MC to understand the impact of such single-telescope dead times on the "quality" of the recorded events. Since we make an MC anyway, why not include this single-telescope dead time in it?

I understand that the dead times will be small enough to make the kind of event I mentioned above rather rare, and perhaps that makes this irrelevant for practical purposes (I really do not know). But implementing the telescope-wise dead time in the MC is probably so simple that there is no big reason not to do it.
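
For what it's worth, such a toy MC could be as simple as the sketch below. All numbers (subarray size, rates, dead time) are invented, and the triggering telescopes are drawn at random rather than from shower geometry:

```python
import numpy as np

# Toy MC: a busy telescope does not lose the event, it only reduces
# the recorded multiplicity of later events.
rng = np.random.default_rng(0)

n_tel = 40         # telescopes in a hypothetical SST subarray
tau = 1e-4         # s a telescope stays busy after participating in an event
array_rate = 1e4   # Hz, total array trigger rate
n_events = 100_000

times = np.cumsum(rng.exponential(1.0 / array_rate, n_events))
busy_until = np.zeros(n_tel)
lost = 0
for t in times:
    tels = rng.choice(n_tel, size=3, replace=False)  # should-trigger telescopes
    alive = tels[busy_until[tels] < t]               # drop the still-busy ones
    lost += len(tels) - len(alive)
    busy_until[alive] = t + tau
print(f"Telescope participations lost to dead time: {lost / (3 * n_events):.1%}")
```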

GernotMaier commented on May 26, 2024

OK, I didn't think of that. I have little feeling for how big the effect is; it would be nice to see a study on that. The effect is probably small, but if not, dead time would not only affect the acceptance, but also the energy and angular resolution. In that case, it would need to be included in the MC anyway.

If it is simply an energy-dependent correction factor: why not keep it separate? It would keep things cleaner and more obvious (but your suggestion would also work).

TarekHC commented on May 26, 2024

If it is simply an energy-dependent correction factor: why not keep it separate? It would keep things cleaner and more obvious (but your suggestion would also work).

What about divergent pointing? The correction would then also depend on the position in the FoV, and it would become an absolute nightmare to factorize.

Including it in the MC is probably the safest and easiest solution to implement.

TarekHC commented on May 26, 2024

@jknodlseder @GernotMaier @cdeil @moralejo

Did we converge on this? With an energy/FoV-dependent deadtime, I see no easy solution to incorporate it into DL3, unless we directly include it in the MC (and therefore, already inside the IRFs).

If we all agree with this solution (for the specific case of CTA), should it be written somewhere?

cdeil commented on May 26, 2024

What we have now (a single number DEADC in the EVENTS header) works for all existing instruments, no?
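
For reference, a minimal sketch of what a science tool reads today (the filename is a placeholder; in the current spec, DEADC is, if I read it correctly, the dead time correction factor LIVETIME / ONTIME):

```python
from astropy.io import fits

# Read the single-number dead time bookkeeping from an EVENTS header.
with fits.open("events.fits") as hdulist:
    header = hdulist["EVENTS"].header
    ontime = header["ONTIME"]      # elapsed observation time (s)
    livetime = header["LIVETIME"]  # time the array was sensitive (s)
    deadc = header["DEADC"]        # one number per observation

print(f"livetime = {livetime} s, deadc * ontime = {deadc * ontime} s")
```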

Yes, in the future CTA might have something more complex, but IMO we could just wait for such data to appear and then define a format for it.

@TarekHC - If you want to define something now, I think this would work best: first talk to @GernotMaier and @kosack or others from CTA (probably easiest in a telcon, not via a GitHub issue; this is too complex a topic) to make sure it covers what you expect CTA will need, i.e. what kind of info the CTA MC / pipeline will produce in the future, and then come back with a proposal. Assigning this issue to you.

GernotMaier commented on May 26, 2024

Fine with me.

For reference, here are the relevant requirements for CTA:

A-PERF-0810 System Deadtime
Description:

The fraction of the time that the system as a whole (i.e. elements of the system may be unavailable for longer periods) is unavailable for recording of events, due to inefficiency in data collection, transport and storage, during observations (with telescopes on target) must be <2 %.

A-PERF-2250 Telescope Deadtime
Description:

The fraction of the time that an individual telescope is unavailable for recording of events (due for example to inefficiency in data collection, transport and storage) during observations must be <8 %.

cdeil commented on May 26, 2024

Just a note: there was some discussion that's related to this issue in #97.

I briefly re-read the discussion here, and to be honest I'm not sure any change is needed to what we already have in the DL3 spec for CTA. DL3 is only concerned with the array-level AEFF and BKG rate; per-telescope deadtime concerns are lower-level, and there are no per-telescope parameters in DL3. (So some of the discussion above is relevant for CTA, but it concerns what the MC / pipeline should do to produce DL3, not something that should be stored in DL3 files.)

Of course it's possible to split exposure = area x time in multiple ways, putting the deadtime fraction with either area or time to get the effective exposure that is needed in DL3. But I don't see why any other way to do it would be better than what we already have now (which is to have exposure = area x livetime, with livetime = DEADC x obstime, where DEADC is a single number (not energy-dependent) per time interval). CTA (and other instruments) can keep more info internally, like energy-dependent deadtime fractions or whatever they like for debugging, but there's no point in creating a more complex DL3 spec and having to teach science tool codes about these things, no?

cdeil commented on May 26, 2024

For the record: I've changed my mind on this.

As already mentioned in #97 (comment) my suggestion for IACT data would be to compute AEFF and BKG rate models at the DL3 level, i.e. as given to the science tools, so that they include all dead-time effects, and DEADC=1, i.e. LIVETIME=ONTIME.

This gets rid of the question / confusion of which time to use for the exposure and background model count computation: there is only one time.

It also gets rid of the question of how to store this in DL3, where currently the DEADC key is in the EVENTS header, but the start / stop observation times are in a different FITS HDU, the GTI.

This is a simple and generic solution for the DL3 interface, and it doesn't limit options at all what to do at the MC and pipeline level.

Note that ctools and Gammapy currently handle background rates differently (ctools simulates with a rate per livetime, Gammapy analyses assuming a rate per obstime), and because counts are so high for CTA, these effects are visible in the simulated DC-1 data (see gammapy/gammapy#1842 (comment)).
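
To make the size of that mismatch concrete, a back-of-the-envelope sketch with invented numbers (here DEADC means livetime / ontime):

```python
import math

# The same background rate, interpreted per livetime (ctools) vs
# per obstime (Gammapy), predicts different counts.
bkg_rate = 100.0   # background counts / s (invented)
ontime = 1800.0    # s
deadc = 0.95       # livetime / ontime

counts_per_livetime = bkg_rate * deadc * ontime  # 171000.0 counts
counts_per_obstime = bkg_rate * ontime           # 180000.0 counts
offset = counts_per_obstime - counts_per_livetime
print(f"offset: {offset:.0f} counts, "
      f"~{offset / math.sqrt(counts_per_obstime):.0f} sigma")  # ~21 sigma
```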

@kosack or anyone - any chance to get a decision on this for CTA DL3 or create a decision process so that this and other questions can be decided in CTA soon?

