franpoz / sherlock
Easy and versatile open-source code to explore Kepler, K2 and TESS data in the search for exoplanets

License: MIT License

exoplanets tess kepler

SHERLOCK's Introduction

SHERLOCK is an end-to-end pipeline that allows users to explore data from space-based missions in the search for planetary candidates. It can be used to recover candidates alerted by automatic pipelines such as SPOC and QLP, the so-called Kepler Objects of Interest (KOIs) and TESS Objects of Interest (TOIs), and to search for candidates that remain unnoticed due to detection thresholds, lack of data exploration or poor photometric quality. To this end, SHERLOCK has six different modules to (1) acquire and prepare the light curves from their repositories, (2) search for planetary candidates, (3) vet the interesting signals, (4) perform a statistical validation, (5) model the signals to refine their ephemerides, and (6) compute the observational windows from ground-based observatories to trigger a follow-up campaign. To execute all these modules, the user only needs to fill in an initial YAML file with some basic information, such as the star ID (KIC-ID, EPIC-ID, TIC-ID) and the cadence to be used, and then run a few lines of code sequentially to pass from one step to the next. Alternatively, the user may provide the light curve in a CSV file, where the time, the normalized flux and the flux error are given as comma-separated columns.

Citation

We are currently working on a specific paper for SHERLOCK. In the meantime, the best way to cite SHERLOCK is by referencing the first paper where it was used, Pozuelos et al. (2020):

@ARTICLE{2020A&A...641A..23P,
       author = {{Pozuelos}, Francisco J. and {Su{\'a}rez}, Juan C. and {de El{\'\i}a}, Gonzalo C. and {Berdi{\~n}as}, Zaira M. and {Bonfanti}, Andrea and {Dugaro}, Agust{\'\i}n and {Gillon}, Micha{\"e}l and {Jehin}, Emmanu{\"e}l and {G{\"u}nther}, Maximilian N. and {Van Grootel}, Val{\'e}rie and {Garcia}, Lionel J. and {Thuillier}, Antoine and {Delrez}, Laetitia and {Rod{\'o}n}, Jose R.},
        title = "{GJ 273: on the formation, dynamical evolution, and habitability of a planetary system hosted by an M dwarf at 3.75 parsec}",
      journal = {\aap},
     keywords = {planets and satellites: dynamical evolution and stability, planets and satellites: formation, Astrophysics - Earth and Planetary Astrophysics, Astrophysics - Solar and Stellar Astrophysics},
         year = 2020,
        month = sep,
       volume = {641},
          eid = {A23},
        pages = {A23},
          doi = {10.1051/0004-6361/202038047},
archivePrefix = {arXiv},
       eprint = {2006.09403},
 primaryClass = {astro-ph.EP},
       adsurl = {https://ui.adsabs.harvard.edu/abs/2020A&A...641A..23P},
      adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

Also, you may be interested in having a look at recent papers that used SHERLOCK:
Pozuelos et al. (2023)
Delrez et al. (2022)
Dransfield et al. (2022)
Luque et al. (2022)
Schanche et al. (2022)
Wells et al. (2021)
Benni et al. (2021)
Van Grootel et al. (2021)
Demory et al. (2020)

Full Tutorials

We have conducted dedicated workshops to teach SHERLOCK's usage and best practices. The last one was held in June 2023 at the Instituto de Astrofísica de Andalucía-CSIC. You can find all the material used (Jupyter notebooks, full examples, presentations, etc.) at this link: SHERLOCK Workshop IAA-CSIC. Let us know if you or your lab are interested in the SHERLOCK package! We might organize an introduction and a hands-on session to help you get familiar with the code and/or implement new functionalities.

Main Developers

Active: F.J. Pozuelos, M. Dévora

Additional contributors

A. Thuillier, L. García & L. Cerdeño Mota

Documentation

Please visit https://sherlock-ph.readthedocs.io to get a complete set of explanations and tutorials to get started with SHERLOCK.

Launch

You can run SHERLOCK PIPEline as a standalone package by using:

python3 -m sherlockpipe --properties my_properties.yaml

You only need to provide a YAML file with any of the properties contained in the internal properties.yaml provided by the pipeline. The most important keys to be defined in your YAML file are those under the GLOBAL OBJECTS RUN SETUP and SECTOR OBJECTS RUN SETUP sections, because they contain the object IDs or files to be analysed in the execution. You need to fill in at least one of those keys for the pipeline to do anything. If you still have any doubts, please refer to the examples/properties directory.
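As a loose illustration of such a user file (the key names below are hypothetical; the authoritative names and defaults live in the internal properties.yaml shipped with the pipeline):

```yaml
# Hypothetical sketch: check the pipeline's internal properties.yaml
# for the real key names and accepted values.
TARGETS:
  'TIC 181804752':          # star ID: TIC-ID, KIC-ID or EPIC-ID
    SECTORS: [4, 5]         # sectors/quarters to download
    EXPOSURE_TIME: 120      # cadence, in seconds
```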

Additionally, you might want to inspect only the preparation stage of SHERLOCK. You can execute it without running the analysis phase, so you can inspect the light curve, the periodogram and the initial report to make better decisions when tuning the execution parameters. Just launch SHERLOCK with:

python3 -m sherlockpipe --properties my_properties.yaml --explore

and it will end as soon as it has processed the preparation stages for each object.

Updates

SHERLOCK uses third-party data to know about TOIs, KOIs and EPICs, and to handle FFIs and the vetting process. These data are frequently updated from the active missions, so SHERLOCK performs better when the metadata is refreshed. You can simply run:

python3 -m sherlockpipe.update

and SHERLOCK will download the dependencies. It stores a timestamp of the last refresh to prevent unneeded calls. However, if you know there are newer updates and you need them now, you can call:

python3 -m sherlockpipe.update --force

and SHERLOCK will ignore the timestamp and perform the update. In addition, you might want to wipe all the metadata and build it again. In that case, execute:

python3 -m sherlockpipe.update --clean

This last command implies a forced update, so the last execution time will be ignored too.

You can additionally let SHERLOCK refresh the OIs list before running your current execution by adding the following line to the YAML file:

UPDATE_OIS: True

Vetting

SHERLOCK PIPEline comes with a submodule to examine the most promising transit candidates found by any of its executions. This is done via WATSON, which is capable of vetting TESS and Kepler targets. You can execute the vetting by calling:

python3 -m sherlockpipe.vet --properties my_properties.yaml

This command runs the vetting process with the parameters given in your YAML file. The generated results are stored under the $your_sherlock_object_results_dir/vetting directory. Please go to examples/vetting/ to learn how to set the proper properties for the vetting process.

There is an additional, simplified option to run the vetting. If you are sure there is a candidate in the SHERLOCK results that matches your desired parameters, you can run

python3 -m sherlockpipe.vet --candidate ${theCandidateNumber}

from the SHERLOCK results directory. This execution automatically reads the transit parameters from the SHERLOCK-generated files.

Fitting

SHERLOCK PIPEline comes with another submodule to fit the most promising transit candidates found by any of its executions. This fit is done via the ALLESFITTER code. By calling:

python3 -m sherlockpipe.fit --properties my_properties.yaml

you will run the fitting process with the parameters given in your YAML file. The generated results are stored under the $your_sherlock_object_results_dir/fit directory. Please go to examples/fitting/ to learn how to set the proper properties for the fitting process.

There is an additional, simplified option to run the fit. If you are sure there is a candidate in the SHERLOCK results that matches your desired parameters, you can run

python3 -m sherlockpipe.fit --candidate ${theCandidateNumber}

from the SHERLOCK results directory. This execution automatically reads the transit and star parameters from the SHERLOCK-generated files.

Validation

SHERLOCK PIPEline implements a module to run a statistical validation of a candidate using TRICERATOPS. By calling:

python3 -m sherlockpipe.validate --candidate ${theCandidateNumber}

you will run the validation for one of the Sherlock candidates.

Stability

SHERLOCK PIPEline also implements a module to run a system stability computation using REBOUND and SPOCK. By calling:

python3 -m sherlockpipe.stability --bodies 1,2,4

where the --bodies parameter is the comma-separated set of SHERLOCK accepted signals to be used in the scenario simulations. You can also provide a stability properties file to run a custom stability simulation:

python3 -m sherlockpipe.stability --properties stability.yaml

and you can even combine SHERLOCK accepted signals with some additional bodies provided by the properties file:

python3 -m sherlockpipe.stability --bodies 1,2,4 --properties stability.yaml

The results will be stored in a stability directory containing the execution log and a stability.csv with one line per simulated scenario, sorted from best to worst score.

Observation plan

SHERLOCK PIPEline now also includes a tool to plan your observations from ground-based observatories using astropy and astroplan. Call:

python3 -m sherlockpipe.plan --candidate ${theCandidateNumber} --observatories observatories.csv

from the resulting fit directory, where the precise candidate ephemerides are placed. The observatories.csv file should contain the list of observatories available for your candidate's follow-up. As an example, you can look at this file.

SHERLOCK PIPEline Workflow

It is important to note that SHERLOCK PIPEline uses some CSV files with TOIs, KOIs and EPIC IDs from the TESS, Kepler and K2 missions. Therefore, your first execution of the pipeline might take longer because it will download this information.

Provisioning of light curve

The light curve for every input object needs to be obtained from its mission database. For this we use the high-level API of Lightkurve, which enables downloading the desired light curves for the TESS, Kepler and K2 missions. We also include Full Frame Images from the TESS mission through ELEANOR. We always use the PDCSAP flux from those provided by either of these two packages.

Pre-processing of light curve

In many cases we will find light curves containing several systematics: noise, high dispersion near the borders, high-amplitude periodicities caused by pulsators, fast rotators, etc. SHERLOCK PIPEline provides methods to reduce the most important of these systematics.

Local noise reduction

For local noise, where very close measurements show high deviation from the local trend, we apply a Savitzky-Golay filter. This has proven to significantly increase the SNR of detected transits. This feature can be disabled with a flag.

High RMS areas masking

Sometimes the spacecraft has to perform reaction-wheel momentum dumps by firing thrusters, sometimes there is strong light scattering, and sometimes the spacecraft introduces jitter into the signal. For all of these systematics we found that, in many cases, the data from those regions should be discarded. Thus, SHERLOCK PIPEline includes a binned RMS computation where bins whose RMS is higher than a configurable factor times the median get automatically masked. This feature can be disabled with a flag.
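The core of this masking step can be sketched in a few lines of plain Python (an illustrative re-implementation, not SHERLOCK's actual code; the bin width and factor are the configurable knobs mentioned above):

```python
import math
from statistics import median

def mask_high_rms_bins(time, flux, bin_hours=4.0, factor=1.25):
    """Sketch: flag cadences falling in time bins whose flux RMS
    exceeds factor * (median bin RMS). Times are in days."""
    bin_width = bin_hours / 24.0
    t0 = min(time)
    # Group flux values into fixed-width time bins.
    bins = {}
    for t, f in zip(time, flux):
        bins.setdefault(int((t - t0) / bin_width), []).append(f)
    def rms(values):
        mean = sum(values) / len(values)
        return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    bin_rms = {b: rms(v) for b, v in bins.items()}
    threshold = factor * median(bin_rms.values())
    # True means "mask this cadence".
    return [bin_rms[int((t - t0) / bin_width)] > threshold for t in time]
```

The real pipeline works on the full light-curve arrays, but the decision rule (bin, compute RMS, compare against a factor of the median) is the same idea.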

Input time ranges masking

If enabled, this feature automatically disables High RMS areas masking for the assigned object. The user can input an array of time ranges to be masked into the original signal.

Detrend of high-amplitude periodicities

Our most common foes with high-amplitude periodicities are fast rotators, which introduce a strong sinusoidal-like trend into the PDCSAP signal. This is why SHERLOCK PIPEline includes automatic detection and detrending of high-amplitude periodicities during its preparation stage. This feature can be disabled with a flag.

Input period detrend

If enabled, this feature automatically disables Detrend of high-amplitude periodicities for the assigned object. The user can input a period to be used for an initial detrend of the original signal.

Custom user code

You can even inject your own Python code to perform:

  • A custom signal preparation task by implementing the CurvePreparer class that we provide. Then, inject your Python file into the CUSTOM_PREPARER property and let SHERLOCK use your code.
  • A custom best-signal selection algorithm by implementing the SignalSelector class that we provide. Then, inject your Python file into the CUSTOM_ALGORITHM property and let SHERLOCK use your code.
  • A custom search zone definition by implementing the SearchZone class that we provide. Then, inject your Python file into the CUSTOM_SEARCH_ZONE property and let SHERLOCK use your code.
  • Custom search modes: 'tls', 'bls', 'grazing', 'comet' or 'custom'. You can search for transits using TLS, BLS, TLS with a grazing template, TLS with a comet template, or even inject your own custom transit template (currently an experimental feature).

For a better understanding of usage, please see the examples, which reference custom implementations that you can inspect in our custom algorithms directory.
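As a loose illustration of the preparer idea (the real CurvePreparer base class and its method signature live in SHERLOCK's sources and may differ from this hypothetical sketch):

```python
# Hypothetical sketch: the actual CurvePreparer interface belongs to
# sherlockpipe and may expect different method names and arguments.
class MedianNormalizePreparer:
    def prepare(self, time, flux):
        """Example preparation step: normalize the flux by its median."""
        sorted_flux = sorted(flux)
        n = len(sorted_flux)
        med = (sorted_flux[n // 2] if n % 2
               else (sorted_flux[n // 2 - 1] + sorted_flux[n // 2]) / 2)
        return time, [f / med for f in flux]
```

Whatever the exact interface, the point is the same: SHERLOCK hands your class the light curve, and your code returns the transformed version it should search on.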


SHERLOCK's Issues

Vetting: Add transits plot with y axis limited by depth

As LATTE plots are not very useful for shallow transits, we might add our own custom plots of the transits with the flux values limited by a depth value. That way we can focus the view on the values we really want to explore.

Add a proper fit of the promising results

After a detection by SHERLOCK, the solution should be improved by a proper fit to obtain the best possible ephemerides for ground-based follow-up. We might use Allesfitter/Juliet/Exofast for that.

[Bug] in reporting border score

When running TIC 467179258, in the 3rd run a signal is detected for which all the transits fall at the borders. However, the border score indicates that they do not (border score = 1).
This needs further exploration.

Run_3_PDCSAP-FLUX_MIS_TIC 467179528_all

win_size    Period    Per_err   N.Tran  Mean Depth (ppt)  T. dur (min)  T0            SNR           SDE           FAP           Border_score      
PDCSAP_FLUX 32.01687  0.104577  1       1.899             138.0         1710.2785     8.614         24.781        8.0032e-05    1.00 


matplotlib version

To run the vetting via the LATTE package, an old version of matplotlib must be used. We might use the same version for the whole of SHERLOCK to avoid needing a virtual env for the vetting part.

Add a second fit guess after TLS best transit signal is returned

One of the analysis points not yet solved by SHERLOCK is the visual processing of every detrend result for each run. We could add a new mechanism to reject false positives returned by TLS by analyzing the environment of the found transit. For instance, we could assess sinusoidal trends, high signal variability, empty measurements around the transit...

Proper solution via fit with Allesfitter

To obtain the best possible solution once we have the TLS results, we need to fit the data properly. We choose Allesfitter.

We will need three files:

1) photometric file: TESS.csv
It should be a comma-separated CSV file with three columns: # time, flux, flux_err

# time,flux,flux_err
0.000000000000000000e+00,1.000993428306022448e+00,2.000000000000000042e-03
1.041666666666666609e-02,9.997246288021938154e-01,2.000000000000000042e-03
2.083333333333333218e-02,1.001297691868046513e+00,2.000000000000000042e-03

2) settings.csv
This file can be constant for all our trials; in principle there is no need to change many things. It looks like:

#name,value
###############################################################################,
# General settings,
###############################################################################,
companions_phot,b
companions_rv,
inst_phot,Leonardo
inst_rv,
###############################################################################,
# Fit performance settings,
###############################################################################,
multiprocess,True
multiprocess_cores,4
fast_fit,True
fast_fit_width,0.3333333333333333
secondary_eclipse,False
phase_curve,False
shift_epoch,True
inst_for_b_epoch,all
###############################################################################,
# MCMC settings,
###############################################################################,
mcmc_nwalkers,100
mcmc_total_steps,2000
mcmc_burn_steps,1000
mcmc_thin_by,1
###############################################################################,
# Nested Sampling settings,
###############################################################################,
ns_modus,dynamic
ns_nlive,500
ns_bound,single
ns_sample,rwalk
ns_tol,0.01
###############################################################################,
# Limb darkening law per object and instrument,
# if 'lin' one corresponding parameter called 'ldc_q1_inst' has to be given in params.csv,
# if 'quad' two corresponding parameter called 'ldc_q1_inst' and 'ldc_q2_inst' have to be given in params.csv,
# if 'sing' three corresponding parameter called 'ldc_q1_inst'; 'ldc_q2_inst' and 'ldc_q3_inst' have to be given in params.csv,
###############################################################################,
host_ld_law_Leonardo,quad
###############################################################################,
# Baseline settings per instrument,
# baseline params per instrument: sample_offset / sample_linear / sample_GP / hybrid_offset / hybrid_poly_1 / hybrid_poly_2 / hybrid_poly_3 / hybrid_pol_4 / hybrid_spline / hybrid_GP,
# if 'sample_offset' one corresponding parameter called 'baseline_offset_key_inst' has to be given in params.csv,
# if 'sample_linear' two corresponding parameters called 'baseline_a_key_inst' and 'baseline_b_key_inst' have to be given in params.csv,
# if 'sample_GP' two corresponding parameters called 'baseline_gp1_key_inst' and 'baseline_gp2_key_inst' have to be given in params.csv,
###############################################################################,
baseline_flux_Leonardo,hybrid_offset
###############################################################################,
# Error settings per instrument,
# errors (overall scaling) per instrument: sample / hybrid,
# if 'sample' one corresponding parameter called 'log_err_key_inst' (photometry) or 'log_jitter_key_inst' (RV) has to be given in params.csv,
###############################################################################,
error_flux_Leonardo,sample
###############################################################################,
# Exposure times for interpolation,
# needs to be in the same units as the time series,
# if not given the observing times will not be interpolated leading to biased results,
###############################################################################,
t_exp_Leonardo,
###############################################################################,
# Number of points for exposure interpolation,
# Sample as fine as possible; generally at least with a 2 min sampling for photometry,
# n_int=5 was found to be a good number of interpolation points for any short photometric cadence t_exp;,
# increase to at least n_int=10 for 30 min phot. cadence,
# the impact on RV is not as drastic and generally n_int=5 is fine enough,
###############################################################################,
t_exp_n_int_Leonardo,
###############################################################################,
# Number of spots per object and instrument,
###############################################################################,
host_N_spots_Leonardo,
###############################################################################,
# Number of flares (in total),
###############################################################################,
N_flares,
###############################################################################,
# TTVs,
###############################################################################,
fit_ttvs,False
###############################################################################,
# Stellar grid per object and instrument,
###############################################################################,
host_grid_Leonardo,default
###############################################################################,
# Stellar shape per object and instrument,
###############################################################################,
host_shape_Leonardo,sphere
###############################################################################,
# Flux weighted RVs per object and instrument,
# ("Yes" for Rossiter-McLaughlin effect),
###############################################################################,

3) params.csv
The main parameters that need to be modified here from candidate to candidate are:

b_rr,0.1,1,trunc_normal 0 1 0.1 0.05,$R_b / R_\star$,
b_rsuma,0.2,1,trunc_normal 0 1 0.2 1.5,$(R_\star + R_b) / a_b$,
b_cosi,0.0,1,uniform 0.0 0.03,$\cos{i_b}$,
b_epoch,1.09,1,trunc_normal -1000000000000.0 1000000000000.0 1.09 0.05,$T_{0;b}$,$\mathrm{BJD}$
b_period,3.41,1,trunc_normal -1000000000000.0 1000000000000.0 3.41 0.05,$P_b$,$\mathrm{d}$
b_f_c,0.0,0,trunc_normal -1 1 0.0 0.0,$\sqrt{e_b} \cos{\omega_b}$,
b_f_s,0.0,0,trunc_normal -1 1 0.0 0.0,$\sqrt{e_b} \sin{\omega_b}$,

The full file looks like:

#name,value,fit,bounds,label,unit
#companion b astrophysical params,,,,,
b_rr,0.1,1,trunc_normal 0 1 0.1 0.05,$R_b / R_\star$,
b_rsuma,0.2,1,trunc_normal 0 1 0.2 1.5,$(R_\star + R_b) / a_b$,
b_cosi,0.0,1,uniform 0.0 0.03,$\cos{i_b}$,
b_epoch,1.09,1,trunc_normal -1000000000000.0 1000000000000.0 1.09 0.05,$T_{0;b}$,$\mathrm{BJD}$
b_period,3.41,1,trunc_normal -1000000000000.0 1000000000000.0 3.41 0.05,$P_b$,$\mathrm{d}$
b_f_c,0.0,0,trunc_normal -1 1 0.0 0.0,$\sqrt{e_b} \cos{\omega_b}$,
b_f_s,0.0,0,trunc_normal -1 1 0.0 0.0,$\sqrt{e_b} \sin{\omega_b}$,
#dilution per instrument,,,,,
dil_Leonardo,0.0,0,trunc_normal 0 1 0.0 0.0,$D_\mathrm{0; Leonardo}$,
#limb darkening coefficients per instrument,,,,,
host_ldc_q1_Leonardo,0.5,1,uniform 0.0 1.0,$q_{1; \mathrm{Leonardo}}$,
host_ldc_q2_Leonardo,0.5,1,uniform 0.0 1.0,$q_{2; \mathrm{Leonardo}}$,
#surface brightness per instrument and companion,,,,,
b_sbratio_Leonardo,0.0,0,trunc_normal 0 1 0.0 0.0,$J_{b; \mathrm{Leonardo}}$,
#albedo per instrument and companion,,,,,
host_geom_albedo_Leonardo,0.0,0,trunc_normal 0 1 0.0 0.0,$A_{\mathrm{geom}; host; \mathrm{Leonardo}}$,
b_geom_albedo_Leonardo,0.0,0,trunc_normal 0 1 0.0 0.0,$A_{\mathrm{geom}; b; \mathrm{Leonardo}}$,
#gravity darkening per instrument and companion,,,,,
host_gdc_Leonardo,0.0,0,trunc_normal 0 1 0.0 0.0,$Grav. dark._{b; \mathrm{Leonardo}}$,
#spots per instrument and companion,,,,,
#errors per instrument,
log_err_flux_Leonardo,-7.0,1,uniform -15.0 0.0,$\log{\sigma_\mathrm{Leonardo}}$,$\log{ \mathrm{rel. flux.} }$
#baseline per instrument,
baseline_gp_matern32_lnsigma_flux_Leonardo,0.0,1,uniform -15.0 15.0,$\mathrm{gp: \ln{\sigma} (Leonardo)}$,
baseline_gp_matern32_lnrho_flux_Leonardo,0.0,1,uniform -15.0 15.0,$\mathrm{gp: \ln{\rho} (Leonardo)}$,

We need a function that takes the main results from TLS and generates the three files to run the allesfitter fit.

To launch the fit we just need to run run.py, which looks like:

import allesfitter
allesfitter.show_initial_guess('allesfit')  # plot the initial guess over the data
allesfitter.ns_fit('allesfit')              # run the nested-sampling fit
allesfitter.ns_output('allesfit')           # generate the result tables and plots
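The photometry file, for instance, could be produced by a small helper along these lines (an illustrative sketch, not part of SHERLOCK; the function name is made up):

```python
import csv

def write_alles_photometry(path, time, flux, flux_err):
    # Illustrative helper: dump a light curve into the comma-separated
    # photometry file (time, flux, flux_err) expected by allesfitter.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["# time", "flux", "flux_err"])
        for row in zip(time, flux, flux_err):
            writer.writerow([f"{value:.18e}" for value in row])
```

Similar helpers would fill in settings.csv and params.csv from the TLS period, epoch and depth.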

warning for the YAML files

We need to update how we load YAML files. The current version yields this warning:

/usr/local/lib/python3.6/dist-packages/sherlockpipe/__main__.py:16: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  sherlock_user_properties = yaml.load(open(resources_dir + "/" + 'properties.yaml'))
/usr/local/lib/python3.6/dist-packages/sherlockpipe/__main__.py:17: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  user_properties = yaml.load(open(args.properties))
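The fix is to switch to yaml.safe_load (or pass an explicit Loader), which silences the warning and avoids the unsafe default loader; a minimal before/after sketch:

```python
import io
import yaml  # PyYAML

# Deprecated pattern (emits YAMLLoadWarning; the default Loader is unsafe):
#   properties = yaml.load(stream)
# Fixed: safe_load uses SafeLoader, which only builds plain Python objects.
stream = io.StringIO("SNR_MIN: 8\nUPDATE_OIS: True\n")
properties = yaml.safe_load(stream)
```

Both calls in __main__.py would get the same one-line change.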

[BUG] cannot convert float NaN to integer

It seems like a TLS issue, but I am creating it here until we confirm it. It happened in the first run of this execution:

sherlock = Sherlock([]).setup_detrend(initial_rms_threshold=2.5, initial_rms_bin_hours=5)\
         .setup_transit_adjust_params(snr_min=8, period_min=0.5, period_max=70, max_runs=8)\
         .load_ois().filter_hj_ois().limit_ois(7, 8).run()

The exception was:

Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/home/martin/.local/lib/python3.6/site-packages/transitleastsquares/core.py", line 153, in search_period
    duration_min_in_samples = int(numpy.floor(duration_min * len(y)))
ValueError: cannot convert float NaN to integer

Auto-detrend for specific cases: fast rotators, pulsators, etc

Right now auto-detrend only works for one period. It would be a great addition to detrend or subtract pulsations from pulsating stars. For this we can follow the asteroseismology examples from Lightkurve. To do that, we need to automatically characterize the kind of object we are processing in order to select the proper detrending algorithm.

For fast rotators, auto-detrend should be done only for the strongest period, not for the shortest one within a range of strong periods.

Consider [sub]harmonics detection in the same Run

Harmonics, subharmonics and the source signal can be detected by several detrends in the same run. Maybe we can enhance the signal scoring by detecting whether a found signal is a harmonic or subharmonic of others found in the same run.
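The ratio test behind such a check can be sketched as follows (an illustrative helper, not existing SHERLOCK code; the tolerance and maximum order are arbitrary choices):

```python
def is_harmonic(period, reference, tolerance=0.01, max_order=4):
    """Sketch: flag `period` as a harmonic or subharmonic of `reference`
    if their ratio (either way round) is close to a small integer."""
    for n in range(1, max_order + 1):
        for ratio in (period / reference, reference / period):
            # Relative distance of the ratio from the integer n.
            if abs(ratio - n) / n < tolerance:
                return True
    return False
```

A scorer could then down-weight signals for which is_harmonic() is true against a stronger signal from the same run.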

Modify the outputs names of work folders

The current nomenclature for working folders is MIS_TIC XXXXX, which is no longer convenient.

Let's move to a new one such as TICXXXXX_2MIN or TICXXXXXX_FFI, etc.

[BUG] Sometimes model curve doesn't match the transit parameters

Example for TIC 299798795:
Run_1_PDCSAP-FLUX_MIS_TIC 299798795_all
tls_results.folded_phase[np.argwhere(tls_results.folded_phase_model < 1)]
This returns a range between 0.495 and 0.505 for the period of 4.17, which would correspond to a duration of approximately 60 minutes. However, the duration returned by TLS is 13 minutes.
Possibly a bug in transitleastsquares.

Add errors to the provisioning file for fitting

The current version of the file that one needs for the fitting looks like:

# This example YAML file illustrates the provisioning of planet and star parameters. The star mass here is important
# because the semi-major axis is not being provided and thus, it'd need to be calculated from the period and the M_star.
settings:
  cpus: 7
planet:
  id: 231663901
  period: 1.4309133565904666
  t0: 1326.004206688168
  rp_rs: 0.10455530183261129
star:
  R_star: 1.02
  M_star: 1.05

But we also need to use the errors given by TLS, at least for the period. For T0, TLS does not provide any error, but I would say we can use a reasonable value of ±0.01 days.
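One possible shape for the extended file (the *_err key names and values below are hypothetical; whichever names the fit module actually reads would apply):

```yaml
planet:
  id: 231663901
  period: 1.4309133565904666
  period_err: 0.0008    # hypothetical key: period error reported by TLS
  t0: 1326.004206688168
  t0_err: 0.01          # hypothetical key: assumed +-0.01 d, as TLS gives none
  rp_rs: 0.10455530183261129
```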

Add more parameters to fit with allesfitter

We are using only the essential parameters for the fit. We could add more, such as the limb-darkening parameters; many others might not be critical for the fit but could help increase its accuracy.

Save the time and flux for future steps

Save the time, flux and flux_err after SHERLOCK operations in a CSV file so the vetting and fitting steps can proceed afterwards.

For a given signal that we want to vet and fit, we need as inputs its period and T0, which have been provided by SHERLOCK previously.

Binning in the detrends plots

Remove the binning plot from the detrending plots.
It does not offer much information; it is better to keep only the corrected un-binned plots.

Modify the layout of the detrend plots

The current layout of the applied detrends is not very useful when many sectors are available, and even worse when the sectors are separated. For example:

Detrends_run_1_MIS_TIC 467179528_all

We should think about a more reader-friendly layout. For example:

n=10_TIC_57984826_Sp0020+3305

LATTE vetting printing exception when executed for object without TOI

Example:
vet.py --object_dir MIS_TIC_307210830_all --candidate 5

Traceback (most recent call last):
  File "/home/martin/git_repositories/sherlockpipe/sherlockpipe/vet.py", line 162, in __process
    dec, self.args)
  File "/home/martin/git_repositories/sherlockpipe/watson/lib/python3.6/site-packages/LATTE/LATTEbrew.py", line 363, in brew_LATTE
    ldv.LATTE_DV(tic, indir, syspath, transit_list, sectors_all, target_ra, target_dec, tessmag, teff, srad, [0], [0], tpf_corrupt, astroquery_corrupt, FFI = False,  bls = False, model = model, mpi = args.mpi)
  File "/home/martin/git_repositories/sherlockpipe/watson/lib/python3.6/site-packages/LATTE/LATTE_DV.py", line 89, in LATTE_DV
    TOI = (float(TOIpl["Full TOI ID"]))
  File "/home/martin/git_repositories/sherlockpipe/watson/lib/python3.6/site-packages/pandas/core/series.py", line 129, in wrapper
    raise TypeError(f"cannot convert the series to {converter}")
TypeError: cannot convert the series to <class 'float'>
couldn't download TP but continue anyway

Run storage in folders

We are generating a considerable number of plots in the results. It is more practical to save each SHERLOCK run independently in its own folder.

Add scoring algorithm based on all the detrends power spectra

tls_results.power represents the SDE vs. period of every detrended signal. We could look for every found signal's period in the power spectra of the other detrended signals to check whether it is also strong there (even if not the strongest for its source signal).
