# Gravitational wave detection in real noise timeseries using deep residual neural networks

This repository contains the method submitted by our team, Virgo-AUTH, to the MLGWSC mock data challenge. Our method consists of the following components:
- A whitening module implemented in PyTorch
- An adaptive normalization layer (DAIN) for non-stationary timeseries
- A 54-layer ResNet backbone
- Training on over 600,000 samples
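As an illustration of the second component, the sketch below shows an adaptive input normalization layer in the spirit of DAIN (Deep Adaptive Input Normalization), with its three sub-stages: adaptive shift, adaptive scale, and gating. This is a hedged re-implementation for illustration only; the class name, layer sizes, and numerical guards are assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn


class DAIN(nn.Module):
    """Minimal sketch of a DAIN-style adaptive normalization layer
    for non-stationary timeseries of shape (batch, channels, time)."""

    def __init__(self, n_channels: int):
        super().__init__()
        # Learnable linear maps that adapt per-channel statistics
        self.shift = nn.Linear(n_channels, n_channels, bias=False)
        self.scale = nn.Linear(n_channels, n_channels, bias=False)
        self.gate = nn.Linear(n_channels, n_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 1) Adaptive shift: subtract a learned function of the channel means
        mean = x.mean(dim=2)
        x = x - self.shift(mean).unsqueeze(2)
        # 2) Adaptive scale: divide by a learned function of the channel stds
        #    (clamped to stay positive; a guard, not part of the original design)
        std = x.std(dim=2) + 1e-8
        x = x / self.scale(std).unsqueeze(2).clamp(min=1e-8)
        # 3) Gating: suppress channels via a learned sigmoid gate
        gate = torch.sigmoid(self.gate(x.mean(dim=2)))
        return x * gate.unsqueeze(2)
```

In training, the layer's parameters are updated jointly with the rest of the network, so the normalization adapts to the non-stationary noise rather than using fixed dataset statistics.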
TBA
The `deploy_model.py` script is the main entry point for testing. Following the rules of the competition, it expects exactly two arguments: the input foreground/background file (generated by `generate_data.py`) and the path of the file where the resulting predictions (events) will be saved.
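The two-argument interface described above can be sketched as follows; `build_parser` and the help strings are illustrative placeholders, not the repository's actual code.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Hypothetical sketch of deploy_model.py's command-line interface."""
    parser = argparse.ArgumentParser(
        description="Run the trained network over a strain file and "
                    "write the detected events.")
    parser.add_argument("input_file",
                        help="Foreground or background HDF file produced "
                             "by generate_data.py")
    parser.add_argument("output_file",
                        help="HDF file to which the predicted events are "
                             "written")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args.input_file, args.output_file)
```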
An example of its usage can be found in the `run_on_test.sh` script, explained below:
```bash
# Create the directory where results will be saved
mkdir results

# Run the model on the foreground file, using GPU 0, and save the results
CUDA_VISIBLE_DEVICES="0" python3 deploy_model.py dataset-4/v2/test_foreground_s4w6_1.hdf results/test_fgevents_s4w6_1.hdf

# Run the model on the background file, using GPU 0, and save the results
CUDA_VISIBLE_DEVICES="0" python3 deploy_model.py dataset-4/v2/test_background_s4w6_1.hdf results/test_bgevents_s4w6_1.hdf

# The following scripts can be found in the original competition repository:
# https://github.com/gwastro/ml-mock-data-challenge-1
# They are required for evaluation and visualization of the results.
./evaluate.py --injection-file dataset-4/v2/test_injections_s4w6_1.hdf --foreground-events results/test_fgevents_s4w6_1.hdf --foreground-files dataset-4/v2/test_foreground_s4w6_1.hdf --background-events results/test_bgevents_s4w6_1.hdf --output-file results/test_eval_output_s4w6_1.hdf --verbose
python3 contributions/sensitivity_plot.py --files results/test_eval_output_s4w6_1.hdf --output results/test_eval_output_s4w6_1_plot.png --no-tex
```
If you use this work, please cite the accompanying paper (in preparation):

```
@article{tba, title={In writing}, author={}, journal={}, year={} }
```