This is code to reproduce key results and figures from the article: Latent embedding based on a transcription-decay decomposition of mRNA dynamics using self-supervised CoxPH.
Autoencoders are trained using different loss functions.
We compare latent space representations of the models by evaluating their performance on three downstream tasks.
The code was tested on Ubuntu 20.04.4 LTS and macOS 13.1 with Python version 3.10.8.
Follow these steps to prepare the environment:
- Clone the repository
git clone https://github.com/MartinSpendl/DiscoveryScience24-paper.git
cd DiscoveryScience24-paper
- Install the required packages
# using pip in a virtual environment
pip install -r requirements.txt
# using Conda
conda create --name <env_name> --file requirements.txt
conda activate <env_name>
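After installing, it can be useful to confirm that the active interpreter matches the tested Python version. This is a small optional sketch, not part of the repository:

```python
import sys

def check_python(min_version=(3, 10)):
    """Return True when the running interpreter meets the tested minimum."""
    return sys.version_info[:2] >= min_version

if __name__ == "__main__":
    if check_python():
        print("Python version OK:", sys.version.split()[0])
    else:
        print("Warning: the code was tested with Python 3.10.8")
```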
Data used for the analysis is publicly accessible. Download the files into the data/raw folder.
TCGA datasets from UCSC Xena portal: https://xenabrowser.net/datapages/
For clustering, load the Illumina gene expression and Phenotype data.
For survival, load the Illumina gene expression and Survival data.
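Xena expression matrices are tab-separated text files with gene identifiers in the first column and one column per TCGA sample barcode. A stdlib-only sketch of parsing and transposing such a file follows; the inline data and the samples-as-feature-vectors orientation are illustrative assumptions, not the repository's loading code:

```python
import csv
import io

# Tiny stand-in for a Xena-style expression TSV: the first column holds
# gene identifiers, the remaining columns are TCGA sample barcodes.
xena_tsv = """sample\tTCGA-01\tTCGA-02
GENE1\t2.31\t0.57
GENE2\t5.10\t4.98
"""

reader = csv.reader(io.StringIO(xena_tsv), delimiter="\t")
header = next(reader)
samples = header[1:]            # sample barcodes from the header row
genes, rows = [], []
for row in reader:
    genes.append(row[0])
    rows.append([float(v) for v in row[1:]])

# Transpose so each sample becomes one feature vector (genes as features),
# which is the orientation most model-training code expects.
by_sample = {s: [rows[g][i] for g in range(len(genes))]
             for i, s in enumerate(samples)}
```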
Download multi-omic data from the cBioPortal.
Download gene expression data and drug screening data from the Genomics of Drug Sensitivity in Cancer.
L1000 geneset from the GEO project GSE92742 is already provided in the genesets folder.
First, run all the scripts in the /scripts directory from the repository root:
python scripts/model-training-5-CV-CCLE-METABRIC.py --hyper-parameter-optimization
python scripts/model-training-5-CV-TCGA.py --hyper-parameter-optimization
python scripts/model-training-clustering.py --hyper-parameter-optimization
Note that due to hyper-parameter optimization, the training can take from several days to weeks if CUDA is not enabled.
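The `5-CV` in the script names refers to five-fold cross-validation. A stdlib-only sketch of how such disjoint folds are typically formed; this mirrors the idea only, not the authors' exact splitting code:

```python
import random

def five_fold_indices(n_samples, n_folds=5, seed=0):
    """Shuffle sample indices and partition them into n_folds disjoint folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # fixed seed for reproducibility
    return [idx[i::n_folds] for i in range(n_folds)]

# Each fold serves once as the held-out test set; the rest form the training set.
folds = five_fold_indices(10)
for test_fold in folds:
    train = [i for f in folds if f is not test_fold for i in f]
```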
Second, run the notebooks in the /notebooks directory.
Figures from the notebooks are stored in the /figures directory.