
mahmoodlab / porpoise

Pan-Cancer Integrative Histology-Genomic Analysis via Multimodal Deep Learning - Cancer Cell

Home Page: http://pancancer.mahmoodlab.org

License: GNU General Public License v3.0

Languages: Python 29.43%, Jupyter Notebook 70.57%
Topics: histology, wsi-images, pan-cancer-analysis, genomics, mahmoodlab

porpoise's People

Contributors: faisalml, richarizardd


porpoise's Issues

The compressed files in 'datasets_csv_mutsig' cannot be uncompressed.

Hello, and thank you for generously sharing your work. I could not find the tcga-coadread file in the "datasets_csv" folder, but fortunately I found it in "datasets_csv_mutsig". 1. Do these two folders contain the same type of files? 2. After downloading "datasets_csv_mutsig" to my local machine and unzipping it (with WinRAR), the compressed files inside could not be uncompressed further; WinRAR reports that the file format is unknown or damaged. Can you give me some advice? Thank you.

loss computation

Hi, thanks a lot for sharing your code.

I have a question on the following code:

loss = (1 - alpha) * neg_l + alpha * uncensored_loss

neg_l = censored_loss + uncensored_loss
if alpha is not None:
    loss = (1 - alpha) * neg_l + alpha * uncensored_loss

which can be expanded as:

loss = (1 - alpha) * (censored_loss + uncensored_loss) + alpha * uncensored_loss
     = (1 - alpha) * censored_loss + uncensored_loss - alpha * uncensored_loss + alpha * uncensored_loss
     = (1 - alpha) * censored_loss + uncensored_loss

Is this reasonable?
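
For reference, here is a minimal sketch of how such an alpha-weighted discrete-time NLL survival loss can be written. Variable names follow the snippet above, but the tensor shapes, the clamping epsilon, and the exact censored/uncensored terms are assumptions for illustration rather than the repository's verbatim code.

import torch

def nll_surv_loss(hazards, S, Y, c, alpha=0.0, eps=1e-7):
    # hazards: (batch, n_bins) discrete hazards h(t|x); S: (batch, n_bins) survival S(t|x)
    # Y: (batch, 1) ground-truth time bin (long); c: (batch, 1) censorship status (1 = censored)
    Y = Y.view(-1, 1)
    c = c.view(-1, 1).float()
    S_padded = torch.cat([torch.ones_like(c), S], dim=1)  # prepend S(0) = 1
    # uncensored patients: negative log of surviving up to bin Y and having the event in bin Y
    uncensored_loss = -(1 - c) * (torch.log(torch.gather(S_padded, 1, Y).clamp(min=eps))
                                  + torch.log(torch.gather(hazards, 1, Y).clamp(min=eps)))
    # censored patients: negative log of surviving through bin Y
    censored_loss = -c * torch.log(torch.gather(S_padded, 1, Y + 1).clamp(min=eps))
    neg_l = censored_loss + uncensored_loss
    # the line asked about: algebraically equal to (1 - alpha) * censored_loss + uncensored_loss
    loss = (1 - alpha) * neg_l + alpha * uncensored_loss
    return loss.mean()

Written this way, alpha only down-weights the censored term, which is exactly what the expansion above shows.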

Partition of the continuous time scale

Hi,

Thank you for sharing your work on survival analysis and the survival losses.
I've read this repo as well as MCAT along with your papers and I have one question:

Is there any rationale for using quartiles to partition the continuous time scale, or can this be task-dependent? Have you tried any other partitioning?
Thank you
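
For context, here is a minimal sketch of quartile-based discretization of continuous survival times into four bins with pandas. Computing the cut points on uncensored patients only and the column names are assumptions for illustration, not necessarily what the repository does.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "survival_months": [3.1, 8.4, 12.0, 25.7, 40.2, 60.5, 5.0, 18.3],
    "censorship": [0, 0, 1, 0, 1, 0, 0, 1],  # 1 = censored
})

uncensored = df[df["censorship"] == 0]
# quartile boundaries of the observed event times
_, bin_edges = pd.qcut(uncensored["survival_months"], q=4, retbins=True, labels=False)
bin_edges[0], bin_edges[-1] = -np.inf, np.inf  # widen the outer edges to cover all patients
# assign every patient (censored or not) to one of the four discrete time bins
df["time_bin"] = pd.cut(df["survival_months"], bins=bin_edges, labels=False, include_lowest=True)

Other partitions (for example a different number of quantiles) would only change q and the resulting number of output classes.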

Missing yaml file

Hi,
Thanks for the great work.
I noticed that the yaml file is missing from the 'docs' directory.
Could you please add it?

What is the Fusion method used in article?

Hello,
In the code there are three fusion modes, i.e. 'concat', 'bilinear', and 'lrb'. The paper mentions "a multimodal fusion layer that computes the Kronecker Product to model pairwise feature interactions between histology and molecular features". Which of these is the exact fusion method used in the paper?

pretrained model weights

Hello and thank you for sharing your work!
Could you please provide the last checkpoint for the pretrained model?
Thank you in advance, Lucia

Data preprocessing

I noticed that your description of the molecular feature selection is "For RNA-Seq abundance, we selected the top 2000 genes with the largest median absolute deviation for inclusion". Why were genes with a larger median absolute deviation selected as the SNN input?
As far as I know, the larger the median absolute deviation of a set of data, the higher the probability that it contains outliers, so in general we would choose data with a low median absolute deviation. I was hoping you could explain this to me. Thank you!
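
As a point of reference, a minimal sketch of the feature-selection step quoted above: keep the 2000 genes with the largest median absolute deviation (MAD) across samples. The DataFrame layout (samples as rows, genes as columns) and the file name are assumptions for illustration.

import pandas as pd

# rna: samples as rows, genes as columns (hypothetical input file)
rna = pd.read_csv("rnaseq_abundance.csv", index_col=0)

# per-gene median absolute deviation across samples (unscaled MAD)
mad_per_gene = (rna - rna.median(axis=0)).abs().median(axis=0)
top_genes = mad_per_gene.nlargest(2000).index
rna_selected = rna[top_genes]  # features passed to the genomic (SNN) branch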

EarlyStopping issue?

Pardon me if there is some gap in my understanding of the code, but shouldn't there be something like

if stop:
    break

after these lines in PORPOISE/utils/core_utils.py (in the train function) for EarlyStopping to work?

for epoch in range(args.max_epochs):
        if args.task_type == 'survival':
            if args.mode == 'coattn':
                train_loop_survival_coattn(epoch, model, train_loader, optimizer, args.n_classes, writer, loss_fn, reg_fn, args.lambda_reg, args.gc)
                stop = validate_survival_coattn(cur, epoch, model, val_loader, args.n_classes, early_stopping, monitor_cindex, writer, loss_fn, reg_fn, args.lambda_reg, args.results_dir)
            else:
                train_loop_survival(epoch, model, train_loader, optimizer, args.n_classes, writer, loss_fn, reg_fn, args.lambda_reg, args.gc)
                stop = validate_survival(cur, epoch, model, val_loader, args.n_classes, early_stopping, monitor_cindex, writer, loss_fn, reg_fn, args.lambda_reg, args.results_dir)
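
A self-contained illustration of the pattern the issue proposes: the validation step returns a stop flag, and the epoch loop must break on it for EarlyStopping to have any effect. The names here are placeholders, not the repository's functions.

# Minimal sketch: without the break, training always runs all max_epochs epochs
# regardless of what the validation/EarlyStopping step returns.
def validate(epoch):
    return epoch >= 3  # pretend EarlyStopping fires after epoch 3

max_epochs = 20
for epoch in range(max_epochs):
    stop = validate(epoch)
    if stop:
        break  # proposed addition: exit the epoch loop once EarlyStopping has triggered
print(f"stopped after epoch {epoch}")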

TypeError: __call__() got an unexpected keyword argument 'hazards'

Hello, I have used CLAM before, so I am very interested in this project. However, this issue has been troubling me for several days. I attempted to replicate the process using TCGA-STAD data on PORPOISE. I used CLAM for feature extraction, and followed all the other steps as described in the Readme.md. I utilized the tcga_stad_all_clean.csv.zip file from the dataset_csv_mutsig folder, and I also selected the corresponding tcga_stad split. Despite this, I am still encountering the following error.

Below is the error log:

CUDA_VISIBLE_DEVICES=0 python main.py --which_splits 5foldcv --split_dir tcga_stad --mode coattn --reg_type pathomic --model_type mcat --apply_sig --fusion bilinear
/home/aletolia/anaconda3/envs/porpoise/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/aletolia/anaconda3/envs/porpoise/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/aletolia/anaconda3/envs/porpoise/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/aletolia/anaconda3/envs/porpoise/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/aletolia/anaconda3/envs/porpoise/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/aletolia/anaconda3/envs/porpoise/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Experiment Name: tcga_stad_MCAT_nll_surv_a0.0_pathomicreg1e-05_5foldcv_gc32_bilinear

Load Dataset
(0, 0) : 0
(0, 1) : 1
(1, 0) : 2
(1, 1) : 3
(2, 0) : 4
(2, 1) : 5
(3, 0) : 6
(3, 1) : 7
label column: survival_months
label dictionary: {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3, (2, 0): 4, (2, 1): 5, (3, 0): 6, (3, 1): 7}
number of classes: 8
slide-level counts:
7 117
5 51
4 35
6 35
2 35
3 13
0 35
1 28
Name: label, dtype: int64
Patient-LVL; Number of samples registered in class 0: 35
Slide-LVL; Number of samples registered in class 0: 35
Patient-LVL; Number of samples registered in class 1: 28
Slide-LVL; Number of samples registered in class 1: 28
Patient-LVL; Number of samples registered in class 2: 35
Slide-LVL; Number of samples registered in class 2: 35
Patient-LVL; Number of samples registered in class 3: 13
Slide-LVL; Number of samples registered in class 3: 13
Patient-LVL; Number of samples registered in class 4: 35
Slide-LVL; Number of samples registered in class 4: 35
Patient-LVL; Number of samples registered in class 5: 51
Slide-LVL; Number of samples registered in class 5: 51
Patient-LVL; Number of samples registered in class 6: 35
Slide-LVL; Number of samples registered in class 6: 35
Patient-LVL; Number of samples registered in class 7: 117
Slide-LVL; Number of samples registered in class 7: 117
label column: survival_months
label dictionary: {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3, (2, 0): 4, (2, 1): 5, (3, 0): 6, (3, 1): 7}
number of classes: 8
slide-level counts:
7 117
5 51
4 35
6 35
2 35
3 13
0 35
1 28
Name: label, dtype: int64
Patient-LVL; Number of samples registered in class 0: 35
Slide-LVL; Number of samples registered in class 0: 35
Patient-LVL; Number of samples registered in class 1: 28
Slide-LVL; Number of samples registered in class 1: 28
Patient-LVL; Number of samples registered in class 2: 35
Slide-LVL; Number of samples registered in class 2: 35
Patient-LVL; Number of samples registered in class 3: 13
Slide-LVL; Number of samples registered in class 3: 13
Patient-LVL; Number of samples registered in class 4: 35
Slide-LVL; Number of samples registered in class 4: 35
Patient-LVL; Number of samples registered in class 5: 51
Slide-LVL; Number of samples registered in class 5: 51
Patient-LVL; Number of samples registered in class 6: 35
Slide-LVL; Number of samples registered in class 6: 35
Patient-LVL; Number of samples registered in class 7: 117
Slide-LVL; Number of samples registered in class 7: 117
split_dir ./splits/5foldcv/tcga_stad
################# Settings ###################
num_splits: 5
k_start: -1
k_end: -1
task: tcga_stad_survival
max_epochs: 20
results_dir: ./results_new
lr: 0.0002
experiment: tcga_stad_MCAT_nll_surv_a0.0_pathomicreg1e-05_5foldcv_gc32_bilinear
reg: 1e-05
label_frac: 1.0
bag_loss: nll_surv
seed: 1
model_type: mcat
model_size_wsi: small
model_size_omic: small
use_drop_out: True
weighted_sample: True
gc: 32
opt: adam
split_dir: ./splits/5foldcv/tcga_stad
Shape (279, 2533)
Shape (70, 2533)
****** Normalizing Data ******
training: 279, validation: 70
Genomic Dimensions [93, 341, 537, 436, 219, 437]

Training Fold 0!

Init train/val/test splits...
Done!
Training on 279 samples
Validating on 70 samples

Init loss function... Done!

Init Model... Done!
MCAT_Surv(
(wsi_net): Sequential(
(0): Linear(in_features=1024, out_features=256, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
(sig_networks): ModuleList(
(0): Sequential(
(0): Sequential(
(0): Linear(in_features=93, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
(1): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
)
(1): Sequential(
(0): Sequential(
(0): Linear(in_features=341, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
(1): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
)
(2): Sequential(
(0): Sequential(
(0): Linear(in_features=537, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
(1): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
)
(3): Sequential(
(0): Sequential(
(0): Linear(in_features=436, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
(1): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
)
(4): Sequential(
(0): Sequential(
(0): Linear(in_features=219, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
(1): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
)
(5): Sequential(
(0): Sequential(
(0): Linear(in_features=437, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
(1): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ELU(alpha=1.0)
(2): AlphaDropout(p=0.25, inplace=False)
)
)
)
(coattn): MultiheadAttention(
(out_proj): _LinearWithBias(in_features=256, out_features=256, bias=True)
)
(path_transformer): TransformerEncoder(
(layers): ModuleList(
(0): TransformerEncoderLayer(
(self_attn): MultiheadAttention(
(out_proj): _LinearWithBias(in_features=256, out_features=256, bias=True)
)
(linear1): Linear(in_features=256, out_features=512, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
(linear2): Linear(in_features=512, out_features=256, bias=True)
(norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(dropout1): Dropout(p=0.25, inplace=False)
(dropout2): Dropout(p=0.25, inplace=False)
)
(1): TransformerEncoderLayer(
(self_attn): MultiheadAttention(
(out_proj): _LinearWithBias(in_features=256, out_features=256, bias=True)
)
(linear1): Linear(in_features=256, out_features=512, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
(linear2): Linear(in_features=512, out_features=256, bias=True)
(norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(dropout1): Dropout(p=0.25, inplace=False)
(dropout2): Dropout(p=0.25, inplace=False)
)
)
)
(path_attention_head): Attn_Net_Gated(
(attention_a): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): Tanh()
(2): Dropout(p=0.25, inplace=False)
)
(attention_b): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): Sigmoid()
(2): Dropout(p=0.25, inplace=False)
)
(attention_c): Linear(in_features=256, out_features=1, bias=True)
)
(path_rho): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
(omic_transformer): TransformerEncoder(
(layers): ModuleList(
(0): TransformerEncoderLayer(
(self_attn): MultiheadAttention(
(out_proj): _LinearWithBias(in_features=256, out_features=256, bias=True)
)
(linear1): Linear(in_features=256, out_features=512, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
(linear2): Linear(in_features=512, out_features=256, bias=True)
(norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(dropout1): Dropout(p=0.25, inplace=False)
(dropout2): Dropout(p=0.25, inplace=False)
)
(1): TransformerEncoderLayer(
(self_attn): MultiheadAttention(
(out_proj): _LinearWithBias(in_features=256, out_features=256, bias=True)
)
(linear1): Linear(in_features=256, out_features=512, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
(linear2): Linear(in_features=512, out_features=256, bias=True)
(norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(dropout1): Dropout(p=0.25, inplace=False)
(dropout2): Dropout(p=0.25, inplace=False)
)
)
)
(omic_attention_head): Attn_Net_Gated(
(attention_a): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): Tanh()
(2): Dropout(p=0.25, inplace=False)
)
(attention_b): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): Sigmoid()
(2): Dropout(p=0.25, inplace=False)
)
(attention_c): Linear(in_features=256, out_features=1, bias=True)
)
(omic_rho): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
(mm): BilinearFusion(
(linear_h1): Sequential(
(0): Linear(in_features=256, out_features=32, bias=True)
(1): ReLU()
)
(linear_z1): Sequential(
(0): Linear(in_features=512, out_features=32, bias=True)
)
(linear_o1): Sequential(
(0): Linear(in_features=32, out_features=32, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
(linear_h2): Sequential(
(0): Linear(in_features=256, out_features=32, bias=True)
(1): ReLU()
)
(linear_z2): Sequential(
(0): Linear(in_features=512, out_features=32, bias=True)
)
(linear_o2): Sequential(
(0): Linear(in_features=32, out_features=32, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
(post_fusion_dropout): Dropout(p=0.25, inplace=False)
(encoder1): Sequential(
(0): Linear(in_features=1089, out_features=256, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
(encoder2): Sequential(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
(2): Dropout(p=0.25, inplace=False)
)
)
(classifier): Linear(in_features=256, out_features=4, bias=True)
)
Total number of parameters: 4350918
Total number of trainable parameters: 4350918

Init optimizer ... Done!

Init Loaders... Done!

Setup EarlyStopping...
Setup Validation C-Index Monitor... Done!

/home/aletolia/documents/GithubRepos/PORPOISE/utils/utils.py:77: FutureWarning: The input object of type 'Tensor' is an array-like implementing one of the corresponding protocols (__array__, __array_interface__ or __array_struct__); but not a sequence (or 0-D). In the future, this object will be coerced as if it was first converted using np.array(obj). To retain the old behaviour, you have to either modify the type 'Tensor', or assign to an empty array created with np.empty(correct_shape, dtype=object).
event_time = np.array([item[8] for item in batch])
/home/aletolia/documents/GithubRepos/PORPOISE/utils/utils.py:77: FutureWarning: The input object of type 'Tensor' is an array-like implementing one of the corresponding protocols (__array__, __array_interface__ or __array_struct__); but not a sequence (or 0-D). In the future, this object will be coerced as if it was first converted using np.array(obj). To retain the old behaviour, you have to either modify the type 'Tensor', or assign to an empty array created with np.empty(correct_shape, dtype=object).
event_time = np.array([item[8] for item in batch])
/home/aletolia/documents/GithubRepos/PORPOISE/utils/utils.py:77: FutureWarning: The input object of type 'Tensor' is an array-like implementing one of the corresponding protocols (__array__, __array_interface__ or __array_struct__); but not a sequence (or 0-D). In the future, this object will be coerced as if it was first converted using np.array(obj). To retain the old behaviour, you have to either modify the type 'Tensor', or assign to an empty array created with np.empty(correct_shape, dtype=object).
event_time = np.array([item[8] for item in batch])
/home/aletolia/documents/GithubRepos/PORPOISE/utils/utils.py:77: FutureWarning: The input object of type 'Tensor' is an array-like implementing one of the corresponding protocols (__array__, __array_interface__ or __array_struct__); but not a sequence (or 0-D). In the future, this object will be coerced as if it was first converted using np.array(obj). To retain the old behaviour, you have to either modify the type 'Tensor', or assign to an empty array created with np.empty(correct_shape, dtype=object).
event_time = np.array([item[8] for item in batch])
Traceback (most recent call last):
File "main.py", line 259, in
results = main(args)
File "main.py", line 76, in main
val_latest, cindex_latest = train(datasets, i, args)
File "/home/aletolia/documents/GithubRepos/PORPOISE/utils/core_utils.py", line 204, in train
train_loop_survival_coattn(epoch, model, train_loader, optimizer, args.n_classes, writer, loss_fn, reg_fn, args.lambda_reg, args.gc)
File "/home/aletolia/documents/GithubRepos/PORPOISE/utils/coattn_train_utils.py", line 36, in train_loop_survival_coattn
loss = loss_fn(hazards=hazards, S=S, Y=label, c=c)
TypeError: __call__() got an unexpected keyword argument 'hazards'
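
The traceback shows that whatever object ended up bound to loss_fn does not accept the keyword arguments used at the failing call site. Below is a minimal sketch of the interface that line expects; the class name is hypothetical, and the body simply delegates to the nll_surv_loss sketch given under the loss-computation issue above, so this illustrates the expected signature rather than the repository's loss class.

class NLLSurvLossLike:
    """Hypothetical wrapper showing the __call__ signature the coattn training loop assumes."""
    def __init__(self, alpha=0.0):
        self.alpha = alpha

    def __call__(self, hazards, S, Y, c):
        # delegate to any discrete-time survival NLL, e.g. the nll_surv_loss sketch above
        return nll_surv_loss(hazards, S, Y, c, alpha=self.alpha)

A TypeError like the one above typically means the constructed loss does not expose this keyword signature, so it may be worth checking which loss class the chosen bag_loss/alpha settings actually instantiate in this environment.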

pip freeze
absl-py==1.4.0
ase==3.22.1
astor==0.8.1
autograd==1.6.2
autograd-gamma==0.5.0
backcall @ file:///home/conda/feedstock_root/build_artifacts/backcall_1592338393461/work
backports.functools-lru-cache @ file:///home/conda/feedstock_root/build_artifacts/backports.functools_lru_cache_1687772187254/work
Bottleneck @ file:///opt/conda/conda-bld/bottleneck_1657175564434/work
brotlipy==0.7.0
cached-property==1.5.2
captum==0.2.0
certifi==2023.7.22
cffi @ file:///croot/cffi_1670423208954/work
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1632508026186/work
cryptography @ file:///croot/cryptography_1677533068310/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
dataclasses==0.6
debugpy==1.6.7.post1
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
ecos==2.0.12
entrypoints @ file:///home/conda/feedstock_root/build_artifacts/entrypoints_1643888246732/work
formulaic==0.6.4
future==0.18.3
gast==0.5.4
googledrivedownloader==0.4
graphlib-backport==1.0.3
grpcio==1.57.0
h5py==2.10.0
idna @ file:///croot/idna_1666125576474/work
importlib-metadata==4.13.0
interface-meta==1.3.0
ipykernel==6.16.2
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1651240553635/work
ipython-genutils==0.2.0
isodate==0.6.1
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1690896916983/work
Jinja2==3.1.2
joblib==1.3.2
jupyter_client==7.4.9
jupyter_core==4.12.0
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.2
kiwisolver @ file:///opt/conda/conda-bld/kiwisolver_1638569886207/work
lifelines==0.27.7
llvmlite==0.39.1
Markdown==3.4.4
MarkupSafe==2.1.3
matplotlib==3.1.1
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1660814786464/work
mkl-fft==1.3.1
mkl-random @ file:///tmp/build/80754af9/mkl_random_1626179032232/work
mkl-service==2.4.0
mock==5.1.0
nest-asyncio==1.5.7
networkx==2.6.3
numba @ file:///croot/numba_1670258325998/work
numexpr @ file:///croot/numexpr_1668713893690/work
numpy==1.21.6
opencv-python==4.1.1.26
openslide-python @ file:///home/conda/feedstock_root/build_artifacts/openslide-python_1623554159772/work
osqp==0.6.3
packaging @ file:///croot/packaging_1671697413597/work
pandas==1.3.5
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1638334955874/work
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1667297516076/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
Pillow==9.4.0
prompt-toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1688565951714/work
protobuf==3.19.6
psutil==5.9.5
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1691408637400/work
pyOpenSSL @ file:///croot/pyopenssl_1677607685877/work
pyparsing @ file:///opt/conda/conda-bld/pyparsing_1661452539315/work
PySocks @ file:///tmp/build/80754af9/pysocks_1594394576006/work
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
python-louvain==0.16
pytz @ file:///croot/pytz_1671697431263/work
pyzmq==25.1.1
qdldl==0.1.7.post0
rdflib==6.3.2
requests @ file:///opt/conda/conda-bld/requests_1657734628632/work
scikit-learn @ file:///tmp/build/80754af9/scikit-learn_1642601761909/work
scikit-survival==0.17.2
scipy @ file:///opt/conda/conda-bld/scipy_1661390393401/work
shap @ file:///croot/shap_1668715257344/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
slicer @ file:///tmp/build/80754af9/slicer_1633422823758/work
tensorboard==1.13.1
tensorboardX==1.9
tensorflow==1.13.1
tensorflow-estimator==1.13.0
termcolor==2.3.0
threadpoolctl @ file:///Users/ktietz/demo/mc3/conda-bld/threadpoolctl_1629802263681/work
torch==1.7.0
torch-geometric==1.6.3
torch-scatter @ file:///home/aletolia/documents/torch_scatter-2.0.5-cp37-cp37m-linux_x86_64.whl
torch-sparse @ file:///home/aletolia/documents/torch_sparse-0.6.8-cp37-cp37m-linux_x86_64.whl
torchaudio==0.7.0a0+ac17b64
torchvision==0.8.0
tornado @ file:///opt/conda/conda-bld/tornado_1662061693373/work
tqdm==4.66.1
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1675110562325/work
typing_extensions @ file:///tmp/abs_ben9emwtky/croots/recipe/typing_extensions_1659638822008/work
urllib3 @ file:///croot/urllib3_1673575502006/work
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1673864653149/work
Werkzeug==2.2.3
wrapt==1.15.0
zipp==3.15.0

AttributeError: 'Namespace' object has no attribute 'omic_sizes'

Thanks for the great tool. I am having problems running different versions of the model. If I run:

python main.py --which_splits 5foldcv --split_dir SCLC --mode pathomic --reg_type pathomic --model_type porpoise_mmf --data_root_dir /Path/to/Extract_features/ --fusion bilinear --max_epochs 50

Everything seems to work nicely. However, if I remove --model_type I get an error about omic_sizes:

****** Normalizing Data ******
training: 164, validation: 33
Genomic Dimension 10

Training Fold 0!

Init train/val/test splits... 
Done!
Training on 164 samples
Validating on 33 samples

Init loss function... Done!

Init Model... Traceback (most recent call last):
  File "main.py", line 258, in <module>
    results = main(args)
  File "main.py", line 76, in main
    val_latest, cindex_latest = train(datasets, i, args)
  File "/home/joan/PORPOISE/utils/core_utils.py", line 169, in train
    model_dict = {'fusion': args.fusion, 'omic_sizes': args.omic_sizes, 'n_classes': args.n_classes}
AttributeError: 'Namespace' object has no attribute 'omic_sizes'

I do not really understand why this throws an error. I see that mcat (the default) uses this variable, and I also get the error if I try --mode coattn, which uses it as well. Is there anything related to the input data that I am missing? Thanks
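
One visible difference between this log and the working MCAT log earlier on this page is that the latter prints per-signature "Genomic Dimensions" while this run prints a single "Genomic Dimension 10", which suggests args.omic_sizes was never populated before the model dictionary is built. Below is a defensive sketch of a guard in front of the failing line; it is an illustration only, not the repository's code, and the idea that omic_sizes comes from signature groups in this setup (e.g. via --apply_sig) is an assumption.

# Sketch: guard against a missing args.omic_sizes before core_utils.train builds the model dict.
omic_sizes = getattr(args, 'omic_sizes', None)
if omic_sizes is None:
    raise ValueError(
        "args.omic_sizes is unset: the mcat/coattn path expects a list of per-signature "
        "genomic dimensions (populated from the dataset when signatures are applied)."
    )
model_dict = {'fusion': args.fusion, 'omic_sizes': omic_sizes, 'n_classes': args.n_classes}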
