
binli123 / dsmil-wsi

Stars: 336 | Forks: 85 | Size: 49.24 MB

DSMIL: Dual-stream multiple instance learning networks for tumor detection in Whole Slide Images

License: MIT License

Python 100.00%
deep-learning multiple-instance-learning whole-slide-imaging whole-slide-image tumor-detection self-attention weakly-supervised-learning histopathology deep-neural-networks semi-supervised-learning

dsmil-wsi's People

Contributors

binli123, georgebatch


dsmil-wsi's Issues

Questions about learning rate, betas, and weight decay in Adam optimizer

I am wondering why the learning rate is set to 0.0002, while your paper states: "We use Adam [25] optimizer with a constant learning rate of 0.0001 to update the model weights during the training."

Besides, the settings of betas and weight decay in Adam are uncommon.

# arg
parser.add_argument('--weight_decay', default=5e-3, type=float, help='Weight decay [5e-3]')
# adam
optimizer = torch.optim.Adam(milnet.parameters(), lr=args.lr, betas=(0.5, 0.9), weight_decay=args.weight_decay)

Is there any reason to set the hyperparameters like these (especially the strange settings of betas)?

What is the 'init.pth'

Thanks for sharing this amazing work. I wonder what the 'init.pth' is for the DSMIL model. Is it trained on some specific dataset?

Do I need to load it if I am doing other tasks (not the medical ones)?

Best wishes,
Yifan

Question about some model architecture

Hi, I have some questions about MIL Aggregator.

In your paper, you use a dual-stream aggregator and finally combine Cm and Cb, but I cannot find the Cm operation in dsmil.py.
Which part of the program implements the Cm operation?
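
For context, a minimal sketch of how the two stream scores are typically combined at inference time (the tensor shapes and the 0.5/0.5 weighting here are illustrative, not necessarily the exact code in dsmil.py):

import torch

# Illustrative shapes: a bag of 100 instances, 2 classes
ins_prediction = torch.randn(100, 2)   # per-instance logits (instance stream)
bag_prediction = torch.randn(1, 2)     # bag-level logits (attention/bag stream)

# Cm: max-pooled instance score across the bag
max_prediction, _ = torch.max(ins_prediction, dim=0)

# Final score averages the two streams; the 0.5/0.5 weighting is illustrative
final_score = 0.5 * torch.sigmoid(max_prediction) + 0.5 * torch.sigmoid(bag_prediction)
print(final_score)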

Why is there a scaling factor of 0.25?

Hi,

Thanks for sharing your work. I have read the paper and understand you are grabbing tiles at a lower magnification (a single tile) and concatenating/adding them with the corresponding (16) tiles at 20x.
What I don't understand is the reason behind choosing 0.25. Can you please elaborate?

feats = feats.cpu().numpy()+0.25*feats_list[idx]
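
For reference, a minimal sketch of this fusion pattern, assuming feats holds the high-magnification features and feats_list[idx] the shared low-magnification feature (shapes are illustrative):

import numpy as np

feat_dim = 512
high_mag_feats = np.random.randn(16, feat_dim)  # 16 tiles at 20x under one 5x tile
low_mag_feat = np.random.randn(feat_dim)        # feature of the shared 5x tile

# Each high-magnification feature gets a down-weighted copy of its
# low-magnification context added; 0.25 keeps the high-mag signal dominant.
fused = high_mag_feats + 0.25 * low_mag_feat
print(fused.shape)  # (16, 512)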

Thank you.

Potential better results on Camelyon16

Hi, dear Li, thanks for your fabulous open-sourced project!

I'm trying to reproduce your results on Camelyon16 and found better performance can be achieved.

The performance of DSMIL-LC is reported to have an accuracy of 0.8992 and an AUC of 0.9165 in Table 1 of the DSMIL paper. The log posted in issue #14 shows a higher performance than the reported results after 200-epoch training, i.e., average score: 0.9000, AUC: class-0>>0.927734375, and even better performance at the early training stage, e.g., average score: 0.9250, AUC: class-0>>0.9446614583333333. Which model is used to report the final results, the best one or the 200-epoch one?

Also, I modified some parts of the code to speed up the training (e.g., I removed the drop_patch() part since the drop_rate is 0, though I noticed drop_patch is still functional when the drop_rate is 0; it effectively performs another shuffling, right?).

After modification, I trained the DSMIL model on the precomputed Camelyon16 features and got a performance of average score: 0.9375, AUC: class-0>>0.972 after 200-epoch training. In your experience, how much variation do the results have? These stronger results would definitely improve your state-of-the-art results.

BTW, just for clarification, which split was the model evaluated on? The released code uses the former 80% of the csv file as the training set and the latter 20% of the csv file as the test set, leading to a 319/80 WSIs for training/testing. The Camelyon16 is reported to have a split of 270/129 for training/testing in the DSMIL paper.

Another BTW: would you like to share the DSMIL model weights other than the embedder?

Thanks in advance for your reply. Your work is great!

Reproduce features for Camelyon16

Hi,

I am trying to reproduce your results for Camelyon16. Can you please confirm the settings for feature creation?

I am using deepzoom_tiler.py with the following settings:

parser.add_argument('-d', '--dataset', type=str, default='Camelyon16', help='Dataset name')
parser.add_argument('-e', '--overlap', type=int, default=0, help='Overlap of adjacent tiles [0]')
parser.add_argument('-f', '--format', type=str, default='jpeg', help='image format for tiles [jpeg]')
parser.add_argument('-v', '--slide_format', type=str, default='tif', help='slide image format [tif]')
parser.add_argument('-j', '--workers', type=int, default=4, help='number of worker processes to start [4]')
parser.add_argument('-q', '--quality', type=int, default=90, help='JPEG compression quality [90]')
parser.add_argument('-s', '--tile_size', type=int, default=224, help='tile size [224]')
parser.add_argument('-m', '--magnifications', type=int, nargs='+', default=[1,3], help='Levels for patch extraction [1,3]')
parser.add_argument('-t', '--background_t', type=int, default=25, help='Threshold for filtering background [25]') 

Then I run compute_feats.py with model weights downloaded from
https://drive.google.com/drive/folders/1sFPYTLPpRFbLVHCNgn2eaLStOk3xZtvT for the low-magnification patches.
https://drive.google.com/drive/folders/1_mumfTU3GJRtjfcJK_M0fWm048sYYFqi for the high-magnification patches.

The settings for compute_feats.py are as follows:

parser = argparse.ArgumentParser(description='Compute TCGA features from SimCLR embedder')
parser.add_argument('--num_classes', default=2, type=int, help='Number of output classes [2]')
parser.add_argument('--batch_size', default=128, type=int, help='Batch size of dataloader [128]')
parser.add_argument('--num_workers', default=4, type=int, help='Number of threads for dataloader')
parser.add_argument('--gpu_index', type=int, nargs='+', default=(0,), help='GPU ID(s) [0]')
parser.add_argument('--backbone', default='resnet18', type=str, help='Embedder backbone [resnet18]')
parser.add_argument('--norm_layer', default='instance', type=str, help='Normalization layer [instance]')
parser.add_argument('--magnification', default='tree', type=str, help='Magnification to compute features. Use `tree` for multiple magnifications.')
parser.add_argument('--weights', default=None, type=str, help='Folder of the pretrained weights, simclr/runs/*')
parser.add_argument('--weights_high', default='./', type=str, help='Folder of the pretrained weights of high magnification, FOLDER < `simclr/runs/[FOLDER]`')
parser.add_argument('--weights_low', default='./', type=str, help='Folder of the pretrained weights of low magnification, FOLDER <`simclr/runs/[FOLDER]`')
parser.add_argument('--dataset', default='Camelyon16', type=str, help='Dataset folder name Camelyon16')

Question about current_score

Hi, thank you for sharing this awesome project.

I am trying to run DSMIL on my own dataset, and I have two questions about train_tcga.py regarding current_score and class_prediction_bag.
1. I was wondering why current_score equals (sum(aucs) + avg_score + 1 - test_loss_bag)/4?
2. I was also confused about class_prediction_bag using the binary sigmoid prediction (0.0*torch.sigmoid(max_prediction) + 1.0*torch.sigmoid(bag_prediction)) for multiple classes (num_classes > 1) instead of a softmax prediction. Does the zero coefficient mean that max_prediction makes no contribution to the final patient prediction?
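
For readers following along, a minimal sketch of the two quoted expressions with illustrative values:

import torch

aucs = [0.93]            # per-class test AUC(s)
avg_score = 0.90         # test accuracy
test_loss_bag = 0.35     # bag-level test loss

# The quoted model-selection score: higher AUC/accuracy and lower loss all
# raise the score; dividing by 4 keeps it roughly within [0, 1].
current_score = (sum(aucs) + avg_score + 1 - test_loss_bag) / 4

# The quoted prediction: with a 0.0 coefficient, max_prediction contributes
# nothing, so the bag classifier alone decides the class prediction.
max_prediction = torch.randn(2)
bag_prediction = torch.randn(2)
class_prediction_bag = 0.0 * torch.sigmoid(max_prediction) + 1.0 * torch.sigmoid(bag_prediction)
print(current_score, class_prediction_bag)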

Many thanks!

Problem reproducing Camelyon16 results

Hello, thank you for your excellent work!
Earlier I tried to reproduce the results on Camelyon16. I used a total of 271 training slides and a batch size of 512 to train SimCLR for 3 days, then trained the aggregator, but the results are not as good as with the SimCLR weights you provided from 3 days of training (model-v0 on Google Drive).
Like #46, the accuracy of the aggregator gets stuck at about 60% and cannot improve; I also found that in this case every patch produces the same attention score.
Could you provide the relevant training parameters, such as the -o or -t parameters in deepzoom_tiler.py, and the learning rate, batch size, epochs, etc. of SimCLR?

TCGA data download

When I visit the website, it says: "All slide and diagnostic images from the TCGA program are currently unavailable for download". Could you share the lung datasets via a Google Cloud link? : )

Configuration of SimCLR for Camelyon16

Dear Bin Li,
Very interesting work!
Could you please also upload the configuration of SimCLR for training on Camelyon16, i.e., epochs, optimizer, lr, weight decay, batch size, etc.?
Thank you very much.
BW

About SimCLR batch size >= 512?

Hello, can you tell me whether the batch size must be at least 512 when extracting image features with SimCLR?

The prediction accuracy began to hover around 60%, and even slowly declined

Hello, I tested on my own binary dataset. After SimCLR (batch_size=256) feature extraction, my training accuracy keeps hovering around 60%, and eventually everything is predicted as the same category. Is it an error in my feats or something else? The bag_prediction values are also very close to each other, all around 0.1. Please point it out, thank you.

How to run DSMIL pipeline at different magnification

Hi,
I have a dataset with different magnifications at level 0: some slides are at 40x and the rest at 20x. Since they are at different resolutions, I'm trying to extract the patches and features at 20x for all the WSIs, for uniformity of patch magnification.

Could you please provide more details as to how I could achieve it?

Thank you very much!

How to set num_classes?

Hi, thank you very much for sharing this code - I wanted to ask for some guidance on how to set the num_classes parameter.

I have a case where each bag has a label class X or class Y. When I set num_classes=2 some bags are predicted as [1, 1] or [0, 0] - these labels do not exist in my dataset (a bag cannot be both class X and Y or neither). So I think num_classes should be set to 1 such that predictions are either 0 or 1 - is that correct?

I am confused because the TCGA dataset is the same task - a bag is either LUAD or LUSC (it cannot be both or neither), yet in the code num_classes is set to 2. Why is this?
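
For anyone hitting the same confusion, a minimal sketch of the two conventions (hypothetical tensors, not the repo's exact code): with num_classes=1 a single sigmoid logit encodes X vs. Y, while with num_classes=2 each class gets an independent sigmoid, which is what makes [0, 0] and [1, 1] possible outputs.

import torch

# num_classes=1: a single sigmoid logit encodes class X (0) vs class Y (1)
bag_logit_1 = torch.tensor([0.8])
pred_1 = (torch.sigmoid(bag_logit_1) > 0.5).int()          # tensor([1])

# num_classes=2: one independent sigmoid per class, so [0, 0] and [1, 1]
# are possible outputs even though they never occur as labels
bag_logits_2 = torch.tensor([0.8, 0.3])
pred_2 = (torch.sigmoid(bag_logits_2) > 0.5).int()         # tensor([1, 1]) here

# Mutually exclusive classes can instead be resolved with an argmax
pred_exclusive = torch.argmax(bag_logits_2)                # tensor(0)
print(pred_1, pred_2, pred_exclusive)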

Thank you!

Which normalization transform should I use with the pre-trained models?

Hi,

I want to use your pre-trained ResNet models, but I could not find which transforms you used. To be more precise, when using ImageNet pre-trained models on new inputs, people normalize the input images before passing them to the pre-trained extractor using ImageNet normalization constants (mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225]).
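
For concreteness, this is the standard ImageNet normalization pipeline referred to above (torchvision); whether the DSMIL embedders expect it, or plain ToTensor(), is exactly the open question:

from torchvision import transforms

# Standard ImageNet preprocessing for pre-trained CNNs. Whether the DSMIL
# SimCLR embedders were trained with this normalization is what this
# issue is asking.
imagenet_transform = transforms.Compose([
    transforms.ToTensor(),  # HWC uint8 in [0, 255] -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])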

What normalization should I perform to use your pre-trained models?

Many thanks,
George Batchkala

How to deal with multi-label problem?

Some cancers may present differently in different regions of one slide because of tumor heterogeneity. Does this code handle the multi-label problem? If not, how can the multi-label problem be handled with MIL?

Camelyon16: pretrained embedders

Hi @binli123,
Given the data obtained as in #39, I extracted the features using both model-v0 and model-v2. The differences between their performance on the downstream task are evident. Here is the AUC:
[AUC plot omitted]

Looking at #12, you say they differ in batch size and training time. Could you be more specific?

Discarding background patches

Hello.

Your work is very nice, so I'm trying to reproduce your results on Camelyon16.

From the WSIs (tif) of Camelyon16, I am cropping patches, but the number of patches per bag (about 10,000) is somewhat larger than the number reported in your paper (about 8,000).

In the paper, I read that you discarded background patches (entropy < 5).

Is the patch-discarding step included in your code? If so, where can I find it?

Otherwise, what method did you use to discard patches based on the entropy?
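
A minimal sketch of entropy-based background filtering under one common interpretation (Shannon entropy of the grayscale histogram; the threshold of 5 comes from the paper, everything else here is an assumption):

import numpy as np
from PIL import Image

def patch_entropy(path):
    """Shannon entropy (in bits) of a patch's grayscale intensity histogram."""
    gray = np.asarray(Image.open(path).convert('L'))
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Keep only informative tissue patches; near-blank background has low entropy.
# patch_paths = [...]  # list of cropped patch image files
# kept = [p for p in patch_paths if patch_entropy(p) >= 5]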

Thank you for your great work.

Embedding for multiscale patches

Hello,
I really like this repo!

Do I understand correctly that the embedder currently only supports single scale patches?

Thanks!
Anne

What is aggregator.pth?

Thanks for sharing this amazing work. I wonder what the 'aggregator.pth' is for the DSMIL model. Is it trained on some specific dataset? How can I obtain it for my own dataset? Looking forward to your reply.

Best Wishes!

Patch-based method

Hi, thank you for this fantastic work!

I tested a similar patch-based method proposed here and also an attention-based MIL on the TCGA LIHC dataset. It seems that the attention-based MIL outperformed the patch-based method in AUROC by 10%. Thus I am a bit surprised by the TCGA results (patch-based) shown in Table 3 of your paper. I guess it is because of my sub-optimal settings. I would appreciate it if you could share the detailed settings used for the patch-based method (epochs, batch size, optimizer, loss, etc.).

Thank you very much!

Train/Test Split of Camelyon16

Hi,

In the code I noticed that you split the Camelyon16 dataset with the formula

train_path = bags_path.iloc[0:int(len(bags_path)*(1-args.split)), :]
test_path = bags_path.iloc[int(len(bags_path)*(1-args.split)):, :]

For the default argument of 0.2, this results in 320 train and 80 test WSIs. For the AUC and Accuracy results you have given in the paper, are they calculated with this split, or the standard 270 training/130 test WSI split of Camelyon16? If you used 320 training slides, is the SimCLR part also trained with 320 slides or 270 slides?

shuffle in computing features

In the function 'bag_dataset', you set the dataloader's shuffle parameter to True. I think this makes the order of elements in the variable feats_list differ from low_patches in the function compute_tree_feats, so when you add/concatenate the high-resolution features to the low-resolution ones, they will be mismatched.
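
A minimal sketch of the suggested fix; the dataset here is a dummy stand-in, and the point is only that shuffle=False preserves the ordering that the later fusion step relies on:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in for the ordered patch dataset used during feature extraction.
dataset = TensorDataset(torch.arange(10).float().unsqueeze(1))

# shuffle=False preserves the patch order, so feats_list[idx] stays aligned
# with low_patches[idx] when the multi-scale features are fused later.
loader = DataLoader(dataset, batch_size=4, shuffle=False)
for (batch,) in loader:
    print(batch.squeeze(1))  # 0..9, in order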

Questions about Camelyon16

Hello,
In the paper you say you obtained "3.2 million patches at 20× magnification". But after checking the Camelyon16 data, I found that some slides are at 20x magnification and others are at 40x (as in fig 1).

I tried to split the data from RUMC at level 0 and UMCU at level 1, but got about 10 million patches.

So my questions are twofold:

  1. How do you deal with the different magnification problems?
  2. What is your background processing method?

About the results being all 0 or 1

Hello, author. When I used your network to train on my own WSIs, SimCLR was trained for 200 epochs. My data is divided into positive and negative, so I set num_classes=1. But during training the first epoch's result is the best, and after training the bag_prediction values are all 0 or 1. Could you give me some advice? Thank you.

Questions about why not a larger ResNet model or MoCo?

Have you ever tried larger models such as ResNet-50/101/152 or DenseNet, or wider models like Wide-ResNet? And have you tried MoCo v2 besides SimCLR? Why did you ultimately choose ResNet-18 and SimCLR? Was this because of performance or other considerations?
Thanks very much.

Pre-trained Weights

Hi Bin Li,

Thank you very much for such a thorough explanation of how everything works in the README! Also, thank you for making so many things public (representations, weights, and even some of the data).

Could you please explain which of the weights in the TCGA folder corresponds to which magnification? As I understand, one should be for the 5x and the other for the 20x magnification embedder network, but I do not understand which one is which.

Many thanks,
George

About Camelyon16 Localization

First of all, thank you for your work.
I read your paper and saw that it includes a localization (FROC) experiment on the Camelyon16 dataset, but I could not find the localization code in this repository.

Generating embeddings for Camelyon16

First of all, thank you for the great work.

I am trying to train with your DSMIL model, but I have a different backbone that generates the embeddings, and I would like to train just the MIL part. My question is: while you were generating the csv files, did you apply any transformations to the patch images (other than ToTensor()), or did you just feed the images to the already-trained ResNet18 backbone? I will also generate the csv files as in your work, but I would like to hear your recommendation before doing so.
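
For reference, a minimal sketch of the ToTensor()-only variant being asked about, with a frozen ResNet18 as the embedder (untrained weights here; the pretrained SimCLR weights would be loaded in practice):

import torch
import torchvision
from torchvision import transforms

# Frozen ResNet18 embedder (classification head removed).
resnet = torchvision.models.resnet18(weights=None)
embedder = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

to_tensor = transforms.ToTensor()  # the ToTensor()-only preprocessing asked about

with torch.no_grad():
    patch = torch.rand(3, 224, 224)                 # stand-in for to_tensor(pil_patch)
    feat = embedder(patch.unsqueeze(0)).flatten(1)  # shape [1, 512]
print(feat.shape)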

final prediction class

Hi,

When having multi-class MIL, if one case is detected as more than one class (e.g. case#1 is detected as class 0 and class 1 and class 2), how do we decide which class is the final prediction? Is it the argmax of bag_prediction?
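
For illustration, the argmax option mentioned in the question looks like this (the scores are made up):

import numpy as np

# Sigmoid bag scores for three classes; more than one exceeds 0.5.
bag_prediction = np.array([0.81, 0.63, 0.58])

final_class = int(np.argmax(bag_prediction))  # pick the single highest-scoring class
print(final_class)  # 0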

Thanks!

Training speed and ABMIL

Hi, thanks for your fabulous project.

Could you share, from your experience, the training speed of each part of the pipeline? E.g., how long does pre-training on C16/TCGA take, and how long does the MIL part take?

For example, I'm currently running train_tcga.py using the provided C16 features, but the training speed seems low and the GPU utilization also stays low. Is there any way to speed it up?


About ABMIL

I'm wondering whether you plan to release the ABMIL code.

SimCLR - CAMELYON16

Could you please post the weights of SimCLR trained on the CAMELYON16 dataset?

About SimCLR model

Great work! I want to know how long it took you to train the SimCLR model. Can you share the trained ResNet18 model.pth file?

Request for patch-level labels of CAMELYON 16

Thanks for your brilliant work and sharing the precomputed features of CAMELYON 16!
But only slide-level labels are available in the precomputed features. Could you please further release the patch-level labels of the precomputed features?

Question about testing

Hi,
Thank you for making this implementation of your very nice paper public.
Would you please share Python code for testing on, or running detection over, new WSIs? I only see training scripts.
Thanks

more than RGB channels

Hi, and thanks again for sharing your code.

The first step of the pipeline is training a self-supervised contrastive learning CNN to later use for feature extraction. This part of the code is for RGB images. Do you have any suggestions for when I have more than 3 channels? So far, your code is showing promising results on my dataset, and I am trying to incorporate some segmentation masks into the pipeline (adding more channels to the input images). Any suggestion is appreciated!
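
One common approach, sketched here as a suggestion rather than anything in this repo: widen the backbone's first convolution to accept N channels and initialize the extra filters from the pretrained RGB weights, e.g. by averaging them.

import torch
import torchvision

def widen_first_conv(model, in_channels):
    """Replace a ResNet's 3-channel stem conv with an N-channel one,
    reusing the RGB weights and filling extra channels with their mean."""
    old = model.conv1
    new = torch.nn.Conv2d(in_channels, old.out_channels,
                          kernel_size=old.kernel_size, stride=old.stride,
                          padding=old.padding, bias=False)
    with torch.no_grad():
        new.weight[:, :3] = old.weight
        if in_channels > 3:
            new.weight[:, 3:] = old.weight.mean(dim=1, keepdim=True)
    model.conv1 = new
    return model

model = widen_first_conv(torchvision.models.resnet18(weights="IMAGENET1K_V1"), in_channels=5)
out = model(torch.rand(1, 5, 224, 224))  # now accepts 5-channel input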

Thanks,
Shima

Requests for multiscale features

Hello, I very much appreciate your source code, and I have read your paper carefully and found it very inspiring. I would like to request that you open-source (or send me) the multi-scale features of the TCGA and Camelyon16 data used in your paper. Only a single-scale version of them has been open-sourced in the repository. Thank you very much; I look forward to your reply!

About the csv file

simclr/all_patches.csv
How to generate the csv file?
And what kind of data is stored in it?
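
A minimal sketch of how such a patch-listing CSV is typically produced (the directory name and one-path-per-row layout are assumptions; check the repo's SimCLR dataset loader for the exact format it expects):

import csv
from pathlib import Path

# Gather all cropped patch images under the tiling output directory.
patch_paths = sorted(Path("WSI/Camelyon16/single").rglob("*.jpeg"))

with open("all_patches.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for p in patch_paths:
        writer.writerow([str(p)])  # assumed format: one patch path per row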

SimCLR training vs. test set configuration

Hi @binli123 ,

I'm trying to replicate your results on Camelyon16, without success. I set the number of classes to 1 and also tried the published weights for computing the feats on both the training and test sets. Even with that I still obtain only about 0.7 AUC... So I started thinking about how I organized the data differently from you. I downloaded the data from here: https://ftp.cngb.org/pub/gigadb/pub/10.5524/100001_101000/100439/CAMELYON16/
The data is divided into training and test. I used 25 as the threshold for filtering out the background, and used only the training set for training the self-supervised model.
After that, even with the model you published on Drive, I extracted feats with the compute_feats script for both training and test (especially with the fusion option). Finally, I modified train_tcga.py to use them as the sources for the training set and the test set (270/130 bags).

If instead I use the features precomputed by you, the MIL model works. So the problem could be how I split the data or how I extract the embeddings. What am I missing?

About ABMIL

I have read ABMIL's code (https://github.com/AMLab-Amsterdam/AttentionDeepMIL); the input size of its model is (1, bag_length, 1, width, height). Because the images in its datasets are very small, memory problems may not occur there. But when using WSIs, bag_length, width, and height are all very large, so how did you apply ABMIL to WSIs for the comparison?
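
One common workaround, sketched under the assumption that ABMIL is run on precomputed patch embeddings rather than raw pixels (which is how MIL baselines are often compared on WSIs): the gated-attention pooling then only sees an [N, D] feature matrix, so memory no longer scales with patch width/height.

import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Gated-attention MIL pooling over precomputed patch features [N, D]."""
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.V = nn.Linear(dim, hidden)
        self.U = nn.Linear(dim, hidden)
        self.w = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(dim, 1)

    def forward(self, feats):                       # feats: [N, D]
        a = self.w(torch.tanh(self.V(feats)) * torch.sigmoid(self.U(feats)))
        a = torch.softmax(a, dim=0)                 # attention over instances
        bag = (a * feats).sum(dim=0)                # [D] bag embedding
        return self.classifier(bag), a

feats = torch.randn(8000, 512)  # e.g., ResNet18 features for one WSI bag
logit, attn = AttentionPooling()(feats)
print(logit.shape, attn.shape)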

Pretrained embedders

I have a question about your SimCLR pre-training: does it include all of the Camelyon16 data (training set and test set)? I suspect the feature extractor leaks information from the test set. When I pre-trained only on the training set, I could not reach such high results. I think you should check this problem carefully, as it may inflate your results.

test_crop_single default magnification

Hello!

  1. C16: During training (train_tcga.py) I see only the dsmil model (help='MIL model [dsmil]'). Is DSMIL-LC available? Testing also uses features at only a single magnification. Is the default magnification in testing 20x, or 10x?

  2. C16: When I use test_crop_single.py, it splits patches very slowly, much slower than deepzoom_tiler.py. How can I solve this? And is the default magnification 20x in test_crop_single.py?
