Official code for "Self-Supervised driven Consistency Training for Annotation Efficient Histopathology Image Analysis", published in the Medical Image Analysis (MedIA) journal, October 2021.
Hi, thank you for sharing your code and pre-trained models!
I have a question regarding the loss function used for fine-tuning the pre-trained model on the BreastPathQ dataset.
In line 387 it looks like you use the mean squared error loss, although you build your model as a classifier. Is this intentional? And if so, could you explain why?
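For context, BreastPathQ labels are continuous tumour-cellularity scores in [0, 1], so MSE regression on a single output unit is a natural fit even when the model class is named a "classifier". A minimal sketch of such a fine-tuning step (the encoder below is a stand-in, not the repository's actual model):

```python
import torch
import torch.nn as nn

# Stand-in encoder; in practice this would be the released pretrained backbone.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
head = nn.Linear(128, 1)  # single output unit for the cellularity score
model = nn.Sequential(encoder, head)

loss_fn = nn.MSELoss()  # regression loss, matching continuous [0, 1] targets
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(4, 3, 32, 32)  # dummy patch batch
y = torch.rand(4, 1)           # dummy cellularity targets in [0, 1]

opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```

This is only a sketch under the assumption that the task is treated as regression; the repository's own fine-tuning script may organise the head differently.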
For BreastPathQ, when I fine-tuned the pretrained model the authors released, I got results similar to the paper's.
But when I pretrained the model myself, following the instructions (1. Self-supervised pretext task: Resolution sequence prediction (RSP) in WSIs), took the best pretrained checkpoint, and fine-tuned it the same way, the results were not as good as in the paper.
paper:
my result:
During my training, 4 WSIs were bad and could not be used. But I don't think that is the essential problem, because it only loses a few hundred samples.
Thanks for your effort in releasing this great code. I have a question about the BreastPathQ dataset: where can I get the targets for the test set, since the official site on Grand Challenge only has labels for the training and validation sets?
Hello, thanks for your amazing work! In the process of reproducing the results of the paper, I encountered some problems that I hope you can help with.
For slide-level prediction on Camelyon16, I didn't find code for going from the heatmap to a slide-level prediction. Following the paper, I referred to the code here: for each slide, I extracted 28 features from the heatmap and fed them into a random forest for training, but did not get a good result. Are there some tricks to training the RandomForestClassifier? If you could open-source the code for this part, I believe it would be of great help!
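For reference, a minimal sketch of the heatmap-to-slide pipeline described above, in the spirit of the standard Camelyon16 recipe: threshold the probability heatmap, extract per-slide geometric features, and fit a random forest. The feature set below is illustrative only, not the paper's exact 28 features:

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def heatmap_features(hm, thresh=0.5):
    """Extract a few geometric features from a tumour probability heatmap.

    Illustrative subset of the kind of features used in the paper's
    28-feature slide-level pipeline (names/choices here are assumptions).
    """
    mask = hm > thresh
    labels, n = ndimage.label(mask)  # connected tumour-candidate regions
    sizes = ndimage.sum(mask, labels, range(1, n + 1)) if n else np.array([0.0])
    return np.array([
        hm.max(),       # maximum tumour probability on the slide
        mask.mean(),    # fraction of pixels above the threshold
        sizes.max(),    # area of the largest connected lesion
        float(n),       # number of connected regions
    ])

# Dummy slide-level training data: one feature vector per slide.
rng = np.random.default_rng(0)
X = np.stack([heatmap_features(rng.random((64, 64))) for _ in range(20)])
y = rng.integers(0, 2, size=20)  # dummy normal/tumour slide labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
preds = clf.predict(X[:2])
```

In practice the threshold, the choice of features, and the heatmap post-processing (e.g. non-maxima suppression) tend to matter more than random-forest hyperparameters.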
Looking forward to your reply!
Hi,
Thank you for your great work!
I wonder how to use your model for linear probing.
I empirically find the results are not promising when I use your released models with the projection MLP removed.
If the MLP cannot be removed, then how should the model be used with only one magnification of pathological images?
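For reference, the standard linear-probing recipe freezes the pretrained encoder and trains only a linear layer on its features. A minimal sketch with a stand-in module in place of the released encoder (whether the projection MLP should be kept on top of it is exactly the open question here):

```python
import torch
import torch.nn as nn

# Stand-in frozen feature extractor; in practice, the released pretrained
# encoder (with or without its projection MLP) would go here.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))
for p in backbone.parameters():
    p.requires_grad = False  # freeze the encoder for linear probing
backbone.eval()

probe = nn.Linear(512, 9)  # single trainable linear layer (9 = e.g. NCT-CRC classes)
opt = torch.optim.SGD(probe.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 32, 32)      # dummy batch of patches
y = torch.randint(0, 9, (8,))
with torch.no_grad():
    feats = backbone(x)            # features from the frozen encoder
logits = probe(feats)
loss = loss_fn(logits, y)
loss.backward()                    # gradients flow only into the probe
opt.step()
```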
Looking forward to your help! Thanks again.
Hi, thanks for your amazing work! I have a question: when I try to load the pretrained weights Pretrained_models/Camelyon16_pretrained_model.pt, there is a problem.
I think this weight file is corrupted; could you please check its validity? Looking forward to your reply!
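One way to distinguish a truly corrupted download from a loading-code mismatch is to check whether the file deserializes at all. A sketch (note that newer PyTorch versions default to `weights_only=True` in `torch.load`, which can also make a valid checkpoint containing non-tensor objects fail to load):

```python
import torch

def checkpoint_loads(path):
    """Return True if the file at `path` deserializes with torch.load.

    A truncated or corrupted download typically raises here; if loading
    fails only under newer PyTorch, retrying with weights_only=False may
    tell the two cases apart.
    """
    try:
        torch.load(path, map_location="cpu")
        return True
    except Exception:
        return False

# e.g. checkpoint_loads("Pretrained_models/Camelyon16_pretrained_model.pt")
```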
Thank you for your excellent paper and open-source code. I have some questions about the experimental results on NCT-CRC.
The MoCo + CR approach obtains a new state-of-the-art result with an Acc of 0.990, a weighted F-1 score of 0.953, and a macro AUC of 0.997, compared to the previous method (Kather et al., 2019), which obtained an Acc of 0.943. However, Table 5 also shows that random initialization reaches 97.2% Acc with 10% of the training data, which is likewise much higher than the 0.943 of (Kather et al., 2019). Since random initialization already gets high Acc, did I miss something?
Table 5 presents the overall Acc and weighted F1 score (F1) for classification of 9 colorectal tissue classes using different methodologies. On this dataset, the MoCo + CR approach obtains a new state-of-the-art result with an Acc of 0.990, a weighted F-1 score of 0.953, and a macro AUC of 0.997, compared to the previous method (Kather et al., 2019), which obtained an Acc of 0.943.
When I train on the CRC dataset, the gap between my weighted F1 and Acc is not as large as yours (Acc: 0.990, weighted F-1: 0.953); for example, I get Acc 0.9400 and weighted F1 0.9399. Did I miss something?
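For reference, the three metrics quoted above can be computed with scikit-learn as follows (dummy predictions, just to show the calls; the `average=` argument is the usual source of discrepancies between reported F1 numbers):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Dummy ground truth and predicted probabilities for 9 tissue classes.
y_true = np.arange(200) % 9                 # ensures every class appears
rng = np.random.default_rng(0)
probs = rng.random((200, 9))
probs /= probs.sum(1, keepdims=True)        # rows must sum to 1 for AUC
y_pred = probs.argmax(1)

acc = accuracy_score(y_true, y_pred)                        # overall Acc
wf1 = f1_score(y_true, y_pred, average="weighted")          # weighted F1
auc = roc_auc_score(y_true, probs, multi_class="ovr", average="macro")
```

On a reasonably balanced test set, weighted F1 is expected to track overall accuracy closely, which is consistent with the 0.9400 / 0.9399 pair above.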