mr-talhailyas / tsfd

Nuclei segmentation and classification (Cancer cells)

Home Page: https://www.sciencedirect.com/science/article/pii/S0893608022000612?via%3Dihub

bifpn cancer-detection deep-learning medical-image-processing nuclei-segmentation

tsfd's Introduction


TSFD-Net: Nuclei Segmentation and Classification


Nuclei segmentation and classification using hematoxylin- and eosin-stained histology images is a challenging task due to a variety of issues, such as color inconsistency resulting from non-uniform manual staining operations, clustering of nuclei, and blurry, overlapping nuclei boundaries. Existing approaches segment nuclei by drawing their polygon representations or by measuring the distances between nuclei centroids. In contrast, we leverage the fact that the morphological features (appearance, shape and texture) of nuclei vary greatly depending upon the tissue type on which they are located. We exploit this information by extracting tissue-specific (TS) features from raw histopathology images using our tissue-specific feature distillation (TSFD) backbone. Our bi-directional feature pyramid network (BiFPN) then generates a robust hierarchical feature pyramid from these TS features. Next, our interlinked decoders jointly optimize and fuse these features to generate the final predictions. We also propose a novel loss combination for joint optimization and faster convergence of the proposed network. Extensive ablation studies are performed to validate the effectiveness of each component of TSFD-Net. TSFD-Net achieves state-of-the-art performance on the PanNuke dataset, which contains 19 different tissue types and up to 5 clinically important tumor classes.
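
As a rough mental model of this pipeline, below is a minimal, hypothetical Keras sketch: a tiny stand-in backbone, one simplified bi-directional fusion pass, and three heads for semantic segmentation, instance/boundary prediction, and image-level tissue classification. All layer choices, sizes and head names here are illustrative assumptions, not the authors' TSFD-Net implementation.

# Hypothetical sketch of the described pipeline (backbone -> BiFPN-style fusion
# -> parallel heads). Not the authors' TSFD-Net code; all sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters, stride=1):
    # Simple Conv-BN-ReLU unit standing in for a backbone stage.
    x = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def build_toy_tsfd(input_shape=(256, 256, 3), n_seg_classes=6, n_tissues=19):
    inputs = tf.keras.Input(shape=input_shape)

    # Backbone producing a small feature pyramid (strides 4, 8, 16).
    c2 = conv_block(conv_block(inputs, 32, 2), 32, 2)   # 1/4 resolution
    c3 = conv_block(c2, 64, 2)                          # 1/8 resolution
    c4 = conv_block(c3, 128, 2)                         # 1/16 resolution

    # Very simplified bi-directional fusion: project to a common width,
    # then mix top-down and bottom-up information once each.
    p2, p3, p4 = [layers.Conv2D(64, 1, padding="same")(c) for c in (c2, c3, c4)]
    p3 = layers.Add()([p3, layers.UpSampling2D()(p4)])   # top-down
    p2 = layers.Add()([p2, layers.UpSampling2D()(p3)])
    p3 = layers.Add()([p3, layers.MaxPooling2D()(p2)])   # bottom-up
    p4 = layers.Add()([p4, layers.MaxPooling2D()(p3)])

    # Three task heads; the output names mirror the metric names seen in the
    # issues below (seg_out, inst_out, clf_out), but the head designs are assumed.
    up = layers.UpSampling2D(4, interpolation="bilinear")(p2)
    seg_out = layers.Conv2D(n_seg_classes, 1, activation="softmax", name="seg_out")(up)
    inst_out = layers.Conv2D(1, 1, activation="sigmoid", name="inst_out")(up)
    clf_out = layers.Dense(n_tissues, activation="softmax", name="clf_out")(
        layers.GlobalAveragePooling2D()(p4))

    return tf.keras.Model(inputs, [seg_out, inst_out, clf_out])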

Full Paper

PanNuke Dataset:

The PanNuke dataset can be downloaded from here.

Dataset Preparation

You can follow the steps highlighted in the following repo to prepare the dataset for training.
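
For orientation, a minimal sketch of loading one PanNuke fold from its distributed .npy arrays is shown below. The file layout (images.npy, masks.npy, types.npy per fold) and the 6-channel mask convention are assumptions based on the public PanNuke release; adjust the paths to whatever the preparation repo above produces.

# Sketch of loading one PanNuke fold. Paths and the 6-channel mask layout
# (5 nucleus classes + background) are assumptions about the public release.
import numpy as np

fold_dir = "Fold 1"  # hypothetical extraction path
images = np.load(f"{fold_dir}/images/fold1/images.npy")  # (N, 256, 256, 3) RGB patches
masks  = np.load(f"{fold_dir}/masks/fold1/masks.npy")    # (N, 256, 256, 6) per-class instance maps
types  = np.load(f"{fold_dir}/images/fold1/types.npy")   # (N,) tissue type per patch

print(images.shape, masks.shape, types.shape)

# Example: binary nuclei-vs-background mask from the class channels
# (the last channel is assumed to be background in the public release).
binary_mask = (masks[..., :5].sum(axis=-1) > 0).astype(np.uint8)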

Network Architecture:

The figure below shows the full architecture of our proposed TSFD-Net.

[Figure: TSFD-Net architecture]

Results

The table below compares the quantitative results of different models.

[Table: quantitative comparison of models]

Visual Results

The figure below shows some qualitative results.

[Figure: qualitative results]

Evaluation

To evaluate the model we use the Panoptic Quality (PQ) metric, as introduced in the HoVer-Net paper.

We use the official implementation provided by the authors of the PanNuke dataset.

To see our implementation, follow the link. We mainly follow the original implementation, with some minor improvements for exception handling, bug fixes, and better visualization.
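
For a quick local sanity check, the following is a minimal, self-contained sketch of binary Panoptic Quality over two instance-label maps, matching instances at IoU > 0.5. It is only an illustration and not the official PanNuke evaluation code linked above.

# Sketch of binary Panoptic Quality for two instance-label maps
# (0 = background, 1..K = instance ids). Matches pairs at IoU > 0.5, then
# PQ = sum(matched IoU) / (TP + 0.5*FP + 0.5*FN). Not the official PanNuke code.
import numpy as np

def panoptic_quality(gt, pred, iou_thresh=0.5):
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]

    matched_ious = []
    matched_pred = set()
    for g in gt_ids:
        g_mask = gt == g
        best_iou, best_p = 0.0, None
        for p in pred_ids:
            if p in matched_pred:
                continue
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            if inter == 0:
                continue
            iou = inter / np.logical_or(g_mask, p_mask).sum()
            if iou > best_iou:
                best_iou, best_p = iou, p
        if best_p is not None and best_iou > iou_thresh:
            matched_ious.append(best_iou)
            matched_pred.add(best_p)

    tp = len(matched_ious)
    fp = len(pred_ids) - tp
    fn = len(gt_ids) - tp
    if tp == 0:
        return 0.0
    dq = tp / (tp + 0.5 * fp + 0.5 * fn)   # detection quality
    sq = sum(matched_ious) / tp            # segmentation quality
    return dq * sq                          # PQ = DQ * SQ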

Citation

@article{ilyas2022tsfd,
  title={TSFD-Net: Tissue specific feature distillation network for nuclei segmentation and classification},
  author={Ilyas, Talha and Mannan, Zubaer Ibna and Khan, Abbas and Azam, Sami and Kim, Hyongsuk and De Boer, Friso},
  journal={Neural Networks},
  year={2022},
  publisher={Elsevier}
}

tsfd's People

Contributors

dependabot[bot] and mr-talhailyas

tsfd's Issues

Could you please help analyze what went wrong with the reproduction process

@Mr-TalhaIlyas
Hello! Thank you for opening up the source code for such exciting work. I encountered some issues while replicating your TSFD work.
The evaluation metrics obtained through my reproduction are as follows:

Metric Value
loss 0.0427
clf_out_loss 0.012
seg_out_loss 0.1062
inst_out_loss -0.0975
clf_out_accuracy 0.5502
seg_out_mean_iou 0.261
inst_out_mean_iou 0.3711

When using the weight file model.h5 you provided, the results are as follows:

Metric Value
loss -0.0267
clf_out_loss 0.0015
seg_out_loss 0.0774
inst_out_loss -0.1086
clf_out_accuracy 0.9345
seg_out_mean_iou 0.3388
inst_out_mean_iou 0.4078

We have carefully checked every step of the code and configured it according to the suggestions in your paper, but the results are still disappointing, as shown in the table above. We noticed that the loss did not fully converge. Currently, the number of epochs is set to 150. Could this result be caused by training for too few epochs? If so, how many epochs would be appropriate? If not, do you have any suggestions?

I would be extremely grateful if you could provide some suggestions to help me reproduce the results successfully. Thank you!

Handling CoNSeP data, as the image size is 540x540

Hi Mr-TalhaIlyas,

Thanks for sharing your work. Can you please provide details on how you handled the CoNSeP dataset, or any other dataset where the patch/tile size is not a multiple of 256? For example, CoNSeP has a tile size of 540x540x3.

Thanks in advance
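
One common workaround for such sizes (an illustrative sketch, not necessarily how the authors handled CoNSeP) is to slide a 256x256 window and shift the last window back so it ends exactly at the image border, which yields overlapping patches:

# Illustrative sketch: extract 256x256 patches from an image whose size is not
# a multiple of 256 (e.g. 540x540), clamping the last window to the image
# border so the final patches overlap instead of being padded.
import numpy as np

def extract_patches(image, patch=256, stride=256):
    h, w = image.shape[:2]
    ys = list(range(0, max(h - patch, 0) + 1, stride))
    xs = list(range(0, max(w - patch, 0) + 1, stride))
    # Make sure the final row/column of patches reaches the border.
    if ys[-1] + patch < h:
        ys.append(h - patch)
    if xs[-1] + patch < w:
        xs.append(w - patch)
    return [image[y:y + patch, x:x + patch] for y in ys for x in xs], ys, xs

patches, ys, xs = extract_patches(np.zeros((540, 540, 3), dtype=np.uint8))
print(len(patches), ys, xs)  # 9 patches at offsets [0, 256, 284]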

slide-level inference script

Hi, I saw the TSFD paper and was very excited to try it out. Will you release a slide-level inference script, similar to what HoVer-Net has?

Loss value reaches inf in the first epoch

Hi

Thanks for sharing your work. I am running your code on TensorFlow 2.6, but I get an inf error. It looks like the loss value becomes too large, which causes this error.

Can you please let me know what is going wrong?


tumor types

Hi,
which files have you used for the tumor types .npy file?

Callbacks not provided

Dear Mr-TalhaIlyas, thank you for your work and the amazing paper.

I saw that you deleted your callbacks script from several GitHub repos, including this one. Is it possible to have the script so that the code can be run? I don't understand why it is no longer here; maybe I am missing something.

Since main_tsfd.py contains:
from callbacks import [...]
I tried in vain to install a callbacks library, in case the functions were part of a package rather than the missing script, but then I reached:

Traceback (most recent call last):
  File "main_tsfd.py", line 23, in <module>
    from callbacks import PlotLearning, PredictionCallback, CustomLearningRateScheduler, CustomDropoutScheduler
ImportError: cannot import name 'PlotLearning'

Thank you for your help and support,
Lucas

Layers and some Custom Callbacks not provided

Dear Mr-TalhaIlyas, thank you for your work.

I just came back to this repo, and now I see that the layers.py script is also missing and was deleted earlier. It is exactly the same issue as before with the callbacks.py script.

Is it possible to have it back?

Also, some callbacks are still missing (PlotLearning, PredictionCallback, CustomLearningRateScheduler, CustomDropoutScheduler, CustomKernelRegularizer), and I would like to have them rather than re-implementing them myself, given that this code is open source as stated in the paper.

Thank you for the paper, your help and support,
Lucas

Cannot reproduce performance as per paper

@Mr-TalhaIlyas
Hi, I wanted to reproduce the results reported in the paper on the PanNuke dataset, but I don't seem to get anywhere near them.
I tried running the script as it is in the current version of the repository, but I could not get it to start, so I reverted to the original loss combination as per the paper:

def SEG_Loss(y_true, y_pred):
    # weight for tp hr v3 only
    loss = FocalTverskyLoss(y_true, y_pred, smooth=1e-7) + [0.4 * Weighted_BCEnDice_loss(y_true, y_pred)]
    return tf.math.reduce_mean(loss)

def INST_Loss(y_true, y_pred):
    # weight for tp hr v3 only
    loss = FocalTverskyLoss(y_true, y_pred, smooth=1e-7) + [0.4 * Combo_loss(y_true, y_pred, smooth=1e-7)]
    return tf.math.reduce_mean(loss)

This worked and I could train the model.
I used the splits provided in TSFD/tsfd_weights/splits, but after finishing the training the results were poor:
[screenshot of results]

I used the provided weights (Efficent_pet_203_clf-end.h5) for testing, and the results are much better:
[screenshot of results]

I would like to ask what parameters and loss combination you used to train the Efficent_pet_203_clf-end.h5 model:

  • What was the dropout value? I did a couple of runs, and in my case dropout > 0.1 negatively affects model performance. Did you use init_dropout = 0.2?
  • What loss combination was used?
  • There seems to be a problem with running the script with the loss configuration as it currently is in the repository:
    • the Focal_loss function does not have a smooth parameter;
    • after removing that parameter, I get an error in the BCE calculation: TypeError: Input 'y' of 'Mul' Op has type float16 that does not match type uint8 of argument 'x';
    • when casting x to float, there is an issue with dimensions: tensorflow.python.framework.errors_impl.InvalidArgumentError: required broadcastable shapes [[node SEG_Loss/mul (defined at home/mateusz/org_TSFD/scripts/losses.py:27) ]] [Op:__inference_test_function_64607]

I would appreciate it if you could help me solve the issue and be able to train the model so it achieves similar results to Efficent_pet_203_clf-end.h5. Thank you very much in advance.
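
Regarding the float16/uint8 TypeError above: a common cause is integer-typed masks meeting mixed-precision tensors inside the loss. A minimal hedged sketch of a cast-first loss wrapper (not the repository's code, just one possible fix) is:

# Sketch: cast both tensors to a common float dtype before any elementwise
# multiplication inside a loss, which avoids "Input 'y' of 'Mul' Op has type
# float16 that does not match uint8". Not the repository's implementation.
import tensorflow as tf

def cast_like(y_true, y_pred):
    y_true = tf.cast(y_true, tf.float32)   # uint8 masks -> float32
    y_pred = tf.cast(y_pred, tf.float32)   # float16 logits -> float32
    return y_true, y_pred

def example_weighted_bce(y_true, y_pred):
    y_true, y_pred = cast_like(y_true, y_pred)
    bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)
    return tf.reduce_mean(bce)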

Weights file provided does not perform as per the paper

Hi Mr-TalhaIlyas,

I used the provided weights (Efficient_pet_203_clf-end), but the results I get are far from those reported in the paper. Can you please provide the correct weight file?

Metric Value
loss 0.6111
clf_out_loss 0.0016
seg_out_loss 0.2662
inst_out_loss 0.3402
clf_out_accuracy 0.9308
seg_out_mean_iou 0.7642
inst_out_mean_iou 0.4022

Nuclei Type PQ
Neoplastic 0.2069
Inflammatory 0.2909
Connective 0.1036
Dead 0.5984
Non-Neoplastic 0.5625

Deconvolution step for CoNSeP data

Hi Talha

Thanks for updating the evaluation and post-processing steps. Can you also describe the stain deconvolution step for the CoNSeP data? I tried using the default h_e_r values, but they do not seem to do a fair job. The maximum IoU for boundary detection is 0.4, so the boundaries do not form well and the performance is low.
Can you suggest something to improve the result?
I am training on the CoNSeP dataset (1000x1000). The data is prepared with a step size of 256, with overlap for the last window to keep the patch size consistent.
Thanks in advance
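
As background to the question above, one widely used starting point for H&E stain separation is color deconvolution with the generic Ruifrok-Johnston stain matrix. The sketch below uses scikit-image's rgb2hed defaults; it is not the repository's h_e_r configuration, and the input path is hypothetical.

# Illustrative sketch of H&E stain deconvolution with scikit-image defaults.
# Uses the generic Ruifrok-Johnston stain matrix (rgb2hed), not the
# repository's custom h_e_r values, so results may differ.
import numpy as np
from skimage import io
from skimage.color import rgb2hed

rgb = io.imread("consep_tile.png")[..., :3]   # hypothetical input path
hed = rgb2hed(rgb)                            # channels: Hematoxylin, Eosin, DAB

hematoxylin = hed[..., 0]                     # nuclei stain channel
# Normalise to [0, 1] for visualisation or as an extra network input.
h_norm = (hematoxylin - hematoxylin.min()) / (np.ptp(hematoxylin) + 1e-8)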

Assigning index to each instance per class

Hi Talha

After getting the boundary information from the network, how are you labelling instances (counting them), given that the boundaries overlap? How do you distinguish each cell nucleus? Could you provide code for the post-processing steps?

Thanks
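
As context for the question above, a common generic recipe (a sketch of one possible approach, not necessarily the TSFD-Net post-processing) is to remove the predicted boundary from the foreground mask, label the remaining interiors as seeds, and grow them back with a watershed:

# Sketch of one common instance-separation recipe (not necessarily TSFD-Net's):
# subtract the predicted boundary from the foreground mask, label the interiors,
# then grow each seed back over the full foreground with a watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def label_instances(fg_prob, boundary_prob, fg_thresh=0.5, bd_thresh=0.5):
    foreground = fg_prob > fg_thresh
    interior = np.logical_and(foreground, boundary_prob < bd_thresh)

    # Connected components of the interiors act as one seed per nucleus.
    seeds, n_nuclei = ndi.label(interior)

    # Watershed on the inverted foreground probability recovers full extents.
    instances = watershed(-fg_prob, markers=seeds, mask=foreground)
    return instances, n_nuclei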
