
deepmialab / ai-ffpe

47 stars · 1 watcher · 9 forks · 137.95 MB

Deep Learning-based Frozen Section to FFPE Translation

Home Page: https://www.dropbox.com/sh/x7fvxx1fiohxwb4/AAAObJJTJpIHHi-s2UafrKeea?dl=0

License: Other

Python 98.81% TeX 0.54% Shell 0.66%
deep-learning computer-vision generative-adversarial-networks pytorch-implementation

ai-ffpe's Introduction

A deep-learning model for transforming the style of tissue images from cryosectioned to formalin-fixed and paraffin-embedded

In this work, we propose the AI-FFPE pipeline, which is optimized for histopathology images by driving the network's attention specifically to the nuclei and to deficiencies related to tissue-preparation protocols. Compared to CycleGAN, our model trains faster and is less memory-intensive.
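The attention block and self-regularization loss are defined in the repository's model code; the pretrained checkpoints expose the attention block under keys such as SAB.conv1/conv2/conv3 (see the issue log further below). For orientation only, here is a minimal PyTorch sketch of a SAGAN-style self-attention block built from three 1x1 convolutions. It is an assumption about the general mechanism, not the exact AI-FFPE implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionSketch(nn.Module):
    """Illustrative SAGAN-style self-attention; NOT the exact AI-FFPE SAB."""
    def __init__(self, in_ch):
        super().__init__()
        mid_ch = max(in_ch // 8, 1)
        self.conv1 = nn.Conv2d(in_ch, mid_ch, kernel_size=1)   # query projection
        self.conv2 = nn.Conv2d(in_ch, mid_ch, kernel_size=1)   # key projection
        self.conv3 = nn.Conv2d(in_ch, in_ch, kernel_size=1)    # value projection
        self.gamma = nn.Parameter(torch.zeros(1))               # learnable blend weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.conv1(x).flatten(2).transpose(1, 2)   # B x HW x C'
        k = self.conv2(x).flatten(2)                    # B x C' x HW
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # B x HW x HW attention map
        v = self.conv3(x).flatten(2)                    # B x C x HW
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # residual connection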



Example Results

Frozen to FFPE Translation in Brain Specimens

Frozen to FFPE Translation in Lung Specimens

Prerequisites

  • Linux or macOS
  • Python 3
  • CPU or NVIDIA GPU + CUDA CuDNN

Getting started

  • Clone this repo:
git clone https://github.com/DeepMIALab/AI-FFPE
cd AI-FFPE
  • Install PyTorch 1.1 and other dependencies (e.g., torchvision, visdom, dominate, gputil). A quick sanity check for the installed environment is sketched after this list.

  • For pip users, please type the command pip install -r requirements.txt.

  • For Conda users, you can create a new Conda environment using conda env create -f environment.yml.
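Before moving on, it can help to confirm that the environment actually sees PyTorch and the GPU. The snippet below is not part of the repository; it is only a quick check.

# Quick environment check (not part of the repository).
import torch
import torchvision

print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))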

Training and Test

  • The slide identity numbers used in the train, validation, and test sets are given as .txt files in docs/ for both the Brain and Lung datasets. To replicate the results, you may download the GBM and LGG projects for Brain and the LUAD and LUSC projects for Lung from the TCGA Data Portal and create a subset using these .txt files (a minimal subset-selection sketch follows this list).
  • To extract the patches from WSIs and create PNG files, please follow the instructions given in the AI-FFPE/Data_preprocess section.
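As referenced above, the .txt split files list slide identifiers. The helper below is only a hypothetical illustration of building such a subset; the file names, the one-ID-per-line format, and the .svs extension are assumptions, not part of the repository.

from pathlib import Path
import shutil

def build_subset(split_file, slide_dir, out_dir):
    # Read one slide ID per line from a docs/*.txt split file (assumed format).
    ids = {line.strip() for line in Path(split_file).read_text().splitlines() if line.strip()}
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for svs in Path(slide_dir).glob("*.svs"):
        # TCGA WSI file names begin with the slide/case identifier.
        if any(svs.name.startswith(slide_id) for slide_id in ids):
            shutil.copy2(svs, out / svs.name)

# Example call (all paths are illustrative):
# build_subset("docs/brain_train.txt", "/data/TCGA_GBM_LGG", "./datasets/Frozen/brain_wsi_train")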

The data used for training are expected to be organized as follows:

Data_Path                # DIR_TO_TRAIN_DATASET
 ├──  trainA
 |      ├── 1.png     
 |      ├── ...
 |      └── n.png
 ├──  trainB     
 |      ├── 1.png     
 |      ├── ...
 |      └── m.png
 ├──  valA
 |      ├── 1.png     
 |      ├── ...
 |      └── j.png
 └──  valB     
        ├── 1.png     
        ├── ...
        └── k.png
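A small check that a dataset folder matches the layout above can save a failed run. The directory names come from the tree shown here; everything else in this snippet is illustrative.

from pathlib import Path

def check_layout(data_path):
    # Confirm that trainA/trainB/valA/valB exist and each contains PNG patches.
    root = Path(data_path)
    for sub in ("trainA", "trainB", "valA", "valB"):
        pngs = list((root / sub).glob("*.png"))
        if not pngs:
            raise FileNotFoundError(f"{root / sub} is missing or contains no .png files")
        print(f"{sub}: {len(pngs)} patches")

# check_layout("./datasets/Frozen/brain")   # illustrative path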
  • To view training results and loss plots, run python -m visdom.server and click the URL http://localhost:8097.

  • Train the AI-FFPE model:

python train.py --dataroot ./datasets/Frozen/${dataroot_train_dir_name} --name ${model_results_dir_name} --CUT_mode CUT --batch_size 1
  • Test the AI-FFPE model:
python test.py --dataroot ./datasets/Frozen/${dataroot_test_dir_name}  --name ${result_dir_name} --CUT_mode CUT --phase test --epoch ${epoch_number} --num_test ${number_of_test_images}

The test results will be saved to an HTML file here: ./results/${result_dir_name}/latest_train/index.html

[Comparison of AI-FFPE, AI-FFPE without the Spatial Attention Block, AI-FFPE without the self-regularization loss, CUT, FastCUT, and CycleGAN]

Apply a pre-trained AI-FFPE model and evaluate

For reproducibility, you can download the pretrained models for each algorithm here.
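Once a model has produced test outputs, one common quantitative check is FID against real FFPE patches, for which the acknowledgments credit pytorch-fid. The directories below are assumptions about where test.py writes images and where reference patches live, not guaranteed paths.

# Compute FID between generated FFPE-style patches and real FFPE patches
# using pytorch-fid's module entry point. Both paths are illustrative.
import subprocess

fake_dir = "./results/lung_experiment/latest_train/images/fake_B"   # assumed test.py output folder
real_dir = "./datasets/Frozen/lung/valB"                            # assumed real FFPE reference patches

subprocess.run(["python", "-m", "pytorch_fid", fake_dir, real_dir], check=True)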

Reference

If you find our work useful in your research, or if you use parts of this code, please consider citing our paper:

@article{ozyoruk2022aiffpe,
  author  = {Ozyoruk, Kutsev and Can, Sermet and Darbaz, Berkan and Başak, Kayhan and Demir, Derya and Gokceler, Irem and Serin, Gurdeniz and Hacısalihoglu, Payam and Kurtuluş, Emirhan and Lu, Ming and Chen, Tiffany and Williamson, Drew and Yılmaz, Funda and Mahmood, Faisal and Turan, Mehmet},
  title   = {A deep-learning model for transforming the style of tissue images from cryosectioned to formalin-fixed and paraffin-embedded},
  journal = {Nature Biomedical Engineering},
  volume  = {6},
  pages   = {},
  month   = {12},
  year    = {2022},
  doi     = {10.1038/s41551-022-00952-9}
}

Acknowledgments

Our code is developed based on CUT. We also thank pytorch-fid for FID computation, and stylegan2-pytorch for the PyTorch implementation of StyleGAN2 used in our single-image translation setting.

ai-ffpe's People

Contributors

deepmialab · guliz-gokceler-2014400192 · kutsev-ozyoruk


ai-ffpe's Issues

Pre-trained model weights mismatched

Hello,

I really want to test your algorithm (for testing only, not for training).

However, the pre-trained model does not match the defined model. (I used the CUT model for now, with the weights in Lung/CUT from the pre-trained models you provided.)

Please check the test command line again (screenshot below).

[Screenshot: 2023-01-26, 14:42]

Which case should be used for FrozGan?

Hi there, thanks a lot for the great work and congratulations!

I want to quickly adapt your method to my dataset; sorry that I haven't had time to go through your article carefully. I guess the best models are stored in the FrozGanModels/wAtt_wLoss folder. Am I correct? In addition, I found that there are three cases there. Could you please give me some suggestions on how to quickly apply your model? Which case should I use? Thanks a lot!

Pre-trained models not working

Hi, I tried out the CUT pre-trained model; however, I have the feeling that the pre-trained models are outdated relative to the current version of PyTorch, or that your code has changed, so there is a mismatch when loading the models.

Here is the config and the error I get. Could you maybe help me with this?

user@5789a7f2eba2:~/AI-FFPE$ python3 test.py --dataroot /home/user/patches_png/ --results_dir /home/user/results --direction AtoB --dataset_mode single --name Lung/CUT --epoch 5 
----------------- Options ---------------
                 CUT_mode: CUT                           
               batch_size: 1                             
          checkpoints_dir: ./checkpoints                 
                crop_size: 512                           
                 dataroot: /home/user/patches_png/              [default: placeholder]
             dataset_mode: single                               [default: unaligned]
                direction: AtoB                          
          display_winsize: 512                           
               easy_label: experiment_name               
                    epoch: 5                                    [default: latest]
                     eval: False                         
        flip_equivariance: False                         
                  gpu_ids: 0                             
                init_gain: 0.02                          
                init_type: xavier                        
                 input_nc: 3                             
                  isTrain: False                                [default: None]
               lambda_GAN: 1.0                           
               lambda_NCE: 1.0                           
                load_size: 512                           
         max_dataset_size: inf                           
                    model: cut                           
               n_layers_D: 3                             
                     name: Lung/CUT                             [default: experiment_name]
                    nce_T: 0.07                          
                  nce_idt: True                          
nce_includes_all_negatives_from_minibatch: False                         
               nce_layers: 0,4,8,12,16                   
                      ndf: 64                            
                     netD: basic                         
                     netF: mlp_sample                    
                  netF_nc: 256                           
                     netG: resnet_9blocks                
                      ngf: 64                            
             no_antialias: False                         
          no_antialias_up: False                         
               no_dropout: True                          
                  no_flip: False                         
                    normD: instance                      
                    normG: instance                      
              num_patches: 256                           
                 num_test: 50                            
              num_threads: 4                             
                output_nc: 3                             
                    phase: test                          
                pool_size: 0                             
               preprocess: none                          
         random_scale_max: 3.0                           
              results_dir: /home/user/results                   [default: ./results/]
      self_regularization: 0.03                          
           serial_batches: False                         
stylegan2_G_num_downsampling: 1                             
                   suffix:                               
                  verbose: False                         
----------------- End -------------------
dataset [SingleDataset] was created
dataset [SingleDataset] was created
model [CUTModel] was created
creating web directory /home/user/results/test_5
loading the model from ./checkpoints/Lung/CUT/5_net_G.pth
Traceback (most recent call last):
  File "/home/user/AI-FFPE/test.py", line 57, in <module>
    model.setup(opt)               # regular setup: load and print networks; create schedulers
  File "/home/user/AI-FFPE/models/base_model.py", line 99, in setup
    self.load_networks(load_suffix)
  File "/home/user/AI-FFPE/models/base_model.py", line 225, in load_networks
    net.load_state_dict(state_dict)
  File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ResnetGenerator:
        Missing key(s) in state_dict: "SAB.conv1.weight", "SAB.conv2.weight", "SAB.conv3.weight", "model.4.conv1.weight", "model.4.conv2.weight", "model.4.conv3.weight", "model.5.weight", "model.5.bias", "model.8.filt", "model.9.weight", "model.9.bias", "model.12.filt", "model.21.conv_block.1.weight", "model.21.conv_block.1.bias", "model.21.conv_block.5.weight", "model.21.conv_block.5.bias", "model.22.filt", "model.23.weight", "model.23.bias", "model.26.filt", "model.27.weight", "model.27.bias", "model.31.weight", "model.31.bias". 
        Unexpected key(s) in state_dict: "model.4.weight", "model.4.bias", "model.7.filt", "model.8.weight", "model.8.bias", "model.11.filt", "model.12.conv_block.1.weight", "model.12.conv_block.1.bias", "model.12.conv_block.5.weight", "model.12.conv_block.5.bias", "model.21.filt", "model.22.weight", "model.22.bias", "model.25.filt", "model.26.weight", "model.26.bias", "model.30.weight", "model.30.bias". 
