
BronchiNet

Airway segmentation from chest CTs using deep Convolutional Neural Networks

Contact: Antonio Garcia-Uceda Juarez ([email protected])

Introduction

This software provides functionality to segment airways from chest CT scans using deep CNN models, in particular the U-Net. The implementation of the segmentation method is described in:

  • [1] Garcia-Uceda, A., Selvan, R., Saghir, Z., Tiddens, H.A.W.M., de Bruijne, M. Automatic airway segmentation from computed tomography using robust and efficient 3-D convolutional neural networks. Scientific Reports 11, 16001 (2021). https://doi.org/10.1038/s41598-021-95364-1

If using this software positively influences your project, please cite the above paper.

This software includes tools to i) prepare the CT data for use with DL models, ii) perform DL experiments for training and testing, and iii) process the output of DL models to obtain binary airway segmentations. The tools are entirely implemented in Python, and either PyTorch or Keras/TensorFlow can be used to run the DL experiments.

Project Organization

├── LICENSE
├── Makefile                <- Makefile with commands like `make data` or `make train`
├── README.md               <- The top-level README for developers using this project
│
├── docs                    <- A default Sphinx project; see sphinx-doc.org for details
│
├── models                  <- Trained and serialized models, model predictions, or model summaries
│
├── requirements.txt        <- The requirements file for reproducing the analysis environment, e.g.
│                              generated with `pip freeze > requirements.txt`
│
├── scripts_launch          <- Scripts with pipelines and PBS scripts to run in clusters
│
├── setup.py                <- makes project pip installable (pip install -e .) so src can be imported
├── src                     <- Source code for use in this project.
│   │
│   ├── common              <- General files and utilities
│   ├── dataloaders         <- Modules to load data and batch generators
│   ├── imageoperators      <- Various image operations
│   ├── models              <- All modules to define networks, metrics and optimizers
│   ├── plotting            <- Various plotting modules
│   ├── postprocessing      <- Modules to postprocess the output of networks
│   ├── preprocessing       <- Modules to preprocess the images to feed to networks
│   │
│   ├── scripts_evalresults <- Scripts to evaluate results from models
│   ├── scripts_experiments <- Scripts to train and test models
│   ├── scripts_preparedata <- Scripts to prepare data to train models
│   └── scripts_util        <- Scripts for various utilities
│
├── tests                   <- Tests to validate the method implementation (to be run locally)
└── tox.ini                 <- tox file with settings for running tox; see tox.readthedocs.io

Project structure based on the cookiecutter data science project template.

Requirements

(Recommended: use a Python virtual environment)

  • python -m venv <path_new_pyvenv>
  • source <path_new_pyvenv>/bin/activate
  • pip install -r requirements.txt

Instructions


Prepare Data Directory

Before running the scripts, the user needs to prepare the data directory with the following structure:

├── Images                  <- Store CT scans (in dicom or nifti format)
├── Airways                 <- Store reference airway segmentations
├── Lungs (optional)        <- Store lung segmentations 
│                              (used in options i) mask ground-truth to ROI, and ii) crop images)
└── CoarseAirways (optional) <- Store segmentation of trachea and main bronchi
                               (used in option to attach trachea and main bronchi to predictions)
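
For reference, this layout can be created with a short Python snippet; a minimal sketch using pathlib (the path below is a placeholder for your own location, not a repo convention):

    # Minimal sketch: create the expected data directory layout.
    # "/path/to/your/data_dir" is a placeholder for your own location.
    from pathlib import Path

    data_dir = Path("/path/to/your/data_dir")
    for subdir in ("Images", "Airways", "Lungs", "CoarseAirways"):
        (data_dir / subdir).mkdir(parents=True, exist_ok=True)

("Lungs" and "CoarseAirways" are only needed if you use the corresponding options above.)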

Prepare Working Directory

The user needs to prepare the working directory in the desired location, as follows:

  1. mkdir <path_your_work_dir> && cd <path_your_work_dir>
  2. ln -s <path_your_data_dir> BaseData
  3. ln -s <path_this_repo> Code

Run the Scripts

The user only needs to run the scripts in the directories: "scripts_evalresults", "scripts_experiments", "scripts_launch", "scripts_preparedata", "scripts_util". Each script performs a separate, well-defined operation, either to i) prepare data, ii) run experiments, or iii) evaluate results.

The scripts are called in the command line as follows:

  • python <path_script> <mandat_arg1> <mandat_arg2> ... --<option_arg1>= --<option_arg2>= ...

    • <mandat_argN>: mandatory arguments (if any), typically the paths of input / output directories or files

    • <option_argN>: optional arguments, specific to each script. The list of optional arguments for a script can be displayed by:

      • python <path_script> --help
    • Optional arguments not indicated on the command line take their default values from the source file: "<path_this_repo>/src/common/constant.py"

(IMPORTANT): set the PYTHONPATH environment variable to the source path of this repo, as follows:

  • export PYTHONPATH=<path_this_repo>/src/
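
With PYTHONPATH set this way, the packages under "src" should be importable from any location; a quick sanity check (this assumes the "src/common" layout shown in "Project Organization"):

    # Sanity check, run after "export PYTHONPATH=<path_this_repo>/src/";
    # assumes the src/common/constant.py layout shown above.
    from common import constant
    print(constant.__file__)  # should point inside <path_this_repo>/src/common/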

Important Scripts

Steps to Prepare Data

1. From the data directory above, create the working data used for training / testing:

  • python <path_this_repo>/src/scripts_preparedata/prepare_data.py --datadir=<path_data_dir>

Several preprocessing operations can be applied in this script:

  1. mask ground-truth to ROI: lungs
  2. crop images around the lungs
  3. rescale images

IF using the option to crop images: compute the bounding boxes of the lung masks prior to running the script above:

  • python <path_this_repo>/src/scripts_preparedata/compute_boundingbox_images.py --datadir=<path_data_dir>
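
For reference, the bounding box of a binary lung mask is simply the extent of its nonzero voxels; a hedged NumPy sketch of the idea (independent of the repo's own implementation):

    import numpy as np

    def bounding_box_3d(mask: np.ndarray):
        """Return per-axis (min, max) index pairs of the nonzero region.
        Assumes a non-empty mask."""
        coords = np.argwhere(mask > 0)
        mins = coords.min(axis=0)
        maxs = coords.max(axis=0) + 1  # exclusive upper bounds
        return list(zip(mins.tolist(), maxs.tolist()))

    # Cropping an image around the lungs then amounts to:
    # (x0, x1), (y0, y1), (z0, z1) = bounding_box_3d(lung_mask)
    # cropped = image[x0:x1, y0:y1, z0:z1]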

Steps to Train Models

1. Distribute the working data into training / validation / testing sets:

  • python <path_this_repo>/src/scripts_experiments/distribute_data.py --basedir=<path_work_dir>

2. Launch a training experiment:

  • python <path_this_repo>/src/scripts_experiments/train_model.py --basedir=<path_work_dir> --modelsdir=<path_output_models>

OR restart a previous training experiment:

  • python <path_this_repo>/src/scripts_experiments/train_model.py --basedir=<path_work_dir> --modelsdir=<path_stored_models> --is_restart=True --in_config_file=<path_config_file>

Steps to Test Models

1. Compute probability maps from a trained model:

  • python <path_this_repo>/src/scripts_experiments/predict_model.py <path_trained_model> <path_output_work_probmaps> --basedir=<path_work_dir> --in_config_file=<path_config_file>

The output probability maps have the same format and dimensions as the working data used for testing, which typically differ from those of the original data (if the preprocessing options above were applied in the script "prepare_data.py").
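
To verify this, you can compare the shapes of a working-space probability map and the original CT; a minimal sketch using nibabel (assuming NIfTI files; paths are placeholders):

    import nibabel as nib

    probmap = nib.load("probmap_work.nii.gz")   # placeholder path
    original = nib.load("original_ct.nii.gz")   # placeholder path
    print("working prob. map shape:", probmap.shape)
    print("original CT shape:", original.shape)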

2. Convert the probability maps to the format and dimensions of the original data:

  • python <path_this_repo>/src/scripts_evalresults/postprocess_predictions.py <path_output_work_probmaps> <path_output_probmaps> --basedir=<path_work_dir>
  • rm -r <path_output_work_probmaps>

3. Compute binary mask of airways from probability maps:

  • python <path_this_repo>/src/scripts_evalresults/process_predicted_airway_tree.py <path_output_probmaps> <path_output_binmasks> --basedir=<path_work_dir>

4. (IF NEEDED) Compute the largest connected component of the airway binary masks:

  • python <path_this_repo>/src/scripts_util/apply_operation_images.py <path_output_binmasks> <path_output_conn_binmasks> --type=firstconreg

  • rm -r <path_output_binmasks> && mv <path_output_conn_binmasks> <path_output_binmasks>
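
For reference, keeping the largest connected component of a binary mask can be done with SciPy; a hedged sketch of the operation (the repo's "--type=firstconreg" may differ in details such as connectivity):

    import numpy as np
    from scipy import ndimage

    def largest_connected_component(binmask: np.ndarray) -> np.ndarray:
        """Keep only the largest 26-connected foreground component."""
        labels, num = ndimage.label(binmask, structure=np.ones((3, 3, 3)))
        if num == 0:
            return binmask
        sizes = ndimage.sum(binmask > 0, labels, index=range(1, num + 1))
        return (labels == (np.argmax(sizes) + 1)).astype(binmask.dtype)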

5. Compute airway centrelines from airway binary masks:

  • python <path_this_repo>/src/scripts_util/apply_operation_images.py <path_output_binmasks> <path_output_cenlines> --type=thinning
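
For reference, 3-D thinning of a binary mask to a one-voxel-wide centreline can be done with scikit-image (whose skeletonize accepts 3-D input in recent versions); a hedged sketch, independent of the repo's implementation and using placeholder paths:

    import nibabel as nib
    from skimage.morphology import skeletonize

    mask_nii = nib.load("airway_binmask.nii.gz")  # placeholder path
    mask = mask_nii.get_fdata() > 0
    centreline = skeletonize(mask)                # 3-D thinning
    nib.save(nib.Nifti1Image(centreline.astype("uint8"), mask_nii.affine),
             "airway_cenline.nii.gz")             # placeholder path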

6. Compute the desired metrics from the results:

  • python <path_this_repo>/src/scripts_evalresults/compute_result_metrics.py <path_output_binmasks> <path_output_cenlines> --basedir=<path_work_dir>
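
A common overlap metric for evaluating airway segmentations against a reference is the Dice similarity coefficient; a hedged NumPy sketch of this metric (not the repo's implementation):

    import numpy as np

    def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
        """Dice similarity coefficient between two binary masks."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        total = pred.sum() + ref.sum()
        if total == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(pred, ref).sum() / total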

Other Scripts

The user can apply various operations to input images / masks, such as i) binarising masks, ii) masking images to a given mask, or iii) rescaling images, as follows:

  • python <path_this_repo>/src/scripts_util/apply_operation_images.py <path_input_files> <path_output_files> --type=<various_options>

Some operations require extra input arguments. To display the list of available operations and their required input arguments, append "--help" to the script call.
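
For example, binarising a mask amounts to thresholding its voxel values; a hedged sketch of what such an operation does (placeholder paths, independent of the script's implementation):

    import numpy as np
    import nibabel as nib

    mask_nii = nib.load("input_mask.nii.gz")  # placeholder path
    binary = (mask_nii.get_fdata() > 0).astype(np.uint8)
    nib.save(nib.Nifti1Image(binary, mask_nii.affine), "output_binmask.nii.gz")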

Example usage

We provide a trained U-Net model with this software, which we used for evaluation on the public EXACT'09 dataset. You can use this model to compute airway segmentations on your own CT data. To do so:

  1. Prepare a folder with your own data, following the steps above in "Prepare Data Directory" (the "Airways" folder is not needed)

  2. Prepare a working directory, following the steps above in "Prepare Working Directory", and copy the folder "models" from this repo into it

  3. Run script: "bash models/run_model_trained.sh <path_your_input_data> <path_output_results> --torch"

We also provide a trained model using Keras/TensorFlow instead of PyTorch. To use it:

  1. Set "TYPE_DNNLIB_USED == 'Keras' " in the source file "<path_this_repo>/src/common/constant.py"

  2. Repeat the steps above, but with the flag '--keras' instead of '--torch' in step 3)

We also provide a Docker image with which you can evaluate the trained model on your own CT data inside a Docker container. To do this:

  1. Prepare a folder with your own data

  2. Pull our pre-built docker image: "sudo docker pull antonioguj/bronchinet:stable_torch"

  3. Run script: "bash run_docker_models.sh <path_your_input_data> <path_output_results>"
