A 3D lesion segmentation method on whole-body PET images including automated quality control.

License: Apache License 2.0

ai deep-learning dmax lymphoma maximum-intensity-projections mip oncology pet segmentation autopet


MIPsegmentatorV1


Artificial Intelligence for Efficient Learning-based Image Feature Extraction.

A tool for detecting and segmenting lesions in 3D whole-body PET images, with automated quality control. It was trained on lymphoma data, including the AutoPET data, and is distributed as a Docker image for easy use. First, it automatically detects and segments lesion regions on the coronal and sagittal maximum intensity projection (MIP) PET images. A 3D reconstruction algorithm then recovers the 3D lesion segmentation from the two 2D MIP segmentations using the SUV values of the 3D PET image.
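For intuition, the two steps above (MIP computation, then 3D reconstruction from two 2D masks) can be sketched in a few lines of numpy. This is an illustrative assumption of how such a reconstruction could work, not the published method: the axis conventions, function names, and the SUV threshold are all hypothetical.

```python
import numpy as np

def mip(volume, axis):
    """Maximum intensity projection of a 3D volume along one axis."""
    return volume.max(axis=axis)

def reconstruct_3d(suv, coronal_mask, sagittal_mask, suv_threshold=2.5):
    """Illustrative sketch: intersect the back-projections of the two
    2D MIP masks and keep only voxels whose SUV exceeds a threshold.

    Assumes a volume of shape (x, y, z), where the coronal MIP collapses
    y (axis=1) and the sagittal MIP collapses x (axis=0)."""
    coronal_3d = np.broadcast_to(coronal_mask[:, None, :], suv.shape)
    sagittal_3d = np.broadcast_to(sagittal_mask[None, :, :], suv.shape)
    return coronal_3d & sagittal_3d & (suv > suv_threshold)
```

A voxel is kept only if it lies inside both back-projected masks and is hot enough in SUV, which is one simple way two orthogonal 2D segmentations can constrain a 3D region.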

If you use this pipeline, please cite our JNM paper and the AutoPET data paper. For additional information on the AI-based lesion segmentation from the coronal and sagittal maximum intensity projection (MIP) PET images, please refer to the GitHub code.

This tool provides robust 2D MIP and 3D lesion segmentation on whole-body PET images, including automated quality control saved to Excel files. Surrogate biomarkers are automatically calculated and saved in an Excel file. The input 3D PET image must be in SUV units, in NIfTI format.

Required input folder structure

Please provide all data in a single directory. The method automatically analyses all given data batch-wise.

To run the program, you only need the patients' PET scans (CT is not required) in NIfTI format, with the PET images expressed in SUV units. If your images have already been segmented, you can also provide the mask (ground truth, gt) as a binary image in NIfTI format. If ground truth (gt) data is provided, the tool reports the Dice, sensitivity, and specificity metrics between the expert reference segmentation (i.e., gt) and the segmentation predicted by the model. If the ground truth is NOT available, the model only outputs the predicted segmentation.
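The three reported metrics have standard definitions over binary masks. A minimal sketch (the function name and the zero-denominator handling are assumptions, not the tool's code):

```python
import numpy as np

def segmentation_metrics(gt, pred):
    """Dice, sensitivity, and specificity between two binary masks."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    tp = np.sum(gt & pred)    # true positives
    fp = np.sum(~gt & pred)   # false positives
    fn = np.sum(gt & ~pred)   # false negatives
    tn = np.sum(~gt & ~pred)  # true negatives
    # Guard against empty denominators (e.g., an empty ground truth).
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 1.0
    specificity = tn / (tn + fp) if (tn + fp) else 1.0
    return dice, sensitivity, specificity
```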

A typical data directory might look like:

|-- input                                          <-- The main folder for all patient folders (any name)
|      |-- patient_folder_1                        <-- Individual patient folder with a unique id
|      |      |-- pet                              <-- The pet folder for the .nii SUV file
|      |      |      |-- name.nii or name.nii.gz   <-- The PET image in NIfTI format (any name)
|      |      |-- gt                               <-- The corresponding ground truth folder
|      |      |      |-- name.nii or name.nii.gz   <-- The ground truth (gt) image in NIfTI format (any name)
|      |-- patient_folder_2
|      |      |-- pet
|      |      |      |-- name.nii or name.nii.gz
|      |      |-- gt
|      |      |      |-- name.nii or name.nii.gz
|      |-- ...
|      |-- patient_folder_N
|      |      |-- pet
|      |      |      |-- name.nii or name.nii.gz
|      |      |-- gt
|      |      |      |-- name.nii or name.nii.gz

|-- output

Note: the folder name for the PET images must be pet and for the ground truth gt. All other folder and sub-folder names can be anything.
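A minimal sketch of how such a layout could be walked in Python, for readers who want to sanity-check their input folder before running the Docker image (the function name and error handling are assumptions, not the tool's actual code):

```python
from pathlib import Path

def find_cases(input_dir):
    """Collect (patient_id, pet_path, gt_path_or_None) tuples from the
    layout <patient>/pet/*.nii[.gz] with an optional <patient>/gt/."""
    cases = []
    for patient in sorted(Path(input_dir).iterdir()):
        if not patient.is_dir():
            continue
        pets = sorted(patient.glob("pet/*.nii*"))  # .nii and .nii.gz
        gts = sorted(patient.glob("gt/*.nii*"))
        if not pets:
            raise FileNotFoundError(f"no PET NIfTI found under {patient}/pet")
        cases.append((patient.name, pets[0], gts[0] if gts else None))
    return cases
```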

Usage

For reproducibility purposes, the method is currently available only in a Docker image.

  1. Make sure Docker Desktop is installed. For more information, kindly refer to THIS.
  2. docker pull kibromberihu/mipsegmentator:latest-0
  3. docker run --rm -v "/path/to/input_output/input/":"/home/docker_input" -v "/path/to/input_output/output/":"/home/docker_output" kibromberihu/mipsegmentator:latest-0

Note:

  • /path/to/input_output/input/: This is the path to the folder that includes the 3D PET scans in SUV format and the ground truth (gt) files (if available) in NIFTI format, as detailed in the required input folder structure above.
  • /path/to/input_output/output/: This is the path to the folder where the output files will be saved.

Results

  • Predicted results, including the predicted segmentation masks and the calculated surrogate biomarkers (sTMTV and sDmax), will be saved into the folder output/.

  • Predicted 2D MIP masks are saved under the folder name output/predicted_data/final/*.

  • Predicted 3D masks are saved under the folder name output/predicted_data/predicted_pseudo_3d_reconstructed/*.

  • Surrogate biomarkers (sTMTV and sDmax) are automatically calculated and saved as CSV files under output/*.csv. Two files are saved: one with the surrogate biomarkers computed from the AI-predicted segmentation masks, marked with the indicator predicted in the file name, and one with the surrogate biomarkers computed from the expert reference segmentation masks (i.e., ground truth, if available), marked with the indicator ground_truth in the file name. Besides the predicted or ground_truth indicator, each CSV file name also includes the automatically generated month, year, and processing time.
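For intuition, the surrogate biomarkers can be approximated from a single 2D MIP mask: sTMTV as the segmented area and sDmax as the largest distance between any two lesion pixels. The sketch below is an illustrative assumption; the pixel spacing, units, and exact definitions here are hypothetical and may differ from the paper's implementation.

```python
import numpy as np

def surrogate_biomarkers(mip_mask, pixel_spacing_mm=(1.0, 1.0)):
    """Illustrative surrogates from a 2D MIP mask:
    sTMTV as segmented area (cm^2), sDmax as the largest
    distance (cm) between any two lesion pixels."""
    ys, xs = np.nonzero(mip_mask)
    if ys.size == 0:
        return 0.0, 0.0
    dy, dx = pixel_spacing_mm
    stmtv_cm2 = ys.size * dy * dx / 100.0  # mm^2 -> cm^2
    pts = np.stack([ys * dy, xs * dx], axis=1)
    # Brute-force pairwise distances; fine for MIP-sized masks.
    diff = pts[:, None, :] - pts[None, :, :]
    sdmax_cm = float(np.sqrt((diff ** 2).sum(-1)).max()) / 10.0  # mm -> cm
    return stmtv_cm2, sdmax_cm
```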

Citations

Please consider citing the following papers if you use this package for your research:

   1. DOI: https://doi.org/10.2967/jnumed.121.263501 
   2. DOI: https://doi.org/10.1109/TMI.2021.3060497
   3. DOI: https://doi.org/10.1038/s41597-022-01718-3

Acknowledgments

We thank you [the reader].


mipsegmentatorv1's Issues

Update Docker Run Command to Include GPU, IPC, and Auto-Remove Options

Hi @KibromBerihu - Great job on MIPSegmentatorV1 - congrats 😄! Works really well and quite fast.

However, I noticed that the Docker run command in the README needs a few additional options to run successfully with GPU support.

Current Command (causes issues while running):

docker run -v "/path/to/input_output/input/":"/home/docker_input" -v "/path/to/input_output/output/":"/home/docker_output" kibromberihu/mipsegmentator:latest-0

Suggested Command (solves the issues):

docker run --gpus all --ipc=host --rm -v "/path/to/input_output/input/":"/home/docker_input" -v "/path/to/input_output/output/":"/home/docker_output" kibromberihu/mipsegmentator:latest-0

Details:

Adding --gpus all and --ipc=host ensures the container can utilize the GPU and handle inter-process communication properly. The --rm option cleans up the container after it exits, which is useful during repeated runs.

Environment:

  • Docker version: 26.1.3, build b72abbb
  • System: Ubuntu
  • GPU: NVIDIA A100

Updating the README with these options will help others avoid issues and streamline their setup process.

Thanks for your excellent work on this tool!

Cheers,
Lalith
