stanfordmimi / comp2comp

Computed tomography to body composition (Comp2Comp).

License: Apache License 2.0

Languages: Python 97.99%, Shell 1.28%, Dockerfile 0.05%, Jupyter Notebook 0.69%
Topics: abdomen-ct, abdominal, body-composition, computed-tomography, segmentation

comp2comp's People

Contributors: ad12, adritrao, asch99, edreismd, fohofmann, louisblankemeier, malteekj, sssomani


comp2comp's Issues

Possible to work on google colab?

Hi,

Is it possible to run Comp2Comp on Google Colab?
And do you plan to integrate the NIfTI format for input and output images?

Best,
Aymen

Manual adjustment of segmentation output

Thank you very much for this repository.
If a segmentation fails, is it possible to adjust it manually? As you can see in this example, the psoas muscles are not fully segmented at the level of L5.

[Attached image: spine_muscle_adipose_tissue_report]

Per-pipeline installation

There should be a way to install dependencies for only the pipelines that you are interested in running.
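One way to support this could be setuptools extras; a minimal sketch with hypothetical dependency groups (the actual per-pipeline requirements may differ):

# setup.py (sketch): per-pipeline extras; group names and contents are assumptions
from setuptools import setup, find_packages

setup(
    name="comp2comp",
    packages=find_packages(),
    install_requires=["numpy", "nibabel"],  # shared core dependencies only
    extras_require={
        "spine": ["totalsegmentator"],      # hypothetical spine-pipeline extras
        "muscle_fat": ["tensorflow"],       # hypothetical 2D-pipeline extras
        "all": ["totalsegmentator", "tensorflow"],
    },
)

Users could then run python -m pip install -e ".[spine]" to pull in only the dependencies for the pipelines they care about.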

ImportError: libXrender.so

[Screenshot: ImportError traceback for libXrender.so]

If you receive an error like this, running the following in the c2c_env conda environment solves the issue:

conda install -c conda-forge xorg-libxrender

Local Implementation @ Apple Silicon M1

First of all, Louis, Jakob and all, thank you for your efforts implementing an easy-to-use pipeline for body composition analysis. For reference, and for future versions, here is a documentation of the issues I ran into during setup and implementation on a local machine with an M1 Max, and their solutions:

Install

Comp2Comp requires TensorFlow, TotalSegmentator, and PyTorch. The standard installation will not work on an M1/M2 machine. Solution:

Install tensorflow

conda install -c apple tensorflow-deps=2.9.0 -y
python -m pip install tensorflow-macos==2.9
python -m pip install tensorflow-metal==0.5.0

Install PyTorch

conda install pytorch torchvision torchaudio -c pytorch

Install NumPy, scikit-learn, SciPy, and Plotly manually

conda install -c conda-forge numpy scikit-learn scipy plotly -y

Install TotalSegmentator

python -m pip install git+https://github.com/wasserth/TotalSegmentator.git

Install Comp2Comp.

It is important not to use the installation bash script, as some of the predefined requirements won't work. Thus:

  1. Clone Comp2Comp via git clone https://github.com/StanfordMIMI/Comp2Comp.git in the respective directory
  2. Remove the following requirements from setup.py:
"numpy==1.23.5",
"tensorflow>=2.0.0"
'totalsegmentator @ git+https://github.com/StanfordMIMI/TotalSegmentator.git'
  3. Install Comp2Comp using python -m pip install -e . in the respective directory

Pipeline 2 (body composition segmentation) should now work on an M1/M2, taking 0.5-2 s per image. For pipeline 1 (additional segmentation of the spine using TotalSegmentator), additional adaptations are necessary:

dl_utils.get_available_gpus using nvidia-smi

The authors check free memory on GPUs using nvidia-smi. This will not work on an M1/M2 machine, and TotalSegmentator works only with CUDA-compatible GPUs (!= "mps"). I am not sure about torch.device("mps") in the future; see also wasserth/TotalSegmentator#39. A simple workaround is to use only the CPU: avoid the nvidia-smi call by setting num_gpus = 0 and gpus = None manually, and remove or comment out all dl_utils.get_available_gpus calls in C2C and cli.py, as sketched below.
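A sketch of that workaround (variable names follow the issue text; the surrounding code in C2C and cli.py may differ):

# gpus = dl_utils.get_available_gpus(num_gpus)  # relies on nvidia-smi; fails on M1/M2
num_gpus = 0  # force CPU-only execution
gpus = None   # no CUDA devices are available on Apple Silicon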

get_dicom_paths_and_num

For pipeline 1, Comp2Comp searches for DICOM files in the input directory. The get_dicom_paths_and_num function does not return anything if the directory contains files other than *.dcm (e.g. hidden files such as .DS_Store). Adapt the function accordingly (a sketch follows), or make sure that only *.dcm files are included in the directory.
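A minimal sketch of such a filter, assuming the function walks the input directory and returns (path, number-of-slices) pairs; the actual signature in io_utils.py may differ:

import os

def get_dicom_paths_and_num(root):
    """Collect directories that contain .dcm files, ignoring hidden and non-DICOM files."""
    results = []
    for dirpath, _, filenames in os.walk(root):
        dcm_files = [f for f in filenames if f.lower().endswith(".dcm")]
        if dcm_files:
            results.append((dirpath, len(dcm_files)))
    return results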

to be continued / remaining issues:

On a local machine, integrating and using --fast and --body_seg for TotalSegmentator might be preferable.

A problem I could not solve so far is that TotalSegmentator in pipeline 1 breaks down with

"File .../lib/python3.9/site-packages/nibabel/loadsave.py", line 104, in load raise ImageFileError(f'Cannot work out file type of "{filename}"').

I think the underlying problem is that TotalSegmentator usually takes *.nii.gz instead of a whole folder containing *.dcm. However, I am not sure about this (it seems to work on the authors' machines). Maybe implementing NIfTI support in general (#19) could get rid of both the get_dicom_paths_and_num issue (see above) and this one at the same time?

Master pipeline

TODO: Implement a master pipeline that includes all currently implemented pipelines.

Cannot slice image objects; consider using img.slicer[slice] to generate a sliced image (see documentation for caveats) or slicing image array data with img.dataobj[slice] or img.get_fdata()[slice]

Hi. I'm currently using the muscle_adipose_tissue pipeline and it works perfectly.
However, I run into some issues when trying to use the spine_muscle_adipose_tissue and liver_spleen_pancreas pipelines; they all fail with the same TypeError:

force_separate_z: None interpolation order: 0
Traceback (most recent call last):
  File ".../.Trash/Comp2Comp/comp2comp/spine/spine.py", line 135, in spine_seg
    img, seg = nnUNet_predict_image(
  File "....../lib/python3.9/site-packages/nibabel/spatialimages.py", line 645, in __getitem__
    raise TypeError(
TypeError: Cannot slice image objects; consider using `img.slicer[slice]` to generate a sliced image (see documentation for caveats) or slicing image array data with `img.dataobj[slice]` or `img.get_fdata()[slice]`

Are there any specific modifications needed for the DICOM files to run these pipelines?
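For context, the error is nibabel refusing plain indexing on image objects; a minimal illustration of the failing pattern and the two fixes the message suggests (synthetic image, not Comp2Comp code):

import numpy as np
import nibabel as nib

img = nib.Nifti1Image(np.zeros((4, 4, 4)), affine=np.eye(4))

# img[:, :, 0]                    # raises the TypeError quoted above
sub = img.slicer[:, :, 0:1]       # sliced image object with a consistent affine
arr = img.get_fdata()[:, :, 0]    # or slice the underlying array data directly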

3D method does not accept dicom folder

The 2D and 3D methods expect the same input: a folder with DICOM files (or rather, files ending in .dcm). The 2D method works fine with .dcm files, but the 3D method crashes. The reason is that the 3D method uses the TotalSegmentator net, which needs a NIfTI file as input instead of a folder with DICOM files.

A kludge that converts the folder of DICOM files to a nii.gz file on the fly is possible, but still fails.

A solution might be to use TotalSegmentator directly; TotalSegmentator does work with the nii.gz file mentioned above (see the sketch below).
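For reference, a minimal sketch of calling the TotalSegmentator CLI directly on such a converted file ("ct.nii.gz" and "segmentations" are placeholder paths):

import subprocess

# Run TotalSegmentator on the NIfTI file produced by the on-the-fly conversion.
subprocess.run(
    ["TotalSegmentator", "-i", "ct.nii.gz", "-o", "segmentations"],
    check=True,
)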

Todos

  • Make the spine, muscle, adipose tissue pipeline work when a subset of T12 - L5 is visible
  • Dataset integrity check (check for missing slices, nonuniform SI spacing)
  • Add support for NIfTIs

Nested pipelines

Use nested pipelines to reduce code duplication: a pipeline as an InferenceClass object (see the sketch below).
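A rough sketch of the idea; the class and method shapes are assumptions modeled on the call pattern visible in the tracebacks on this page (output = inference_class(self, **output)), not the actual Comp2Comp API:

class InferenceClass:
    """Hypothetical base class: one step of an inference pipeline."""

    def __call__(self, inference_pipeline, **kwargs):
        raise NotImplementedError


class InferencePipeline(InferenceClass):
    """A pipeline that is itself an InferenceClass, so it can be nested
    inside another pipeline like any other step."""

    def __init__(self, inference_classes):
        self.inference_classes = inference_classes

    def __call__(self, inference_pipeline=None, **kwargs):
        output = kwargs
        for inference_class in self.inference_classes:
            output = inference_class(self, **output)
        return output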

Dicom outputs

Pack image outputs into DICOM files for DICOM input/output.

load model correction, SpineComputeROIs inconsistent error

Hi Louis,

thank you for cleaning up :) Two issues I encountered with the latest version of your work:

Downloading model from hugging face each run

The pipeline downloads the muscle/fat model from Hugging Face on each run, regardless of whether the model already exists in models/ or not.

Downloading muscle/fat model from hugging face
100% [......................................................................] 138570720 / 138570720

Although this might drive up your download counts on Hugging Face :), it's an unnecessary 140 MB of traffic per segmentation. Thus, I changed models.py lines 71-72 to:

try:
    filename = Models.find_model_weights(self.model_name, model_dir)
except Exception:
    # fall back to downloading the model from Hugging Face, as before
    ...

Spine compute ROIs

For some DICOM series, the computation of the ROIs does not work.

Finished SpineSegmentation with output keys dict_keys(['segmentation', 'medical_volume'])
Running SpineReorient with input keys odict_keys(['inference_pipeline', 'segmentation', 'medical_volume'])
Finished SpineReorient with output keys dict_keys([])
Running SpineComputeROIs with input keys odict_keys(['inference_pipeline'])
ERROR PROCESSING inputs/batch/27
Traceback (most recent call last):
File "...Comp2Comp/bin/C2C", line 122, in spine_muscle_adipose_tissue pipeline()
File "...Comp2Comp/comp2comp/inference_pipeline.py", line 48, in call output = inference_class(self, **output)
File "...Comp2Comp/comp2comp/spine/spine.py", line 143, in call (spine_hus, rois, centroids_3d) = spine_utils.compute_rois(
File "...Comp2Comp/comp2comp/spine/spine_utils.py", line 229, in compute_rois two_largest = keep_two_largest_connected_components(slice)
File "...Comp2Comp/comp2comp/spine/spine_utils.py", line 262, in keep_two_largest_connected_components mask[labels == sorted_indices[i] + 1] = 1
IndexError: index 1 is out of bounds for axis 0 with size 1

The strange thing is that this error does not occur consistently. If I rerun the pipeline using exactly the same DICOM files, sometimes everything works fine. Any ideas where to start debugging? I assume that the problem is not keep_two_largest_connected_components itself but rather the result of connectedComponentsWithStats?!
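For what it's worth, the traceback suggests the slice sometimes contains only one connected component, so indexing the second one fails. A defensive sketch of keep_two_largest_connected_components, reconstructed from the traceback and assuming OpenCV's connectedComponentsWithStats (the real function in spine_utils.py may differ):

import cv2
import numpy as np

def keep_two_largest_connected_components(mask):
    """Keep up to the two largest foreground components; tolerate a single one."""
    mask = mask.astype(np.uint8)
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    areas = stats[1:, cv2.CC_STAT_AREA]  # label 0 is the background
    sorted_indices = np.argsort(areas)[::-1]
    out = np.zeros_like(mask)
    for i in range(min(2, len(sorted_indices))):  # guard against a single component
        out[labels == sorted_indices[i] + 1] = 1
    return out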

Best, Felix

dicom_to_nifti orientation issue

dicom_to_nifti triggers an error if it detects that different slices have different orientations. It seems that using SimpleITK solves this; a sketch follows.
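A minimal sketch of a DICOM-series-to-NIfTI conversion with SimpleITK (paths are placeholders):

import SimpleITK as sitk

# Read the DICOM series from a folder and write it out as compressed NIfTI.
reader = sitk.ImageSeriesReader()
dicom_names = reader.GetGDCMSeriesFileNames("path/to/dicom_folder")
reader.SetFileNames(dicom_names)
image = reader.Execute()
sitk.WriteImage(image, "ct.nii.gz")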

Tensorflow version

TensorFlow version 2.12.0 ran into the following error when running the inference pipeline:

[Screenshot: TensorFlow 2.12.0 error traceback]

Tested that TensorFlow version 2.11.0 works correctly with the pipeline.
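Until the incompatibility is fixed, pinning the known-good version is a straightforward workaround:

python -m pip install tensorflow==2.11.0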

Todos

  • Add an info file that includes all of the completed scans, runtime, error status, etc.
  • Potentially get rid of spine seg borders so the cortical bone is easier to see
  • Curved planar reconstructions
  • Spherical ROIs with median as opposed to mean
  • Put T12 - L5 in corner of image
  • Fix reverse ordering of metrics with spine level
  • Validate IMAT
  • Decrease image saturation
  • Add Stanford spine model

todos

  • Make the package pip installable for those who don't want to run the installation script
  • Asserts for the dimensionality of the input CTs (512x512 and heuristics for covering T12-L5)
  • Add NIFTI support
  • Improve logging
  • Add Stanford spine model

get_dicom_or_nifti_paths_and_num naming

Hi,
I just tried to run Comp2Comp and I experienced the following error:

from comp2comp.io.io_utils import get_dicom_nifti_paths_and_num
ImportError: cannot import name 'get_dicom_nifti_paths_and_num' from 'comp2comp.io.io_utils' (/home/jakob/dev/Comp2Comp/comp2comp/io/io_utils.py)

I fixed it by renaming get_dicom_nifti_paths_and_num to get_dicom_or_nifti_paths_and_num.
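That is, the import in question becomes:

from comp2comp.io.io_utils import get_dicom_or_nifti_paths_and_num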

wrong weights url

The system tries to download the weights file from the wrong URL, so it fails there and stops.
Furthermore, as it is right now, the system re-downloads the weights on every run, which makes it slow and results in redundant copies of the same file.

Both are easily solved. In models.py, load_model():

    try:
        filename = Models.find_model_weights()
    except Exception:
        # Use the /resolve/ URL (raw file) instead of the /blob/ URL (HTML page):
        # weights_file_name = wget.download("https://huggingface.co/stanfordmimi/stanford_abct_v0.0.1/blob/main/stanford_v0.0.1.h5", out=PREFERENCES.MODELS_DIR)
        weights_file_name = wget.download("https://huggingface.co/stanfordmimi/stanford_abct_v0.0.1/resolve/main/stanford_v0.0.1.h5", out=PREFERENCES.MODELS_DIR)

        logger.info("Downloading muscle/fat model from hugging face")
        # filename = os.path.join(
        #    PREFERENCES.MODELS_DIR, "{}.h5".format(self.model_name)
        # )
        filename = Models.find_model_weights()
    logger.info("Loading muscle/fat model from {}".format(filename))
    return load_model(filename)
