stanfordmimi / Comp2Comp
Computed tomography to body composition (Comp2Comp).
License: Apache License 2.0
Hi,
is it possible to run Comp2Comp on Google Colab?
And are you considering adding NIfTI support for input and output images?
Best,
Aymen
There should be a way to install dependencies for only the pipelines that you are interested in running.
Need to investigate:
@sallyyaohanqing pointed out that with some input folder configurations, outputs are overwritten in a single output folder
First of all, Louis, Jakob and all, thank you for your efforts implementing an easy-to-use pipeline for body composition analysis. For reference, and for future versions, here is documentation of the issues I ran into during setup and implementation on a local machine with an M1 Max, and their solutions:
Comp2Comp requires TensorFlow, and TotalSegmentator requires PyTorch. The standard installation will not work on an M1/M2 machine. Solution:
conda install -c apple tensorflow-deps=2.9.0 -y
python -m pip install tensorflow-macos==2.9
python -m pip install tensorflow-metal==0.5.0
conda install pytorch torchvision torchaudio -c pytorch
conda install -c conda-forge numpy scikit-learn scipy plotly -y
python -m pip install git+https://github.com/wasserth/TotalSegmentator.git
It is important not to use the installation bash script, as some of the predefined requirements won't work. Thus:
git clone https://github.com/StanfordMIMI/Comp2Comp.git
Then change the requirements in the respective directory to:
"numpy==1.23.5",
"tensorflow>=2.0.0",
'totalsegmentator @ git+https://github.com/StanfordMIMI/TotalSegmentator.git'
and install with:
python -m pip install -e .
in the respective directory. Pipeline 2 (body composition segmentation) should now work on an M1/M2, taking 0.5-2 s per image. For Pipeline 1 (additional segmentation of the spine using TotalSegmentator), additional adaptations are necessary:
The authors check free memory on GPUs using nvidia-smi. This will not work on an M1/M2 machine, and TotalSegmentator works only with CUDA-compatible GPUs (!= "mps"). I am not sure about torch.device("mps") in the future; see also wasserth/TotalSegmentator#39. A simple workaround is to use only the CPU: avoid the nvidia-smi call by setting num_gpus = 0; gpus = None manually, and remove or comment out all dl_utils.get_available_gpus calls in C2C and cli.py.
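The CPU fallback described above can be sketched as a small helper that only shells out to nvidia-smi when it actually exists. This is an illustrative sketch, not the actual dl_utils code; the function name and exact nvidia-smi flags are assumptions.

```python
import shutil
import subprocess

def count_nvidia_gpus():
    """Return the number of NVIDIA GPUs, or 0 when nvidia-smi is absent
    (e.g. on Apple silicon), so callers can fall back to the CPU."""
    if shutil.which("nvidia-smi") is None:
        return 0
    result = subprocess.run(
        ["nvidia-smi", "--list-gpus"], capture_output=True, text=True
    )
    if result.returncode != 0:
        return 0
    # nvidia-smi prints one line per GPU
    return len([line for line in result.stdout.splitlines() if line.strip()])
```

On machines without an NVIDIA driver this returns 0 instead of crashing, which matches the num_gpus = 0; gpus = None workaround.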
For Pipeline 1, Comp2Comp searches for DICOM files in the input directory. The get_dicom_paths_and_num function does not return anything if the directory contains any files other than *.dcm (e.g. hidden files such as .DS_Store). Adapt the function accordingly, or make sure that only *.dcm files are included in the directory.
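A robust version of that search can simply skip hidden files and anything without a .dcm suffix. The function name below is a hypothetical stand-in for get_dicom_paths_and_num, not the project's actual implementation.

```python
from pathlib import Path

def find_dicom_paths(input_dir):
    """Collect only *.dcm files, ignoring hidden files such as .DS_Store,
    so stray files in the folder cannot break the search."""
    return sorted(
        str(p)
        for p in Path(input_dir).iterdir()
        if p.suffix.lower() == ".dcm" and not p.name.startswith(".")
    )
```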
On a local machine, using the --fast and --body_seg options for TotalSegmentator might be preferable.
A problem I could not solve so far is that TotalSegmentator, when run via Pipeline 1, breaks down with
"File .../lib/python3.9/site-packages/nibabel/loadsave.py", line 104, in load raise ImageFileError(f'Cannot work out file type of "(unknown)"').
I think the underlying problem is that TotalSegmentator usually takes a *.nii.gz file instead of a whole folder containing *.dcm files. However, I am not sure about it (it seems to work on the authors' machines). Maybe implementing NIfTI support in general (#19) would solve both the dicom_paths_and_num issue (see above) and this one at the same time?!
TODO: Implement a master pipeline that includes all currently implemented pipelines.
Hi. I'm currently using the muscle_adipose_tissue pipeline and it works perfectly.
However, I run into some issues when trying to use the spine_muscle_adipose_tissue and liver_spleen_pancreas pipelines; they all give the same type error:
force_separate_z: None interpolation order: 0
Traceback (most recent call last):
File ".../.Trash/Comp2Comp/comp2comp/spine/spine.py", line 135, in spine_seg
img, seg = nnUNet_predict_image(
File "....../lib/python3.9/site-packages/nibabel/spatialimages.py", line 645, in __getitem__
raise TypeError(
TypeError: Cannot slice image objects; consider using `img.slicer[slice]` to generate a sliced image (see documentation for caveats) or slicing image array data with `img.dataobj[slice]` or `img.get_fdata()[slice]`
Are there any specific modifications needed for the DICOM files to run these pipelines?
Use the series number to separate multiple series whose DICOMs exist within the same folder.
Also add the "dicom" extension to the find_dicoms function.
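Splitting a mixed folder by series number can be sketched as a grouping step. The `series_number_of` callback is an assumption standing in for something like `pydicom.dcmread(path).SeriesNumber`; the function name is hypothetical.

```python
from collections import defaultdict

def group_by_series(paths, series_number_of):
    """Group DICOM file paths by series number so multiple series that
    share one folder are processed separately rather than mixed together."""
    series = defaultdict(list)
    for path in paths:
        series[series_number_of(path)].append(path)
    return dict(series)
```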
If the orientation differs from what is expected, the visualizations appear flipped or transposed. Adding a ToCanonical module before visualization would solve this.
The 2D and 3D methods expect the same input: a folder with DICOM files (or rather files ending in .dcm). The 2D method works fine with .dcm files, but the 3D method crashes. The reason is that the 3D method uses the TotalSegmentator network, which needs a NIfTI file as input instead of a folder of DICOM files.
A kludge that converts the folder of DICOM files to a nii.gz file on the fly is possible, but still fails.
A solution might be to use TotalSegmentator directly; TotalSegmentator does work with the nii.gz file above.
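Invoking TotalSegmentator directly on a converted NIfTI file amounts to building a CLI call like the one below. The exact flag set is version-dependent, so treat this as a sketch of the invocation rather than a guaranteed interface.

```python
def build_totalseg_command(nifti_path, output_dir, fast=True):
    """Build a TotalSegmentator CLI invocation for a NIfTI input file.
    --fast trades segmentation accuracy for speed (useful on CPU)."""
    cmd = ["TotalSegmentator", "-i", nifti_path, "-o", output_dir]
    if fast:
        cmd.append("--fast")
    return cmd
```

The resulting list can be passed to `subprocess.run(cmd, check=True)` once a nii.gz has been produced from the DICOM folder.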
The numpy version should be 1.23.5; it doesn't work with numpy==1.24.
Use nested pipelines to reduce code duplication - pipeline as an InferenceClass object
Hi. I am currently using C2C to segment muscle, and it works very well. However, I noticed that it no longer supports 2D DICOM images as previous versions did. Additionally, the segmentation output cannot be obtained in the results. Why is that?
Pack image outputs into Dicom files for Dicom input / output
Hi Louis,
thank you for cleaning up :) Two issues I encountered with the latest version of your work:
The pipeline loads the muscle/fat model from Hugging Face on each run, regardless of whether the model already exists in models/.
Downloading muscle/fat model from hugging face
100% [......................................................................] 138570720 / 138570720
Although this might drive up your download numbers on Hugging Face :), it's an unnecessary 140 MB of traffic per segmentation. Thus, I changed models.py lines 71-72 to:
try:
filename = Models.find_model_weights(self.model_name, model_dir)
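The cache-before-download pattern behind that fix can be sketched in isolation. `ensure_weights` and `download_fn` are hypothetical names; `download_fn` stands in for something like `wget.download`.

```python
import os

def ensure_weights(model_dir, weights_name, download_fn):
    """Download model weights only when they are not already cached locally,
    avoiding a ~140 MB Hugging Face download on every run."""
    path = os.path.join(model_dir, weights_name)
    if not os.path.exists(path):
        download_fn(path)  # assumed to write the weights file to `path`
    return path
```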
For some DICOMs, the computation of the ROIs does not work.
Finished SpineSegmentation with output keys dict_keys(['segmentation', 'medical_volume'])
Running SpineReorient with input keys odict_keys(['inference_pipeline', 'segmentation', 'medical_volume'])
Finished SpineReorient with output keys dict_keys([])
Running SpineComputeROIs with input keys odict_keys(['inference_pipeline'])
ERROR PROCESSING inputs/batch/27
Traceback (most recent call last):
File "...Comp2Comp/bin/C2C", line 122, in spine_muscle_adipose_tissue pipeline()
File "...Comp2Comp/comp2comp/inference_pipeline.py", line 48, in call output = inference_class(self, **output)
File "...Comp2Comp/comp2comp/spine/spine.py", line 143, in call (spine_hus, rois, centroids_3d) = spine_utils.compute_rois(
File "...Comp2Comp/comp2comp/spine/spine_utils.py", line 229, in compute_rois two_largest = keep_two_largest_connected_components(slice)
File "...Comp2Comp/comp2comp/spine/spine_utils.py", line 262, in keep_two_largest_connected_components mask[labels == sorted_indices[i] + 1] = 1
IndexError: index 1 is out of bounds for axis 0 with size 1
The strange thing is that this error does not occur consistently. If I rerun the pipeline using exactly the same DICOMs, sometimes everything works fine. Any ideas where to start debugging? I assume that the problem is not keep_two_largest_connected_components itself but the result of connectedComponentsWithStats?!
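The IndexError above fires when a slice yields fewer connected components than the code tries to keep. A defensive version can be sketched with a frequency count that naturally caps at the number of available labels; the function name is hypothetical and this is a simplification of the real connected-components logic.

```python
from collections import Counter

def largest_labels(flat_labels, keep=2):
    """Return the up-to-`keep` most frequent non-zero labels in a flattened
    label map. Because most_common(keep) never returns more entries than
    exist, a slice with a single component cannot trigger an IndexError."""
    counts = Counter(label for label in flat_labels if label != 0)
    return [label for label, _ in counts.most_common(keep)]
```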
Best, Felix
dicom_to_nifti raises an error if it detects that different slices have different orientations. It seems that using SimpleITK solves this.
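The check that trips dicom_to_nifti amounts to comparing each slice's six ImageOrientationPatient direction cosines against the first slice's, within a tolerance. A minimal sketch (function name and tolerance are assumptions):

```python
def orientations_match(orientations, tol=1e-4):
    """Return True when every slice shares the same ImageOrientationPatient
    (six direction cosines, as floats) up to a small tolerance."""
    ref = orientations[0]
    return all(
        all(abs(a - b) <= tol for a, b in zip(o, ref))
        for o in orientations
    )
```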
In the C2C config for the spine_muscle_adipose_tissue model, I don't think there should be a sys.exit() in the exception block? Line 130.
Hi,
I just tried to run Comp2Comp and I experienced the following error:
from comp2comp.io.io_utils import get_dicom_nifti_paths_and_num
ImportError: cannot import name 'get_dicom_nifti_paths_and_num' from 'comp2comp.io.io_utils' (/home/jakob/dev/Comp2Comp/comp2comp/io/io_utils.py)
I fixed it by renaming get_dicom_nifti_paths_and_num
to get_dicom_or_nifti_paths_and_num
.
The system tries to download the weights file from the wrong URL, so it fails there and stops.
Furthermore, as it is right now, the system re-downloads the weights for every new run, which makes it slow and results in redundant copies of the same file.
Both are easily solved in models.py's load_model():
try:
    filename = Models.find_model_weights()
except Exception:
    logger.info("Downloading muscle/fat model from hugging face")
    # Use the raw-file "resolve" URL; the "blob" URL returns an HTML page:
    weights_file_name = wget.download(
        "https://huggingface.co/stanfordmimi/stanford_abct_v0.0.1/resolve/main/stanford_v0.0.1.h5",
        out=PREFERENCES.MODELS_DIR,
    )
    filename = Models.find_model_weights()
logger.info("Loading muscle/fat model from {}".format(filename))
return load_model(filename)
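The difference between the two URLs is just the path segment: Hugging Face serves an HTML page at /blob/ and the raw file at /resolve/. A tiny helper (hypothetical name) makes the fix explicit:

```python
def hf_resolve_url(blob_url):
    """Convert a Hugging Face 'blob' (HTML page) URL into the 'resolve'
    URL that serves the raw file, which tools like wget can download."""
    return blob_url.replace("/blob/", "/resolve/")
```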