qurit / pytomography

This repository enables easy and fast medical image reconstruction in Python.

Home Page: https://pytomography.readthedocs.io/en/latest/#

License: MIT License

Python 100.00%
image-reconstruction medical-imaging spect nuclear-medicine python pytorch quantitative-imaging

pytomography's People

Contributors

ahxmeds, carluri, lukepolson, obeddzik


pytomography's Issues

no attribute RadialPosition in some dicom vendor's files

GEHC Tandem_Discovery_670 NM DICOM files have no RadialPosition attribute for pydicom to discover.
Here's a traceback output:

site-packages\pytomography\io\SPECT\dicom.py", line 60, in parse_projection_dataset

    radial_positions_detector = ds.DetectorInformationSequence[detector-1].RadialPosition ...

site-packages\pydicom\dataset.py", line 908, in __getattr__
    return object.__getattribute__(self, name)

AttributeError: 'Dataset' object has no attribute 'RadialPosition'

Perhaps a default value could be supplied when the attribute is missing.
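A guarded-access sketch of that idea. The helper name and the `default` fallback are hypothetical, not pytomography API; the correct fallback for these GE files may need to come from another tag:

```python
def get_radial_positions(detector_info, n_angles, default=0.0):
    """Return RadialPosition from a DICOM detector-information item,
    falling back to a constant default when the attribute is absent
    (as seen on some GE files). `default` is a hypothetical choice."""
    radial = getattr(detector_info, "RadialPosition", None)
    if radial is None:
        # Attribute missing entirely: use one value per projection angle.
        radial = [default] * n_angles
    if not isinstance(radial, (list, tuple)):
        # Some files store a scalar rather than a list; normalize.
        radial = [radial] * n_angles
    return list(radial)
```

`parse_projection_dataset` could then call this instead of accessing `ds.DetectorInformationSequence[detector-1].RadialPosition` directly.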

Generate XCat PET Data in SIMSET For Testing 3D PET Recon

Ultimately, we need a set of full 3D PET data so we can test 3D PET image reconstruction algorithms. Unlike SPECT, which is organized by $(r, \theta, z)$, 3D PET is organized by $(r, \theta, z_1, z_2)$ due to oblique angles. For testing, the following will be required:

  • Obtain XCat binary data from Roberto, convert it into format usable by SIMSET
  • Run a SIMSET simulation on the XCat phantom for a basic 3D PET system (make sure to clearly understand all acceptance-angle parameters, etc.). Make sure you understand importance sampling and how to do it in SIMSET, since it will be required for such a large simulation. Also, look into running on a computer with many cores, since simulations can be parallelized and thus run faster. Be sure to save photopeak and scattered projections separately.
  • Create a repository for this 3D data that we can use for testing

Update Datasheet Names in Data Folder, Create README file

Create a separate branch for this issue; when you're done, create a pull request to the development branch.

  • Update the datasheet names "data_sheet.csv" and "attenuation_values.csv" to be more specific about their purpose (e.g. "SPECT_collimator_parameters.csv"). Make sure you update the corresponding "dicom.py" file to reflect these changes, and test the corresponding DICOM tutorial.
  • Create a README.md file which goes over all data sheets. Each data sheet gets its own section in the README file. Clearly describe all parameters in each data sheet. Should be formatted professionally.

Rename Prior `__call__` method to `gradient`

  • for each prior function, allow calling the prior through a __call__ method or computing the gradient through a gradient method
  • reconstruction algorithms then need to call the gradient method
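A minimal sketch of the proposed interface, using an illustrative quadratic prior. Class names follow the issue; the real PyTomography signatures (and its use of torch tensors) may differ:

```python
import numpy as np

class Prior:
    """Sketch: __call__ evaluates the prior, gradient returns its derivative."""
    def __init__(self, beta):
        self.beta = beta
    def __call__(self, obj):
        raise NotImplementedError
    def gradient(self, obj):
        raise NotImplementedError

class QuadraticPrior(Prior):
    """Illustrative quadratic prior V(f) = (beta/2) * sum_i f_i^2."""
    def __call__(self, obj):
        return 0.5 * self.beta * np.sum(obj ** 2)
    def gradient(self, obj):
        # dV/df_i = beta * f_i
        return self.beta * obj
```

A reconstruction algorithm would then call `prior.gradient(obj)` explicitly rather than invoking the prior object itself.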

`FileNotFoundError:` ../../../data/HU_to_mu.csv not found while using `get_HU2mu_coefficients()`

This is in reference to the tutorial: Reconstructing from DICOM data

Run this to reproduce:
object_meta, image_meta, projections, projections_scatter = dicom_MEW_to_data(file_NM)

Probably the code is not able to access the correct path to the data/HU_to_mu.csv file. Will look into this later.

Error log:

---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
/home/jhubadmin/Projects/pytomography_tutorials/reconstructing_dicom_data.py in line 10
     37 # %%
     38
     39 ######################################################################################
   (...)
     42 ######################################################################################
     43 ######################################################################################
     45 object_meta, image_meta, projections, projections_scatter = dicom_MEW_to_data(file_NM)
---> 46 CT = dicom_CT_to_data(files_CT, file_NM)

File /anaconda/envs/pytomography1/lib/python3.9/site-packages/pytomography/io/dicom.py:139, in dicom_CT_to_data(files_CT, file_NM)
    137 CT_resampled = affine_transform(CT_scan, M[0:3,0:3], M[:3,3], output_shape=(ds_NM.Rows, ds_NM.Rows, ds_NM.Columns))
    138 CT_HU = CT_resampled + ds.RescaleIntercept
--> 139 CT = HU_to_mu(CT_HU, *get_HU2mu_coefficients(ds_NM))
    140 CT = torch.tensor(CT[::-1,::-1,::-1].copy())
    141 return CT

File /anaconda/envs/pytomography1/lib/python3.9/site-packages/pytomography/io/dicom.py:82, in get_HU2mu_coefficients(ds)
     81 def get_HU2mu_coefficients(ds):
---> 82     table = np.loadtxt('../../../data/HU_to_mu.csv', skiprows=1)
     83     energies = table.T[0]
     84     window_upper = ds.EnergyWindowInformationSequence[0].EnergyWindowRangeSequence[0].EnergyWindowUpperLimit
...
    531                               encoding=encoding, newline=newline)
    532 else:
--> 533     raise FileNotFoundError(f"{path} not found.")

FileNotFoundError: ../../../data/HU_to_mu.csv not found.
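The usual fix for this class of error is to resolve the file relative to the installed module rather than the process working directory. A sketch, where the helper name and the `data` directory layout are assumptions about the package structure:

```python
import os

def data_file_path(name):
    """Resolve a bundled data file relative to the module that ships it,
    not the caller's working directory; anchoring on __file__ is the
    standard fix for this class of FileNotFoundError."""
    module_dir = os.path.dirname(os.path.abspath(globals().get("__file__", ".")))
    return os.path.join(module_dir, "data", name)
```

`get_HU2mu_coefficients` could then use `np.loadtxt(data_file_path('HU_to_mu.csv'), skiprows=1)`; packaging the CSV via package data (e.g. `importlib.resources`) would be even more robust.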

Fix a minor issue in dt1.ipynb

In notebook dt1.ipynb, the line

angles = np.arange(0,360.,3)

should be replaced by:

angles = torch.arange(0,360.,3)

Also, I think there is a typo here:

plt.pcolormesh(obj[0].sum(axis=2).T, cmap='Greys_r')

which should be replaced by:

plt.pcolormesh(x,x, obj[0].sum(axis=2).T, cmap='Greys_r')

Table / Code to Obtain PSFMeta From DICOM Files Given Scanner Model

Main goal: obtain collimator_slope and collimator_intercept from DICOM file (parameters for PSFMeta). For this...

  • Get scanner name from SPECT DICOM file
  • Create data table to go from scanner name to get appropriate parameters required to obtain collimator_slope and collimator_intercept. These will be things like crystal length, hole diameter, etc.
  • Write function (in pytomography.io.dicom) that takes in SPECT dicom file, and obtains collimator_slope and collimator_intercept
  • Reach out to MIM team for validation with their private parameter files.
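The lookup could be sketched as follows, assuming the standard parallel-hole approximation FWHM(z) ≈ (d/L)·z + d. The table entries and scanner name are placeholders, not datasheet values; the real implementation should use the effective hole length (accounting for septal penetration) and validated parameters:

```python
# Hypothetical table: scanner/collimator name -> (hole diameter d, hole length L),
# both in cm. Values below are placeholders only.
COLLIMATOR_TABLE = {
    "ExampleScanner-LEHR": (0.111, 2.405),
}

def get_psf_params(scanner_name):
    """Sketch: collimator_slope = d/L and collimator_intercept = d,
    from the parallel-hole relation FWHM(z) ~ (d/L)*z + d."""
    d, L = COLLIMATOR_TABLE[scanner_name]
    return d / L, d
```

The function in `pytomography.io.dicom` would first read the scanner name from the SPECT DICOM header and then perform this lookup.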

Cardiac SPECT Reconstruction

Introduction

  • When reconstructing heart images in nuclear medicine, a different approach is required compared to other medical images. Typically, medical images are presented in the three conventional directions X, Y, and Z, which correspond to the sagittal, coronal, and axial views respectively.

  • Specifically, in order to reconstruct heart images, reorientation must be done towards the apex of the heart. This means that instead of the three usual sagittal, coronal, and axial views, the reconstructed images will be in three other views: horizontal, vertical, and short axis.

  • The horizontal view shows a cross-sectional image of the heart at the level of the ventricles, while the vertical view shows a longitudinal image of the ventricles. The short axis view shows a cross-sectional image of the ventricles at the level of the papillary muscles.

  • This approach is necessary because the shape and orientation of the heart are unique compared to other organs in the body, and a different imaging approach is required to visualize it accurately. By reorienting the images towards the apex of the heart, nuclear medicine practitioners can obtain a more accurate and detailed view of the heart's structure and function, which can be useful for diagnosing and treating various heart conditions.

Material and Method

  • Reconstructing heart images in nuclear medicine is a complex process that cannot be easily automated. This is because different patients may require different orientations to accurately reconstruct the heart, and a specific angle cannot be attributed to all patients. Therefore, manual adjustment of the apex direction is necessary for each patient to obtain accurate and useful images of the heart.

  • To address this challenge, a code can be designed that allows for manual adjustment of the apex direction. The code can prompt the user to specify the apex direction manually after showing the heart, thereby enabling the user to adjust the orientation of the reconstructed images to match the unique anatomy of each patient's heart. By doing so, the resulting images will be more accurate and useful for diagnosis and treatment planning.

  • The code can also include cropping and masking to make the images more similar to those that doctors work with in the clinic. Cropping can be used to remove any extraneous or irrelevant parts of the image, while masking can be used to highlight specific regions of interest within the heart. By incorporating these techniques into the code, the resulting images will be more interpretable and useful for clinical applications.
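The manual-reorientation step described above could be sketched as a rigid rotation of the reconstructed volume by the user-supplied apex angles. A minimal numpy version with nearest-neighbour resampling; the angle convention is illustrative, and clinical code would use proper interpolation:

```python
import numpy as np

def reorient_to_apex(volume, azimuth_deg, elevation_deg):
    """Rotate a reconstructed volume by user-specified apex angles
    (nearest-neighbour resampling; voxels mapped outside become zero)."""
    az, el = np.deg2rad([azimuth_deg, elevation_deg])
    # Rotation about z (azimuth) followed by rotation about y (elevation).
    Rz = np.array([[np.cos(az), -np.sin(az), 0.0],
                   [np.sin(az),  np.cos(az), 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[np.cos(el), 0.0, np.sin(el)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(el), 0.0, np.cos(el)]])
    R = Ry @ Rz
    center = (np.array(volume.shape) - 1) / 2.0
    coords = np.indices(volume.shape).reshape(3, -1).T - center
    src = np.rint(coords @ R + center).astype(int)
    valid = np.all((src >= 0) & (src < np.array(volume.shape)), axis=1)
    out = np.zeros(volume.size, dtype=volume.dtype)
    out[valid] = volume[tuple(src[valid].T)]
    return out.reshape(volume.shape)
```

The interactive workflow would display the volume, prompt the user for the two angles, and re-run this rotation until the user approves the orientation.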

Can this be done automatically?

  • Designing a neural network can help automate the process of determining the direction of the apex in heart images. This would involve training the neural network on a large number of heart patients, allowing the model to learn the patterns and features that are characteristic of different orientations of the heart.

  • Once the neural network has been trained, it can be used to automatically detect the direction of the apex in new heart images. The selected direction can be shown to the user, who can then approve or adjust the direction as necessary before the reconstruction process continues. This would allow the user to maintain control over the reconstruction process and ensure that the resulting images are accurate and useful for diagnosis or treatment planning.

`from __future__ import annotations` missing in certain files

This is with reference to the tutorial: Reconstructing SIMIND data

Running the imports pytomography.io, pytomography.projections, pytomography.algorithms at the beginning of the tutorial notebook gives this error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[35], line 1
----> 1 import pytomography.projections

File /anaconda/envs/pytomography/lib/python3.8/site-packages/pytomography/projections/__init__.py:1
----> 1 from .forward_projection import ForwardProjectionNet
      2 from .back_projection import BackProjectionNet
      3 from .projection import ProjectionNet

File /anaconda/envs/pytomography/lib/python3.8/site-packages/pytomography/projections/forward_projection.py:3
      1 import torch
      2 from pytomography.utils import rotate_detector_z, pad_object, unpad_image
----> 3 from .projection import ProjectionNet
      5 class ForwardProjectionNet(ProjectionNet):
      6     """Implements a forward projection of mathematical form :math:`g_j = \sum_{i} c_{ij} f_i` where :math:`f_i` is an object, :math:`g_j` is the corresponding image, and :math:`c_{ij}` is the system matrix given by the various phenonemon modeled (e.g. atteunation/PSF).
      7     """

File /anaconda/envs/pytomography/lib/python3.8/site-packages/pytomography/projections/projection.py:7
      4 from pytomography.mappings import MapNet
      5 from pytomography.metadata import ObjectMeta, ImageMeta
----> 7 class ProjectionNet(nn.Module):
      8     r"""Abstract parent class for projection networks. Any subclass of this network must implement the ``forward`` method. """
      9     def __init__(
     10         self,
...
     24             device (str, optional): Pytorch device used for computation. If None, uses the default device `pytomography.device` Defaults to None.
     25         """
     26         super(ProjectionNet, self).__init__()

TypeError: 'type' object is not subscriptable

I think basically anything that instantiates an object from the ProjectionNet class (https://github.com/qurit/PyTomography/blob/main/src/pytomography/projections/projection.py) generates this error.
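A minimal sketch of the fix: placing the future import at the top of each affected module makes annotations lazy, so subscripted builtins in signatures are never evaluated on Python 3.8. The class below is a stand-in for illustration, not the real ProjectionNet:

```python
from __future__ import annotations  # must be the first statement in the module

# Without the line above, Python <= 3.8 evaluates annotations eagerly, so a
# subscripted builtin such as list[float] in a method signature raises
# "TypeError: 'type' object is not subscriptable" at class-creation time.
# With it, annotations are stored as strings and never evaluated.

class ProjectionStub:
    def forward(self, angles: list[float]) -> dict[str, int]:
        return {"n_angles": len(angles)}
```

Every module whose class or function signatures use `list[...]`, `dict[...]`, or `type | None` syntax needs this line (or `typing.List`/`typing.Dict` instead) to stay importable on Python 3.8.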

Installation of PyTorch should be done by installation of PyTomography

The documentation says to create an environment for PyTomography, followed by the installation of PyTorch, and then followed by the installation of PyTomography.

PyTomography clearly requires PyTorch. This should be set as a requirement either in a requirements.txt file or in the setup.cfg one.

That way, when `pip install pytomography` is executed, PyTorch will be installed if it is not already on the system.
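A sketch of what the declaration might look like, assuming the project uses a `pyproject.toml` `[project]` table; the torch version pin is illustrative:

```toml
# pyproject.toml -- sketch only; exact pins should match what the code needs.
[project]
dependencies = [
    "torch>=1.13",
    "numpy>=1.24.2",
    "scipy>=1.10.1",
    "pydicom>=2.0.0",
]
```

One caveat worth documenting either way: pip resolves `torch` to a default wheel, so users who need a specific CUDA build may still prefer to install PyTorch first, which an ordinary dependency pin does not prevent.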

Empty tensor with ObjectMeta too large or with dr too small

Hello,
I am modifying the Reconstructing GATE data tutorial, and I get an empty recon tensor when the object_meta shape is too large or dr is too small.

As an example, this combination results in an empty recon:

object_meta = ObjectMeta(
    dr=(2,2,2), #mm
    shape=(256,256,98) #voxels
)

It would be nice to reproduce this behavior on another device to check it's not a memory issue. If that's the case, an explicit failure instead of an empty tensor might be a solution.
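The explicit-failure suggestion could be sketched like this; numpy stands in for torch tensors here, and the real check would live inside the reconstruction loop:

```python
import numpy as np

def check_reconstruction(recon):
    """Raise instead of silently returning an all-zero reconstruction."""
    if not np.any(recon):
        raise RuntimeError(
            "Reconstruction is identically zero; for large shapes or small "
            "dr this may indicate the projector ran out of GPU memory."
        )
    return recon
```

Catching the underlying CUDA out-of-memory error directly (if that is indeed the cause) would be even better, since an all-zero result can in principle also arise from all-zero projections.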

Remove `Net` from all classes

Technically, they are not all neural networks. In addition:

  • Convert ForwardProjectionNet and BackProjectionNet into a single Projection class that has a forward and backward method. Update all corresponding functions
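A sketch of the merged class, shown with an explicit dense system matrix for clarity; the real class would compose attenuation/PSF transforms rather than store a matrix:

```python
import numpy as np

class Projection:
    """One object owning the system model H, with forward (H f) and
    backward (H^T g) methods, replacing the two ...Net classes."""
    def __init__(self, H):
        self.H = np.asarray(H, dtype=float)
    def forward(self, f):
        return self.H @ f
    def backward(self, g):
        return self.H.T @ g
```

Keeping both directions on one object also makes it easy to verify the adjoint property ⟨Hf, g⟩ = ⟨f, Hᵀg⟩, which any matched projector pair must satisfy.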

ModuleNotFoundError: No module named 'kornia'

The tutorials won't work because kornia is not installed by default.

ModuleNotFoundError: No module named 'kornia'

We need to fix this either by adding it to the dependency list:

dependencies = [
  "numpy>=1.24.2",
  "scipy>=1.10.1",
  "pydicom>=2.0.0",
]

Or by removing the kornia calls if they are not needed.
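If the package stays optional, the import could at least fail with an actionable message instead of a bare ModuleNotFoundError deep inside a tutorial. A generic sketch; the helper name and wording are illustrative:

```python
import importlib

def require(module_name, hint):
    """Import a dependency or fail with an actionable message."""
    try:
        return importlib.import_module(module_name)
    except ImportError as err:
        raise ImportError(f"'{module_name}' is required: {hint}") from err
```

For example, `require("kornia", "install it with `pip install kornia`")` at module import time would surface the problem immediately with instructions.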

Wrong imports in dt5

In dt5.ipynb one finds the code:

import pytomography
from pytomography.metadata import ObjectMeta, ImageMeta, PSFMeta
from pytomography.transforms import SPECTAttenuationTransform, SPECTPSFTransform
from pytomography.projections import SystemMatrix

All the imports fail; apparently they refer to an obsolete version of the package.

The same occurs in dt6.ipynb

Fix typo in conventions.ipynb

In notebook:

$f$ refers to an object, and $f_j$ refers to the value of the object at voxel $j$

  • $g$ refers to a set of projections, and $g_i$ refers to the value of the projections at detector element $i$
  • $H$ refers to the system matrix with components $H_{ij}$: the contribution voxel $j$ in object space makes to detector element $j$ in image space

Should be:

to detector element $i$ in image space (NB: "$i$" instead of "$j$").

Fixes in dt2.ipynb

To make the code compatible with MacOS I commented the line below:

#device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

And used instead:

device = (
    "cuda"
    if torch.cuda.is_available()
    else "mps"
    if torch.backends.mps.is_available()
    else "cpu"
)
print(f"Using {device} device")

On my Mac this prints "mps", while on a Linux PC using CUDA it would print "cuda". This fix allows the use of the GPU on Macs (through MPS).

Also, replace "np" by "torch", as below (same issue as in notebook dt1):

x = torch.linspace(-1,1,128)

Add link to github repository in the docs

I find it slightly annoying that there is no direct link in the documentation to the GitHub repository. Since you are using the pydata theme, it is extremely easy to add links to GitHub and any other external resources at the top of the site. Look at the docs of pydata itself, for example.


Adding such a link to GitHub is as simple as adding:

html_theme_options = {
    "icon_links": [
        {
            "name": "GitHub",
            "url": "https://github.com/qurit/PyTomography",
            "icon": "fa-brands fa-github",
            "type": "fontawesome",
        },
    ],
}

to conf.py. If there are no objections or issues, I can make a PR to implement this change.
