zenetio / ai-4-clinical-workflow

License: MIT License

Integrating AI into Clinical Workflow with Orthanc and OHIF Viewer

Creating an AI module for a clinical radiological system for Alzheimer's disease (AD) treatment

Overview

The goal of this project is to provide a solution that integrates an AI model into the clinical workflow, helping clinicians provide better patient treatment.

1. System Architecture

Figure 1

The project uses several scripts to simulate the integration shown in the figure above.

In this simulation, there are four parts:

  • The Picture Archiving and Communication System (PACS) is represented by the Orthanc server.

  • The MRI scanner is represented by a script, send_volume.sh, that sends a volume (a radiological exam) to the clinical PACS. The PACS server listens for DICOM DIMSE requests on port 4242. Orthanc also has a DicomWeb interface, exposed on port 8042, that can be enabled for visualization. Orthanc can be configured to auto-route DICOM flows, forwarding everything it receives to the AI server.

  • The viewer system is represented by OHIF. As noted above, the viewer connects to Orthanc via DicomWeb and is served on port 3000.

  • The AI server is composed of three components: a listener, a sender, and HippoVolume.AI.

    1. The listener is a script, start_listener.sh. It copies any routed DICOM study to a predefined directory on the AI server, creating a sub-directory for each copied study.
    2. HippoVolume.AI is represented by the inference_dcm.py script. For this project I trained a U-Net model, producing a model.pth file, that infers the probability that a patient has AD; the model achieved a DICE score of 90%. The details of implementing and training the model are out of scope for this article and may be material for another one. When the script is executed, it analyzes the contents of the AI server directory where the routed files were copied, calls the model to run inference on the study, collects the results, and generates a DICOM (dcm) report.
    3. The sender is a script, send_results.sh. It sends the new modality of the study, created in the previous step, to the PACS. The new modality is then available to clinicians, helping them make or accelerate critical decisions.
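The DICE score mentioned above measures the overlap between the predicted and ground-truth segmentation masks. A minimal sketch of how it is typically computed follows; this is an illustration of the standard metric, not the repository's actual evaluation code:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Example: two 2x2 masks that agree on 1 of the labelled voxels
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 1]])
print(dice_coefficient(a, b))  # 2*1 / (2 + 2) = 0.5
```

A score of 90% therefore means the predicted hippocampus mask overlaps the expert annotation almost completely.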

2. Putting it all together

To get all the packages and scripts working, we need to follow the sequence described below. You can find the source code used in this simulation in my GitHub repo, here. Depending on your configuration, you will need to open a group of terminal windows: one each for Orthanc, the viewer, and the listener, plus one more to execute commands. Note that there is no need to open a terminal if a component is running as a service. In my case, I used 3 terminals, as shown in the figure:

Figure 2

  • Download and install the Orthanc server from here. Open a terminal and run the command
bash launch_orthanc.sh

or

./launch_orthanc.sh

If you installed Orthanc on Windows, it is already running as a service. You can check if Orthanc is working by running the command

echoscu localhost 4242 -v
  • Download and install the OHIF web viewer from here. Open a terminal (which I will call viewer) and run the command
bash launch_OHIF.sh

or

./launch_OHIF.sh

If the viewer is properly installed, after executing the command you will see a view similar to the figure above. If the viewer is empty, it means there are no scanned files in the PACS server yet.

  • Testing the visualization. Open a new terminal, which I will call cmd. Go to the deploy_scripts directory and run the command
bash send_volume.sh

or

./send_volume.sh

The script simulates the MRI scanner, sending the studies located in the data/TestVolumes directory to the PACS. You can now manipulate the study and visualize its details in the OHIF viewer.
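Under the hood, send_volume.sh issues DICOM C-STORE requests against the PACS on port 4242. A rough Python equivalent using pynetdicom could look like the sketch below; the AE title and the call at the bottom are illustrative assumptions, not the repository's actual values:

```python
from pathlib import Path

def collect_dicom_files(root):
    """Recursively gather the .dcm files of a study directory tree."""
    return sorted(p for p in Path(root).rglob("*.dcm") if p.is_file())

def send_study(root, host="localhost", port=4242):
    """Send every DICOM file under `root` to the PACS via C-STORE.
    Requires pynetdicom and pydicom; the AE title is illustrative."""
    from pydicom import dcmread
    from pynetdicom import AE, StoragePresentationContexts

    ae = AE(ae_title="MRI_SCANNER")
    ae.requested_contexts = StoragePresentationContexts
    assoc = ae.associate(host, port)
    if assoc.is_established:
        for path in collect_dicom_files(root):
            assoc.send_c_store(dcmread(path))
        assoc.release()

# send_study("data/TestVolumes")  # would push the test volumes to the PACS
```

In the simulation the shell script plays this role; the sketch only shows the protocol step it performs.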

  • The listener. Open a new terminal, which I will call listener. Go to the deploy_scripts directory and execute the command
bash start_listener.sh

or

./start_listener.sh

The listener listens on localhost:8042 (the PACS) and copies the routed patient study images to an AI server directory, which I configured as scanner.
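The per-study copying that start_listener.sh performs can be pictured as follows. This is a simplified sketch: the scanner directory name comes from the article, and the st_&lt;StudyInstanceUID&gt; sub-directory naming is inferred from the paths that appear in the issue log further below.

```python
import shutil
from pathlib import Path

def route_study_file(incoming_file, study_uid, ai_root="scanner"):
    """Copy one routed DICOM file into a per-study sub-directory
    named st_<StudyInstanceUID> under the AI server directory."""
    dest_dir = Path(ai_root) / f"st_{study_uid}"
    dest_dir.mkdir(parents=True, exist_ok=True)
    target = dest_dir / Path(incoming_file).name
    shutil.copy2(incoming_file, target)
    return target
```

The real listener additionally has to receive the files from Orthanc; the sketch only shows how the AI server directory ends up organised per study.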

  • The HippoVolume.AI. Now, on the AI server, we have DICOM images ready for inference with the trained CNN model. Go back to the cmd terminal and run the command
python ./inference_dcm.py ../scanner

The inference takes less than 2 seconds. The script loads the study, runs inference, and generates a DICOM (dcm) report detailing the findings related to Alzheimer's disease in the patient study.
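The report produced by inference_dcm.py is derived from the predicted hippocampal segmentation. One quantity such a report typically includes is the structure's volume, computed from the mask and the voxel spacing; a hedged sketch, where the function name and the way the spacing is obtained are illustrative (in a real pipeline it would come from the DICOM PixelSpacing and SliceThickness tags):

```python
import numpy as np

def hippocampal_volume_mm3(mask, voxel_spacing=(1.0, 1.0, 1.0)):
    """Volume of the segmented structure: positive voxels times voxel volume.
    `voxel_spacing` is (x, y, z) in millimetres."""
    voxel_volume = float(np.prod(voxel_spacing))
    return int(np.count_nonzero(mask)) * voxel_volume

# A 2-voxel mask with 1x1x2 mm voxels gives 4 mm^3
mask = np.zeros((4, 4, 4), dtype=np.uint8)
mask[1, 1, 1] = mask[1, 1, 2] = 1
print(hippocampal_volume_mm3(mask, (1.0, 1.0, 2.0)))  # 4.0
```

Comparing such a volume against age-adjusted norms is what makes the measurement clinically useful.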

  • The sender. Still in the cmd terminal, go to the deploy_scripts directory and run the command
bash send_results.sh

or

./send_results.sh

The script sends the report to the PACS. At this point, after the AI server has processed the patient study, a new modality of the patient study has been created (figure below). Clinicians now have additional valuable information that can help them make critical decisions, possibly in less time, improving patient treatment.
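The new modality is a DICOM object whose content summarises the AI findings. Composing the human-readable part of such a report might look like the sketch below; the field names and layout are illustrative assumptions, not the repository's actual report format:

```python
def compose_report(patient_id, study_uid, volume_mm3, dice=None):
    """Build the plain-text body of the AI report sent back to the PACS."""
    lines = [
        "HippoVolume.AI automated report",
        f"Patient ID:         {patient_id}",
        f"Study UID:          {study_uid}",
        f"Hippocampal volume: {volume_mm3:.1f} mm^3",
    ]
    if dice is not None:
        lines.append(f"Model DICE score:   {dice:.0%}")
    return "\n".join(lines)

print(compose_report("PAT001", "1.2.3", 3214.5, dice=0.90))
```

In the actual pipeline this text would be wrapped in a DICOM object (e.g. rendered into pixel data or a structured report) so the viewer can display it alongside the original series.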

Figure 3



ai-4-clinical-workflow's Issues

403 forbidden error and connection close while running start_listener.sh

Hey, I'm a little new to this, but why does it say 403 Forbidden and connection closed when I try to run start_listener.sh? However I look at it, there isn't any authentication required, but it still says forbidden. Why am I facing this? Any help would be appreciated.
P.S. Sorry if this is a stupid question or if I need prior knowledge of something else in order to understand this. If that is the issue, please point me to the right resources.

Scanner Folder

So I am trying to run your code locally, and I have run every other command successfully until the part where we run this:

python ./inference_dcm.py ../scanner

Is this code correct? I can't seem to find any scanner folder on your code.

I tried changing it to scanned and then I get this error:

FileNotFoundError: [Errno 2] No such file or directory: '../../section2/out/model.pth'

Again, looking at the folders, I can't seem to find a section2 folder anywhere.
I can find some model.pth.zip in the out folder, and referencing that file in the inference_dcm.py file brings pickle errors.

Looking for series to run inference on in directory ../scanned/st_1.3.6.1.4.1.14519.5.2.1.4429.7055.290332099546120305394775536058...
Found series of 32 axial slices
HippoVolume.AI: Running inference...
Traceback (most recent call last):
  File "/home/sampsepi0l/repos/work/ai-4-clinical-workflow/src/inference_dcm.py", line 322, in <module>
    inference_agent = UNetInferenceAgent(
  File "/home/sampsepi0l/repos/work/ai-4-clinical-workflow/src/inference/UNetInferenceAgent.py", line 25, in __init__
    self.model.load_state_dict(torch.load(parameter_file_path, map_location=self.device))
  File "/home/sampsepi0l/mambaforge/envs/zenoto/lib/python3.10/site-packages/torch/serialization.py", line 713, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/sampsepi0l/mambaforge/envs/zenoto/lib/python3.10/site-packages/torch/serialization.py", line 920, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '\x05'.

Could you assist with the above issue?
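Regarding the model.pth.zip issue above: torch.load expects the raw checkpoint file, so passing a zip archive to it directly leads to pickle errors. A plausible workaround is to extract the archive first and point the inference agent at the extracted file; this sketch assumes the archive simply wraps the .pth file, which has not been verified against the repository:

```python
import zipfile
from pathlib import Path

def ensure_unzipped_checkpoint(zip_path, out_dir=None):
    """Extract a zipped checkpoint so torch.load can read the file inside.
    Returns the path of the first extracted member."""
    zip_path = Path(zip_path)
    out_dir = Path(out_dir) if out_dir else zip_path.parent
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)
        names = zf.namelist()
    return out_dir / names[0]

# model_file = ensure_unzipped_checkpoint("out/model.pth.zip")
# ...then pass str(model_file) as the parameter_file_path to the agent
```

If the extracted file still fails to load, the checkpoint may have been saved with a different PyTorch version, which is a separate issue.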
