
The pipeline is throwing a Segmentation Fault (Core Dumped) while trying to replicate the BACH classification pipeline in Python, although the same file works when processed via the front-end application · fast-pathology · CLOSED (20 comments)

lomshabhishek commented on June 3, 2024
The pipeline is throwing a Segmentation Fault (Core Dumped) while trying to replicate the BACH classification pipeline in Python, although processing the same file via the front-end application works.

from fast-pathology.

Comments (20)

smistad commented on June 3, 2024

You have some other mistakes as well; this works on my side:

import fast
model = fast.DataHub().download('bach-model')

image_name = "TIFF_pyramid.tiff"
importer = fast.WholeSlideImageImporter\
    .create(image_name)

tissueSegmentation = fast.TissueSegmentation.create(threshold=85)\
    .connect(importer)

generator = fast.PatchGenerator.create(
        512, 512,
        magnification=20,
        overlapPercent=0,
        maskThreshold=0.05
    ).connect(importer)\
    .connect(1, tissueSegmentation)

classification = fast.NeuralNetwork.create(
        model.paths[0] + '/pw_classification_bach_mobilenet_v2.onnx',
        scaleFactor=1./255.
    ).connect(generator)

stitcher = fast.PatchStitcher.create()\
    .connect(classification)

finished = fast.RunUntilFinished.create()\
    .connect(stitcher)

renderer = fast.ImagePyramidRenderer.create()\
    .connect(importer)

heatmap = fast.HeatmapRenderer.create(useInterpolation=False, channelColors={0: fast.Color.Green(), 1: fast.Color.Green(), 2: fast.Color.Magenta(), 3: fast.Color.Red()})\
    .connect(stitcher)

fast.SimpleWindow2D.create()\
    .connect(renderer)\
    .connect(heatmap)\
    .run()

# Export at the end
fast.TIFFImagePyramidExporter.create(f'{image_name}_processed.tiff')\
    .connect(finished).run()

fast.TIFFImagePyramidExporter.create(f'{image_name}_heatmap.tiff')\
    .connect(tissueSegmentation).run()

smistad commented on June 3, 2024

Hi again @lomshabhishek

Sorry for not spotting this error as well.
When you do patch-wise classification, the output of the patch stitcher is a Tensor, not an Image. A Tensor cannot be saved directly as a TIFF at the moment, so you have to use HDF5TensorExporter instead to export it to an HDF5 file. The TIFFImagePyramidExporter should have produced an error message stating this rather than a segfault.

Also, you don't need RunUntilFinished when you export after a window like this. Only if you remove the window should you use RunUntilFinished.

# Export at the end (patch-wise classification becomes a Tensor, not an image)
fast.HDF5TensorExporter.create(f'{image_name}_processed.hd5')\
    .connect(stitcher).run()

fast.TIFFImagePyramidExporter.create(f'{image_name}_heatmap.tiff')\
    .connect(tissueSegmentation).run()

If you still want to save the Tensor as a TIFF image, you can do that by first converting the tensor to an image using the TensorToImage object. This gives a float image, and since FAST currently can't handle TIFFs other than uint8, you can cast the float image to a uint8 image like so:

# Export at the end (patch-wise classification becomes a Tensor, not an image)
# Here we only convert channel 1 in the tensor to an image
channel = 1
tensor2image = fast.TensorToImage.create([channel]).connect(stitcher)
# Convert image to uint8, and multiply every pixel by 255, as they are in the float range 0-1.
caster = fast.ImageCaster.create(fast.TYPE_UINT8, 255.).connect(tensor2image) 
fast.TIFFImagePyramidExporter.create(f'{image_name}_processed_channel_{channel}.tiff')\
    .connect(caster).run()

fast.TIFFImagePyramidExporter.create(f'{image_name}_heatmap.tiff')\
    .connect(tissueSegmentation).run()

You can also use TensorToSegmentation instead of TensorToImage and ImageCaster if you want a binary output.
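To make the distinction concrete, here is a pure-Python sketch of what these three conversions do conceptually (an illustration only — the actual FAST implementations are native code, and the numbers below are made up):

```python
# Toy per-pixel class confidences for the 4 BACH classes
# (Normal, Benign, In Situ Carcinoma, Invasive Carcinoma);
# a flat list of pixels, each a list of class scores in [0, 1].
pixels = [[0.7, 0.1, 0.1, 0.1],
          [0.2, 0.5, 0.2, 0.1],
          [0.1, 0.1, 0.6, 0.2],
          [0.0, 0.1, 0.2, 0.7]]

# TensorToImage([1]) -> extract channel 1 as a float image in [0, 1]
channel_1 = [p[1] for p in pixels]

# ImageCaster(TYPE_UINT8, 255.) -> scale to [0, 255] and truncate to uint8
uint8_image = [int(v * 255) for v in channel_1]

# TensorToSegmentation -> label each pixel with its argmax class index
segmentation = [max(range(len(p)), key=p.__getitem__) for p in pixels]
```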

lomshabhishek commented on June 3, 2024

Thanks a lot, it worked.

lomshabhishek commented on June 3, 2024

Hi @andreped, I was testing the Python script to run FAST on a few images and found that the results were slightly different. Investigating further, something might be wrong with those images, as the script works well with FAST's test data.

lomshabhishek commented on June 3, 2024

The image format is ".otif"; the scanner is, I guess, from Optra.

lomshabhishek commented on June 3, 2024

The extension is *.otif, and it works wherever *.tif files are supported. I guess it should be added as a supported format.

smistad commented on June 3, 2024

I see you are using SegmentationNetwork when you should in fact use NeuralNetwork, because this is an image classification network, not a segmentation network.

lomshabhishek commented on June 3, 2024

Thanks a lot, your fixed script worked.
Also, there are two paths for the same model; I am not sure why. Maybe FAST and FastPathology downloaded the model to their respective locations.

After I run the script I get the following messages, and if I run the script again I get the same "Segmentation Fault (Core Dumped)" message. So it works the first time but breaks on every run after that.

WARNING [139962144487232] Unable to open X display. Disabling visualization.

WARNING [139962144487232] TensorRT will now perform auto-tuning for your model. This may take a while! But this is only done the first time loading a new model.
WARNING [139962144487232] [TensorRT] onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING [139962144487232] [TensorRT] TensorRT encountered issues when converting weights between types and that could affect accuracy.
WARNING [139962144487232] [TensorRT] If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
WARNING [139962144487232] [TensorRT] Check verbose logs for the list of affected weights.
WARNING [139962144487232] [TensorRT] - 36 weights are affected by this issue: Detected subnormal FP16 values.
WARNING [139962144487232] [TensorRT] - 10 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.

smistad commented on June 3, 2024

TensorRT performs auto-tuning the first time it encounters a new model, then saves the auto-tuned model to disk and loads that stored version on subsequent runs. It sounds like something in that process is crashing.
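One thing you could try (an assumption on my part: FAST stores the serialized engines as .bin files under ~/FAST/kernel_binaries by default) is to delete the cached engine so TensorRT re-tunes the model from scratch on the next run:

```shell
# Hypothetical quick fix: remove FAST's cached TensorRT engine files so the
# model is auto-tuned again on the next run. FAST_CACHE_DIR can override the
# assumed default location for testing.
CACHE_DIR="${FAST_CACHE_DIR:-$HOME/FAST/kernel_binaries}"
rm -f "$CACHE_DIR"/*.bin
```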

As a quick fix, you can switch to OpenVINO or ONNXRuntime by setting the inferenceEngine parameter of NeuralNetwork:

classification = fast.NeuralNetwork.create(
        model.paths[0] + '/pw_classification_bach_mobilenet_v2.onnx',
        scaleFactor=1./255.,
        inferenceEngine='OpenVINO'
).connect(generator)

This BACH model works with TensorRT on my side, so I am not sure exactly what is crashing. Could you turn on verbose output and paste its printout:

import fast
fast.Reporter.setGlobalReportMethod(fast.Reporter.COUT)

# Load the model
classification = fast.NeuralNetwork.create(
        model.paths[0] + '/pw_classification_bach_mobilenet_v2.onnx',
        scaleFactor=1./255.,
        inferenceEngine='TensorRT'
)

lomshabhishek commented on June 3, 2024

INFO [140546119989056] Inference engine TensorRT selected
INFO [140546119989056] Finished freeing TensorRT buffer data
INFO [140546119989056] Inference engine TensorRT selected
INFO [140546119989056] [TensorRT] Serialized file /home/karkinos/FAST//kernel_binaries//pw_classification_bach_mobilenet_v2.onnx_17302426615403080058.bin is up to date.
INFO [140546119989056] [TensorRT] Serialized file was created with same TensorRT version: 8601
INFO [140546119989056] [TensorRT] Loaded engine size: 7 MiB
INFO [140546119989056] [TensorRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +6, now: CPU 0, GPU 6 (MiB)
Segmentation fault (core dumped)

lomshabhishek commented on June 3, 2024

The Segmentation Fault (core dumped) now occurs at

fast.TIFFImagePyramidExporter.create(f'{image_name}_processed.tiff')\
    .connect(finished).run()

fast.TIFFImagePyramidExporter.create(f'{image_name}_heatmap.tiff')\
    .connect(tissueSegmentation).run()

I tried both the OpenVINO and ONNXRuntime inference engines.

lomshabhishek commented on June 3, 2024

There is variation in the results, maybe due to the change of inference engine.
The attribute values were the same, but I still get variations between running the model from the front end and from the back-end script.

lomshabhishek commented on June 3, 2024

PipelineName "BACH Classification"
PipelineDescription "Patch-wise image classification model trained on data from the 2018 breast cancer histology (BACH) challenge: https://iciar2018-challenge.grand-challenge.org/"
PipelineInputData WSI "Whole-slide image"
PipelineOutputData heatmap stitcher 0
Attribute classes "Normal;Benign;In Situ Carcinoma;Invasive Carcinoma"

### Processing chain

ProcessObject tissueSeg TissueSegmentation
Attribute threshold 85
Input 0 WSI

ProcessObject patch PatchGenerator
Attribute patch-size 512 512
Attribute patch-overlap 0.0
Attribute mask-threshold 0.05
Input 0 WSI
Input 1 tissueSeg 0

ProcessObject network NeuralNetwork
Attribute scale-factor 0.00392156862
Attribute model "$CURRENT_PATH$/../bach-model/pw_classification_bach_mobilenet_v2.onnx"
Input 0 patch 0

ProcessObject stitcher PatchStitcher
Input 0 network 0

### Renderers
Renderer imgRenderer ImagePyramidRenderer
Input 0 WSI

Renderer heatmap HeatmapRenderer
Attribute interpolation false
Attribute hidden-channels 0
Attribute channel-colors "0" "red" "1" "magenta" "2" "green" "3" "green"
Input 0 stitcher 0

import fast
model = fast.DataHub().download('bach-model')
fast.downloadTestDataIfNotExists()
image_name = fast.Config.getTestDataPath() + 'WSI/CMU-1.svs'
importer = fast.WholeSlideImageImporter\
    .create(image_name)

tissueSegmentation = fast.TissueSegmentation.create(threshold=85)\
    .connect(importer)

generator = fast.PatchGenerator.create(
        512, 512,
        overlapPercent=0,
        maskThreshold=0.05
    ).connect(importer)\
    .connect(1, tissueSegmentation)
classification = fast.NeuralNetwork.create(
        model.paths[0] + '/pw_classification_bach_mobilenet_v2.onnx',
        scaleFactor=1./255.,
        inferenceEngine='OpenVINO'
).connect(generator)

stitcher = fast.PatchStitcher.create()\
    .connect(classification)
finished = fast.RunUntilFinished.create()\
    .connect(stitcher)


# Export at the end
fast.HDF5TensorExporter.create('image_processed.hdf5')\
    .connect(finished).run()
fast.TIFFImagePyramidExporter.create('image_heatmap.tiff')\
    .connect(tissueSegmentation).run()
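For reference, the `scale-factor 0.00392156862` in the pipeline file above is just 1/255, i.e. the same preprocessing as the Python script's `scaleFactor=1./255.`, so the normalization is not the source of the variation. A quick sanity check:

```python
# The .fpl pipeline uses scale-factor 0.00392156862; the Python API uses 1/255.
# Both map uint8 pixel values into the float range [0, 1].
scale_fpl = 0.00392156862
scale_py = 1.0 / 255.0

# They agree to within rounding of the decimal constant.
assert abs(scale_fpl - scale_py) < 1e-9
```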

andreped commented on June 3, 2024

@lomshabhishek I assume you discovered what was wrong with the aforementioned comments, as you marked them as resolved, right? Can you state in one sentence what the fix was, in case someone else runs into a similar problem in the future? :]

andreped commented on June 3, 2024

Hi @andreped, I was testing the Python script to run FAST on a few images and found that the results were slightly different. Investigating further, something might be wrong with those images, as the script works well with FAST's test data.

OK, no worries! If you are unable to find a solution for the images in question, you could open a new issue about it, and we can try to explore/discuss what's causing it. It might be something we have yet to address in the FP backend (FAST) for some images. Which WSI format is it, btw?

andreped commented on June 3, 2024

The image format is ".otif"; the scanner is, I guess, from Optra.

Didn't we talk about this format previously? I was not aware that this format was supported by FastPathology. If you check the source code below, you can see that there is no mention of ".otif", so there is no way of selecting those images. Hence there is no official support for this format:

tr("WSI Files (*.tiff *.tif *.svs *.ndpi *.bif *.vms *.vsi *.mrxs);;All Files(*)"), //*.zvi *.scn)"),

However, if you deselect the predefined supported formats (then you could even import ".txt" files as images, which would naturally fail), I guess you could test whether FastPathology supports the .otif format.

Then again, OpenSlide has no official support for .otif, so I am not sure how we support it by chance. Did you say that FP managed to read and view some of the other .otif WSIs you had, or did all .otif-formatted WSIs fail?

Perhaps you could make a separate issue about this, as I think it is smart to track this discussion and feature request. Please make a feature request here:
https://github.com/AICAN-Research/FAST-Pathology/issues/new?assignees=&labels=new+feature&projects=&template=feature_request.md&title=


EDIT: I forgot to add the first link, related to the source code. I have added it now.

lomshabhishek commented on June 3, 2024

"otif" works with FastPathology; I guess it's not very different from "tiff". Some of the files had a magnification issue, so for them the results were different.

andreped commented on June 3, 2024

"otif" works with FastPathology; I guess it's not very different from "tiff". Some of the files had a magnification issue, so for them the results were different.

But not officially; we should add the *.otif extension directly to the FP code here:

tr("WSI Files (*.tiff *.tif *.svs *.ndpi *.bif *.vms *.vsi *.mrxs);;All Files(*)"), //*.zvi *.scn)"),
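A sketch of what that change might look like (an assumption on my part — simply appending *.otif to the existing extension list in the Qt file filter string; the hypothetical edit is shown via Python string manipulation here, whereas the real change would be in the C++ source):

```python
# Current file filter string from the FP source, with *.otif appended to the
# WSI extension list (hypothetical edit, not an actual committed change).
current = "WSI Files (*.tiff *.tif *.svs *.ndpi *.bif *.vms *.vsi *.mrxs);;All Files(*)"
updated = current.replace("*.mrxs)", "*.mrxs *.otif)")
print(updated)
```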

We have seen magnification issues with the cellSens VSI (*.vsi) format, and resolved them in both FAST and FP, as we tracked that issue and made a PR for it.

If there is something we can do on our end to better support Optra's *.otif format, we could do that, but we need an issue to track it.

But of course, if it is just something wrong with the image, which we cannot do anything about, then I agree there is no point in making an issue for it. I hope you understand better now :]

andreped commented on June 3, 2024

But since *.otif works fine for you, I can make a PR to FP which adds official support for it, so that people can select files with the *.otif extension.

BTW: I assume the extension is *.otif and not just *.tif? Maybe you meant that the format was OTIF with extension *.tif? In that case I misunderstood what you meant, and a PR is not needed.

andreped commented on June 3, 2024

The extension is *.otif, and it works wherever *.tif files are supported. I guess it should be added as a supported format.

OK, then I understand. I will make a PR to add official support for it.
