Comments (8)
Hi Thomas,
Could you please try to replicate this error in a Jupyter notebook and let me know? I haven't encountered any issues when opening H&E slides before.
Thank you.
Sreekar.
Hi Sreekar,
I have the exact same issue with Jupyter Notebooks (screenshot attached).
I would really appreciate any input.
Thanks,
Thomas
We can try two approaches:
- Setting up the environment from scratch.
- Using the Docker image, as mentioned in this link.
The Docker image would be the best option. Let me know if that resolves the issue.
Also, could you tell me which version of PathML you have installed? You can check with:
import pathml
pathml.__version__
Thanks,
Sreekar.
Thanks!
What do you mean by setting up the environment from scratch?
I tried Docker. It doesn't give me the h5managers error, which is nice, but it looks like it can't find my file. I tried various locations and I can't make it work.
The version of PathML I have installed is '2.1.1'.
I really appreciate your help.
Thomas
I mean reinstalling the conda environment and PathML, as mentioned here.
To load the slide, you can upload the file to JupyterLab. Alternatively, you can mount a local directory and run the Docker image.
Here is the command to mount a local file into the container:
docker run -it -p 8888:8888 -v E:/test.svs:/home/pathml/test.svs pathml/pathml
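If you want to mount a whole folder instead of a single file, the same -v flag works with directories; this sketch uses a hypothetical host path (E:/slides), so adjust it to your setup:
docker run -it -p 8888:8888 -v E:/slides:/home/pathml/slides pathml/pathml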
Then execute the code below inside the container to load the slide:
from pathml.core import HESlide
image = HESlide('/home/pathml/test.svs')
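If HESlide still can't find the file, a quick sanity check from a notebook cell confirms whether the mount is actually visible inside the container (standard library only; the path is the one mounted above):
import os
print(os.path.exists('/home/pathml/test.svs'))  # True if the mount worked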
Let me know if that works.
from pathml.
Hey,
It looks like it works.
My workers die every time I try to run the first pipeline on a relatively small WSI (115 MB), both on Docker and on Colab (I even upgraded to the paid version to see if that would help; it didn't).
Am I doing something wrong?
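For reference, this is essentially what I'm running (the standard tutorial pipeline, reproduced from memory, so parameter values may differ slightly from my notebook):
from pathml.core import HESlide
from pathml.preprocessing import Pipeline, BoxBlur, TissueDetectionHE

wsi = HESlide("/home/pathml/test.svs")
pipeline = Pipeline([
    BoxBlur(kernel_size=15),
    TissueDetectionHE(mask_name="tissue", min_region_size=500,
                      threshold=30, outer_contours_only=True),
])
wsi.run(pipeline)  # the workers die here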
Thanks again for your help.
Regarding the AttributeError: module 'pathml.core' has no attribute 'h5managers' issue, I have adjusted the import statements in pathml/pathml/core/slide_data.py; see commit 4961630.
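For context, this class of error usually means a submodule was referenced as an attribute before anything had imported it. A simplified illustration (not PathML's actual code):
import pathml.core
# pathml.core.h5managers             # -> AttributeError unless something has
#                                    #    already imported the submodule
from pathml.core import h5managers   # explicitly importing binds the submodule
manager = pathml.core.h5managers.h5pathManager  # attribute access now works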
Regarding the workers dying, is there any output message when they die?
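In the meantime, workers dying on Colab is often a memory issue. You could try creating the Dask client yourself with explicit limits and passing it to run(); this is a minimal sketch, assuming run() accepts a client as in the distributed-processing docs, with wsi and pipeline as defined above:
from dask.distributed import Client

# Start a small local cluster with an explicit per-worker memory cap,
# instead of letting PathML spawn the default one.
client = Client(n_workers=2, threads_per_worker=1, memory_limit="4GB")

wsi.run(pipeline, distributed=True, client=client)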
Thank you for modifying the import statements.
I did get a subsequent error, though (sorry :( )
wsi = HESlide(r"C:\Users\deniz\OneDrive\Bureau\python_work\AG_1.svs")
Traceback (most recent call last):
Cell In[18], line 1
wsi = HESlide(r"C:\Users\deniz\OneDrive\Bureau\python_work\AG_1.svs")
File D:\Files\Pycharm\Test\pathml\lib\site-packages\pathml\core\slide_data.py:513 in __init__
super().__init__(*args, **kwargs)
File D:\Files\Pycharm\Test\pathml\lib\site-packages\pathml\core\slide_data.py:202 in __init__
self.h5manager = h5pathManager(slidedata=self)
File D:\Files\Pycharm\Test\pathml\lib\site-packages\pathml\core\h5managers.py:84 in __init__
self.slide_type = pathml.core.slide_types.SlideType(**slide_type_dict)
AttributeError: module 'pathml' has no attribute 'core'
Regarding the workers, on Colab I get the following output:
INFO:distributed.http.proxy:To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
INFO:distributed.scheduler:State start
INFO:distributed.scheduler: Scheduler at: tcp://127.0.0.1:39663
INFO:distributed.scheduler: dashboard at: http://127.0.0.1:8787/status
INFO:distributed.nanny: Start Nanny at: 'tcp://127.0.0.1:33993'
INFO:distributed.nanny: Start Nanny at: 'tcp://127.0.0.1:42997'
INFO:distributed.nanny: Start Nanny at: 'tcp://127.0.0.1:33561'
INFO:distributed.nanny: Start Nanny at: 'tcp://127.0.0.1:35047'
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:43365', name: 1, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:43365
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:60068
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:35483', name: 0, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:35483
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:60040
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:40517', name: 3, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:40517
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:60076
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:38553', name: 2, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:38553
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:60052
INFO:distributed.scheduler:Receive client connection: Client-b2c608d7-9f88-11ee-9209-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:40474
INFO:distributed.core:Event loop was unresponsive in Scheduler for 3.39s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 3.39s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 3.40s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 3.40s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 3.40s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 37.13s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 37.14s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 37.20s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 37.12s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 37.12s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.nanny:Closing Nanny at 'tcp://127.0.0.1:33993'. Reason: nanny-close
INFO:distributed.nanny:Nanny asking worker to close. Reason: nanny-close
INFO:distributed.nanny:Closing Nanny at 'tcp://127.0.0.1:42997'. Reason: nanny-close
INFO:distributed.nanny:Nanny asking worker to close. Reason: nanny-close
INFO:distributed.nanny:Closing Nanny at 'tcp://127.0.0.1:33561'. Reason: nanny-close
INFO:distributed.nanny:Nanny asking worker to close. Reason: nanny-close
INFO:distributed.nanny:Closing Nanny at 'tcp://127.0.0.1:35047'. Reason: nanny-close
INFO:distributed.nanny:Nanny asking worker to close. Reason: nanny-close
INFO:distributed.core:Received 'close-stream' from tcp://127.0.0.1:60040; closing.
INFO:distributed.core:Received 'close-stream' from tcp://127.0.0.1:60068; closing.
INFO:distributed.core:Received 'close-stream' from tcp://127.0.0.1:60052; closing.
INFO:distributed.scheduler:Remove worker <WorkerState 'tcp://127.0.0.1:35483', name: 0, status: closing, memory: 6336, processing: 0> (stimulus_id='handle-worker-cleanup-1703112777.2456512')
INFO:distributed.scheduler:Remove worker <WorkerState 'tcp://127.0.0.1:43365', name: 1, status: closing, memory: 6337, processing: 0> (stimulus_id='handle-worker-cleanup-1703112777.4646034')
INFO:distributed.scheduler:Remove worker <WorkerState 'tcp://127.0.0.1:38553', name: 2, status: closing, memory: 6336, processing: 0> (stimulus_id='handle-worker-cleanup-1703112777.6592765')
INFO:distributed.core:Received 'close-stream' from tcp://127.0.0.1:60076; closing.
INFO:distributed.scheduler:Remove worker <WorkerState 'tcp://127.0.0.1:40517', name: 3, status: closing, memory: 6336, processing: 0> (stimulus_id='handle-worker-cleanup-1703112777.9168477')
INFO:distributed.scheduler:Lost all workers
INFO:distributed.scheduler:Scheduler closing due to unknown reason...
INFO:distributed.scheduler:Scheduler closing all comms
Thanks again for your help!