
Comments (6)

jacob-rosenthal commented on July 4, 2024

So from what I can see in the notebook, the image has 28 channels total, and you want to use the last channel as the cytoplasm channel? If so, then you would use index 27 (cytoplasm_channel=27) because the indexing system is zero-based. That might explain why the error message is about index being out of range.
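For illustration, a minimal sketch of the zero-based indexing (the array shape here is a placeholder, not your actual data):

```python
import numpy as np

# Hypothetical 28-channel image, shape (height, width, channels)
img = np.zeros((256, 256, 28))

cytoplasm_channel = 27                 # the 28th (last) channel is index 27
cyto = img[:, :, cytoplasm_channel]    # works: shape (256, 256)
# img[:, :, 28] would raise IndexError: index 28 is out of bounds for axis 2
```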

The crashing Jupyter kernel is harder to diagnose. It may be due to memory constraints on your machine, i.e. holding too much data in memory at once.
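If you suspect memory pressure, one quick check is how much RAM is free before running the pipeline (a sketch assuming `psutil` is available, as it is on Colab):

```python
import psutil

# Report available vs. total system memory
mem = psutil.virtual_memory()
print(f"available: {mem.available / 1e9:.1f} GB of {mem.total / 1e9:.1f} GB")
```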


DRSEI commented on July 4, 2024

Hello @jacob-rosenthal, I appreciate your previous assistance. However, I'm currently encountering additional issues. Please refer to the following information:

```
INFO:distributed.http.proxy:To route to workers diagnostics web server, please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
INFO:distributed.scheduler:State start
INFO:distributed.scheduler:Scheduler at tcp://127.0.0.1:34389
INFO:distributed.scheduler:Dashboard at 127.0.0.1:8787
INFO:distributed.nanny:Start Nanny at 'tcp://127.0.0.1:46283'
INFO:distributed.nanny:Start Nanny at 'tcp://127.0.0.1:37099'
INFO:distributed.nanny:Start Nanny at 'tcp://127.0.0.1:34189'
INFO:distributed.nanny:Start Nanny at 'tcp://127.0.0.1:33063'
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:42611', name: 3, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:42611
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:46628
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:39977', name: 2, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:39977
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:46618
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:33903', name: 1, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:33903
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:46602
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:39173', name: 0, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:39173
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:46638
INFO:distributed.scheduler:Receive client connection: Client-48ecb7b6-f381-11ed-90ba-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:46642
```
Unfortunately, the code encounters an error during execution:

```
---------------------------------------------------------------------------
error                                     Traceback (most recent call last)
in <cell line: 2>()
      1 # Run the pipeline
----> 2 slidedata.run(pipe, distributed = True, tile_pad=False)
      3
      4
      5

16 frames
/usr/lib/python3.10/gzip.py in read()
    494         buf = self._fp.read(io.DEFAULT_BUFFER_SIZE)
    495
--> 496         uncompress = self._decompressor.decompress(buf, size)
    497         if self._decompressor.unconsumed_tail != b"":
    498             self._fp.prepend(self._decompressor.unconsumed_tail)

error: Error -3 while decompressing data: invalid block type
```


jacob-rosenthal commented on July 4, 2024

I've never seen that before, so I'm not sure what is causing it. It seems likely to be some incompatibility between Dask and Colab. I'd suggest trying with `distributed=False`.
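For example, a minimal sketch (assuming `slidedata` and `pipe` are set up as in your notebook):

```python
# Run the pipeline in a single process, bypassing the dask distributed cluster
slidedata.run(pipe, distributed=False, tile_pad=False)
```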


DRSEI commented on July 4, 2024

I have followed your suggestion, but now I am encountering a new issue.

```
WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually.
/usr/local/lib/python3.10/dist-packages/anndata/_core/anndata.py:117: ImplicitModificationWarning: Transforming to str index.
  warnings.warn("Transforming to str index.", ImplicitModificationWarning)
WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually.
/usr/local/lib/python3.10/dist-packages/anndata/_core/anndata.py:117: ImplicitModificationWarning: Transforming to str index.
  warnings.warn("Transforming to str index.", ImplicitModificationWarning)
/usr/local/lib/python3.10/dist-packages/anndata/_core/anndata.py:1755: FutureWarning: The AnnData.concatenate method is deprecated in favour of the anndata.concat function. Please use anndata.concat instead.

See the tutorial for concat at: https://anndata.readthedocs.io/en/latest/concatenation.html
  warnings.warn(
WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually.
/usr/local/lib/python3.10/dist-packages/anndata/_core/anndata.py:117: ImplicitModificationWarning: Transforming to str index.
  warnings.warn("Transforming to str index.", ImplicitModificationWarning)
/usr/local/lib/python3.10/dist-packages/anndata/_core/anndata.py:1755: FutureWarning: The AnnData.concatenate method is deprecated in favour of the anndata.concat function. Please use anndata.concat instead.
```

My code, specifically the line `slidedata.run(pipe, distributed=False, tile_pad=False)`, has been running for over 10 minutes on a single file. Is there a Colab training notebook available that I can refer to for guidance? Alternatively, would you mind reviewing the script I posted earlier to see if I made any mistakes? I appreciate your assistance in troubleshooting this matter.


jacob-rosenthal commented on July 4, 2024

The warnings should be fine to ignore. The workflow in the Colab notebook you posted looks fine to me. You can refer to the example vignettes in the documentation; they're at https://pathml.readthedocs.io/ under the "examples" section. Runtime will depend on several factors: the computational resources of your environment, the size of the input data, the steps in the pipeline, and so on. In this case, inference is run with the Mesmer model for every tile, which can be relatively slow. In my experience, 10 minutes is not out of the ordinary; I have run the same pipeline on large images for up to 24 hours.
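If you want a rough sense of throughput for your setup, you could time a single run (a minimal sketch, with `slidedata` and `pipe` as in the earlier notebook):

```python
import time

# Time one full pipeline run to extrapolate to larger inputs
start = time.time()
slidedata.run(pipe, distributed=False, tile_pad=False)
print(f"Pipeline finished in {time.time() - start:.0f} s")
```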


DRSEI commented on July 4, 2024

I understand. I'll let the process continue running for a while, and I'll keep you updated on any progress or developments.

