bioimageanalysiscorewehi / napari_lattice
Napari plugin for custom analysis and visualization of lattice lightsheet and Oblique Plane Microscopy data. The plugin is optimized for data from the Zeiss lattice lightsheet microscope.

Home Page: https://github.com/BioimageAnalysisCoreWEHI/napari_lattice/wiki

License: GNU General Public License v3.0

Python 3.54% Jupyter Notebook 96.46% Batchfile 0.01%
deconvolution deskewing image-analysis-and-processing lattice lightsheet microscopy napari oblique-plane python zeiss

napari_lattice's People

Contributors

dependabot[bot], drlachie, haesleinhuepf, jo-mueller, lazigu, multimeric, ninatubau, pr4deepr


napari_lattice's Issues

1. Separating out batch processing and napari options

Currently the interactive napari component and the batch-processing code live in the same library.
It would make sense to split them out, perhaps making batch processing an independent package so that napari imports can be avoided. For example, even when napari is not used, we currently need to add the Xvfb flag for batch processing.

Batch processing is mainly here:
https://github.com/BioimageAnalysisCoreWEHI/napari_lattice/tree/master/src/napari_lattice/cmds

The main __init__ file initialises the logger and the enums so that both batch processing and napari can use them.
https://github.com/BioimageAnalysisCoreWEHI/napari_lattice/blob/master/src/napari_lattice/__init__.py

The parameters for batch processing are here:
https://github.com/BioimageAnalysisCoreWEHI/napari_lattice/wiki/6.-Batch-Processing

Interactive widgets in napari:
https://github.com/BioimageAnalysisCoreWEHI/napari_lattice/blob/master/src/napari_lattice/_dock_widget.py
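
One interim option, until the packages are actually split, is to keep napari out of the module-level imports of the batch-processing code and only import it inside the GUI entry points. A minimal sketch of that pattern (the module layout, function names and CLI flags here are hypothetical, not the plugin's actual structure):

# batch.py -- headless processing, no napari import at module level
import argparse

def run_batch(input_path: str, output_path: str) -> None:
    # purely array-based processing here; safe to run without a display/Xvfb
    ...

def launch_gui() -> None:
    # napari (and therefore Qt) is only imported when the GUI is requested
    import napari
    viewer = napari.Viewer()
    napari.run()

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--gui", action="store_true")
    parser.add_argument("--input")
    parser.add_argument("--output")
    args = parser.parse_args()
    if args.gui:
        launch_gui()
    else:
        run_batch(args.input, args.output)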

Use a config file for no-GUI processing

The napari-lattice plugin has the option for batch processing within the terminal. With the addition of more functionality, the number of arguments is growing. This can get confusing and requires very long command lines, which is not user-friendly (not that running in the terminal makes it any easier).

Instead, use a YAML config file.
The syntax should be:

napari-lattice --config "location-of-config-file"

Arguments: https://github.com/BioimageAnalysisCoreWEHI/napari_lattice/wiki/6.-Batch-Processing
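
A minimal sketch of how the CLI could consume such a file, assuming PyYAML and hypothetical keys (input, output, angle) rather than the plugin's actual argument names, which are listed on the wiki page above:

import argparse
import yaml  # pyyaml

def load_config(path: str) -> dict:
    # read the YAML file and return a flat dict of CLI options
    with open(path) as f:
        return yaml.safe_load(f) or {}

parser = argparse.ArgumentParser(prog="napari-lattice")
parser.add_argument("--config", help="path to a YAML config file")
args, remaining = parser.parse_known_args()

options = load_config(args.config) if args.config else {}
# hypothetical keys, for illustration only
input_path = options.get("input")
output_path = options.get("output")
deskew_angle = options.get("angle", 30.0)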

ringing artefact from deconvolution

When performing deconvolution and then deskewing, there is a ringing artefact on the edges.
Consider changing the padding options during deconvolution, or cropping a larger area to account for this.
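
One way to try the second suggestion is to pad the volume before deconvolution and trim the same margin afterwards, so the ringing falls in the discarded border. A rough numpy sketch (the pad width and mode are arbitrary choices, and decon() stands in for whichever deconvolution call is used):

import numpy as np

def decon_with_padding(vol: np.ndarray, decon, pad: int = 16) -> np.ndarray:
    # reflect-pad every axis so edge ringing lands in the border region
    padded = np.pad(vol, pad, mode="reflect")
    result = decon(padded)
    # trim the padding again so the output matches the input shape
    slicer = tuple(slice(pad, -pad) for _ in range(vol.ndim))
    return result[slicer]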

Type annotations throughout

The end goal of this might be to be able to run mypy or pyright in the CI, to identify type errors. However the type signatures need to be improved before this can happen.
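
As an illustration of the kind of signature that would let mypy/pyright do useful checking (the function name and parameters here are hypothetical, not an existing API in the plugin):

from typing import Optional
import numpy as np
import numpy.typing as npt

def deskew_volume(
    volume: npt.NDArray[np.floating],
    angle: float = 30.0,
    voxel_size: Optional[tuple[float, float, float]] = None,
) -> npt.NDArray[np.floating]:
    # with explicit array and scalar types, mypy/pyright can flag callers
    # that pass the wrong kinds of arguments or misuse the return value
    ...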

processing image as tiles

In addition to processing the image across channels and time dimensions, we may need to consider processing the image as tiles to fit large images into memory

We had thought of implementing this a while back and here is a code example:

23aa36e
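
Independent of that commit, a rough illustration of the idea: process overlapping tiles one at a time and stitch the trimmed results back together, so only one tile is in memory at once. The overlap value and the process() callable are placeholders.

import numpy as np

def process_in_tiles(vol: np.ndarray, process, tile: int = 256, overlap: int = 32) -> np.ndarray:
    # process a volume tile by tile along Y/X so only one tile is held in memory
    out = np.zeros_like(vol)
    for y in range(0, vol.shape[-2], tile):
        for x in range(0, vol.shape[-1], tile):
            y0, x0 = max(y - overlap, 0), max(x - overlap, 0)
            y1 = min(y + tile + overlap, vol.shape[-2])
            x1 = min(x + tile + overlap, vol.shape[-1])
            block = process(vol[..., y0:y1, x0:x1])
            # keep only the central (non-overlapping) part of the processed block
            out[..., y:y + tile, x:x + tile] = block[..., y - y0:(y - y0) + tile, x - x0:(x - x0) + tile]
    return out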

implement cropping for skew in X direction

Currently, cropping is only implemented for data skewed in the Y direction.

Implement cropping in the X direction.
This is important as future versions of Zen (>3.5?) will produce images skewed in the X direction.

Make all the other functions agnostic to skew direction, which is the original goal of this plugin.

Does not recognise channels in czi file

Folks,
Thanks for making this plugin :) I have been trying to find a way to deskew/deconvolve that is not on ZEN Blue so this is great!

Here is the issue I faced: I loaded a skewed lattice lightsheet dataset, but napari-lattice fails to recognize the "channel" dimension in the czi file. I did try the step below, as described on the GitHub page, but that did not help.

"If the channels are split into different layers, press Shift and select all the channels. Right click on the highlighted layers and then click Merge to stack."

Here is a screenshot of the error. Any ideas what the issue is? I use napari-czifile2 to read czi files in napari
Thanks!

[screenshot of the error]

Using "register_function" to add custom workflows

@haesleinhuepf showed us how we can add functions using
@register_function
juglab/napari-n2v#6 (comment)
https://github.com/clEsperanto/napari_pyclesperanto_assistant/blob/ad72ecae7d017ba63f579f05dd00be627f36e89b/napari_pyclesperanto_assistant/_napari_cle_functions.py#L7

This will enable the addition of interactive workflows (a sketch of the decorator usage follows the list below).
Add options for popular algorithms or workflows not available in pyclesperanto-assistant:

  • Cellpose
  • ilastik
  • regionprops measurements table?
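
A minimal sketch of the pattern, assuming the register_function decorator from napari_tools_menu as used in the linked assistant code; the menu path and the function itself are placeholders:

import numpy as np
from napari_tools_menu import register_function

@register_function(menu="Segmentation / labeling > Example threshold (napari-lattice)")
def example_threshold(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    # registered functions appear in the napari Tools menu / assistant,
    # so custom workflow steps (Cellpose or ilastik wrappers, regionprops
    # tables, etc.) could be exposed the same way
    return (image > threshold).astype(np.uint8)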

File naming

Default naming of imported files is incompatible with saving at the end of the deskew. e.g. ROI_0__0 :: Image:0 :: LatticeLightsheet 3-T1
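
One possible workaround is to sanitise the layer name before using it as a file name; a small sketch (the character set kept here is an arbitrary choice):

import re

def safe_filename(layer_name: str) -> str:
    # replace characters that are illegal or awkward in file names (":", spaces, etc.)
    cleaned = re.sub(r"[^\w\-.]+", "_", layer_name)
    return cleaned.strip("_")

# e.g. "ROI_0__0 :: Image:0 :: LatticeLightsheet 3-T1" -> "ROI_0__0_Image_0_LatticeLightsheet_3-T1"
safe_filename("ROI_0__0 :: Image:0 :: LatticeLightsheet 3-T1")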

Package discovery error with new setuptools

Hi all!

When I try to download napari-lattice via:
pip install git+https://github.com/BioimageAnalysisCoreWEHI/napari_lattice.git

I receive the following error:

error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [14 lines of output]
      error: Multiple top-level packages discovered in a flat-layout: ['core', 'plugin', 'notebooks', 'resources', 'sample_data', 'workflow_examples'].

      To avoid accidental inclusion of unwanted files or directories,
      setuptools will not proceed with this build.

      If you are trying to create a single distribution with multiple packages
      on purpose, you should not rely on automatic discovery.
      Instead, consider the following options:

      1. set up custom discovery ('find' directive with 'include' or 'exclude')
      2. use a 'src-layout'
      3. explicitly set 'py_modules' or 'packages' with a list of names

      To find more information, look for "package discovery" on setuptools docs.
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.

After perusing the internet I found that this error can be circumvented by either:

  1. This answer on Stack Overflow suggested adding the following lines to the pyproject.toml:

     [tool.setuptools]
     py-modules = []

  2. Since I don't have access to the repo, I tried downgrading setuptools via:

     pip install setuptools==version#

Unfortunately downgrading setuptools still causes a host of errors to occur. Any suggestions?

Thank you so much :octocat: !!!

dealing with moving ROIs/cells or ROIs from tracking

Currently, napari-lattice works only with a single ROI and assumes that the cell doesn't move much.
If the cell wobbles a bit, we just draw a bigger ROI.
This is all done using the MIP image after image acquisition for the Zeiss lattice.

We need to define what this is going to look like, i.e., the schema of the pydantic model if we take this approach, and the best way to integrate it.
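
A possible starting point for that schema, as a hedged sketch only (the field names and structure are hypothetical, not a decided design):

from pydantic import BaseModel

class RoiAtTime(BaseModel):
    # one rectangular ROI at a single time point, in pixel coordinates
    time: int
    x_min: float
    y_min: float
    x_max: float
    y_max: float

class MovingRoi(BaseModel):
    # a track of ROIs over time, e.g. imported from a tracking result;
    # processing would look up the ROI closest to the current time point
    label: int
    rois: list[RoiAtTime]

    def at(self, time: int) -> RoiAtTime:
        return min(self.rois, key=lambda r: abs(r.time - time))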

return multichannel image to use in cellpose workflow

Currently, the napari-lattice Workflow module iterates through each channel and applies the custom processing defined in the workflow.
For Cellpose, if a multichannel image is given, it can be used as input for improved prediction using the cyto models.
For example, if channel 1 is nuclei and channel 2 is cytoplasm, segmentation may be more accurate on the combined multichannel image than on channel 1 or channel 2 individually.

Either add a new function for this or alter the code here:

for time_point in tqdm(time_range, desc="Time", position=0):
    output_array = []
    data_table = []
    for ch in tqdm(channel_range, desc="Channels", position=1, leave=False):
        if len(vol.shape) == 3:
            raw_vol = vol
        else:
            raw_vol = vol[time_point, ch, :, :, :]
        # TODO: disable if support for resource-backed dask array is added
        if type(raw_vol) in [resource_backed_dask_array]:
            raw_vol = raw_vol.compute()  # convert to numpy array as resource-backed dask array not supported
        # to access current time and channel, create a file config.py in the same dir as the workflow or in the home directory
        # add "channel = 0" and "time = 0" in the file and save
        # https://docs.python.org/3/faq/programming.html?highlight=global#how-do-i-share-global-variables-across-modules
        config.channel = ch
        config.time = time_point
        # if deconvolution, need to define the psf and choose the channel-appropriate one
        if deconvolution:
            workflow.set(psf_arg, psf[ch])
            # if decon_processing == "cuda_gpu":
            #     workflow.set("psf", psf[ch])
            # else:
            #     workflow.set("psf", psf[ch])
        # set the workflow input to the volume from each time point and channel
        workflow.set(input_arg, raw_vol)
        # execute workflow
        processed_vol = workflow.get(last_task)
        output_array.append(processed_vol)
    output_array = np.array(output_array)
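
For reference, a hedged sketch of how a combined two-channel image could be passed to Cellpose's cyto model, assuming the cellpose models.Cellpose API; the input arrays below are random placeholders standing in for the per-channel deskewed data:

import numpy as np
from cellpose import models

# placeholder 2D slices; in the workflow these would be the deskewed channels
nuclei_img = np.random.rand(256, 256).astype(np.float32)
cyto_img = np.random.rand(256, 256).astype(np.float32)

# combine the nuclei and cytoplasm channels into one (Y, X, 2) image
combined = np.stack([nuclei_img, cyto_img], axis=-1)

model = models.Cellpose(model_type="cyto")
# channels=[2, 1]: use image channel 2 (cytoplasm) for segmentation and
# channel 1 (nuclei) as the nuclear channel (Cellpose channels are 1-based)
masks, flows, styles, diams = model.eval(combined, channels=[2, 1], diameter=None)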

fixing documentation

  • Swap "Cropping" and "Deskewing" sections
  • Add more details on installing napari, perhaps point to full documentation?

setting up actions

Hi @haesleinhuepf
I'm testing out napari_lattice and learning how to publish it as a pypi package.
I'm having trouble setting up github actions and automated testing/deployment.
Would you mind helping me out with this?

Cheers

Pradeep

deconvolution add info in documentation

We assume that the pixel size of the PSFs is the same as that of the image, and use that assumption throughout the code.
It's worth adding this note to the documentation and asking the user to verify that this is the case, i.e., the PSF should be acquired using the same settings as the image.
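
If this ever becomes a runtime check rather than just a documentation note, it could look roughly like the sketch below; how the voxel sizes are actually obtained from metadata is left open:

def check_psf_voxel_size(image_voxel_size, psf_voxel_size, tol: float = 1e-3) -> None:
    # warn the user if the PSF was acquired with different pixel/voxel sizes than the image
    mismatched = any(abs(i - p) > tol for i, p in zip(image_voxel_size, psf_voxel_size))
    if mismatched:
        raise ValueError(
            f"PSF voxel size {psf_voxel_size} does not match image voxel size {image_voxel_size}; "
            "acquire the PSF with the same settings as the image."
        )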

code refactoring

  • Define a class for writer so its easier to incorporate multiple formats, initialize and save images. Add flexibility to save images of any dimension (currently only 3d images can be changed..
  • Security issue: Currently workflows option runs .py files in a folder specified by user. This is a potential security issue. Add option so user can confirm which files to run.
  • Define a function to handle the return types from workflows. For example, if multiple object are returned, they need to be in a tuple. Otherwise, treat it as a single object.
  • use plugin_register to add different software support so workflows can be initialised from napari: #17
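
For the third point, the return-type handling could be as small as this (a sketch only; the function name is not from the codebase):

def as_tuple(result):
    # workflows may return a single object or several; normalise so callers
    # can always iterate over the outputs
    if isinstance(result, tuple):
        return result
    return (result,)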

Deskewing artefact

As we are using a single affine transformation for shearing and rotation, this can introduce artefacts in the final data, especially on non-isotropic data. @haesleinhuepf has raised it here:
clEsperanto/pyclesperanto_prototype#196

Turns out, this is a well-known issue and has been documented here:
VolkerH/Lattice_Lightsheet_Deskew_Deconv#22
In napari/napari#1843, @haesleinhuepf and @VolkerH have discussed an OpenCL-related solution/fix? I'm not sure I fully understand it.

We could technically chain the affine transformations individually, but the intermediate sheared image would be extremely large and cause memory issues on the GPU.
Another option is to write an OpenCL kernel specifically for deskewing. This code from OPM can be used for inspiration:
https://github.com/QI2lab/OPM/blob/5dc5a4f2046d220e09d038ae6d292f3590e4f015/reconstruction/image_post_processing.py#L33
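
To make the trade-off concrete, here is a small numpy/scipy sketch contrasting one composed transform against chained transforms. The matrices are arbitrary placeholders, not the actual deskew geometry, and output shapes/offsets are ignored for brevity; the point is that the composed version needs one interpolation pass and no large intermediate, which is why it is used despite the artefact.

import numpy as np
from scipy.ndimage import affine_transform

# placeholder 3x3 matrices; the real deskew combines shear, rotation and scaling
shear = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
rotation = np.array([[np.cos(0.5), 0.0, -np.sin(0.5)],
                     [0.0, 1.0, 0.0],
                     [np.sin(0.5), 0.0, np.cos(0.5)]])

vol = np.random.rand(64, 64, 64).astype(np.float32)

# option 1: one composed transform, one interpolation pass (current approach)
combined = rotation @ shear
out_single = affine_transform(vol, np.linalg.inv(combined))

# option 2: chain the transforms, which interpolates twice and needs the
# (potentially huge) intermediate sheared volume in memory
intermediate = affine_transform(vol, np.linalg.inv(shear))
out_chained = affine_transform(intermediate, np.linalg.inv(rotation))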

Fix pip-related requirements issue

The pip bug that was causing installation issues has been fixed as of pip 23.3, so if you are interested we could fix all the requirements in this repo and just ask users to update their pip?

Export yaml config from plugin for use on CLI

If a user has configured the GUI and is happy with the settings, batch processing using the CLI would be the natural next step.
To make the transition easier, add an option to export the settings schema in the form of a YAML config file that can be ingested by the CLI.
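
If the GUI settings end up living in a pydantic model (as floated in the moving-ROIs issue), the export could be as simple as the sketch below; the model and field names are placeholders, and it assumes pydantic v1-style .dict():

import yaml  # pyyaml
from pydantic import BaseModel

class LatticeSettings(BaseModel):
    # placeholder fields standing in for whatever the GUI actually collects
    input_path: str
    angle: float = 30.0
    deconvolution: bool = False

def export_config(settings: LatticeSettings, path: str) -> None:
    # dump the current GUI settings as YAML so the CLI can load the same file
    with open(path, "w") as f:
        yaml.safe_dump(settings.dict(), f)

export_config(LatticeSettings(input_path="raw.czi"), "lattice_config.yaml")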

examples for napari-workflows

  • Add more documentation on using workflows
  • How to use napari-assistant, export workflow and apply it on a lattice dataset
  • How to run custom python scripts
  • Use ilastik classifier (limiting feature sets in ilastik to tackle memory issues)
  • Example workflows on datasets for segmentation and measuring morphological properties

Custom error types

Ideally a library will have its own custom error hierarchy so a user can except those errors specifically. It might therefore be useful to have an LlsError and some subclasses of it. This is low priority, however.
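
A sketch of what that hierarchy could look like (the subclass names are hypothetical):

class LlsError(Exception):
    """Base class for all napari-lattice errors, so callers can catch them as a group."""

class DeskewError(LlsError):
    """Raised when deskewing fails, e.g. because of an unsupported skew direction."""

class DeconvolutionError(LlsError):
    """Raised when deconvolution fails, e.g. a missing or mismatched PSF."""

try:
    ...  # some napari-lattice call
except LlsError as err:
    # library-specific failures can now be handled separately from generic exceptions
    print(f"napari-lattice failed: {err}")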
