
nicalab / support

67 stars · 6 watchers · 13 forks · 47.73 MB

Accurate denoising of voltage imaging data through statistically unbiased prediction, Nature Methods.

Home Page: https://www.nature.com/articles/s41592-023-02005-8

License: GNU General Public License v3.0

Python 100.00%
calcium-imaging denoising structural-imaging time-lapse-imaging voltage-imaging self-supervised-learning deep-learning microscopy neural-network

support's People

Contributors

eomminho, gitter-badger, stevejayh, ygyoon


support's Issues

Artefacts - patch size?

Hi,

I am trying to use SUPPORT, but the first results I am getting look really off - an image is attached.

I have trained on my data with a different patch size, since my patches are smaller. Here is the command/settings I used:

python -m src.train --exp_name parallel1 --noisy_data folder --is_folder --results_dir D:\deepSupport\trainedModel --patch_size 61 38 38 --bs_size 3 3

In the python test code I have changed the patch size to

patch_size = [61, 38, 38]
patch_interval = [1, 19, 19]
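For reference, a minimal sketch of the relationship these values appear to follow (inferred from the numbers above, not a documented rule): stride 1 in time, half the patch size in space.

patch_size = [61, 38, 38]
patch_interval = [1, patch_size[1] // 2, patch_size[2] // 2]  # -> [1, 19, 19]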

Is the change in patch size the reason for the artefacts? Should I have changed anything else in the test code, or are there certain requirements for the patch size? Or is the blind-spot size of 3 too big for smaller patches?

Thanks!

[attached image: p1_ROI6-1]

Is env.yml file missing?

I cloned the repository and then tried to create a new environment, but the env.yml file seems to be missing.

Crashes with Tiff files >4GB

Congrats on the SUPPORT toolbox! It worked impressively well at denoising our voltage imaging data using the default model. We are currently encountering a problem where SUPPORT crashes while processing TIFF files larger than 4 GB. Is this a fundamental limitation? Are there plans to support BigTIFF? For context, our data is 1024x1224, 8-bit, recorded at 200 Hz, so a 4-min recording is in the neighborhood of 60 GB. The error message is below.

Traceback (most recent call last):
  File "C:\Users\2P-user\SUPPORT\src\GUI\test_GUI.py", line 185, in run
    img_one = tifffile.imread(
  File "C:\Users\2P-user\miniconda3\envs\SUPPORT\lib\site-packages\tifffile\tifffile.py", line 1274, in imread
    return tif.asarray(
  File "C:\Users\2P-user\miniconda3\envs\SUPPORT\lib\site-packages\tifffile\tifffile.py", line 4465, in asarray
    pages = pages._getlist(key)
  File "C:\Users\2P-user\miniconda3\envs\SUPPORT\lib\site-packages\tifffile\tifffile.py", line 7788, in _getlist
    pages = [getitem(i, validate=validhash) for i in key]
  File "C:\Users\2P-user\miniconda3\envs\SUPPORT\lib\site-packages\tifffile\tifffile.py", line 7788, in <listcomp>
    pages = [getitem(i, validate=validhash) for i in key]
  File "C:\Users\2P-user\miniconda3\envs\SUPPORT\lib\site-packages\tifffile\tifffile.py", line 7834, in _getitem
    self._seek(key)
  File "C:\Users\2P-user\miniconda3\envs\SUPPORT\lib\site-packages\tifffile\tifffile.py", line 7735, in _seek
    raise IndexError('index out of range')
IndexError: index out of range
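Until BigTIFF is handled natively, one workaround sketch (an assumption on my side, not an official SUPPORT feature) is to memory-map the recording with tifffile and split it into chunks small enough for the GUI. This assumes the image data is stored contiguously; the path and chunk length are placeholders.

import tifffile

src = "recording.tif"                        # placeholder path
data = tifffile.memmap(src)                  # lazy mapping, avoids loading 60 GB
chunk = 2000                                 # frames per output file (assumed)
for start in range(0, data.shape[0], chunk):
    tifffile.imwrite(f"chunk_{start:06d}.tif", data[start:start + chunk])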

Train GUI missing is_rotate

Running the GUI for training, I ran into the following issue:

Traceback (most recent call last):
  File "/home/doug/python_projects/support_dir/SUPPORT/src/GUI/train_GUI.py", line 242, in run
    noisy_image, _ = random_transform(noisy_image, None, rng)
TypeError: random_transform() missing 1 required positional argument: 'is_rotate'
Aborted (core dumped)

Line 242 of train_GUI.py is missing the is_rotate argument:

noisy_image, _ = random_transform(noisy_image, None, rng)

To match the train.py command-line interface script, I inserted the following at line 229 to specify it prior to training:

is_rotate = self.parent.model.bs_size[0] == self.parent.model.bs_size[1]
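For the error to go away, line 242 presumably also needs to receive the new flag; a sketch of the resulting call (assumed, not quoted from the repository):

noisy_image, _ = random_transform(noisy_image, None, rng, is_rotate)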

Following this, the GUI for training runs as expected.

install issue

Hi,

I started the install of SUPPORT on a Windows 11 machine; when I try to start the GUI I get the following error. Thank you in advance for your help.

Fabrice Senger.

(SUPPORT) C:\Users\fabrice_senger\SUPPORT>python -m src.GUI.test_GUI
Traceback (most recent call last):
  File "C:\Users\fabrice_senger\miniconda3\envs\SUPPORT\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\fabrice_senger\miniconda3\envs\SUPPORT\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\fabrice_senger\SUPPORT\src\GUI\test_GUI.py", line 29, in <module>
    from PIL.ImageQt import ImageQt
ImportError: cannot import name 'ImageQt' from 'PIL.ImageQt' (C:\Users\fabrice_senger\miniconda3\envs\SUPPORT\lib\site-packages\PIL\ImageQt.py)
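A likely culprit (my assumption, based on the Pillow deprecation warning quoted in a later issue): Pillow 10 removed PyQt5 support from PIL.ImageQt, so a too-new Pillow in the environment would break this import. A quick check:

import PIL

# Pillow >= 10 dropped PyQt5 support from PIL.ImageQt (per Pillow's own
# deprecation warning); a 10.x version here would explain the ImportError,
# and downgrading Pillow below 10 is the assumed fix.
print(PIL.__version__)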

Mirroring beginning and ending frames

Hi, I understand that by construction the model will remove the first and last n frames from a tif. However, I have trial data within a few frames of the beginning of my recordings and don't want to lose information. I've mirrored the first and last n frames and added them to my data, as sketched below. Is there a foreseeable problem with this?
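A minimal numpy sketch of that mirroring (the path is a placeholder, and n is assumed to equal the number of frames the model drops at each end):

import numpy as np
import tifffile

n = 30                                                # assumed half-window
stack = tifffile.imread("recording.tif")              # (T, H, W)
# reflect-pad along time so the cropped output covers the full recording
padded = np.pad(stack, ((n, n), (0, 0), (0, 0)), mode="reflect")
tifffile.imwrite("recording_padded.tif", padded)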

RGB images

Hello,

My lab has previously used SUPPORT to great satisfaction on a grayscale structural dataset. We would now like to use it on a behavioural dataset of RGB mp4 videos recorded in low-light conditions. Per the GitHub page, SUPPORT only accepts tif, so I converted a 1000-frame snippet to an RGB tif stack (dimensions [1000, 1024, 1280, 3]) and tried to train SUPPORT in the train GUI; however, it was not fond of the fourth dimension added by RGB. Is SUPPORT capable of running with RGB files and I am simply oblivious as to how? If not, I can of course split the channels, denoise them as separate videos, and stitch them together afterwards (a sketch of this fallback follows below), but as there is information in the correlation between colors, I would prefer not to.

Thank you for your time.

best,
Silas
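A sketch of the channel-splitting fallback mentioned above (file names are placeholders; each channel is denoised with SUPPORT on its own, then restacked):

import numpy as np
import tifffile

rgb = tifffile.imread("behaviour.tif")                # (T, H, W, 3)
for c, name in enumerate("rgb"):
    tifffile.imwrite(f"behaviour_{name}.tif", rgb[..., c])
# ... run SUPPORT on each single-channel stack, then restack ...
denoised = np.stack(
    [tifffile.imread(f"behaviour_{name}_denoised.tif") for name in "rgb"],
    axis=-1,
)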

PyQt5 import issue when running the GUI

Hi, after the installation I tried to launch the GUI. First a "no module" error popped up for PyQt5, so I pip-installed PyQt5; then PyQt5.QtWidgets failed with the import error "symbol qt_version_tag version Qt_5.15 not defined in file libQt5Core.so.5 with link time reference".
Any hints?

Asymmetric loss of frames beginning/end

Hi,

Your FAQ states that the first and last N frames of each time series are removed during inference. However, when I closely compare the raw and processed data, it seems that actually the first N+1 frames are removed and only the last N-1.

Best wishes,
Lena
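A small diagnostic sketch for pinning down such an offset (paths are placeholders; it correlates early raw frames against the first processed frame to find where the output actually begins):

import numpy as np
import tifffile

raw = tifffile.imread("raw.tif").astype(np.float64)
den = tifffile.imread("denoised.tif").astype(np.float64)
dropped = len(raw) - len(den)                         # total frames removed
template = den[0] - den[0].mean()
# the best-matching raw frame marks the true start of the processed stack
scores = [np.sum((f - f.mean()) * template) for f in raw[: dropped + 1]]
print("output starts at raw frame", int(np.argmax(scores)))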

Compatibility with Binary-Frame GEVI Data

Hello,

Thank you for the efforts in creating and sharing this pipeline; it's an impressive project.

I'm interested in applying this pipeline to our dataset recorded with a GEVI and an ultra-high frame rate camera. However, the camera we use produces binary frames, and I couldn't find any information in the paper regarding the required bit depth of the input images for the model. Is it possible to use a dataset comprised of a series of binary images? From the paper and the documentation I understand that the model can be trained on our own dataset using the training GUI, but I still wanted to get your opinion before trying this.

Thank you in advance for your assistance.

Best regards,
Kurtulus
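One common preprocessing for ultra-high-rate binary cameras, offered here only as an assumption rather than advice from the SUPPORT authors, is to sum groups of k consecutive 0/1 frames into multi-bit frames before denoising:

import numpy as np
import tifffile

k = 16                                                # frames per bin (assumed)
raw = tifffile.imread("binary.tif")                   # (T, H, W) of 0/1 values
t = (raw.shape[0] // k) * k                           # drop the ragged tail
binned = raw[:t].reshape(-1, k, *raw.shape[1:]).sum(axis=1).astype(np.uint16)
tifffile.imwrite("binned.tif", binned)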

src.test RuntimeError: Error(s) in loading state_dict for SUPPORT:

Following CLI model training, python -m src.test creates a runtime error for loading the state_dict for SUPPORT:
RuntimeError: Error(s) in loading state_dict for SUPPORT:
size mismatch for out_convs.0.weight: copying a param with shape torch.Size([32, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).

The size mismatch is caused by an inconsistent bs_size: the bs_size the model was trained with differs from the argument passed to the function call.

I modified the ### Change it with your data ### section to specify

bs_size = 1  # set to match the bs_size of the model

and changed the function call to pass bs_size=bs_size.
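Rather than guessing, the expected shape can be read straight from the checkpoint; a diagnostic sketch (path assumed, and assuming the file stores a plain state_dict):

import torch

# out_convs.0.weight is the layer named in the error message; its second
# dimension reflects the blind-spot configuration the model was trained with.
state = torch.load("model.pth", map_location="cpu")
print(state["out_convs.0.weight"].shape)              # e.g. torch.Size([32, 512, 1, 1])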

Could not load the Qt platform plugin "xcb" in "" even though it was found.

I went through the installation and ran into the following error when trying to run src.GUI.test_GUI:

(SUPPORT) doug@doug-ubuntu:~/python_projects/support_dir/SUPPORT$ python -m src.GUI.test_GUI
/home/doug/python_projects/support_dir/SUPPORT/src/GUI/test_GUI.py:29: DeprecationWarning: Support for PyQt5 is deprecated and will be removed in Pillow 10 (2023-07-01). Use PyQt6 or PySide6 instead.
from PIL.ImageQt import ImageQt
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.

Hardware/Package info

Distribution: Ubuntu 22.04.1 LTS (64-bit)
GPU: RTX 3090 (dedicated for ML only), RTX 3050 (display)
Nvidia-SMI Driver Version 525.60.11, CUDA Version 12.0
Conda package list attached as packages.txt

Incorrect loss labels

Hi,

I think that the losses are being incorrectly recorded: in the train function in src.train, the L1, L2, and mean losses are logged under the wrong labels:

writer.add_scalar("Loss_l1/train_batch", loss_mean, epoch*len(train_dataloader) + i)
writer.add_scalar("Loss_l2/train_batch", loss_mean_l1, epoch*len(train_dataloader) + i)
writer.add_scalar("Loss/train_batch", loss_mean_l2, epoch*len(train_dataloader) + i)
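Assuming the variable names are as quoted, the corrected logging would pair each value with the tag that names it:

writer.add_scalar("Loss_l1/train_batch", loss_mean_l1, epoch*len(train_dataloader) + i)
writer.add_scalar("Loss_l2/train_batch", loss_mean_l2, epoch*len(train_dataloader) + i)
writer.add_scalar("Loss/train_batch", loss_mean, epoch*len(train_dataloader) + i)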

Output padding

I've found that the denoised output of src.test is shorter than the input file. The imsave line removes frames from the beginning and end of the TIFF. Is there supposed to be a padding step, or is there a reason these frames need to be dropped?

Running support on patches of video

Hi SUPPORT folks --

I'd like to run SUPPORT on small patches of a voltage imaging video (say 10 pixels x 10 pixels x time). Is it possible to do this? It seems like the pretrained models need a larger image patch. If I train my own model, can I control the size of the filters?

Model used in the publication

Hi, I'm really interested in which model you used for the voltage imaging data of mouse cortex layers and zebrafish, because I was recently dealing with similar voltage imaging data and found some tiny irregular spikes that look like noise on my baseline. Did you use bs1 or bs3, or another model?
