
ai-med / quicknat_pytorch

101 stars, 9 watchers, 35 forks, 350.12 MB

PyTorch implementation of QuickNAT and Bayesian QuickNAT, a fast brain MRI segmentation framework with segmentation quality control using structure-wise uncertainty.

License: MIT License

Python 100.00%
deep-learning brain-imaging mri-images segmentation biomarkers machine-learning convolutional-neural-networks computer-vision medical-imaging ai

quicknat_pytorch's People

Contributors

abhi4ssj, joshicola, jyotirmay123, shayansiddiqui, wachinger

quicknat_pytorch's Issues

Predicting input nii files?

Hello,

I can't seem to figure out how to run your code on NIfTI input files.

I was looking for something simple such as python3 run.py --in path/to/file.

Thank you!

I can't find pretrained model

Thank you for your work.

I want to run a segmentation using your code, but I could not find a pretrained PyTorch model.

Where can I find the pretrained model files?

Thank you

Re-train the model with new data: 3 view aggregation

Dear authors,

First of all, great work and thanks for the code!
I'm trying to re-train the model with my data and it seems to run.

I only have one question: it seems possible to select the orientation (COR, AXI, SAG), but how do I create a model that aggregates all three views?

Thanks!
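For context, the QuickNAT papers describe view aggregation as training one network per orientation and averaging the per-view class-probability maps at inference time, rather than building a single combined model. A minimal sketch of that idea, with hypothetical model and helper names (model_axi, vol_axi, realign_to_reference, etc. are not from this repository):

    import torch

    # Hypothetical: three view-specific trained models and the same volume
    # re-sliced along the axial, coronal and sagittal planes.
    probability_maps = []
    for model, batch in [(model_axi, vol_axi), (model_cor, vol_cor), (model_sag, vol_sag)]:
        model.eval()
        with torch.no_grad():
            probs = torch.softmax(model(batch), dim=1)       # per-slice class probabilities
        # realign_to_reference: hypothetical helper that maps the per-view
        # prediction back into one common orientation before averaging.
        probability_maps.append(realign_to_reference(probs))

    final_labels = torch.stack(probability_maps).mean(dim=0).argmax(dim=1)

So each orientation keeps its own trained weights; the aggregation happens only on the predicted probabilities.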

Large Binaries in Repo

It would be great if the trained weights could be removed from the repo and downloaded separately from a server. Cloning takes about 5 minutes for me.

Run with new data

Hi,

What is the folder structure of the data?
I need to know this to incorporate my own data.

Thanks.

Out of Memory

Hi,

I tried to run QuickNAT on a dataset that I preprocessed with FreeSurfer as described in your documentation.
My computer is equipped with an NVIDIA GeForce GTX 1650 (4 GB of GPU memory).
I am using python run.py --mode=eval_bulk to start execution. I have set the batch size to 1; however, I am still getting this:

RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 4.00 GiB total capacity; 1.67 GiB already allocated; 0 bytes free; 2.74 GiB reserved in total by PyTorch)

Is it possible to work around this problem somehow, or are 4 GB of GPU memory simply not enough for running QuickNAT?

Thanks for your time!
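As a general workaround (not specific to this repository's evaluator), inference memory can often be reduced by disabling gradient tracking and feeding the volume through the network a few slices at a time, moving each result back to the CPU immediately. A rough sketch with hypothetical names (model, volume_slices):

    import torch

    model.eval()
    outputs = []
    with torch.no_grad():                                  # no gradients needed for evaluation
        for chunk in volume_slices.split(1, dim=0):        # one slice per forward pass
            outputs.append(model(chunk.to("cuda")).cpu())  # keep results in host memory
    prediction = torch.cat(outputs, dim=0)

If that is still not enough, running the evaluation on the CPU avoids the 4 GB limit entirely, at the cost of speed (see the device issue below).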

Fixed - Installing Requirements Error: squeeze_and_excite

Just to let other users know, when running the requirements.txt installation in a venv, our team encountered an error:

raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://pypi.org/simple/git/

We found that the fix for this issue was to manually run the command below. When testing pip install -r requirements.txt again, the issue appeared to be solved.

pip install https://github.com/abhi4ssj/squeeze_and_excitation/releases/download/v1.0/squeeze_and_excitation-1.0-py2.py3-none-any.whl
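If you prefer to keep everything inside requirements.txt, a requirements file also accepts a wheel URL directly in place of the entry that pip was misreading as a package called git; this is a suggestion, not the maintainers' official fix:

    # in requirements.txt, replacing the entry that resolved to pypi.org/simple/git/
    https://github.com/abhi4ssj/squeeze_and_excitation/releases/download/v1.0/squeeze_and_excitation-1.0-py2.py3-none-any.whl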

"cpu" or cpu not supported as device in settings_eval.ini

When I set device to "cpu" in settings_eval.ini, just to test CPU performance, I get:

Traceback (most recent call last):
  File "run.py", line 187, in <module>
    evaluate_bulk(settings_eval['EVAL_BULK'])
  File "run.py", line 136, in evaluate_bulk
    mc_samples)
  File "/home/diedre/git/quickNAT_pytorch/utils/evaluator.py", line 260, in evaluate
    model.cuda(device)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 311, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 208, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 208, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 230, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 311, in <lambda>
    return self._apply(lambda t: t.cuda(device))
RuntimeError: Invalid device, must be cuda device

This seems like an easy fix: change model.cuda(device) to model.to(device), with device redefined as something like:
torch.device("cpu") if device == "cpu" else torch.device("cuda:{}".format(device))

I also tested cpu without quotes, and got:

Traceback (most recent call last):
  File "run.py", line 186, in <module>
    settings_eval = Settings('settings_eval.ini')
  File "/home/diedre/git/quickNAT_pytorch/settings.py", line 10, in __init__
    self.settings_dict = _parse_values(config)
  File "/home/diedre/git/quickNAT_pytorch/settings.py", line 27, in _parse_values
    config_parsed[section][key] = ast.literal_eval(value)
  File "/usr/lib/python3.6/ast.py", line 85, in literal_eval
    return _convert(node_or_string)
  File "/usr/lib/python3.6/ast.py", line 84, in _convert
    raise ValueError('malformed node or string: ' + repr(node))
ValueError: malformed node or string: <_ast.Name object at 0x7f13134c55c0>

The code probably expects an int or string.
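Putting the two observations together, a sketch of the suggested patch in utils/evaluator.py might look like this (variable names are taken from the tracebacks above where possible, otherwise assumed; untested against the repository):

    import torch

    # 'device' comes from settings_eval.ini; keep it quoted there (e.g. device = "cpu"
    # or device = "0") so that ast.literal_eval parses it as a string.
    if str(device) == "cpu":
        device = torch.device("cpu")
    else:
        device = torch.device("cuda:{}".format(device))

    model.to(device)   # instead of model.cuda(device), which requires a CUDA device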

training example

Hi,

I am trying to use QuickNAT to train on my data, but I am a bit confused about the usage and some of the arguments. Should we convert our data to h5 files first? If so, could you explain how to choose arguments like --data_id and --remap_config in convert_h5.py?

Could you offer some instructions or sample dataset/folder structure to demonstrate how to train the model?

Thanks!
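For what it is worth, the h5 files are conceptually just stacked 2D slices with matching label maps; a minimal, repository-independent sketch with h5py (the dataset names "data" and "label" are assumptions, not necessarily the keys convert_h5.py uses):

    import h5py
    import numpy as np

    # Hypothetical arrays: ten conformed 256x256 slices and their label maps.
    slices = np.random.rand(10, 256, 256).astype(np.float32)
    labels = np.random.randint(0, 33, size=(10, 256, 256)).astype(np.uint8)

    with h5py.File("train_data.h5", "w") as f:
        f.create_dataset("data", data=slices)    # image slices
        f.create_dataset("label", data=labels)   # segmentation labels

The actual meaning of --data_id and --remap_config is best confirmed from convert_h5.py itself.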

Seems like a bug while training my own model...

==== Epoch [ 1 / 10 ] START ====
<<<= Phase: train =>>>
D:\Data\quickNAT_pytorch-master\venv\lib\site-packages\torch\optim\lr_scheduler.py:131: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "
Traceback (most recent call last):
  File "D:\Data\quickNAT_pytorch-master\run.py", line 187, in <module>
    train(train_params, common_params, data_params, net_params)
  File "D:\Data\quickNAT_pytorch-master\run.py", line 57, in train
    solver.train(train_loader, val_loader)
  File "D:\Data\quickNAT_pytorch-master\solver.py", line 113, in train
    output = model(X)
  File "D:\Data\quickNAT_pytorch-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Data\quickNAT_pytorch-master\quicknat.py", line 49, in forward
    e1, out1, ind1 = self.encode1.forward(input)
  File "D:\Data\quickNAT_pytorch-master\venv\lib\site-packages\nn_common_modules\modules.py", line 151, in forward
    out_block = self.SELayer(out_block, weights)
  File "D:\Data\quickNAT_pytorch-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward() takes 2 positional arguments but 3 were given

I don't know whether this is a bug or a mistake on my part...

Question about `mri_convert`

Hello,

The readme states:

Before deploying our model you need to standardize the MRI scans. Use the following command from FreeSurfer
mri_convert --conform <input_volume.nii> <out_volume.nii>
The above command standardizes the alignment for QuickNAT and re-samples to isotropic resolution (256x256x256) with some contrast enhancement. It takes about one second per volume.

Assuming FreeSurfer is not available, could you please elaborate on the pre-processing steps?

  1. What exactly does it mean to standardize alignment?
  2. Is the contrast enhancement necessary?
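If FreeSurfer is unavailable, nibabel (3.0 or newer) provides a conform helper that performs a comparable reorientation and resampling to a 256x256x256 grid at 1 mm isotropic voxels; note this is only an approximation of mri_convert --conform (it does not reproduce FreeSurfer's rescaling of intensities to uint8), so results may differ:

    import nibabel as nib
    from nibabel.processing import conform

    img = nib.load("input_volume.nii")
    # Resample to 256x256x256 at 1 mm isotropic voxels, reoriented to LIA,
    # the orientation FreeSurfer's conformed space uses.
    conformed = conform(img, out_shape=(256, 256, 256),
                        voxel_size=(1.0, 1.0, 1.0), orientation="LIA")
    nib.save(conformed, "out_volume.nii")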

"EOL while scanning string literal" for .nii file

When I tried to load a .nii file from the ADNI database using the eval_bulk mode of run.py, I got an error while reading the file. Going through the code line by line, I found that the error originates in the load_and_preprocess_eval function in data_utils.py, specifically at the volume_nifty = nb.load(file_path[0]) call. When running the same command for the same file path in the terminal, I get the following error:

EOL while scanning string literal

Could you please suggest a solution to this issue? Could it be due to my file? The file is a brain extracted using FSL.
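"EOL while scanning string literal" is the SyntaxError Python 3.6 raises for a string literal that is missing its closing quote, and run.py parses settings_eval.ini with ast.literal_eval (see the traceback in the device issue above), so a likely culprit is an unbalanced quote around the file path rather than the .nii file itself. A quick illustration (not a diagnosis of your data):

    import ast

    ast.literal_eval('"path/to/file.nii"')   # OK: returns the string
    ast.literal_eval('"path/to/file.nii')    # SyntaxError: EOL while scanning string literal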

What does the squeeze_and_excitation module do?

In quicknat.py the module 'squeeze_and_excitation' is imported, but its only use is print(se.SELayer(params['se_block'])). What is the purpose of this module?
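For readers unfamiliar with the term: a squeeze-and-excitation (SE) layer learns per-channel attention weights from globally pooled features and rescales the feature maps with them, and judging from the training traceback above, the actual SE layers in QuickNAT are applied inside the encoder/decoder blocks coming from nn_common_modules, so the print call is only informational. A generic channel-SE sketch (illustrative, not the repository's exact implementation):

    import torch
    import torch.nn as nn

    class ChannelSELayer(nn.Module):
        """Generic channel squeeze-and-excitation block (illustrative only)."""
        def __init__(self, num_channels, reduction_ratio=2):
            super().__init__()
            reduced = num_channels // reduction_ratio
            self.fc1 = nn.Linear(num_channels, reduced)
            self.fc2 = nn.Linear(reduced, num_channels)

        def forward(self, x):                        # x: (N, C, H, W)
            squeeze = x.mean(dim=(2, 3))             # global average pooling -> (N, C)
            weights = torch.sigmoid(self.fc2(torch.relu(self.fc1(squeeze))))
            return x * weights.view(x.size(0), x.size(1), 1, 1)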
