

Home Page: https://arxiv.org/abs/2201.03230

License: Apache License 2.0

mri-reconstruction transformer-models swin-transformer deep-learning computer-vision

swinmr's Introduction

SwinMR

by Jiahao Huang ([email protected])

This is the official implementation of our proposed SwinMR:

Swin Transformer for Fast MRI

Please cite:

@article{HUANG2022281,
    title = {Swin transformer for fast MRI},
    journal = {Neurocomputing},
    volume = {493},
    pages = {281-304},
    year = {2022},
    issn = {0925-2312},
    doi = {https://doi.org/10.1016/j.neucom.2022.04.051},
    url = {https://www.sciencedirect.com/science/article/pii/S0925231222004179},
    author = {Jiahao Huang and Yingying Fang and Yinzhe Wu and Huanjun Wu and Zhifan Gao and Yang Li and Javier Del Ser and Jun Xia and Guang Yang},
    keywords = {MRI reconstruction, Transformer, Compressed sensing, Parallel imaging},
    abstract = {Magnetic resonance imaging (MRI) is an important non-invasive clinical tool that can produce high-resolution and reproducible images. However, a long scanning time is required for high-quality MR images, which leads to exhaustion and discomfort of patients, inducing more artefacts due to voluntary movements of the patients and involuntary physiological movements. To accelerate the scanning process, methods by k-space undersampling and deep learning based reconstruction have been popularised. This work introduced SwinMR, a novel Swin transformer based method for fast MRI reconstruction. The whole network consisted of an input module (IM), a feature extraction module (FEM) and an output module (OM). The IM and OM were 2D convolutional layers and the FEM was composed of a cascaded of residual Swin transformer blocks (RSTBs) and 2D convolutional layers. The RSTB consisted of a series of Swin transformer layers (STLs). The shifted windows multi-head self-attention (W-MSA/SW-MSA) of STL was performed in shifted windows rather than the multi-head self-attention (MSA) of the original transformer in the whole image space. A novel multi-channel loss was proposed by using the sensitivity maps, which was proved to reserve more textures and details. We performed a series of comparative studies and ablation studies in the Calgary-Campinas public brain MR dataset and conducted a downstream segmentation experiment in the Multi-modal Brain Tumour Segmentation Challenge 2017 dataset. The results demonstrate our SwinMR achieved high-quality reconstruction compared with other benchmark methods, and it shows great robustness with different undersampling masks, under noise interruption and on different datasets. The code is publicly available at https://github.com/ayanglab/SwinMR.}
}
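The multi-channel loss described in the abstract weights the reconstruction error by the coil sensitivity maps. A minimal numpy sketch of that idea — the function name and the plain L1 formulation here are illustrative assumptions, not the exact loss from the paper:

```python
import numpy as np

def multi_channel_l1(recon, target, sens_maps):
    """Illustrative multi-channel L1 loss: project both images onto each
    coil with the sensitivity maps, then average the absolute error."""
    # recon/target: (H, W) complex images; sens_maps: (C, H, W) coil maps
    coil_recon = sens_maps * recon[None, ...]
    coil_target = sens_maps * target[None, ...]
    return float(np.mean(np.abs(coil_recon - coil_target)))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
maps = rng.standard_normal((4, 8, 8)) + 1j * rng.standard_normal((4, 8, 8))
print(multi_channel_l1(x, x, maps))  # identical images -> 0.0
```

Because the error is measured in the individual coil images rather than only in the combined image, regions where a coil is sensitive contribute more, which is the mechanism the paper credits for preserving textures and details.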

[Figure] Overview of SwinMR

Highlight

  • A novel Swin transformer-based model for fast MRI reconstruction was proposed.
  • A multi-channel loss with sensitivity maps was proposed for reserving more details.
  • Comparison studies were performed to validate the robustness of our SwinMR.
  • A pre-trained segmentation network was used to validate the reconstruction quality.

Requirements

matplotlib==3.3.4
opencv-python==4.5.3.56
Pillow==8.3.2
pytorch-fid==0.2.0
scikit-image==0.17.2
scipy==1.5.4
tensorboardX==2.4
timm==0.4.12
torch==1.9.0
torchvision==0.10.0

Training and Testing

Use the different option files (JSON) to train and test the different networks.

Calgary Campinas multi-channel dataset (CC)

To train SwinMR (PI) on CC:

python main_train_swinmr.py --opt ./options/example/train_swinmr_CCpi_G1D30.json

To train SwinMR (nPI) on CC:

python main_train_swinmr.py --opt ./options/example/train_swinmr_CCnpi_G1D30.json

To test SwinMR (PI) on CC:

python main_test_swinmr_CC.py --opt ./options/example/test/test_swinmr_CCpi_G1D30.json

To test SwinMR (nPI) on CC:

python main_test_swinmr_CC.py --opt ./options/example/test/test_swinmr_CCnpi_G1D30.json
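Each command reads its settings from the JSON file passed via `--opt`. A rough sketch of how such a flag is typically consumed — the option keys shown are hypothetical, not the repository's actual schema:

```python
import argparse
import json
import os
import tempfile

def load_options(argv):
    """Parse --opt and load the JSON option file it points to."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--opt", type=str, required=True,
                        help="path to the training/testing option JSON")
    args = parser.parse_args(argv)
    with open(args.opt) as f:
        return json.load(f)

# Demo with a hypothetical option file written on the fly:
path = os.path.join(tempfile.gettempdir(), "train_example.json")
with open(path, "w") as f:
    json.dump({"task": "swinmr_pi", "mask": "G1D30"}, f)
opt = load_options(["--opt", path])
print(opt["mask"])  # -> G1D30
```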

This repository is based on:

SwinIR: Image Restoration Using Swin Transformer (code and paper);

Swin Transformer: Hierarchical Vision Transformer using Shifted Windows (code and paper).

swinmr's People

Contributors

ayanglab, jiahaohuang99


swinmr's Issues

A question about the CC dataset

In the CC dataset, some pixel values are very large, around 4×10^4. As a result, when I run the network architecture from the paper under my own framework, the loss values become very large, yet this problem does not arise with the source code provided by the authors. Could you tell me whether the dataset was preprocessed, and if so, how?
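For raw magnitude data with values around 4×10^4, a common preprocessing step is per-volume max normalisation; whether the authors used exactly this is not confirmed here. A minimal numpy sketch of that assumption:

```python
import numpy as np

def normalise_volume(vol):
    """Scale a volume into [0, 1] by its own maximum magnitude.
    (Illustrative only; the repository's actual preprocessing may differ.)"""
    vmax = np.abs(vol).max()
    return vol / vmax if vmax > 0 else vol

vol = np.array([[0.0, 2.0e4], [4.0e4, 1.0e4]])
norm = normalise_volume(vol)
print(norm.max())  # -> 1.0
```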

A question about the input data

Hello, I noticed that you convert the fully sampled data to real values before undersampling it. May I ask why this is done?

Dataset preprocessing

Hi, did the authors apply any special preprocessing to the CC dataset?
Could you upload the trained checkpoints so that readers can run the tests?
Also, the paper reports PSNR = 32.11 for SwinMR (nPI) with the G1D30 mask — is this value unexpectedly low?
Thanks 🙏

A small question and some feedback

Hello, thank you for this work. I have just started learning MRI reconstruction, so I do not quite understand the role of ESPIRiT in the blue box. Also, is the segmentation network used for the downstream task initialised with the weights of the reconstruction network SwinMR? Finally, one piece of feedback: in the figure, are the IM and OM labels in the red box swapped?

A small question

I am very interested in this article and have cited it in a comparison against the SwinMR approach, but no trained model is provided in the source code. Could the authors send me the trained model and the necessary information about it? I promise it will be used only for research purposes.

Working with non-square images

If I want to work on non-square images, for instance with dimensions 30×436, where do I need to make changes?
Does the model architecture need any change, or is changing the configuration sufficient?
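Swin-style window attention generally requires the spatial dimensions to be divisible by the window size, so one common workaround for non-square inputs such as 30×436 is to pad up to the nearest multiple and crop the output afterwards. A hedged numpy sketch — the window size of 8 is an assumption, not the repository's setting:

```python
import numpy as np

def pad_to_multiple(img, multiple=8):
    """Zero-pad H and W up to the next multiple of `multiple`;
    return the padded image and the original shape for cropping back."""
    h, w = img.shape
    ph = (-h) % multiple
    pw = (-w) % multiple
    return np.pad(img, ((0, ph), (0, pw))), (h, w)

img = np.ones((30, 436))
padded, orig = pad_to_multiple(img)
print(padded.shape)  # -> (32, 440)
cropped = padded[:orig[0], :orig[1]]  # back to (30, 436) after inference
```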


Mask generation

Hello, could you share how the undersampling masks are generated? Thanks.
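The G1D30 setting in the paper denotes a Gaussian 1D mask keeping roughly 30% of k-space columns. The repository's exact generator is not reproduced here, but a common recipe draws column indices with a Gaussian density centred on the low frequencies; a sketch under that assumption:

```python
import numpy as np

def gaussian_1d_mask(shape, ratio=0.30, std_frac=0.15, seed=0):
    """Column mask keeping ~ratio of k-space columns, sampled with a
    Gaussian density centred on the middle (low frequencies).
    Illustrative only; parameters are assumptions."""
    h, w = shape
    rng = np.random.default_rng(seed)
    centre = w / 2.0
    weights = np.exp(-0.5 * ((np.arange(w) - centre) / (std_frac * w)) ** 2)
    weights /= weights.sum()
    n_keep = int(round(ratio * w))
    cols = rng.choice(w, size=n_keep, replace=False, p=weights)
    mask = np.zeros((h, w), dtype=np.float32)
    mask[:, cols] = 1.0
    return mask

mask = gaussian_1d_mask((256, 256))
print(mask.mean())  # close to 0.30: ~30% of columns kept
```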
