xinario / sagan
Sharpness-aware Low Dose CT Denoising Using Conditional Generative Adversarial Network


SAGAN

Update 2019.01.22

For those who want to use the piglet dataset for CT denoising research and use this work as a baseline, please refer to this issue for details on how I used it.

Update 2018.03.27

The piglet dataset we used in the publication is now open for download! Please find the link on my personal webpage. (Note: for non-commercial use only)

This repo provides the trained denoising model and testing code for low dose CT denoising as described in our paper. Here are some randomly picked denoised results on low dose CTs from this kaggle challenge.

How to use

To get the best results from this repo, please make sure the dose level of your LDCTs is higher than 0.71 mSv.

Prerequisites

  • Linux or OSX
  • NVIDIA GPU
  • Python 3.x
  • Torch7

Getting Started

  • Install Torch7
  • Install torch packages nngraph and hdf5
luarocks install nngraph
luarocks install hdf5
  • Install Python 3.x (recommend using Anaconda)
  • Install python dependencies
pip install -r requirements.txt
  • Clone this repo:
git clone git@github.com:xinario/SAGAN.git
cd SAGAN
  • Download the pretrained denoising model from here and put it into the "checkpoints/SAGAN" folder

  • Prepare your test set with the provided python script

#make a directory inside the root SAGAN folder to store your raw dicoms, e.g. ./dicoms
mkdir dicoms
#then put all your low dose CT images in dicom format into this folder and run
python pre_process.py  --input ./dicoms --output ./datasets/experiment/test
#all your test images will now be saved in uint16 png format inside folder ./datasets/experiment/test
  • Test the model:
DATA_ROOT=./datasets/experiment name=SAGAN which_direction=AtoB phase=test th test.lua
#the results are saved in ./result/SAGAN/latest_net_G_test/result.h5
  • Display the result with a specific window, e.g. abdomen. Window type can be changed to 'abdomen', 'bone' or 'none'
python post_process.py --window 'abdomen'

Now you can view the results by opening the HTML file index.html in the root folder.
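For reference, the uint16 conversion performed by the pre-processing step can be sketched as follows. This is a minimal sketch assuming numpy; the scale factor of 22 per Hounsfield unit comes from the FAQ below, and `hu_to_uint16` is a hypothetical helper name, not a function in this repo:

```python
import numpy as np

def hu_to_uint16(hu):
    """Map Hounsfield units to the uint16 range expected by the model.

    Follows the (HU + 1024) * 22 conversion mentioned in the FAQ,
    clipped to the valid uint16 range.
    """
    scaled = (np.asarray(hu, dtype=np.float64) + 1024.0) * 22.0
    return np.clip(scaled, 0, 65535).astype(np.uint16)

# Air (-1024 HU) maps to 0; water (0 HU) maps to 22528.
print(hu_to_uint16([-1024, 0]))
```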

Citations

If you find it useful and are using the code/model/dataset provided here in a publication, please cite our paper:

Yi, X. & Babyn, P. J Digit Imaging (2018). https://doi.org/10.1007/s10278-018-0056-0

Acknowledgements

Code borrows heavily from pix2pix


sagan's Issues

DATA_ROOT: command not found

Hello, while reproducing your work I followed the steps on your GitHub page: I installed Torch and the other packages, then instead of cloning the project I downloaded it directly and put it on the server (a sagan-master folder). I could not download the pretrained denoising model, I suspect because the link is not accessible from China. Preparing the test set succeeded, showing: 318 files in the folder. But when I test the model, it shows: DATA_ROOT: command not found. I wonder if this is a problem with the pretrained model. Do you have any solution? Thanks!

sharpness detection

I wonder why an independent network is trained for sharpness detection. Why not calculate the sharpness difference between synthesized images and real images directly, and add that sharpness loss to the objective function?

Problems in poisson noise simulation

Hello! I am trying to process my own DICOM file using your MATLAB code. However, the DICOM file I have doesn't have the PixelPaddingValue attribute. Is there another attribute that can replace PixelPaddingValue? Is this attribute necessary if I want to add Poisson noise to the DICOM file? Thanks!

Running problem?

When I was running the code, I ran into the following problem. Can you help me?

DATA_ROOT=./datasets/experiment name=SAGAN which_direction=AtoB phase=test th test.lua
{
input_nc : 3
results_dir : "./results/"
Size : 512
batchSize : 1
phase : "test"
fineSize : 512
aspect_ratio : 1
how_many : "all"
gpu : 1
nThreads : 1
DATA_ROOT : "./datasets/experiment"
serial_batch_iter : 1
preprocess : "regular"
norm : "batch"
which_epoch : "latest"
name : "SAGAN"
cudnn : 1
serial_batches : 1
flip : 0
output_nc : 3
which_direction : "AtoB"
checkpoints_dir : "/home/angus/Documents/project/Medical_image/SAGAN/checkpoints"
display_id : 200
display : 0
}
Random Seed: 7774
#threads...1
Starting donkey with id: 1 seed: 7775
table: 0x41c20428
./datasets/experiment
trainCache /home/angus/Documents/rfid-project/Medical_image/SAGAN/cache/_home_angus_Documents_project_Medical_image_SAGAN_datasets_experiment_test_trainCache.t7
Creating train metadata
serial batch:, 1
table: 0x41dc8138
running "find" on each class directory, and concatenate all those filenames into a single file containing all image paths for a given class
now combine all the files to a single large file
load the large concatenated list of sample paths to self.imagePath
cmd..wc -L '/tmp/lua_RsZajh' |cut -f1 -d' '
64 samples found..... 0/64 .....................] ETA: 0ms | Step: 0ms
Updating classList and imageClass appropriately
[=================== 1/1 =====================>] Tot: 0ms | Step: 0ms
Cleaning up temporary files
Dataset Size: 64
checkpoints_dir /home/angus/Documents/project/Medical_image/SAGAN/checkpoints
/home/angus/torch/install/bin/luajit: /home/angus/torch/install/share/lua/5.1/threads/threads.lua:315: /home/angus/torch/install/share/lua/5.1/threads/threads.lua:183: [thread 1 callback] /home/angus/torch/install/share/lua/5.1/dok/inline.lua:738: <image.scale> could not find valid dest size
stack traceback:
[C]: in function 'error'
/home/angus/torch/install/share/lua/5.1/dok/inline.lua:738: in function 'error'
/home/angus/torch/install/share/lua/5.1/image/init.lua:718: in function 'scale'
.../project/Medical_image/SAGAN/data/donkey_folder.lua:40: in function 'preprocessAandB'
.../project/Medical_image/SAGAN/data/donkey_folder.lua:256: in function 'sampleHookTrain'
...uments/project/Medical_image/SAGAN/data/dataset.lua:342: in function 'getByClass'
...uments/project/Medical_image/SAGAN/data/dataset.lua:375: in function <...uments/project/Medical_image/SAGAN/data/dataset.lua:367>
[C]: in function 'xpcall'
/home/angus/torch/install/share/lua/5.1/threads/threads.lua:234: in function 'callback'
/home/angus/torch/install/share/lua/5.1/threads/queue.lua:65: in function </home/angus/torch/install/share/lua/5.1/threads/queue.lua:41>
[C]: in function 'pcall'
/home/angus/torch/install/share/lua/5.1/threads/queue.lua:40: in function 'dojob'
[string " local Queue = require 'threads.queue'..."]:15: in main chunk

Why was the sharpness detection network trained?

I enjoyed reading your high-quality paper. I think it will be very helpful for my task.

But in the meantime, I have a question about the "sharpness detection network".

There seems to be a reason for additionally training the sharpness network instead of using the LBP-based sharpness metric directly for loss calculation.

May I know the reason for that? Or, if you have any experiments, please let me know, I would be very grateful.

Thanks

FAQ regarding the usage of the dataset

  1. Which are the 850 images mentioned in your paper?
    Inside each experiment folder (SE0, SE1, ..., SE28), there are 906 images. To get the exact 850 images, first reorder the image sequence according to the [SliceLocation] field of the DICOM header (sort in ascending order), which arranges the images from pelvis to head. Then keep slices 21 to 870 and discard the rest, which contain almost no content.

  2. What is the train/test split?
    After obtaining the 850 ordered images, test images were selected at an interval of 6, i.e. slice 1, 7, 13 … 847.

  3. How do you compute the PSNR and SSIM for the simulated and real dataset?
    For the simulated data, the generated noisy image was converted to uint8 using the abdomen window (center: 40, width: 400). Training and evaluation were all conducted on these narrow-range 8-bit images.
    For the piglet data, the original 16-bit values were used; training and evaluation were done on 16-bit images.

  4. What should I do when my test dicom data is not uint16?
    In this case, the correct way to convert the data in preprocess.py is ([Hounsfield units]+1024)*22
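The slice selection, train/test split, and abdomen windowing described in this FAQ can be sketched as follows. This is a minimal sketch assuming numpy; the function names are hypothetical and not part of this repo:

```python
import numpy as np

def select_slices(num_images=906):
    """Keep slices 21-870 (0-based indices 20-869) after sorting by
    SliceLocation, yielding the 850 images used in the paper."""
    ordered = list(range(num_images))  # assumed already sorted by SliceLocation
    return ordered[20:870]

def train_test_split(slices):
    """Test images are every 6th slice of the 850: 1, 7, 13, ..., 847."""
    test = slices[::6]
    train = [s for i, s in enumerate(slices) if i % 6 != 0]
    return train, test

def apply_window(hu, center=40.0, width=400.0):
    """Convert HU values to uint8 with a display window
    (abdomen: center 40, width 400)."""
    low = center - width / 2.0
    frac = np.clip((np.asarray(hu, dtype=np.float64) - low) / width, 0.0, 1.0)
    return (frac * 255.0).astype(np.uint8)

kept = select_slices()
train, test = train_test_split(kept)
print(len(kept), len(test))  # 850 kept slices, 142 of them for testing
```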

unable to locate HDF5 header file at hdf5.h stack traceback:

torch/install/share/lua/5.1/hdf5/ffi.lua:42: Error: unable to locate HDF5 header file at hdf5.h
stack traceback:
[C]: in function 'error'
/home/hejian/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
test.lua:10: in main chunk
[C]: in function 'dofile'
...jian/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

Has anyone else run into this? How did you solve it? I'm trying to run this on Ubuntu. I've tried quite a few solutions from the web, but it still errors out...

GPU issue

Hi,
I have an issue when I run the command "python pre_process.py -s 1 -i ./dicoms -o ./datasets/experiment/test": the prompt responds "warnings.warn(msg)
0 files in the folder", even though I have lots of DICOM-format images and the path is correct. Do you have any idea what the problem is?
My laptop only has an AMD GPU...
Could that be causing the error?

Is there a solution or a modification to run it on the CPU?
Thanks a lot
