
rsummers11 / CADLab


Imaging Biomarkers and Computer-Aided Diagnosis Laboratory

Home Page: https://www.cc.nih.gov/meet-our-doctors/rsummers.html

Languages: MATLAB 3.66%, C++ 25.27%, Shell 1.38%, Python 10.02%, Makefile 13.42%, Cuda 10.96%, CMake 3.97%, C 21.24%, Fortran 0.07%, M 0.01%, Perl 0.02%, TeX 0.68%, Batchfile 0.02%, Tcl 0.01%, Scala 0.12%, Lua 0.49%, Jupyter Notebook 8.54%, HTML 0.01%, CSS 0.04%, Cython 0.07%

cadlab's People

Contributors

caldwellwg, delton137, holgerroth, jiamin1975, jiamin75, khcs, ricbl, rsummers11, tangyoubao, tangyuxing, tmathai, viggin


cadlab's Issues

[3DCE] Sensitivity of Different Lesion Sizes

I saw the sensitivity at 4 FPs for the three lesion-diameter ranges (Tables 1 and 4) reported in the 3DCE paper, but I can't find the handling of specific diameter ranges (e.g. <10 mm) in the released code, and I am confused about how to calculate it.
Do you first select the GT lesion bounding boxes that fall within the diameter range and then compare them with all predicted bounding boxes? Or is there some special treatment of the predicted boxes, such as not counting boxes outside the range as false positives? (A sketch of the first option follows.)
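A minimal sketch of the first option, assuming each GT lesion carries a diameter in mm; the function and field names are hypothetical, not from the released 3DCE code:

    def sensitivity_for_size_range(gt_lesions, pred_boxes, d_min, d_max, evaluate_froc):
        # keep only GT lesions whose long diameter falls in [d_min, d_max)
        gt_subset = [g for g in gt_lesions if d_min <= g['diameter_mm'] < d_max]
        # the predictions are left untouched; whether matches to the excluded
        # GT lesions are ignored or counted as FPs is exactly the ambiguity above
        return evaluate_froc(gt_subset, pred_boxes)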

roi_pooling usage question

Hi, thank you! I compiled roi_pooling.so with nvcc under CUDA 8.0, and it works fine in faster-rcnn.
I copied it into this project as shown, but something goes wrong, as below:
ImportError: /mnt/hgfs/DeepLesion/Code/CADLab-master/CADLab-master/LesaNet/roi_pooling/_ext/roi_pooling/_roi_pooling.so: undefined symbol: state

Could you please give some advice or directions?
Thank you very much!

About the LymphNodeRFCNNPipeline project

Hi! Sorry to bother you. I'm trying to rebuild your project in pure Python with PyTorch, because my MATLAB is poor. I succeeded with the preprocessing (the data augmentation) using pydicom and NumPy, and then built a CNN similar to yours. But when I copied your parameters for training, the results weren't as good as yours, so I suspect something is wrong in my training procedure and parameters.
So, if possible, could you tell me how you trained the net? In particular, how you balanced the data, and how many patches go into a batch and an epoch. Did you put all the patches (N = Ns·Nr·Nt) into a single epoch? I know it's a long time to recall, but I'm still waiting for your reply. Thanks! (A generic balancing sketch follows.)
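Not the authors' recipe, but a common way to balance classes in PyTorch (which the rebuild above uses) is a WeightedRandomSampler; dataset and patch_labels below are placeholders:

    import torch
    from torch.utils.data import DataLoader, WeightedRandomSampler

    labels = torch.as_tensor(patch_labels)        # one class label per patch (placeholder)
    class_counts = torch.bincount(labels)
    weights = 1.0 / class_counts[labels].float()  # rarer class -> sampled more often
    sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)  # dataset is a placeholder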

[lesion_detector_3DCE] resnet

I'm wondering whether you have ever tried changing the backbone from VGG-16 to ResNet-50 or ResNet-101. It seems difficult: with RPN_FEAT_STRIDE set to 16, ResNet would reduce the resolution of the feature map, and removing ResNet's first pooling layer would greatly increase the computation cost. But ResNet has been shown to outperform VGG in many detection cases.
Do you have any opinion on this? (One common workaround is sketched below.)
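One common workaround (a generic torchvision sketch, not code from the MXNet-based lesion_detector_3DCE) is to keep the effective feature stride at 16 by replacing the last ResNet stage's stride with dilation, rather than removing the first pooling layer:

    import torch
    import torchvision

    # stage 5: stride 2 -> dilation 2, so the overall feature stride stays 16
    backbone = torchvision.models.resnet50(replace_stride_with_dilation=[False, False, True])
    features = torch.nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool + fc
    x = torch.randn(1, 3, 512, 512)
    print(features(x).shape)  # torch.Size([1, 2048, 32, 32]) -> stride 16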

How can I get the JSRT mask?

Hello.
I recently wanted to reproduce your research, but I have encountered difficulties with the datasets. Could you share the processed JSRT and Montgomery datasets with me? If that is not convenient, could you tell me how I can get the JSRT mask?
Thank you.

[lesion_detector_3DCE] visualize detected boxes

Hi,
How can I visualize the detected results? I changed the parameter default.val_vis = True in rcnn/config.py, but I got the error below while running python rcnn/tools/test.py:

Traceback (most recent call last):
File "rcnn/tools/test.py", line 117, in
test_net(default.e2e_prefix, default.begin_epoch)
File "rcnn/tools/test.py", line 90, in test_net
default.val_max_box, default.val_thresh)
File "rcnn/tools/test.py", line 75, in test_rcnn
acc = pred_eval(predictor, test_data, imdb, vis=vis, max_box=max_box, thresh=thresh)
File "/mnt/ssd500GB/project/CADLab/lesion_detector_3DCE/rcnn/tools/../../rcnn/core/tester.py", line 225, in pred_eval
vis_all_detection(data_dict['data'].asnumpy(), boxes_this_image, imdb.classes, scale)
File "/mnt/ssd500GB/project/CADLab/lesion_detector_3DCE/rcnn/tools/../../rcnn/core/tester.py", line 296, in vis_all_detection
im = image.transform_inverse(im_array, config.PIXEL_MEANS)
File "/mnt/ssd500GB/project/CADLab/lesion_detector_3DCE/rcnn/tools/../../rcnn/fio/image.py", line 158, in transform_inverse
im += pixel_means[[2, 1, 0]]
IndexError: index 2 is out of bounds for axis 1 with size 1

The code works if I set default.val_vis = False.

Could you please advise?

Thanks!
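A hypothetical workaround, assuming the IndexError comes from transform_inverse flipping a 3-channel BGR mean while 3DCE stores a single-channel CT mean in config.PIXEL_MEANS (a patch inside that function, not an official fix):

    import numpy as np

    pm = np.asarray(pixel_means)
    if pm.size >= 3:
        im += pixel_means[[2, 1, 0]]   # original BGR -> RGB path
    else:
        im += pm.ravel()[0]            # grayscale: just add the scalar mean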

How to use the trained model?

I am impressed by your work, but I ran into problems when using the trained model: no matter what the input is, I get the same output.

I also wanted to train the model myself, but training stops partway and I have no idea why.

==> online epoch # 1 [batchSize = 50]	
 [==================== 2461/2461 ==============>]  Tot: 37m43s | Step: 749ms    
Train accuracy: 0.00 %	 time: 2263.43 s	
==> testing	
Test accuracy:	47.070707070707	
==> online epoch # 2 [batchSize = 50]	
Killed................ 101/2461 ................]  ETA: 2h9m | Step: 3s289ms

Can you give me some advice?
Roger

Error on multi-GPU when running MULAN

Hello, I ran your MULAN_universal_lesion_analysis project on 2 Titan GPUs, but I got the error "RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 0 does not equal 1 (while checking arguments for cudnn_convolution)", as in the screenshot. The error occurs in densenet_custom_trunc.py at x = self.conv0(x). I debugged it and found that self.conv0 is on device 0 and gets copied to device 1, but x stays on device 0 and is never placed on device 1. Could you make the code run on multiple GPUs, or help me with this error? Thank you! (A generic sketch follows.)
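A generic multi-GPU sketch, not MULAN's own launch code: with torch.nn.DataParallel, each input batch is scattered across the GPUs, so any tensor created inside forward() must follow the replica's device (e.g. x.new_zeros(...) or t.to(x.device)) rather than a hard-coded cuda:0; model and images below are placeholders:

    import torch.nn as nn

    model = nn.DataParallel(model, device_ids=[0, 1]).cuda()
    output = model(images.cuda())  # inputs are scattered to both GPUs automatically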

getting segmentation fault in MULAN

Hi,

While running validation with the MULAN code, I get a segmentation fault (core dumped). Could anyone please help resolve it, or suggest what might be causing the segmentation fault?

I got an MXNetError(py_str(_LIB.MXGetLastError()))

INFO:root:providing maximum shape [('data', (2, 3, 512, 512)), ('gt_boxes', (2, 5, 5))] [('label', (2, 61440)), ('bbox_target', (2, 60, 64, 64)), ('bbox_weight', (2, 60, 64, 64))]
Error in proposal.infer_shape: Traceback (most recent call last):
File "F:\ProgramData\Anaconda3\envs\MXnet\lib\site-packages\mxnet\operator.py", line 751, in infer_shape_entry
array('I', rshape[i])),
TypeError: array item must be integer

infer_shape error. Arguments:
data: (2, 3, 288, 349)
im_info: (2, 3)
gt_boxes: (2, 1, 5)
label: (2, 23220)
bbox_target: (2, 60, 36, 43)
bbox_weight: (2, 60, 36, 43)

I encountered these problems, but I am not proficient in MXNet. I use MXNet 1.2.1, and it doesn't look like a version problem. I sincerely hope I can get your help with this issue.

out of memory

Hey,
I tried to run "train.lua" in txtrnn, but it failed with "cuda runtime error (2) : out of memory".

I have a MacBook Pro with an NVIDIA GeForce GT 750M (2048 MB).

It fails when the program tries to load the CNN model at line 230:

 local immodel = torch.load(opt.immodelloc)

Can you give any advice on how to solve this issue?

Many Thanks

How to get the word labels for the corresponding Patient_ID in the DeepLesion dataset

Hi,
I noticed that the id column in /program_data/text_mined_labels_171_and_split.json is not the same as the Patient_ID of the images in the DeepLesion dataset.
So how can I find a connection between the two IDs, or otherwise get the word labels for a given patient's images?
Also, could you make the original diagnosis-report data public?

Thanks,

No libs in ccnet

Hi, there is no libs folder in the ccnet project. Could you provide it? Thanks.

error when running train.py

Hello,

For some reason I'm getting the following error when I run train.py. I have CUDA version 10.1. Any help would be really appreciated. Thanks.

Traceback (most recent call last):
File "train.py", line 21, in
from networks.xlsor import XLSor
File "/content/CADLab/Lung_Segmentation_XLSor/networks/xlsor.py", line 15, in
from libs import InPlaceABNSync
File "/content/CADLab/Lung_Segmentation_XLSor/libs/init.py", line 1, in
from .bn import ABN, InPlaceABN, InPlaceABNWrapper, InPlaceABNSync, InPlaceABNSyncWrapper
File "/content/CADLab/Lung_Segmentation_XLSor/libs/bn.py", line 15, in
from .functions import inplace_abn, inplace_abn_sync
File "/content/CADLab/Lung_Segmentation_XLSor/libs/functions.py", line 5, in
from . import _ext
File "/content/CADLab/Lung_Segmentation_XLSor/libs/_ext/init.py", line 3, in
from .__ext import lib as _lib, ffi as _ffi
ImportError: /content/CADLab/Lung_Segmentation_XLSor/libs/_ext/__ext.so: undefined symbol: __cudaPushCallConfiguration

ImportError at from ..cython.cpu_nms import cpu_nms

As I run train.sh, I get the following error:
ImportError: .........cython/cpu_nms.so: undefined symbol: PyFPE_jbuf

Please see the attached screenshot; I could not copy-paste the text.

Could you please help me fix this?
Thanks.

How to run this project?

Hey,

I read the CVPR 2016 paper and tried to read main.ipynb to understand how to run this library.
Yet I couldn't figure out how to run the whole project, because I didn't see the commands for RNN training and the other files.

My question is: do you have a more comprehensive main file for understanding the whole flow?
Thanks

Error when I run 3DCE project with python=3.5

@viggin Hello, sorry to bother you. I get the error "ImportError: dynamic module does not define module export function (PyInit_bbox)" when my Python version is 3.5, as in the attached photo. I'm looking forward to your answer. Thank you!

Regarding the code for processing the images: I am stuck at the processing stage. If anyone has the code, please send a link; that would be helpful.

Write a script to process the images:

  1. Convert the image to int32 format, then subtract 32768 from the pixel intensities to obtain the original Hounsfield unit (HU) values (generally about -1000 to 1000, https://en.wikipedia.org/wiki/Hounsfield_scale).
  2. Do intensity windowing (https://radiopaedia.org/articles/windowing-ct) on the HU values, i.e., convert the intensities in a certain range ("window") to 0-255 for viewing. To view different structures (lung, soft tissue, bone, etc.), we need different windows. The column "DICOM_windows" in DL_info.csv provides the default window for each image. For example, if the min and max values of a window are A and B, then the windowed intensity I should be I = min(255, max(0, (HU - A)/(B - A) * 255)).
  3. Save the windowed image to 8-bit image files. This is how the files in Key_slices.zip were generated.
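A minimal sketch of these three steps, assuming the images are DeepLesion's 16-bit PNGs; the file path and the window values are illustrative:

    import numpy as np
    from PIL import Image

    def load_hu(png_path):
        # step 1: int32, then subtract 32768 to recover Hounsfield units
        return np.array(Image.open(png_path)).astype(np.int32) - 32768

    def apply_window(hu, a, b):
        # step 2: I = min(255, max(0, (HU - A)/(B - A) * 255))
        return np.clip((hu - a) / (b - a) * 255.0, 0, 255).astype(np.uint8)

    # step 3: save as an 8-bit image (how Key_slices.zip was generated)
    hu = load_hu('000001_01_01/103.png')  # illustrative path
    Image.fromarray(apply_window(hu, -175, 275)).save('103_windowed.png')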

[MULAN] Will the images per batch influence the detection accuracy?

Recently I have been trying to reproduce the results of your paper. I set images per batch to 2 and kept the other values, and the mean detection accuracy is only about 81%.

The three branches are all set to true and the additional features are added too. When images per batch is 3, the accuracy increases slightly; I didn't increase it further due to the limits of my GPU. However, the tagging accuracy and the segmentation accuracy are close to the paper's results whatever the value is.

Should I change the learning rate or something else if my images per batch differs from yours? (A common heuristic is sketched below.)
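One common heuristic is the linear scaling rule (a general practice, not anything documented for MULAN): scale the learning rate in proportion to the effective batch size. The base values below are illustrative:

    base_lr, base_images_per_batch = 0.004, 3   # illustrative, not MULAN's settings
    new_images_per_batch = 2
    new_lr = base_lr * new_images_per_batch / base_images_per_batch  # ~0.00267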

Here is part of my experiment result, with images per batch = 2. Thank you for your help!

1: 0.5968
2: 0.6065
3: 0.5630
4: 0.6423
5: 0.7836
6: 0.7941
7: 0.8005
8: 0.8003

Detection accuracy:
Sensitivity @ [0.5, 1, 2, 4, 8, 16] average FPs per image:
0.6932, 0.7852, 0.8571, 0.8998, 0.9281, 0.9471
mean of [0.5, 1, 2, 4]: 0.8088

Tagging accuracy:
hand-labeled tags:
m_AUC pc_F1 pc_Pr pc_Re wm_AUC ov_F1 ov_Pr ov_Re
0.9490 0.4588 0.4437 0.6207 0.9573 0.6270 0.5323 0.7625

Segmentation accuracy:
avg min distance (mm) from groundtruth RECIST points to predicted contours in GT boxes: 1.4376 ± 1.6287
error of lesion diameter (mm) estimated from predicted contours in GT boxes: 1.9919 ± 2.2651
total test time: 666.962127

[lesion_detector_3DCE] missing files

When running the lesion_detector_3DCE project, it raises many errors like:

import _init_paths
ImportError: No module named _init_paths

from rcnn.utils.timer import Timer
ImportError: No module named utils.timer

from rcnn.utils.load_data import load_gt_roidb, merge_roidb, filter_roidb
ImportError: No module named utils.load_data

from rcnn.utils.load_model import load_param

There is no subfolder named "utils" in lesion_detector_3DCE/rcnn/.
It seems these files were not released.

Data confusion

I really can't understand the data you provided. Under each "Patient_index_Study_index_Series_index" folder there are many images, but DL_info.csv does not include every one of them. Shouldn't there be just one image under a fixed "Patient_index_Study_index_Series_index" folder? Why are there many images, but only one is listed?

Please give me some guidance, thank you.

Data Window of Different Body Parts

I find that every key slice has its own "DICOM_windows" value, but the code uses [-1024, 3071] as the default.
Are different window values applied during the procedure, or do all the data use the same window?

Average precision and recall at small, medium, large

Hello,
I am trying to compare size-wise results for 3DCE, MULAN, and other state-of-the-art methods. I have two questions in this direction:

  1. Do small, medium, and large refer to the area of the detected lesions?
  2. Can I find results for both papers somewhere for the small, medium, and large ranges?

visualize the detected picture

Hello, I ran the project and got the final sensitivity, but how can I see the detected pictures with the boxes overlaid? Thank you for your answer.

TypeError: integer argument expected, got float & infer_shape error.

I try to run python rcnn/tools/train.py but get the error below. It's been bothering me for days...
'''
Error in proposal.infer_shape: Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/mxnet/operator.py", line 751, in infer_shape_entry
array('i', rshape[i])),
TypeError: integer argument expected, got float

infer_shape error. Arguments:
data: (2, 3, 288, 349)
im_info: (2, 3)
gt_boxes: (2, 1, 5)
label: (2, 23220)
bbox_target: (2, 60, 36, 43)
bbox_weight: (2, 60, 36, 43)
Traceback (most recent call last):
File "rcnn/tools/train.py", line 265, in
train_net(default)
File "rcnn/tools/train.py", line 196, in train_net
arg_params, aux_params = init_params(args, sym, train_data)
File "rcnn/tools/train.py", line 74, in init_params
arg_shape, out_shape, aux_shape = sym.infer_shape(**data_shape_dict)
File "/usr/local/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1076, in infer_shape
res = self._infer_shape_impl(False, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1210, in _infer_shape_impl
ctypes.byref(complete)))
File "/usr/local/lib/python3.6/site-packages/mxnet/base.py", line 253, in check_call
raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: Error in operator rois: [08:37:02] src/operator/custom/custom.cc:152: Check failed: reinterpret_cast( params.info->callbacks[kCustomOpPropInferShape])( shapes.size(), ndims.data(), shapes.data(), params.info->contexts[kCustomOpPropInferShape]):
Stack trace:
[bt] (0) /usr/local/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x4a148b) [0x7fe23f31e48b]
[bt] (1) /usr/local/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x895896) [0x7fe23f712896]
[bt] (2) /usr/local/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x26baef2) [0x7fe241537ef2]
[bt] (3) /usr/local/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x26bd7c5) [0x7fe24153a7c5]
[bt] (4) /usr/local/lib/python3.6/site-packages/mxnet/libmxnet.so(MXSymbolInferShapeEx+0x103e) [0x7fe2414a0ebe]
[bt] (5) /usr/local/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(ffi_call_unix64+0x4c) [0x7fe2aee943a6]
[bt] (6) /usr/local/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(ffi_call+0x3f1) [0x7fe2aee93101]
[bt] (7) /usr/local/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(_ctypes_callproc+0x2cf) [0x7fe2aee8a7bf]
[bt] (8) /usr/local/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(+0x9719) [0x7fe2aee81719]
'''
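If the cause is the usual Python 3 one (a shape dimension computed with true division '/', which array('i', ...) in mxnet/operator.py then rejects), a hypothetical fix is to force integer shapes in the custom proposal operator's infer_shape; the names below are illustrative:

    def as_int_shape(shape):
        # coerce possibly-float dims (e.g. from '/' on Python 3) to ints
        return tuple(int(d) for d in shape)

    # illustrative usage inside the custom op's infer_shape:
    # label_shape = as_int_shape((batch_size, num_anchors * feat_h * feat_w))
    # and prefer feat_h = height // stride over height / stride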

Issue with the training.

Hi,

I was training body_part_regressor from scratch on the DeepLesion dataset, but the network is not converging. Is any preprocessing required for the dataset other than enabling "IMG_IS_16bit: True" in config.yml? Attaching the training log.

Thank You,
Srinivas.

log.traintest.04-06_21-49-35_.txt

IoBB results

Hi, I saw the intersection over the detected bounding-box area ratio (IoBB) used in the 3DCE paper, but I couldn't find it in the released code. Also, I only found the sensitivity (%) at 4 FPs per image for the different methods when IoBB was used as the overlap criterion (Table 2). I'm curious about the sensitivity (%) at various FPs per image under the IoBB criterion. Could you show me these results? (The IoBB definition itself is sketched below.)

Looking forward to your reply. Thanks!
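For reference, a sketch matching the textual definition of IoBB (intersection over the detected box's own area), not code from the released repo; boxes are (x1, y1, x2, y2):

    def iobb(det, gt):
        ix1, iy1 = max(det[0], gt[0]), max(det[1], gt[1])
        ix2, iy2 = min(det[2], gt[2]), min(det[3], gt[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        det_area = (det[2] - det[0]) * (det[3] - det[1])
        return inter / det_area if det_area > 0 else 0.0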
