

deepseed-3d-convnets-for-pulmonary-nodule-detection's Issues

Question about FROCeval.py

I used your trained model and tested it, but the result is far from the one reported in your exciting paper. The detp I set is [0.3, 0.4, 0.5, 0.6, 0.7]. Can you help me?
Looking forward to your reply.

CUDA Out of Memory

Hello,
I would like to first thank you for this nice project.
I have some questions about your model:

  1. Can we test your model (detection only) on a CPU?
  2. I have a 24 GB GPU, but I still get a CUDA out-of-memory error when testing your model. Is there any way to make the testing part use 128×128×128 patches, as in training, instead of 208×208×208? (see the sketch after this list)
  3. Is there any alternative way to solve this problem?

Thanks for your help.
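On point 2: in DSB2017-style detectors like this one, the test-time patch size is set by the side length passed to SplitComb (in train_detector_se.py), independently of the training crop size, so shrinking it reduces peak GPU memory. A hedged sketch, following the grt123/DSB2017 convention this repo builds on (the exact variable names in this repo may differ, and the side length must stay divisible by the network's maximum stride):

    # Hypothetical sketch (names follow the grt123/DSB2017 convention):
    margin = 32
    sidelen = 64   # reduce from the default so each test patch fits in memory;
                   # must remain a multiple of config['max_stride']
    split_comber = SplitComb(sidelen, config['max_stride'], config['stride'],
                             margin, config['pad_value'])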

LIDC-IDRI Data Generator Issue

Hey there!
When trying to generate the preprocessed data for testing from the LIDC-IDRI dataset using prepareLIDC.py, it throws the following error. I tried hard but was not able to solve it. Thanks a lot in advance.

starting preprocessing
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/deep.int022/anaconda3/envs/dp_env_gpu/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/deep.int022/anaconda3/envs/dp_env_gpu/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "lidcnew.py", line 79, in savenpy
box = np.array([[np.min(xx),np.max(xx)],[np.min(yy),np.max(yy)],[np.min(zz),np.max(zz)]])
File "/home/deep.int022/anaconda3/envs/dp_env_gpu/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2618, in amin
initial=initial)
File "/home/deep.int022/anaconda3/envs/dp_env_gpu/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation minimum which has no identity
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "lidcnew.py", line 155, in
full_prep()
File "lidcnew.py", line 149, in full_prep
_=pool.map(partial_savenpy,range(N))
File "/home/deep.int022/anaconda3/envs/dp_env_gpu/lib/python3.6/multiprocessing/pool.py", line 266, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/home/deep.int022/anaconda3/envs/dp_env_gpu/lib/python3.6/multiprocessing/pool.py", line 644, in get
raise self._value
ValueError: zero-size array to reduction operation minimum which has no identity
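For what it's worth, this ValueError means np.min was called on an empty array: for that case, the nodule mask produced no nonzero voxels, so xx, yy, and zz are empty. A sketch of a guard one could add inside savenpy (illustrative, not the repo's own fix; 'name' is a placeholder for the case identifier):

    # Inside savenpy: skip cases whose nodule mask is empty instead of
    # letting np.min fail on a zero-size array.
    if xx.size == 0:
        print('no nodule voxels found for case', name)  # 'name' is hypothetical
        return
    box = np.array([[np.min(xx), np.max(xx)],
                    [np.min(yy), np.max(yy)],
                    [np.min(zz), np.max(zz)]])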

Trained model predicts a high number of bboxes.

Hello @shakjm, I hope you're well.

Since @ymli39 is not responding to new issues for some reason (I hope he is well and I hope to hear back from him soon), and since I have noticed that you have been actively interested in this repo, I am hopeful that you may be able to assist me with some questions I have.

I am running the training script using Python 3.5 (and the LUNA-16 dataset) and have been trying to reproduce the results from the paper. For some reason, the model is producing a very high number of bounding-box predictions per scan (~1 million or more sometimes). Have you come across this issue before? I am not sure why this is happening. Obviously, this also slows the evaluation drastically (non-maximum suppression becomes extremely slow).

Could this be due to the difference in Python versions? (Although I have checked the code for inconsistencies between Python 2 and 3; namely, I have checked all division lines in all scripts to make sure they are consistent.)

Another thing I have noticed is the use of thresh=-3 in train_detector_se.py, which I can't quite understand; it seems to be used for producing a mask.

I apologize, but this is a desperate call for help. I have been working on this for a good while now, so I would really appreciate any advice or assistance :)

Many thanks in advance.

checkpoint

Please tell me where the 177.ckpt file is, and roughly how the project's environment is configured. Please advise.

LIDC-IDRI preprocessing steps

Hi @ymli39,

Thanks for sharing your work.

I have reproduced the results for the LUNA16 dataset. I am now working to do the same with the LIDC-IDRI dataset.

I have encountered a problem and need some clarification.

I am using the prepareLIDC.py file. It takes three inputs: (1) the preprocess path, (2) the data path, and (3) the new_nodule CSV file path.

new_nodule.csv contains patient nodules only up to LIDC-127, with multiple slices. When I run the script, xx, yy, and zz are zero for patient IDs beyond LIDC-127. Another question: how do we know which slice contains a nodule from the new_nodule.csv file alone?

I also explored new_non_nodule.csv, which has the same patient IDs, again only up to LIDC-127. It does not contain slice information either.

Another question: what is the expected data structure?
I have the following:
LIDC-IDRI-Data

  • LIDC-IDRI-0001
    - DICOM files
  • LIDC-IDRI-0002
    - DICOM files
    (...)

Thank you for your time and effort.

Question about FROC score

I made changes to noduleCADEvaluationLUNA16.py, added luna_test.csv, and checked the predanno0.3.csv result that you uploaded to the GitHub repository. However, the resulting output differs from what is reported in the paper.

Thank you for getting back to me.

[attached FROC plot: froc_predanno0.3]

After running the code again, I got the following result.
[attached FROC plot: froc_predanno0.3]

Cannot find preprocessed labels

After preparing the LUNA dataset I got _clean.npy and _label.npy for training, plus _spacing.npy, _extendbox.npy, and _origin.npy. But while running train_detector_se.py from the luna_detector directory I get the following error:

Traceback (most recent call last):
File "train_detector_se.py", line 338, in
main()
File "train_detector_se.py", line 113, in main
split_comber=split_comber)
File "/home/movchinar/home/movchinar/DeepSEED-3D-ConvNets-for-Pulmonary-Nodule-Detection/luna_detector/data_loader.py", line 56, in init
l = np.load(os.path.join(data_dir, '%s_label.npy' %idx))
File "/home/movchinar/.local/lib/python3.6/site-packages/numpy/lib/npyio.py", line 416, in load
fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: '/fast-drive/movchinar/LUNA/prepross/668_label.npy'

In fact, there is no data with index 668.
Could you please explain the algorithm that builds the indexes?

Issue when training model

Hi, I'm getting the following issue (after preprocessing the images). Would it be possible to look into this, please?
Also, I couldn't find any way to test the model before training it.

[image attachment]

threshold set to -3

Hi,

Can you explain why the threshold was set to -3 when calculating the predicted bounding boxes? Does this threshold relate to the probability/confidence score of the pbb? If not, where can I find the cancer probability score of each detected nodule?

Clarification on the luna_abbr parameter (in other words, what is shorter.csv?)

Could you please help me understand the parameter luna_abbr in config_training.py?
I am trying to prepare LUNA-16, but I can't find out what or where the luna_abbr file is.

From what I see in config_training.py, luna_abbr seems to point at a file called shorter.csv, which I can't find, nor can I tell how to produce it.

Any help would be very much appreciated :)
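For what it's worth, in the grt123/DSB2017-style pipelines this repo derives from, shorter.csv appears to map a short zero-padded index to each LUNA16 series UID, which is why preprocessed files get names like 668_label.npy. A hedged sketch of producing such a file (the exact format expected by config_training.py is an assumption):

    import os
    import csv

    # Hypothetical reconstruction: one "index,seriesuid" row per scan.
    luna_raw = '/path/to/LUNA16/'  # directory containing subset0..subset9
    uids = sorted(
        f[:-4]
        for subset in os.listdir(luna_raw) if subset.startswith('subset')
        for f in os.listdir(os.path.join(luna_raw, subset)) if f.endswith('.mhd')
    )
    with open('shorter.csv', 'w', newline='') as fh:
        writer = csv.writer(fh)
        for i, uid in enumerate(uids):
            writer.writerow(['%03d' % i, uid])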

loss

These first-round results look normal, but the loss is already very low. Also, the FROCeval.py file mentioned in the README is not found; is it automatically generated? Thank you very much for your answer!

Train: tpr 77.02, tnr 97.85, total pos 1767, total neg 7068, time 3325.03
loss 0.0707, classify loss 0.0299, regress loss 0.0091, 0.0090, 0.0086, 0.0140

Is there 10-fold training?

@ymli39

The paper mentions 10-fold validation (which, I presume, also means there is 10-fold training). My questions are:

1. How is the 10-fold training done using the training script?
2. Are the files luna_train.npy and luna_test.npy related to the 10-fold training?
3. How are these files generated? (see the sketch below)

Looking forward to hearing back from you.
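For reference, a minimal sketch of how per-fold train/test ID lists could be generated and saved as .npy files; whether luna_train.npy and luna_test.npy were produced exactly this way is an assumption:

    import numpy as np

    # LUNA16 has 888 scans; assume they are referenced by zero-padded indices.
    ids = np.array(['%03d' % i for i in range(888)])
    folds = np.array_split(ids, 10)
    for k in range(10):
        test_ids = folds[k]
        train_ids = np.concatenate([folds[j] for j in range(10) if j != k])
        np.save('luna_train_fold%d.npy' % k, train_ids)
        np.save('luna_test_fold%d.npy' % k, test_ids)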

from layers import iou ModuleNotFoundError: No module named 'layers'

Hello, after running the preprocessing of the data and starting the training, I'm getting the following error:

File "/content/drive/MyDrive/Pibic/DeepSEED-3D-ConvNets-for-Pulmonary-Nodule-Detection-master/luna_detector/data_loader.py", line 16, in
from layers import iou
ModuleNotFoundError: No module named 'layers'

I searched and didn't find much about this module. Has it had any updates? Can you give me instructions on how to proceed?

Thanks!
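For anyone hitting this: layers refers to the repo's own luna_detector/layers.py, not a pip-installable package, so the error usually means the script was launched from outside that directory. A sketch of one workaround (the path below is the one from the traceback):

    import sys

    # Make the repo's own modules (layers.py, data_loader.py, ...) importable
    # when running from another working directory, e.g. a Colab notebook.
    sys.path.insert(0, '/content/drive/MyDrive/Pibic/'
                       'DeepSEED-3D-ConvNets-for-Pulmonary-Nodule-Detection-master/'
                       'luna_detector')
    from layers import iou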

About FROC Score

Hi,
Thanks for sharing your codes, and thanks for your good paper.
I used the 177.ckpt you provided and, following luna_test.npy, generated the corresponding CSV and PNG files.
I found that the FROC scores at 0.125, 0.25, 0.5, 1, 2, 4, and 8 FPs/scan are lower than in the paper.
Do you have any suggestions?

I changed your code to report 64 false positives per scan, and the result shows 0.93 sensitivity.
I also trained for 150 epochs; the result was similar.

[image attachment]


CAD Analysis: predanno0.3


Candidate detection results:
True positives: 80
False positives: 1423
False negatives: 6
True negatives: 0
Total number of candidates: 1735
Total number of nodules: 86
Ignored candidates on excluded nodules: 218
Ignored candidates which were double detections on a nodule: 14
Sensitivity: 0.930232558
Average number of candidates per scan: 19.715909091

clean.npy, mask.npy

After changing the file paths in config_training.py, I ran prepare.py for data preprocessing. Why are the values of the resulting clean.npy all 170, and the values of mask.npy either all True or all False? This data does not look normal. Where could the problem be? Each piece of code seems fine on its own. Could anyone shed some light on this? Many thanks.

Nodule Detection Errors

Hi @ymli39
I am facing a strange error when training the detector network. Do you have any comment on this error?

Traceback (most recent call last):
File "/home/sh/Documents/Codes/My_Models/My_Seed/luna_detector/train_detector_se.py", line 351, in
main()
File "/home/sh/Documents/Codes/My_Models/My_Seed/luna_detector/train_detector_se.py", line 159, in main
train(train_loader, net, loss, epoch, optimizer, get_lr, save_dir)
File "/home/sh/Documents/Codes/My_Models/My_Seed/luna_detector/train_detector_se.py", line 187, in train
for i, (data, target, coord) in enumerate(data_loader):
File "/home/sh/Documents/Codes/DSB2017-master/venv/local/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 345, in next
data = self._next_data()
File "/home/sh/Documents/Codes/DSB2017-master/venv/local/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
return self._process_data(data)
File "/home/sh/Documents/Codes/DSB2017-master/venv/local/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
data.reraise()
File "/home/sh/Documents/Codes/DSB2017-master/venv/local/lib/python2.7/site-packages/torch/_utils.py", line 394, in reraise
raise self.exc_type(msg)
AssertionError: Caught AssertionError in DataLoader worker process 29.
Original Traceback (most recent call last):
File "/home/sh/Documents/Codes/DSB2017-master/venv/local/lib/python2.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/home/sh/Documents/Codes/DSB2017-master/venv/local/lib/python2.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
print(possibly_batched_index)
File "/home/sh/Documents/Codes/My_Models/My_Seed/luna_detector/data_loader.py", line 108, in getitem
label = self.label_mapping(sample.shape[1:], target, bboxes)
File "/home/sh/Documents/Codes/My_Models/My_Seed/luna_detector/data_loader.py", line 286, in call
assert (input_size[i] % stride == 0)
AssertionError
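For context, this assertion fires when a sampled crop's spatial size is not a multiple of the detector stride. An illustrative sketch of padding a volume up to the next multiple of the stride (not the repo's own fix; the stride of 4 and pad value of 170 follow the grt123 convention and are assumptions here):

    import numpy as np

    def pad_to_multiple(vol, stride=4, pad_value=170):
        """Pad a (C, D, H, W) volume so each spatial dim is divisible by stride."""
        pads = [(0, 0)]  # no padding on the channel axis
        for s in vol.shape[1:]:
            pads.append((0, (-s) % stride))
        return np.pad(vol, pads, mode='constant', constant_values=pad_value)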

Problems with the test results of 207.ckpt on LUNA16 subset 9

Dear Dr. Li,

I tested the 207.ckpt you provided on LUNA16 subset 9, but got poor CPM scores. The results are as follows:


CAD Analysis: predanno0.3


Candidate detection results:
True positives: 98
False positives: 1316
False negatives: 7
True negatives: 0
Total number of candidates: 1702
Total number of nodules: 105
Ignored candidates on excluded nodules: 271
Ignored candidates which were double detections on a nodule: 17
Sensitivity: 0.933333333
Average number of candidates per scan: 19.340909091

[attached FROC plot: froc_predanno0.3]

So strange! The sensitivity is about 93.3% and the average number of candidates per scan is only 19, but the CPM is very poor (0.18?).
I also re-trained the method on LUNA16 subsets 0-8 and tested on subset 9. However, the evaluation results are also odd:


CAD Analysis: predanno0.3


Candidate detection results:
True positives: 100
False positives: 1792
False negatives: 5
True negatives: 0
Total number of candidates: 2342
Total number of nodules: 105
Ignored candidates on excluded nodules: 415
Ignored candidates which were double detections on a nodule: 35
Sensitivity: 0.952380952
Average number of candidates per scan: 26.613636364
[attached FROC plot: froc_predanno0.3]

I wonder if you could give me some advice. I cannot find your email in the paper; could you provide it at your convenience?

Thank you in advance!

[email protected]

Dataloader has inputs other than size 128.

Hey,
When I load my dataloader, the incoming input has an image size other than 128.
[image attachment]

And to execute your step1.py file, I used the following code to save the mask:

    import numpy as np
    import SimpleITK as sitk
    from step1 import step1_python

    out_folder = 'E:\\LUNA Dataset\\mask\\'
    for idx, i in enumerate(target_filenames):
        filename = i.split('\\')[-1]
        print(idx, i)
        case_pixels, bw1, bw2, spacing, origin = step1_python(i)
        bw = bw1*3 + bw2*4
        bw = sitk.GetImageFromArray(np.uint8(bw))
        bw.SetSpacing(spacing)
        bw.SetOrigin(origin)
        sitk.WriteImage(bw, out_folder + filename, True)

Am I right?

Need some explanation on the variable "coord"

Hi there Mr. Li, I have trouble understanding some of the code you've written and could not find an explanation for it.

For example,

    start = []
    for i in range(3):
        if not isRand:
            r = target[3] / 2
            s = np.floor(target[i] - r) + 1 - bound_size
            e = np.ceil(target[i] + r) + 1 + bound_size - crop_size[i]
        else:
            s = np.max([imgs.shape[i + 1] - crop_size[i] / 2, imgs.shape[i + 1] / 2 + bound_size])
            e = np.min([crop_size[i] / 2, imgs.shape[i + 1] / 2 - bound_size])
            target = np.array([np.nan, np.nan, np.nan, np.nan])
        if s > e:
            start.append(int(np.random.randint(e, s)))  # !
        else:
            start.append(int(target[i] - crop_size[i] / 2 + np.random.randint(-bound_size / 2, bound_size / 2)))

    normstart = np.array(start).astype('float32') / np.array(imgs.shape[1:]) - 0.5
    normsize = np.array(crop_size).astype('float32') / np.array(imgs.shape[1:])
    xx, yy, zz = np.meshgrid(np.linspace(normstart[0], normstart[0] + normsize[0], self.crop_size[0] / self.stride),
                             np.linspace(normstart[1], normstart[1] + normsize[1], self.crop_size[1] / self.stride),
                             np.linspace(normstart[2], normstart[2] + normsize[2], self.crop_size[2] / self.stride),
                             indexing='ij')
    coord = np.concatenate([xx[np.newaxis, ...], yy[np.newaxis, ...], zz[np.newaxis, :]], 0).astype('float32')

This particular piece of code is from the data_loader.py file, under the Crop class. I was wondering what s and e stand for, and why you wrote this particular piece of code. Why do you use the meshgrid function to generate a new set of coords?

Another thing I would like clarification on is in the res18_se.py file, where you concatenate the coord variable with the comb2 variable in the forward function:

    rev2 = self.path2(comb3)
    comb2 = self.back2(torch.cat((rev2, out2, coord), 1))

Why do you concatenate coord into the network?

Hope to get some clarification on my confusion. Hope to hear from you soon. Thank you.

Reference to missing numpy array

There's a file missing. On line 94 of train_detector_se.py, you reference a file on your local computer:
luna_data = np.load('/home/LungNodule_DL/detector/luna_folds/luna_fold6.npy')

Would it be possible to share this file or clarify what it should contain?

Thanks

Multi class support?

@ymli39 Hi, thanks for sharing your code and paper. I wonder whether it supports multi-class detection when I use luna_detector (i.e., detecting more than just pulmonary nodules). If not, how can I modify the code to support multi-class detection?
Thanks in advance!

Do you have any guidance for the following errors in the training step?

Traceback (most recent call last):
File "train_detector_se.py", line 340, in
main()
File "train_detector_se.py", line 121, in main
test(test_loader, net, get_pbb, save_dir,config)
File "train_detector_se.py", line 257, in test
for i_name, (data, target, coord, nzhw) in enumerate(data_loader):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 521, in next
data = self._next_data()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 434, in reraise
raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/content/drive/MyDrive/Pibic/DeepSEED-3D-ConvNets-for-Pulmonary-Nodule-Detection-master/luna_detector/data_loader.py", line 119, in getitem
xx,yy,zz = np.meshgrid(np.linspace(-0.5,0.5,imgs.shape[1]/self.stride),
File "<array_function internals>", line 6, in linspace
File "/usr/local/lib/python3.7/dist-packages/numpy/core/function_base.py", line 113, in linspace
num = operator.index(num)
TypeError: 'float' object cannot be interpreted as an integer


Thanks!
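For anyone else hitting this: the TypeError comes from Python 3's true division — imgs.shape[1] / self.stride yields a float, which newer NumPy versions reject as the num argument of np.linspace. A sketch of the usual fix in data_loader.py, switching to integer division (the remaining two dimensions are assumed to follow the same pattern):

    # In data_loader.py, use integer division so np.linspace gets an int:
    xx, yy, zz = np.meshgrid(
        np.linspace(-0.5, 0.5, imgs.shape[1] // self.stride),
        np.linspace(-0.5, 0.5, imgs.shape[2] // self.stride),
        np.linspace(-0.5, 0.5, imgs.shape[3] // self.stride),
        indexing='ij')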

About FROC Score

Hi,
Thanks for sharing your codes and thanks for your good paper.
I used the predanno0.3.csv you provided and, following luna_test.npy, generated the corresponding CSV file. I found that the FROC scores at 0.125, 0.25, 0.5, and 1.0 FPs/scan are lower than expected. Do you have any suggestions? I evaluated the CSV detection file you provided.

Total number of included nodule annotations: 86
Total number of nodule annotations: 3596
(1000, 10000)
##########################
Sensitivity   FPs/scan
0.47112617    0.125
0.60532993    0.25
0.74687976    0.5
0.8357665     1.0
0.9013603     2.0
0.9310172     4.0
0.93185717    8.0
mean: 0.77476245
0.7691029900332226    16.0

CAD Analysis: predanno0.3


Candidate detection results:
True positives: 83
False positives: 1443
False negatives: 6
True negatives: 0
Total number of candidates: 1770
Total number of nodules: 89
Ignored candidates on excluded nodules: 230
Ignored candidates which were double detections on a nodule: 14
Sensitivity: 0.932584270
Average number of candidates per scan: 19.450549451

Need some clarification on outputs

Hi there, I'm trying to replicate your model and I was wondering how your outputs are being used. They're lbb and pbb. I'm guessing lbb is the labels, whereas pbb is a feature vector of shape [x, 5]. What do they represent?

Also, is there any way I can generate an image to see where the bounding boxes land? How do I do that, and how do the coordinates match the preprocessed images?
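For what it's worth, a minimal sketch of drawing one pbb box on the preprocessed volume, assuming each pbb row is [confidence, z, y, x, diameter] in voxel coordinates of the _clean.npy array (the file names and row layout here are assumptions worth verifying against the repo):

    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.patches as patches

    clean = np.load('preprocessed/001_clean.npy')[0]  # (D, H, W) volume
    pbb = np.load('results/001_pbb.npy')              # hypothetical paths

    conf, z, y, x, d = pbb[np.argmax(pbb[:, 0])]      # highest-confidence box
    fig, ax = plt.subplots()
    ax.imshow(clean[int(z)], cmap='gray')             # axial slice at box center
    ax.add_patch(patches.Rectangle((x - d / 2, y - d / 2), d, d,
                                   fill=False, edgecolor='r'))
    plt.show()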

csv

Dear Doc. Li,
The config entry 'luna_abbr': './detector/labels/shorter.csv' — which CSV file does this refer to?
LUNA16 does not include this CSV file.

Division of the LIDC-IDRI dataset?

How do I use the LIDC-IDRI dataset? How is LIDC-IDRI divided into training and test sets? What does alllabelfiles mean? Where does the file "/home/LungNodule_DL/LIDC/labels/new_nodule.csv" come from?

Installation guide

Hi @ymli39,
I am trying to reproduce the results. Please share the installation guide or the Python version.
Thank you for your efforts.

Problems about FROC results

Hi, I used the predanno0.3.csv you posted to compute the FROC values with the script noduleCADEvaluationLUNA16.py and your code's default settings, and I get normal results. The printed FROC results are below:
Sensitivity   FPs/scan
0.68696856    0.125
0.75847465    0.25
0.8146588     0.5
0.8623556     1.0
0.9020456     2.0
0.9117384     4.0
0.9117384     8.0
mean value: 0.83542573
However, when I train the model with your code, starting from your pretrained model, the FROC results with the same settings are as follows:
Sensitivity   FPs/scan
0.03737833    0.125
0.100776754   0.25
0.1428685     0.5
0.3355601     1.0
0.7871244     2.0
0.93348473    4.0
0.93348473    8.0
mean: 0.46723965
mean value: 0.45514950166112955

By the way, I used your luna_train.npy and luna_test.npy for training and validation, and got tpr 97.57, tnr 99.53, loss 0.0133, classify loss 0.0045, regress loss 0.0020, 0.0019, 0.0024, 0.0025, with total pos 1767, total neg 7068 on the training set.

tpr 96.13, tnr 99.99790957, loss 0.0095, classify loss 0.0000, regress loss 0.0015, 0.0021, 0.0026, 0.0033, with total pos 181, total neg 17699714 on the validation set.

I am confused by the low sensitivity at [0.125, 0.25, 0.5] FPs/scan. Could you give me some advice on this problem? I would greatly appreciate your reply.

Pre-trained model

Would it be possible to post a pre-trained model somewhere? I work for Intel and would love to use it to benchmark our inference performance.

Best.
Very respectfully,
-Tony

Class 'float' Error

Hello, it always gives "TypeError: object of type <class 'float'> cannot be safely interpreted as an integer" for the enumerate functions in all methods of train_detector_se.py. How can I fix this? Thank you very much.

out of memory error

Hello, thank you for your selfless sharing. I had an out-of-memory error while running the code (I am using an RTX 3090 with 24 GB).
Command entered: set CUDA_VISIBLE_DEVICES=0 python train_detector_se.py -b 2 --save-dir /train_result/ --epochs 100

The run reports the following error:
torch.Size([18, 1, 208, 208, 208])
Traceback (most recent call last):
File "D:\FISH\DeepSEED_1\luna_detector\train_detector_se.py", line 410, in
main()
File "D:\FISH\DeepSEED_1\luna_detector\train_detector_se.py", line 158, in main
test(test_loader, net, get_pbb, save_dir,config)
File "D:\FISH\DeepSEED_1\luna_detector\train_detector_se.py", line 355, in test
output = net(input,inputcoord)
File "C:\Users\yiwen\anaconda3\envs\LiuJH\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\yiwen\anaconda3\envs\LiuJH\lib\site-packages\torch\nn\parallel\data_parallel.py", line 169, in forward
return self.module(*inputs[0], **kwargs[0])
File "C:\Users\yiwen\anaconda3\envs\LiuJH\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\FISH\DeepSEED_1\luna_detector\res18_se.py", line 107, in forward
out = self.preBlock(x)#16
File "C:\Users\yiwen\anaconda3\envs\LiuJH\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\yiwen\anaconda3\envs\LiuJH\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
input = module(input)
File "C:\Users\yiwen\anaconda3\envs\LiuJH\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\yiwen\anaconda3\envs\LiuJH\lib\site-packages\torch\nn\modules\batchnorm.py", line 171, in forward
return F.batch_norm(
File "C:\Users\yiwen\anaconda3\envs\LiuJH\lib\site-packages\torch\nn\functional.py", line 2470, in batch_norm
return torch.batch_norm(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 6.44 GiB (GPU 0; 24.00 GiB total capacity; 19.61 GiB already allocated; 2.94 GiB free; 19.63 GiB reserved in total by PyTorch). If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

This problem has troubled me for a long time. I tried modifying the input data and adjusting the network structure, but failed to solve it due to my limited ability. Could you help me with this problem? Thanks a lot.

Problem about result

I used your trained network, but the test result is about 0.72.
The batch size is 16.
The detp I set is 0.3.
The nmsthresh is (0.01, 0.25, 0.75).
I want to know the settings you used to get your excellent results.
Looking forward to your reply.

The prediction probability is not in the 0-1 range?

Hi, thanks for the wonderful explanation.
When I test your model, I find that the prediction probabilities are not in the 0-1 range.
I mean, the values are like 4.5, 18, 21.5. Is this normal?
It looks like these are the diameters of the nodules. But how can I get the probabilities?
I attach an example of the model's output.
Thanks in advance @ymli39

   [ -2.97817516, 285.57787803, 285.55532278, 425.68683523,   9.22237027],
   [ -2.78340006, 285.48476638, 285.41791546, 429.51884933,   5.54941623],
   [ -2.89723754, 285.46616549, 285.3996337 , 429.56445795,   9.20982807]])
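For context, the first column of each row here (≈ -2.9) looks like a raw network logit rather than a probability, which is also consistent with thresh = -3 in train_detector_se.py, since sigmoid(-3) ≈ 0.047. A sketch of the conversion (whether the repo intends exactly this is an assumption):

    import numpy as np

    pbb = np.load('results/001_pbb.npy')     # hypothetical path; rows: [logit, z, y, x, diameter]
    prob = 1.0 / (1.0 + np.exp(-pbb[:, 0]))  # sigmoid maps logits into (0, 1)
    # e.g. a logit of -2.978 maps to roughly 0.048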

Cannot find CPM (7 Recall Average) Part of your code

Hi,
Thanks for sharing your codes and thanks for your good paper.
I'm working on the LUNA16 challenge as an application of our thesis, and I have some ideas to extend the current work in this domain.
I'm using some publicly shared code, but I have a problem with the CPM computation.
I think that after calling LunaCADEvaluationScript.py we must interpolate the FPs (false positives per scan), but I found that the CPM reported by some works mismatches my implementation.
Could you share the CPM computation from your paper's implementation?
I cannot find that part in your repo.
Best regards
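For reference, CPM is usually defined as the average sensitivity at 1/8, 1/4, 1/2, 1, 2, 4, and 8 false positives per scan, read off (or interpolated) from the FROC curve. A minimal sketch, assuming the evaluation script gives you matched arrays of FPs/scan and sensitivity:

    import numpy as np

    def cpm(fps_per_scan, sensitivity):
        """Average sensitivity at the seven standard FPs/scan operating points.

        fps_per_scan must be increasing for np.interp to behave correctly;
        points outside its range are clamped to the endpoint sensitivities.
        """
        points = [0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
        return float(np.mean(np.interp(points, fps_per_scan, sensitivity)))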

Clarification on dependencies

@ymli39

Could you kindly clarify the dependencies needed to run this project? Which versions were used (e.g., Python, PyTorch, CUDA, cuDNN, etc.)?

This would be very helpful for reproducing this work.

Many thanks in advance.

Need some clarification on test_result

Hi Mr. Li,

I would like to know how you achieved 0.93 sensitivity with only 80 true positives and 1218 false positives, as reported in your test_result folder. You have only posted results for the subset9 folder; shouldn't the FROC calculation use results from subset0 through subset9? Could you clarify this? I recall LUNA16 having 1187 nodule candidates, as shown in the annotations CSV file.

Thank you.

Idcs

Hello, can you please share how only these labels are picked for training?
