
3dunet-pytorch's Introduction

lee-zq

🔭 Career

  • 1, in 2014.
  • 2, in 2018.
  • 3, in 2021.

🌱 Interest

  • Image Recognition using Deep Learning
    • Object Detection
    • Semantic Segmentation
  • Image Processing using Deep Learning
    • Super Resolution
    • Style Transfer

👯 Side Jobs & Hobby



3dunet-pytorch's Issues

Dimension problem

File "/opt/data/private/3DUNet-Pytorch-master/dataset/transforms.py", line 153, in call
img, mask = t(img, mask)
File "/opt/data/private/3DUNet-Pytorch-master/dataset/transforms.py", line 64, in call
tmp_img[:,:es-ss] = img[:,ss:es]
RuntimeError: The expanded size of the tensor (48) must match the existing size (36) at non-singleton dimension 1. Target sizes: [1, 48, 256, 256]. Tensor sizes: [36, 256, 256]

The dataset is read in normally, so what causes this error?
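This error usually means the two sides of the copy disagree on the number of slices or the number of axes: the destination window expects 48 slices while the source slice only provides 36. A minimal diagnostic sketch, reusing the tmp_img/img/ss/es names from the traceback (axis 1 is assumed to be the slice axis; the exact shapes in transforms.py may differ):

    # Hedged diagnostic: inspect both sides right before the failing assignment.
    print(tmp_img.shape, img.shape, ss, es)
    assert tmp_img.dim() == img.dim(), "one tensor is missing the channel axis"
    assert img.shape[1] >= es, "the volume has fewer slices than the crop window [ss, es)"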

Why is there no normalization in the 3D UNet architecture?

Hello, as the title says: I noticed that UNet.py does not seem to include any normalization such as batch or instance normalization, but I remember the original 3D UNet does. Is there a particular reason for this, or is normalization already built into one of the encoder steps?
Looking forward to your reply, many thanks!

test produces no prediction results

Hello, I have recently been using the code you shared for segmentation. During training the background dice0 = 0.9984 and the target dice1 = 0.6014, but at test time dice0 = 0.6787, dice1 = 0.0011, and the predicted labels are completely black. I trained and tested with 4 GPUs. What could be the cause of this problem? Is it related to how the model parameters are loaded? I hope you can help!

Question in preprocess_LiTS.py

Hello, I want to run your code on my computer, and I followed the instructions in README.md.
In preprocess_LiTS.py, line 75 reads:
print('Too little slice,give up the sample:', ct_file)
and I get the following error:
Unresolved reference 'ct_file'

How can I solve this problem?
Waiting for your reply.
Thanks

RuntimeError: Invalid index in scatter

Hello, I can train and test with the data from your network drive, but with my own CT and segmentation data it gets stuck and throws an error at this step:
fixd_path = r'D:\Download\3DUNet-Pytorch-master\fixed_data'
dataset = Lits_DataSet([16, 64, 64], 0.5, fixd_path, mode='train')  # batch size
data_loader = DataLoader(dataset=dataset, batch_size=2, num_workers=1, shuffle=True)
for batch_idx, (data, target) in enumerate(data_loader):
    print("max value:")
    print(torch.max(target.long(), 0))
    print(data.shape, target.shape)
    target = to_one_hot_3d(target.long())

The error occurs at the scatter call inside to_one_hot_3d():
torch.Size([2, 1, 16, 64, 64]) torch.Size([2, 16, 64, 64])

Traceback (most recent call last):
File "D:/Download/3DUNet-Pytorch-master/dataset/dataset_lits.py", line 65, in
main()
File "D:/Download/3DUNet-Pytorch-master/dataset/dataset_lits.py", line 57, in main
target = to_one_hot_3d(target.long())
File "D:\Download\3DUNet-Pytorch-master\utils\common.py", line 28, in to_one_hot_3d
one_hot = torch.zeros(n, n_classes, s, h, w).scatter_(1, tensor.view(n, 1, s, h, w), 1)
RuntimeError: Invalid index in scatter at ..\aten\src\TH/generic/THTensorEvenMoreMath.cpp:551
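scatter_ raises "Invalid index in scatter" whenever a voxel label is negative or not smaller than n_classes, so the first thing to check with your own CT masks is the set of label values. A minimal diagnostic sketch, reusing the data_loader from the snippet above (the class count of 2 is an assumption based on the default setup):

    import torch

    for batch_idx, (data, target) in enumerate(data_loader):
        labels = torch.unique(target.long())
        print(labels)  # every value must lie in [0, n_classes - 1]
        if labels.min() < 0 or labels.max() >= 2:  # assumed n_classes = 2
            raise ValueError("remap or clip the segmentation labels before one-hot encoding")
        break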

Pytorch Graphics Memory

RuntimeError: CUDA out of memory. Tried to allocate 38.81 GiB (GPU 0; 48.00 GiB total capacity; 21.21 GiB already allocated; 23.81 GiB free; 22.30 GiB reserved in total by PyTorch)
Why can't I run this code with 48 GB of video memory?
The GPU is a 48 GB NVIDIA Quadro RTX 8000.
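A rough, hedged note: a single allocation of 38.81 GiB means an input patch, batch, or one-hot target far larger than any single GPU can hold, so the usual first fix is to shrink the crop size and batch size handed to the dataset and DataLoader. A sketch under those assumptions, reusing the Lits_DataSet call shown in another issue on this page (module path taken from that issue's traceback; adjust as needed):

    from torch.utils.data import DataLoader
    from dataset.dataset_lits import Lits_DataSet  # path as in dataset/dataset_lits.py

    # Activation memory in a 3D U-Net scales linearly with the batch size and with
    # every crop dimension, so halving each quickly brings the footprint down.
    fixd_path = './fixed_data'  # adjust to your preprocessed data
    dataset = Lits_DataSet([16, 64, 64], 0.5, fixd_path, mode='train')  # smaller patch
    data_loader = DataLoader(dataset=dataset, batch_size=1, num_workers=1, shuffle=True)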

11

Hello, why does reading the data fail? It reports the following error:
Traceback (most recent call last):
File "E:/PytorchProject/3DUNet-Pytorch-master/dataset/dataset_lits_train.py", line 67, in
for i, (ct, seg) in enumerate(train_dl):
File "D:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 521, in next
data = self._next_data()
File "D:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "D:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 1229, in _process_data
data.reraise()
File "D:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch_utils.py", line 434, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "D:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data_utils\worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "D:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data_utils\fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "D:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data_utils\fetch.py", line 49, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "E:\PytorchProject\3DUNet-Pytorch-master\dataset\dataset_lits_train.py", line 29, in getitem
ct = sitk.ReadImage(self.filename_list[index][0], sitk.sitkInt16)
File "D:\Anaconda3\envs\pytorch_gpu\lib\site-packages\SimpleITK\extra.py", line 346, in ReadImage
return reader.Execute()
File "D:\Anaconda3\envs\pytorch_gpu\lib\site-packages\SimpleITK\SimpleITK.py", line 8015, in Execute
return _SimpleITK.ImageFileReader_Execute(self)
RuntimeError: Exception thrown in SimpleITK ImageFileReader_Execute: D:\a\1\sitk\Code\IO\src\sitkImageReaderBase.cxx:97:
sitk::ERROR: The file "E:/Pytorch" does not exist.

cannot understand the function of code 'data_np=data_np/200.0'

In dataset_lits_train.py, in get_np_data_3d(), around line 44:

def get_np_data_3d(self, filename, resize_scale=1):
    data_np = sitk.GetArrayFromImage(sitk.ReadImage(self.dataset_path + '/data/' + filename, sitk.sitkInt16))
    if self.resize_scale != 1.0:
        data_np = ndimage.zoom(data_np, zoom=self.resize_scale, order=3)
    data_np = data_np.astype(np.float32)  # -------- unclear point two
    data_np = data_np / 200.0             # -------- unclear point one

I cannot understand what this line does: 'data_np = data_np / 200.0'. Why divide by 200, and how was the value 200 chosen? Was it picked at random, or is there some other reason?
Could anyone answer my question?
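One plausible reading (not confirmed anywhere in this repository): LiTS preprocessing pipelines commonly clip CT intensities to a soft-tissue window such as [-200, 200] HU, in which case dividing by 200.0 afterwards simply rescales the clipped values to roughly [-1, 1]. A hedged sketch of that convention:

    import numpy as np

    # Assumed convention: clip Hounsfield units to a [-200, 200] window first,
    # then divide by 200.0 so the network sees intensities in [-1, 1].
    data_np = np.clip(data_np, -200, 200).astype(np.float32)
    data_np = data_np / 200.0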

TypeError: Caught TypeError in DataLoader worker process 0.

Hi lee-zq,
Thanks for sharing your code. When I was training on my own data, I ran into the following problem. My data is rectal MRI, and I want to see the Dice score for the tumor.

[screenshot]

I tried some online methods, but nothing worked. Can you help me?
Best wishes

Question in 'test'

Thank you very much for the code you provided. Could you please tell me how to solve the following problem when running test?

Traceback (most recent call last):
File "test.py", line 52, in
model.load_state_dict(ckpt['net'])
File "/home/ubuntu/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.encoder_stage1.0.weight", "module.encoder_stage1.0.bias", "module.encoder_stage1.1.weight", "module.encoder_stage1.2.weight", "module.encoder_stage1.2.bias", "module.encoder_stage1.3.weight", "module.encoder_stage2.0.weight", "module.encoder_stage2.0.bias", "module.encoder_stage2.1.weight", "module.encoder_stage2.2.weight", "module.encoder_stage2.2.bias", "module.encoder_stage2.3.weight", "module.encoder_stage2.4.weight", "module.encoder_stage2.4.bias", "module.encoder_stage2.5.weight", "module.encoder_stage3.0.weight", "module.encoder_stage3.0.bias", "module.encoder_stage3.1.weight", "module.encoder_stage3.2.weight", "module.encoder_stage3.2.bias", "module.encoder_stage3.3.weight", "module.encoder_stage3.4.weight", "module.encoder_stage3.4.bias", "module.encoder_stage3.5.weight", "module.encoder_stage4.0.weight", "module.encoder_stage4.0.bias", "module.encoder_stage4.1.weight", "module.encoder_stage4.2.weight", "module.encoder_stage4.2.bias", "module.encoder_stage4.3.weight", "module.encoder_stage4.4.weight", "module.encoder_stage4.4.bias", "module.encoder_stage4.5.weight", "module.decoder_stage1.0.weight", "module.decoder_stage1.0.bias", "module.decoder_stage1.1.weight", "module.decoder_stage1.2.weight", "module.decoder_stage1.2.bias", "module.decoder_stage1.3.weight", "module.decoder_stage1.4.weight", "module.decoder_stage1.4.bias", "module.decoder_stage1.5.weight", "module.decoder_stage2.0.weight", "module.decoder_stage2.0.bias", "module.decoder_stage2.1.weight", "module.decoder_stage2.2.weight", "module.decoder_stage2.2.bias", "module.decoder_stage2.3.weight", "module.decoder_stage2.4.weight", "module.decoder_stage2.4.bias", "module.decoder_stage2.5.weight", "module.decoder_stage3.0.weight", "module.decoder_stage3.0.bias", "module.decoder_stage3.1.weight", "module.decoder_stage3.2.weight", "module.decoder_stage3.2.bias", "module.decoder_stage3.3.weight", "module.decoder_stage3.4.weight", "module.decoder_stage3.4.bias", "module.decoder_stage3.5.weight", "module.decoder_stage4.0.weight", "module.decoder_stage4.0.bias", "module.decoder_stage4.1.weight", "module.decoder_stage4.2.weight", "module.decoder_stage4.2.bias", "module.decoder_stage4.3.weight", "module.down_conv1.0.weight", "module.down_conv1.0.bias", "module.down_conv1.1.weight", "module.down_conv2.0.weight", "module.down_conv2.0.bias", "module.down_conv2.1.weight", "module.down_conv3.0.weight", "module.down_conv3.0.bias", "module.down_conv3.1.weight", "module.down_conv4.0.weight", "module.down_conv4.0.bias", "module.down_conv4.1.weight", "module.up_conv2.0.weight", "module.up_conv2.0.bias", "module.up_conv2.1.weight", "module.up_conv3.0.weight", "module.up_conv3.0.bias", "module.up_conv3.1.weight", "module.up_conv4.0.weight", "module.up_conv4.0.bias", "module.up_conv4.1.weight".
Unexpected key(s) in state_dict: "module.encoder1.weight", "module.encoder1.bias", "module.encoder2.weight", "module.encoder2.bias", "module.encoder3.weight", "module.encoder3.bias", "module.encoder4.weight", "module.encoder4.bias", "module.decoder2.weight", "module.decoder2.bias", "module.decoder3.weight", "module.decoder3.bias", "module.decoder4.weight", "module.decoder4.bias", "module.decoder5.weight", "module.decoder5.bias".
size mismatch for module.map4.0.weight: copying a param with shape torch.Size([2, 2, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([2, 32, 1, 1, 1]).
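For what it's worth, the missing keys (module.encoder_stage1..., module.down_conv1...) and the unexpected keys (module.encoder1..., module.decoder2...) look like two different network definitions, which suggests the checkpoint was saved from a different model class than the one test.py builds. A hedged sketch for confirming this by comparing key names (the checkpoint path is hypothetical):

    import torch

    # model: the network constructed in test.py
    ckpt = torch.load('path/to/checkpoint.pth', map_location='cpu')  # hypothetical path
    print(sorted(ckpt['net'].keys())[:5])         # keys stored in the checkpoint
    print(sorted(model.state_dict().keys())[:5])  # keys the current model expects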

Problem with self.scale_factor = float(scale_factor) if scale_factor else None

What would cause this problem during upsampling? Is my PyTorch version wrong?
File "/home/qinxueke/.conda/envs/vic/lib/python3.6/site-packages/torch/nn/modules/upsampling.py", line 125, in init
self.scale_factor = float(scale_factor) if scale_factor else None
TypeError: float() argument must be a string or a number, not 'tuple'
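This error matches an older torch.nn.Upsample whose __init__ calls float() on the whole scale_factor rather than on each element of a tuple, so upgrading PyTorch is the cleanest fix. If upgrading is not an option, a hedged workaround sketch is to skip the Upsample module and call interpolate with an explicit output size in forward():

    import torch.nn.functional as F

    # x: (N, C, D, H, W). Passing the target size explicitly sidesteps the tuple
    # scale_factor; the factor of 2 per axis here is only an example.
    d, h, w = x.shape[2:]
    x = F.interpolate(x, size=(d * 2, h * 2, w * 2), mode='trilinear', align_corners=False)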

Hello author

I want to train on my own data, which is MRI. Do I need to make extensive changes to the data preprocessing code and the training code?

Mismatched Dimensions of input and output (couldn't visualize the result)

Hi, thank you for sharing.

I follow your tutorial to train a model with two .nii samples.

It trained successfully and could be tested, but I noticed that the shape of the model output does not match the input's, whether I use raw data or fixed data as the input.

I use LiTS case number 28 for testing.

Here is the code:
checkDimension.py

import SimpleITK as sitk
from scipy import ndimage

def sitk_read_raw(img_path, resize_scale=1):  # read a 3D image and rescale it (medical volumes are generally not at a standard [1,1,1] spacing)
    nda = sitk.ReadImage(img_path)
    if nda is None:
        raise TypeError("input img is None!!!")
    nda = sitk.GetArrayFromImage(nda)  # channel first
    nda=ndimage.zoom(nda,[resize_scale,resize_scale,resize_scale],order=0) #rescale

    return nda


if __name__ == '__main__':
    input_name = './fixed_data/data/volume-28.nii'
    data_np = sitk_read_raw(input_name)
    print('model input shape: {:}'.format(data_np.shape))

    output_name = './output/model2/result/result-28.nii'
    data_np = sitk_read_raw(output_name)
    print('model output shape: {:}'.format(data_np.shape))

    raw_data = './raw_dataset/LiTS_batch2/data/volume-28.nii'
    data_np = sitk_read_raw(raw_data)
    print('raw data shape: {:}'.format(data_np.shape))

    raw_label = './raw_dataset/LiTS_batch2/label/segmentation-28.nii'
    data_np = sitk_read_raw(raw_label)
    print('raw label shape: {:}'.format(data_np.shape))

The result is:

model input shape: (122, 512, 512)
model output shape: (64, 256, 256)
raw data shape: (129, 512, 512)
raw label shape: (129, 512, 512)

I want to visualize it with ITK-SNAP.

The original data could be visualized like this:

[screenshot]

But currently, the output shape of the model can't be visualized like above:

[screenshot]

I still can't figure out which part of the code I should edit to meet my needs. Could you give me some advice? Thank you!
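One hedged way to make the prediction overlay the original volume in ITK-SNAP is to resample it back to the input grid with nearest-neighbour interpolation and copy the reference image's spacing/origin before saving. A sketch reusing the file paths from the script above (the output filename is hypothetical):

    import SimpleITK as sitk
    from scipy import ndimage

    ref_img = sitk.ReadImage('./fixed_data/data/volume-28.nii')
    pred_img = sitk.ReadImage('./output/model2/result/result-28.nii')

    ref = sitk.GetArrayFromImage(ref_img)    # (122, 512, 512)
    pred = sitk.GetArrayFromImage(pred_img)  # (64, 256, 256)

    zoom = [r / p for r, p in zip(ref.shape, pred.shape)]
    pred_full = ndimage.zoom(pred, zoom, order=0)  # order=0 keeps the labels integer

    out = sitk.GetImageFromArray(pred_full.astype('uint8'))
    out.CopyInformation(ref_img)  # reuse spacing, origin and direction
    sitk.WriteImage(out, './output/model2/result/result-28-fullres.nii')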

About the data dimension issue

First of all, many thanks to the author for providing the open-source code!
When processing the data, there is a statement that adds a dimension to the 3D data with unsqueeze(0), and I don't quite understand the purpose of doing so. I hope someone can explain. Thanks a lot!
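For reference, a hedged explanation: PyTorch's Conv3d expects input shaped (N, C, D, H, W), while a volume loaded from disk is only (D, H, W), so unsqueeze(0) adds the missing channel axis (and the DataLoader later stacks samples to add the batch axis). A minimal sketch:

    import torch

    vol = torch.randn(48, 256, 256)  # (D, H, W) volume as loaded from disk
    vol = vol.unsqueeze(0)           # (1, 48, 256, 256): add the channel axis C = 1
    batch = vol.unsqueeze(0)         # (1, 1, 48, 256, 256): what Conv3d actually consumes
    out = torch.nn.Conv3d(1, 8, kernel_size=3, padding=1)(batch)
    print(out.shape)                 # torch.Size([1, 8, 48, 256, 256])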

Results on LITS

Can you show me your result from submitting to the LiTS leaderboard?
Thank you very much!

Dimension mismatch problem

Does anyone know how to solve this problem?
dice += (pred[:, i] * target[:, i]).sum(dim=1).sum(dim=1).sum(dim=1) / ((pred[:, i] * target[:, i]).sum(dim=1).sum(dim=1).sum(dim=1) +
RuntimeError: The size of tensor a (1024) must match the size of tensor b (512) at non-singleton dimension 3

Unable to run train

Hello, after downloading the newly updated files, running train says the module.nn file cannot be found and it will not run. What is going on?

Dice loss problem

Hi, thank you for your amazing work.
I think there is a problem with the Dice loss, especially when you calculate it with class weights.
Regarding this paper:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8393549/

the weight should be applied to both the numerator and the denominator.

I tried your Dice loss code and the loss gives negative values. I think this is caused by the numerator becoming too large.
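To illustrate the point, here is a hedged sketch of a class-weighted (generalized) Dice loss in which the weights multiply both the numerator and the denominator, following the cited paper; it is not the repository's implementation:

    import torch

    def weighted_dice_loss(pred, target, weights, smooth=1e-5):
        # pred, target: (N, C, D, H, W) probabilities and one-hot labels; weights: (C,)
        dims = (0, 2, 3, 4)
        intersection = (pred * target).sum(dims)         # per-class overlap
        cardinality = pred.sum(dims) + target.sum(dims)  # per-class volume
        dice = (2 * (weights * intersection).sum() + smooth) / \
               ((weights * cardinality).sum() + smooth)
        return 1 - dice  # stays in [0, 1], never negative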

Loss does not decrease and Dice is not high

Hello, I used your model to segment the liver only. The loss drops to about 0.3 and then stops decreasing, and dice1 only reaches about 0.7. Could you yourself reach 0.9+, and was that with this same network?

I have resolved the problem that the test_dice is extremely low

[screenshots]

I ran this code four times and finally found that the failures were due to my own misunderstanding of the author's intent. I then spent several days reading the code line by line, and I hope my experience can help others:

① Do NOT preprocess the test set!!! Do NOT preprocess the test set!!! Do NOT preprocess the test set!!! (Important things said three times.) Only the training set needs preprocessing; dataset_lits_test.py already processes the test images itself.

② As an earlier commenter pointed out, halve the last two values of the scale parameter of the upsample inside the map layers at lines 135-161 of ResUNet.py.

③ Uncomment line 25 of test.py (its purpose is explained in its own comment). Once that is done, you do NOT need to restore the halved parameters from ② when running test.py, contrary to what was suggested earlier.

trained model

Hello, do you have a trained model that I can use directly?

cannot open output file

Hello, when I run python ./preprocess_LiTS.py, the following error occurs in the fix_data() function: "ERROR (nifti_image_write_hdr_img2): cannot open output file './fixed_datadata/volume-52.nii'" and "ERROR (nifti_image_write_hdr_img2): cannot open output file './fixed_datalabel/segmentation-52.nii.gz'".
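A hedged observation: the missing separator in './fixed_datadata/volume-52.nii' suggests the output path is built by plain string concatenation and/or that the output folders do not exist yet. A small sketch of the usual fix (directory names are assumptions based on the error message):

    import os

    fixed_path = './fixed_data'
    for sub in ('data', 'label'):
        os.makedirs(os.path.join(fixed_path, sub), exist_ok=True)  # create the folders first
    out_file = os.path.join(fixed_path, 'data', 'volume-52.nii')   # './fixed_data/data/volume-52.nii'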

Applying this to multi-class segmentation: how should I modify the code?

As in the title, I am feeding in brain images and their segmentations (class = 87). So far I have only modified config.py:
parser.add_argument('--n_labels', type=int, default=87, help='number of classes')

When I start training, the following error appears:

File "train.py", line 89, in <module>
    train_log = train(model, train_loader, optimizer, loss, args.n_labels, alpha)
File "train.py", line 43, in train
    target = common.to_one_hot_3d(target,n_labels)
File "/home/ma/3DUNet-Pytorch/utils/common.py", line 9, in to_one_hot_3d
    one_hot = torch.zeros(n, n_classes, s, h, w).scatter_(1, tensor.view(n, 1, s, h, w), 1)
RuntimeError: index 87 is out of bounds for dimension 1 with size 87

Problems encountered during model training

Dear author, I would like to ask: when reproducing your code and reaching the training stage, my terminal shows the training progress bar and epoch, but after a while nothing moves forward. Is this because one training epoch takes a very long time, or has something gone wrong? Looking forward to your reply.

Train dice

Hello, I have a question about some of the output when running train.py: what do train dice0, dice1, and dice2 stand for? Likewise, what do val dice0, dice1, and dice2 mean? If I want to evaluate the segmentation quality, which one is the real Dice coefficient? Thanks!

I met this error when running with my data.

HI, @lee-zq

I met this error when running with my data.

My data is brain MRI OASIS data (nii file format).

What am I doing wrong?

...
)
(map1): Sequential(
(0): Conv3d(256, 2, kernel_size=(1, 1, 1), stride=(1, 1, 1))
(1): Upsample(scale_factor=(8.0, 8.0, 8.0), mode=trilinear)
(2): Softmax(dim=1)
)
)
Total number of parameters: 9498744
=======Epoch:1=======lr:0.0001
0%| | 0/3 [00:00<?, ?it/s]torch.Size([2, 1, 48, 128, 128]) torch.Size([2, 48, 128, 128])
0%| | 0/3 [00:01<?, ?it/s]
Traceback (most recent call last):
File "/Users/tessor2/3DUNet-Pytorch/train.py", line 94, in
train_log = train(model, train_loader, optimizer, loss, args.n_labels, alpha)
File "/Users/tessor2/3DUNet-Pytorch/train.py", line 46, in train
target = common.to_one_hot_3d(target,n_labels)
File "/Users/tessor2/3DUNet-Pytorch/utils/common.py", line 9, in to_one_hot_3d
one_hot = torch.zeros(n, n_classes, s, h, w).scatter_(1, tensor.view(n, 1, s, h, w), 1)
RuntimeError: index 243 is out of bounds for dimension 1 with size 2
E

Thanks in advance ~

Best,
@bemoregt.

RuntimeError: CUDA error: an illegal memory access was encountered

Hello lee-zq,

Thank you very much for your method!
I ran into this problem during training:


=======Epoch:1=======
  0%| | 0/2 [00:00<?, ?it/s]/home/jiaxi/anaconda3/envs/DKFZ/lib/python3.6/site-packages/scipy/ndimage/interpolation.py:583: UserWarning: From scipy 0.13.0, the output shape of zoom() is calculated with round() instead of int() - for these inputs the size of the returned array has changed.
  "the returned array has changed.", UserWarning)
  0%| | 0/2 [00:04<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 103, in <module>
    train_log = train(model, train_loader, optimizer)
  File "train.py", line 66, in train
    optimizer.step()
  File "/home/jiaxi/anaconda3/envs/DKFZ/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/jiaxi/anaconda3/envs/DKFZ/lib/python3.6/site-packages/torch/optim/adam.py", line 119, in step
    group['eps']
  File "/home/jiaxi/anaconda3/envs/DKFZ/lib/python3.6/site-packages/torch/optim/functional.py", line 94, in adam
    denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(eps)
RuntimeError: CUDA error: an illegal memory access was encountered

Thank you very much for your help.
