m3dv / acsconv
[IEEE JBHI] Reinventing 2D Convolutions for 3D Images - 1 line of code to convert pretrained 2D models to 3D!
License: Apache License 2.0
Can someone help me get an implementation of ACS Conv in Keras?
Thanks in advance!
Yusuf
Thanks for the amazing work on this. Is there a specific reason for this dependency requirement or has this not been evaluated for newer releases?
Cheers,
Sarthak
Hello,
Although the paper says the kernel partition that maps axial features has shape (Co, Ci, k, k, 1), the implementation
weight_a = weight[0 : acs_kernel_split[0]].unsqueeze(2)
makes this weight (Co, Ci, 1, k, k) instead. Only weight_c
matches the paper, since weight_s
is also not unsqueezed in the proper dimension.
I do not believe this matters for the experiments you have conducted, but for my own experiments I need to match the kernel partitions to views of the 2D slices, so I noticed this while verifying what is happening.
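The mismatch is easy to check directly; a minimal sketch, where Co, Ci, and k are placeholder sizes:

```python
import torch

Co, Ci, k = 2, 4, 3
w = torch.randn(Co, Ci, k, k)  # one 2D kernel partition

# unsqueeze(2) inserts the singleton axis *before* both spatial dims ...
assert w.unsqueeze(2).shape == (Co, Ci, 1, k, k)
# ... whereas the (Co, Ci, k, k, 1) layout described in the paper
# needs the singleton axis last:
assert w.unsqueeze(4).shape == (Co, Ci, k, k, 1)
```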
Any plans to integrate k fold cross validation?
Thanks
It would be great if this was available on conda in addition to pip for the wider community.
Happy to work on this, if you want.
After training with "python train_segmentation.py", the README doesn't describe the evaluation step or how to run the demo. I don't know how to evaluate the model or run the demo. Could you provide the details?
Hi, I'm using your library to convert segmentation models from 2D to 3D. Transposed convolution layers are common in segmentation, and your library does not seem to cover this conversion. We forked your repo and implemented changes to support these layers in the following repo, but the code does not work. Debugging points to the acs_conv_f function in functional.py, where the kernel is split into three parts (a, c, s) and the three 3D convolution outputs are then concatenated. When the same is done for transposed convolutions, the output sizes do not match, and an error is thrown because arrays with different dimensions cannot be concatenated.
Would you be able to shed some light on what we are implementing wrong?
Thank you in advance!
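For what it's worth, the size mismatch can be reproduced without the library at all: under transposed convolution, each of the three kernel partitions enlarges a different pair of spatial dims. A sketch with made-up channel sizes:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 4, 8, 8, 8)
# ConvTranspose3d weight layout is (C_in, C_out, kD, kH, kW);
# the three ACS-style kernel partitions differ in which dim has size 1:
w_a = torch.randn(4, 2, 1, 3, 3)
w_c = torch.randn(4, 2, 3, 1, 3)
w_s = torch.randn(4, 2, 3, 3, 1)

# transposed conv output size per dim is (in - 1) * stride - 2 * pad + k,
# so each partition grows a *different* pair of dims:
ya = F.conv_transpose3d(x, w_a)  # (1, 2, 8, 10, 10)
yc = F.conv_transpose3d(x, w_c)  # (1, 2, 10, 8, 10)
ys = F.conv_transpose3d(x, w_s)  # (1, 2, 10, 10, 8)
# torch.cat([ya, yc, ys], dim=1) therefore fails; the three outputs would
# need per-partition padding or cropping before concatenation.
```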
After running the proof-of-concept experiment, how do I evaluate the model and run the demo?
Hello
Really excited about this. Thanks for sharing your work; looking forward to more updates.
Hi, I am trying to run the train_3d experiment. I found that in base_convert.py, calling convert_module raises NotImplementedError no matter what input module is given. This is quite confusing, as it terminates the program without any expected output. Would you mind telling me how to solve this?
Hello, thank you for sharing your work. I wanted to try your program, but I got an AttributeError.
from torchvision.models import resnet18
from acsconv.converters import ACSConverter
model_2d = resnet18(pretrained=True)
model_3d = ACSConverter(model_2d)
with this error:
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3427, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-126-2824c693a04c>", line 1, in <module>
model_3d = ACSConverter(model_2d)
File "/home/loiseau/.local/lib/python3.9/site-packages/acsconv/converters/acsconv_converter.py", line 29, in __init__
model = self.convert_module(model)
File "/home/loiseau/.local/lib/python3.9/site-packages/acsconv/converters/base_converter.py", line 30, in convert_module
kwargs = {k: getattr(child, k) for k in arguments}
File "/home/loiseau/.local/lib/python3.9/site-packages/acsconv/converters/base_converter.py", line 30, in <dictcomp>
kwargs = {k: getattr(child, k) for k in arguments}
File "/home/loiseau/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Conv2d' object has no attribute 'device'
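For context, in recent PyTorch releases nn.Conv2d.__init__ gained device and dtype factory arguments, which appear in co_varnames but are never stored as attributes on the module, which would explain this traceback. A defensive workaround (my assumption, not the maintainers' fix) is to filter the argument names before the getattr lookup:

```python
import torch.nn as nn

child = nn.Conv2d(3, 8, 3)
arguments = nn.Conv2d.__init__.__code__.co_varnames[1:]
# keep only constructor names the module actually exposes as attributes,
# skipping e.g. `device`/`dtype` and any internal local variables:
kwargs = {k: getattr(child, k) for k in arguments if hasattr(child, k)}
```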
Hello
Thanks for sharing your work, could you please share as to how you generate the npz files for lidc dataset?
EDIT: I see that the npz files contain four entries; I need some clarification:
1. Does every (80, 80, 80) volume contain a nodule?
2. What are answer1, answer2, answer3, answer4?
Thanks in advance.
Hi there.
Thanks for your excellent work!
I'm trying to modify the ConvNeXt in your repo into a feature-extraction module. My inputs are 3 slices of grayscale medical images, which I concatenate into a 3 x 512 x 512 tensor, so my dataloader returns data of shape [batch, 3, 512, 512].
This setting works on normal ConvNeXts, but for your ACSConv version I don't know the expected input shape, so an error occurs. I wonder what the default input size is if I want to make use of the context information.
My questions are as follows:
If so,
This question has been bothering me for a long time. Thanks for your consideration!
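Without knowing the intended usage, one plausible reading (an assumption on my part, not confirmed by the repo) is that an ACSConv-converted model expects a 5D volumetric batch (B, C, D, H, W), so three stacked grayscale slices would become a depth-3, single-channel volume:

```python
import torch

x2d = torch.randn(8, 3, 512, 512)  # batch of 3 stacked grayscale slices
# treat the slice axis as depth D, with a single channel:
x3d = x2d.unsqueeze(1)             # -> (8, 1, 3, 512, 512)
assert x3d.shape == (8, 1, 3, 512, 512)
```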
[---------------------------------------------------------------------------]
AttributeError Traceback (most recent call last)
<ipython-input-4-2a0f9d424028> in <module>
4 output_2d = model_2d(input_2d)
5
----> 6 model_3d = ACSConverter(model_2d)
7 # once converted, model_3d is using ACSConv and capable of processing 3D volumes.
8 B, C_in, D, H, W = (1, 3, 64, 64, 64)
~/miniconda3/envs/rENE/lib/python3.6/site-packages/ACSConv-0.1.0-py3.6.egg/acsconv/converters/acsconv_converter.py in __init__(self, model)
27 """ Save the weights, convert the model to ACS counterpart, and then reload the weights """
28 preserve_state_dict = model.state_dict()
---> 29 model = self.convert_module(model)
30 model.load_state_dict(preserve_state_dict,strict=False) #
31 self.model = model
~/miniconda3/envs/rENE/lib/python3.6/site-packages/ACSConv-0.1.0-py3.6.egg/acsconv/converters/base_converter.py in convert_module(self, module)
24 if isinstance(child, nn.Conv2d):
25 arguments = nn.Conv2d.__init__.__code__.co_varnames[1:]
---> 26 kwargs = {k: getattr(child, k) for k in arguments}
27 kwargs = self.convert_conv_kwargs(kwargs)
28 setattr(module, child_name, self.__class__.target_conv(**kwargs))
~/miniconda3/envs/rENE/lib/python3.6/site-packages/ACSConv-0.1.0-py3.6.egg/acsconv/converters/base_converter.py in <dictcomp>(.0)
24 if isinstance(child, nn.Conv2d):
25 arguments = nn.Conv2d.__init__.__code__.co_varnames[1:]
---> 26 kwargs = {k: getattr(child, k) for k in arguments}
27 kwargs = self.convert_conv_kwargs(kwargs)
28 setattr(module, child_name, self.__class__.target_conv(**kwargs))
~/miniconda3/envs/rENE/lib/python3.6/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
946 return modules[name]
947 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 948 type(self).__name__, name))
949
950 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
AttributeError: 'Conv2d' object has no attribute 'kernel_size_'
model_2d = resnet18(pretrained=True)
Pretrained ResNet is not converted properly.
I'd like to try using i3d as well as your method to do medical image segmentation. Would you like to share the i3d pipeline? Thanks.
I tried to convert my CNN using your method. I got this error:
ValueError: optimizer got an empty parameter list
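The issue gives no code, but this error generally means the iterable handed to the optimizer yielded no parameters. A minimal sketch of two common causes (my assumption about what happened here, since the original CNN is not shown):

```python
import torch.nn as nn

# "optimizer got an empty parameter list" is raised when the iterable passed
# to the optimizer yields nothing. Two frequent causes: storing sub-modules
# in a plain Python list (parameters never get registered), or reusing an
# already-exhausted parameters() generator.
class Broken(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = [nn.Conv2d(3, 8, 3)]  # plain list: params NOT registered

class Fixed(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Conv2d(3, 8, 3)])  # registered

assert len(list(Broken().parameters())) == 0   # triggers the ValueError
assert len(list(Fixed().parameters())) > 0     # works with an optimizer
```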
Hi, great repository. I've found one issue, that when an AdaptiveMaxPooling layer is converted it will always return the pooling indices afterwards.
Example code:
>>> import torch
>>> from acsconv import converters
The ``converters`` are currently experimental. It may not support operations including (but not limited to) Functions in ``torch.nn.functional`` that involved data dimension
>>> x2 = torch.randn(4,3,128,128)
>>> x3 = torch.randn(4,3,128,128,128)
>>> model2 = torch.nn.Sequential(torch.nn.Conv2d(3, 9, 3), torch.nn.AdaptiveMaxPool2d((32,32)))
>>> model2(x2).shape
torch.Size([4, 9, 32, 32])
>>> converter = converters.ACSConverter
>>> model3 = converter(model2)
>>> model3(x3).shape
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'tuple' object has no attribute 'shape'
>>>
As far as I can see, the problem is that _triple_same
in the base converter file converts return_indices=False
to return_indices=(False, False, False)
, which is not False
: it is a truthy tuple, so AdaptiveMaxPool3d interprets it as True
.
A simple bugfix would be to add if not isinstance(kwargs[k], bool):
inside the loop over kwargs, so boolean arguments are left untouched. I tested it and it seems to solve the problem. I'm happy to provide a pull request if you agree.
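A sketch of the proposed guard, based on my reading of the report (not the repo's actual _triple_same code); note that the bool check must come before the int check, since bool is a subclass of int in Python:

```python
def _triple_same(v):
    """Expand a scalar conv kwarg to 3D, but leave bools (e.g. return_indices) alone."""
    if isinstance(v, bool):   # must come first: isinstance(True, int) is True
        return v
    if isinstance(v, int):
        return (v, v, v)
    return v

assert _triple_same(2) == (2, 2, 2)
assert _triple_same(False) is False  # stays falsy, so no indices are returned
```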