xmindflow / daeformer

103 stars · 3 watchers · 11 forks · 212 KB

[MICCAI 2023] DAE-Former: Dual Attention-guided Efficient Transformer for Medical Image Segmentation

Home Page: https://arxiv.org/abs/2212.13504

License: MIT License

Python 100.00%
efficient-transformers medical-image-analysis segmentation transformer medical-image-segmentation unet-image-segmentation deep-learning pytorch

daeformer's Introduction

xmindflow.github.io

daeformer's People

Contributors

amirhossein-kz avatar nitr098 avatar renearimond avatar rezazad68 avatar



daeformer's Issues

How to train on a skin disease dataset?

Hello dear author, how do you train the model on a skin disease dataset? I cannot get training on the ISIC datasets to work. Could you show me the code you used to train on the skin disease dataset? I am a student from Yunnan University, and I will use your code only for learning. Thank you very much.

Will the model trained with the BTCV dataset work well on other full-body CT images?

Hello, thanks for your great work on the multi-organ segmentation task.
I am currently working on a full-body segmentation task as well.
My task is to generate pseudo labels for prostate cancer from PET/CT images, and accurate segmentation maps for the healthy organs are essential for that.

But of course, the intensity values and field of view of our CT images are slightly different from the BTCV dataset.
I have tried the pretrained weights on our dataset (I appreciate that you provided the pretrained checkpoints), but the model did not seem to segment our data especially well.

Do you have any recommendations, in terms of pre-processing, for making your pretrained model work well on our dataset?
Currently we do not have any ground-truth labels, so rough pseudo labels for the organs are enough, but it is important that the kidneys are segmented correctly.

Also, how did you normalize the intensity values to [0, 1]? Was it simply min-max scaling?

I'll be looking forward to your reply.
Thanks.
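On the normalization question above: a common abdominal-CT recipe is to clip the Hounsfield units to a soft-tissue window and then min-max scale the clipped range to [0, 1]. The window bounds below are a frequently used choice for abdominal CT, not necessarily the values the authors used; this is an illustrative sketch only.

```python
import numpy as np

def normalize_ct(volume, hu_min=-125.0, hu_max=275.0):
    """Clip a CT volume to a soft-tissue HU window, then min-max
    scale the clipped values to [0, 1]. Window bounds are an
    assumption, not taken from the DAE-Former repository."""
    volume = np.clip(volume.astype(np.float32), hu_min, hu_max)
    return (volume - hu_min) / (hu_max - hu_min)

slice_ = np.array([[-1000.0, 0.0], [40.0, 400.0]])
print(normalize_ct(slice_))  # air clips to 0.0, bone clips to 1.0
```

Matching the clipping window (and voxel spacing) used at training time is usually the first thing to check when a pretrained CT model underperforms on a new scanner.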

DAE-Former (skip connections)

Hello author, how should I modify the code to reproduce the skip-connection = 0, 1, 2, 3 ablation in your paper and obtain the corresponding Dice scores?

Using x2 as the input for the key?

keys = x2.transpose(1, 2)
queries = x2.transpose(1, 2)
values = x1.transpose(1, 2)

As shown in the snippet above, I cannot understand why x2 is used as the input for the key. The SCCA figure in the paper shows the key coming from x1.
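For reference, the wiring the snippet describes can be sketched as plain scaled dot-product cross-attention where queries and keys come from one stream (x2) and values from the other (x1). This is an illustrative toy, not code from the repository, and it does not reproduce the efficient-attention factorization DAE-Former actually uses.

```python
import torch

def scca_style_attention(x1, x2):
    """Cross-attention with queries/keys from x2 and values from x1,
    mirroring the snippet in this issue (the paper's SCCA figure
    instead draws the key from x1). x1, x2: [B, N, C] token tensors.
    Illustrative sketch only."""
    queries, keys, values = x2, x2, x1
    scale = keys.shape[-1] ** 0.5
    attn = torch.softmax(queries @ keys.transpose(1, 2) / scale, dim=-1)
    return attn @ values  # [B, N, C]
```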

Dice loss is weird?

Hi @rezazad68

I hope you have a great last week of the year!

Can you explain your implemented dice loss:

    def _dice_loss(self, score, target):
        target = target.float()
        smooth = 1e-5
        intersect = torch.sum(score * target)
        y_sum = torch.sum(target * target)
        z_sum = torch.sum(score * score)
        loss = (2 * intersect + smooth) / (z_sum + y_sum + smooth)
        loss = 1 - loss
        return loss

Why are y_sum and z_sum computed from the squares of the target and score tensors, respectively? Following the standard formula, we would just sum them without squaring.
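For context, the squared-denominator form is a known variant of the soft Dice loss (it appears, for example, in the V-Net paper), and for hard binary targets `target * target == target`, so only the prediction term differs. A minimal sketch comparing the two, assuming `score` holds per-pixel probabilities:

```python
import torch

def dice_loss_squared(score, target, smooth=1e-5):
    # Denominator uses sums of squares, as in the repo's _dice_loss.
    intersect = torch.sum(score * target)
    denom = torch.sum(score * score) + torch.sum(target * target)
    return 1 - (2 * intersect + smooth) / (denom + smooth)

def dice_loss_plain(score, target, smooth=1e-5):
    # Denominator uses plain sums, matching the textbook Dice formula.
    intersect = torch.sum(score * target)
    denom = torch.sum(score) + torch.sum(target)
    return 1 - (2 * intersect + smooth) / (denom + smooth)

pred = torch.tensor([0.9, 0.1, 0.8, 0.2])
gt = torch.tensor([1.0, 0.0, 1.0, 0.0])
print(dice_loss_squared(pred, gt).item(), dice_loss_plain(pred, gt).item())
```

Both are valid surrogates that agree when predictions are hard 0/1 values; they differ only in how soft predictions are penalized in the denominator.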

Visualization

Could you share the visualization code for the multi-organ segmentation?

How to apply DAE-Former to 1-D inputs [B, C, H]?

Hello, thank you very much for proposing DAE-Former; it is very innovative. DAE-Former currently operates on two-dimensional inputs [B, C, H, W], but when the data is one-dimensional, [B, C, H], how can the model be used? Could you provide a 1-D version of DAE-Former for reference?
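One route, sketched below, is to swap the 2-D patch embedding and projections for their 1-D counterparts: the efficient-attention arithmetic itself is agnostic to spatial rank once tokens are flattened. This is an illustrative adaptation written from scratch, not code from the DAE-Former repository, and the normalization axes follow the generic efficient-attention recipe rather than the authors' exact module.

```python
import torch
import torch.nn as nn

class EfficientAttention1D(nn.Module):
    """Efficient (linear-complexity) attention for [B, C, H] inputs.
    Only Conv2d -> Conv1d changes versus a 2-D version; the K^T V
    contraction is identical. Illustrative sketch, not repo code."""
    def __init__(self, channels):
        super().__init__()
        self.to_qkv = nn.Conv1d(channels, 3 * channels, kernel_size=1)
        self.proj = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):                    # x: [B, C, H]
        q, k, v = self.to_qkv(x).chunk(3, dim=1)
        q = q.softmax(dim=1)                 # normalize over channels
        k = k.softmax(dim=-1)                # normalize over positions
        context = k @ v.transpose(1, 2)      # [B, C, C], linear in H
        out = context.transpose(1, 2) @ q    # back to [B, C, H]
        return self.proj(out)

x = torch.randn(2, 32, 128)
print(EfficientAttention1D(32)(x).shape)
```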

Regarding the code

Hello author, could you explain how to run the code? This would help me a lot.
Thank you.

Questions about ISIC2018

Dear author,
I have read your DAE-Former article and tested the Synapse dataset using the weights you provided. The final numbers are consistent with those reported in the paper; I believe this is a great paper.
But when I used ISIC2018, the DSC only reached 85.7. I followed the link you provided:
https://github.com/xmindflow/deformableLKA/tree/main/2D/skin_code

I have not modified any of the key code:

optimizer = optim.SGD(net.parameters(), lr=args.base_lr, momentum=0.9, weight_decay=0.0001)

scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.5, patience=10)

base_lr: 0.01, batch_size: 4, max_epoch: 300

Thank you very much

Request for the evaluation code

Dear author, could you provide the code for calculating the evaluation metrics?

Hello author, I have some code questions

Hello, author. First of all, thank you very much for generously providing the code. I have a small question: after replacing the dataset with my own and performing binary segmentation, why are Dice and HD both 0 at test time?
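A Dice of exactly 0 on every case usually means the predicted label ids and the ground-truth ids never overlap (for instance, masks stored as 0/255 while the code compares against 1). A minimal per-class Dice check, independent of the repository's evaluation code, can expose this quickly:

```python
import numpy as np

def dice_coefficient(pred, gt, label=1):
    """Per-class Dice over integer label maps. If this returns 0 for
    every volume, print np.unique(pred) and np.unique(gt) first: a
    label-id mismatch (e.g. 255 vs 1) is the usual culprit."""
    p = (pred == label)
    g = (gt == label)
    inter = np.logical_and(p, g).sum()
    denom = p.sum() + g.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

pred = np.array([[0, 1], [1, 1]])
gt = np.array([[0, 1], [0, 1]])
print(sorted(np.unique(gt)))       # sanity-check the label ids
print(dice_coefficient(pred, gt))  # 0.8
```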

Verification of the details of the DAE-Former paper

Hello author, I have recently been doing research on medical image segmentation. Regarding your DAE-Former article, I have doubts about the training-strategy code for the ISIC datasets, which you did not disclose; this part of the code is very important to me. When I tried to replicate it myself, my results were much lower. I hope to get your reply.

Custom dataset

Hello, thank you for sharing your work. I would like to train on my custom dataset, which consists of 3-channel RGB images, but during inference I run into a problem because of the image channels. What changes do I need to make to your code for a custom dataset? Training runs fine, but inference throws an error. What do I need to change in the inference part to compute the evaluation metrics? Thank you.
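One common source of such channel mismatches is a network configured for single-channel inputs receiving 3-channel images (or vice versa) only in the inference path. A hedged sketch of the usual workarounds, written independently of the repository's actual configuration: either collapse RGB to grayscale before the forward pass, or rebuild the patch-embedding convolution with the right `in_channels`.

```python
import torch

def to_model_channels(img_rgb, model_in_chans=1):
    """Adapt a [B, 3, H, W] RGB batch to a model expecting
    model_in_chans input channels. The luma weights below are the
    standard ITU-R BT.601 coefficients; whether the DAE-Former code
    needs this at all depends on how its patch embedding was built."""
    if model_in_chans == 1:
        weights = torch.tensor([0.299, 0.587, 0.114],
                               device=img_rgb.device)
        return (img_rgb * weights.view(1, 3, 1, 1)).sum(dim=1, keepdim=True)
    return img_rgb  # channel counts already match

img = torch.ones(1, 3, 4, 4)
print(to_model_channels(img).shape)  # torch.Size([1, 1, 4, 4])
```

Whichever path is chosen, the training and inference pipelines must apply the identical channel handling, or metrics will be computed on mismatched tensors.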
