
dsrl's Introduction

👋 Hi there, I am Hongyi, aka Dotman

  • Mainly focus on medical image processing
  • Also interested in lightweight models and image reconstruction
  • Currently working on multiple instance learning for WSIs
  • Love all kinds of ball games 🏸🎾🏓
  • Find me on Steam and let's play some Dota!
🛠️ Languages and Tools
Python Java C Cpp Vue Markdown Visual Studio Code HTML5 JavaScript Node.js MySQL Git Terminal

✉️ Connect with me

Github Badge LinkedIn GrandChallenge
Gmail Mail
Reddit Twitch Weibo Douban Steam

Blog


😎 Github Stats


⚡ Recent Activities

  1. ❗ Opened issue #1 in efss24/SPMLD
  2. ❗ Opened issue #29 in wisdomikezogwo/quilt1m
  3. 🗣 Commented on #45 in Dootmaan/MT-UNet
  4. 🗣 Commented on #44 in Dootmaan/MT-UNet
  5. 🗣 Commented on #24 in Dootmaan/DSRL

dsrl's People

Contributors

dootmaan


dsrl's Issues

Regarding your final experiment setting

Hi there,

Thanks for sharing your code. I wonder if the experiment settings in your current train.py can achieve mIoU = 0.6768 on Cityscapes, e.g. with DeepLab v3 as the SSSR backbone, init_lr = 0.005 with a fast decay schedule, and w_sr = w_fa = 0.5?
I see you set the default number of epochs for Cityscapes to 1000 and mentioned that the model converges very fast. May I know how many epochs you trained to get a relatively good mIoU (~60%)? For me, training for 10 epochs gives a performance of around 30%.
What about your experiment settings on VOC2012? I tried your bash script, but the model didn't converge at all within 50 epochs.
Thank you in advance :)

About the SISR branch

Hello, thank you very much for sharing your work. May I ask which file contains the implementation of the super-resolution (SISR) branch mentioned in the paper, and which file contains the MSE loss mentioned in the paper?

I can't seem to train the model to get a good result

Hi there,
Thanks for sharing your code. I wonder if the experiment settings in your current train.py can achieve mIoU = 0.6768 on Cityscapes.
I trained the model, but I can't get a good result.
Can you share the settings for getting mIoU = 0.6768?

some questions about environment

Thank you for your work! I would like to ask about your GPU configuration and the specific versions of the dependent packages you used.

What is subscale in FALoss?

Hi, Thanks for sharing the code.
I am curious how you determined subscale = 0.0625.
Also, have you trained the model with w_sr = 0.1 and w_fa = 1.0? Those are the values the paper uses.

Looking forward to your reply.
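For context, here is a minimal sketch of how a subscale factor like 0.0625 (= 1/16) is typically used in a feature-affinity loss: the feature maps are average-pooled by 1/subscale before the pairwise similarity matrices are built, which keeps the N x N affinity matrices tractable in memory. This is illustrative code under my own naming, not the repo's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FALoss(nn.Module):
    """Sketch of a feature-affinity loss with a subscale factor.

    subscale=0.0625 pools the features by 16x in each spatial dimension,
    so a full-resolution similarity matrix never has to be materialized.
    """
    def __init__(self, subscale=0.0625):
        super().__init__()
        self.pool = nn.AvgPool2d(int(1 / subscale))

    def forward(self, feat_seg, feat_sr):
        f1 = self.pool(feat_seg).flatten(2)  # (B, C, N) with N = pooled H*W
        f2 = self.pool(feat_sr).flatten(2)
        # pairwise (N x N) similarity matrices for each branch
        a1 = torch.bmm(f1.transpose(1, 2), f1)
        a2 = torch.bmm(f2.transpose(1, 2), f2)
        return F.l1_loss(a1, a2)
```

With subscale = 0.0625 a 32x32 feature map is pooled to 2x2, so each affinity matrix is only 4x4 per sample.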

Question about FA module

The paper doesn't say much about the feature transform module (FTM), so I'm not sure whether it should be a 19->19-channel 1x1 conv or a 19->3-channel conv. In my experiment, a 19->19-channel FTM, with the FA loss computed against the 19-channel SISR feature, achieves an mIoU of 0.6225, while 19->3 only achieves 0.5563. The normalization method in the FA module also confuses me, and I actually found that these normalizations make the result worse. (So by default the code here doesn't use normalization. If you want to try it, please uncomment lines 16, 18, 23, and 25 in utils/fa_loss.py.)
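For readers weighing the same choice, the two candidate FTM shapes discussed above can be sketched as plain 1x1 convolutions (hypothetical code; the 19 channels follow the Cityscapes class count, and the layer names are my own):

```python
import torch
import torch.nn as nn

# Two candidate feature-transform modules (FTM), each a single 1x1 conv.
ftm_19to19 = nn.Conv2d(19, 19, kernel_size=1)  # variant reported above at mIoU 0.6225
ftm_19to3 = nn.Conv2d(19, 3, kernel_size=1)    # variant reported above at mIoU 0.5563

x = torch.randn(1, 19, 32, 32)  # e.g. SSSR logits for the 19 Cityscapes classes
y19 = ftm_19to19(x)  # (1, 19, 32, 32): compared against a 19-channel SISR feature
y3 = ftm_19to3(x)    # (1, 3, 32, 32): compared against an RGB-like SISR output
```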

Some problems with data processing when training on Cityscapes

I encountered the following issues during training:

Traceback (most recent call last):
  File "/data2/zixuan/DSRL-subpixel/train.py", line 313, in <module>
    main()
  File "/data2/zixuan/DSRL-subpixel/train.py", line 306, in main
    trainer.training(epoch)
  File "/data2/zixuan/DSRL-subpixel/train.py", line 110, in training
    output,output_sr,fea_seg,fea_sr = self.model(input_img)
  File "/home/zixuan/anaconda3/envs/DSRL/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/zixuan/anaconda3/envs/DSRL/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 159, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/zixuan/anaconda3/envs/DSRL/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data2/zixuan/DSRL-subpixel/modeling/deeplab.py", line 46, in forward
    x = self.aspp(x)
  File "/home/zixuan/anaconda3/envs/DSRL/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data2/zixuan/DSRL-subpixel/modeling/aspp.py", line 70, in forward
    x5 = self.global_avg_pool(x)
  File "/home/zixuan/anaconda3/envs/DSRL/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/zixuan/anaconda3/envs/DSRL/lib/python3.6/site-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/home/zixuan/anaconda3/envs/DSRL/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/zixuan/anaconda3/envs/DSRL/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 136, in forward
    self.weight, self.bias, bn_training, exponential_average_factor, self.eps)
  File "/home/zixuan/anaconda3/envs/DSRL/lib/python3.6/site-packages/torch/nn/functional.py", line 2054, in batch_norm
    _verify_batch_size(input.size())
  File "/home/zixuan/anaconda3/envs/DSRL/lib/python3.6/site-packages/torch/nn/functional.py", line 2037, in _verify_batch_size
    raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 256, 1, 1])

May I ask whether my data was downloaded incorrectly, or whether there is an issue with my model parameter settings? Could you also share the directory format you use for storing the Cityscapes data?
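For what it's worth, this error usually means the last batch of an epoch contained a single sample: ASPP's global average pooling produces a 1x1 feature map, and BatchNorm cannot compute statistics over a single value per channel in training mode. One common workaround, assuming a standard PyTorch DataLoader, is to drop the undersized final batch:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 9 samples with batch_size=4 would leave a final batch of 1 sample,
# which triggers the BatchNorm error above; drop_last discards it.
dataset = TensorDataset(torch.randn(9, 3, 8, 8))
loader = DataLoader(dataset, batch_size=4, drop_last=True)
num_batches = sum(1 for _ in loader)  # 2 full batches; the size-1 remainder is dropped
```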

feature visualization

Hi, nice to see that you have replicated the implementation.
Have you visualized the features from the decoders of both branches?
Checking the feature maps might help verify the correctness of the implementation.
Thanks.

dataset instructions

Hi. I'm a noob.

I'm able to follow your clear instructions so far. (Thanks!)

I'm not sure exactly about the datasets I need.

I downloaded "checkpoint.pth.tar" and put it in my "pytorch-deeplab-xception" folder. Is that correct? Is that enough? Is this the folder that contains all the datasets referenced in mypath.py?

Any assistance would be so much appreciated.

All the best.

About test code

I have trained my own model and I want to test it on my own data. How can I run it on my test set, and do you have test code?

about input size

I see the model's input is reduced by half, while the SSSR and SISR outputs are the original input size; so I think the output size is twice as big as the (downsampled) input size.

some questions about datasets

Thanks for your work. I have cloned your code, but ran into some trouble with the datasets. For example, when I run the main in 'cityscapes.py' it prints the correct picture, with image and label concatenated together, but the dataloader in 'train.py' returns a distorted picture (the colours are wrong). As a result, output_sr is a completely black picture, and output_seg has the same problem.

about half of fea_seg is zero

Hi there,

Thank you for your work. I cloned your code, uncommented lines 16, 18, 23, and 25 in 'util/fa_loss.py', and ran 'train.py' on Cityscapes. I then get NaN or Inf values caused by the DeepLab v3 features; about half of them are zero. What should I do to prepare?

Thank you.

SISR

Hello, could it be that SISR follows "Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network", i.e. uses sub-pixel convolution?
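If it is the ESPCN-style sub-pixel head, it would look roughly like this sketch (the 64-channel input, 2x upscale factor, and names are my own assumptions, not the repo's code):

```python
import torch
import torch.nn as nn

upscale = 2
# A conv expands the channels by upscale**2, then PixelShuffle rearranges
# those channels into a spatially 2x larger map (ESPCN-style sub-pixel conv).
sisr_head = nn.Sequential(
    nn.Conv2d(64, 3 * upscale ** 2, kernel_size=3, padding=1),
    nn.PixelShuffle(upscale),
)
out = sisr_head(torch.randn(1, 64, 16, 16))  # -> (1, 3, 32, 32)
```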

About upsampling

Hello, thank you very much for sharing your work. I'd like to ask about these two lines:
x_seg_up = F.interpolate(x_seg, size=input.size()[2:], mode='bilinear', align_corners=True)
x_seg_up = F.interpolate(x_seg_up, size=[2*i for i in input.size()[2:]], mode='bilinear', align_corners=True)
Why upsample twice here, instead of upsampling directly to the final size in one step?
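For comparison, the two-step upsampling above can also be written as a single interpolate call to the final size; the output shapes match, though two successive bilinear passes are not bit-identical to one direct pass (all shapes below are illustrative):

```python
import torch
import torch.nn.functional as F

input = torch.randn(1, 3, 64, 64)   # network input (already downsampled)
x_seg = torch.randn(1, 19, 16, 16)  # decoder output before upsampling

# Two-step: first to the input size, then doubled.
a = F.interpolate(x_seg, size=input.size()[2:], mode='bilinear', align_corners=True)
a = F.interpolate(a, size=[2 * i for i in input.size()[2:]], mode='bilinear', align_corners=True)

# Single-step: straight to twice the input size.
b = F.interpolate(x_seg, size=[2 * i for i in input.size()[2:]], mode='bilinear', align_corners=True)
```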
