
Comments (11)

qc17-THU commented on August 30, 2024

Hi,

here are the answers to your questions:

  1. The augmented trainset and validset were both generated from the 35 groups for training (see the MATLAB codes for data augmentation), and the assessment was performed on the other 15 groups (testset) which never appear in the training process.
  2. The folder 'testset' in the file structure is just to indicate where you could place your testing data.
  3. Because the NRMSE metric delivers similar information to PSNR, and we found that the normalization operation in NRMSE makes it more robust when evaluating fluorescence images.
  4. Even SRCNN performs observably better than bicubic interpolation when trained in a fully supervised manner, so we think there's no need to compare with it.
  5. The testing data we uploaded are cell 2 for MTs and cell 4 for F-actin in BioSR.
  6. Sorry that I have no time to do so recently, but I have provided a read_mrc.py file alongside the BioSR dataset on Figshare to help you read MRC files with Python. You could then use a Python package, e.g., tifffile, to resave the images as TIF files.
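For readers reassembling this pipeline, here is a minimal sketch of the idea (a simplified, hypothetical reader that ignores extended headers; the read_mrc.py shipped with BioSR on Figshare is the authoritative implementation):

```python
# Simplified MRC reader: parse nx, ny, nz and the data mode from the
# 1024-byte MRC header, then load the image stack with NumPy.
# Assumes a little-endian file with no extended header (real MRC files
# may carry one; use the read_mrc.py provided with BioSR for those).
import struct
import numpy as np

# Subset of MRC "mode" codes mapped to NumPy dtypes.
MRC_MODES = {0: np.int8, 1: np.int16, 2: np.float32, 6: np.uint16}

def read_mrc(path):
    """Return the image stack of an MRC file as an (nz, ny, nx) array."""
    with open(path, "rb") as f:
        header = f.read(1024)                      # standard header size
        nx, ny, nz, mode = struct.unpack("<4i", header[:16])
        data = np.frombuffer(f.read(), dtype=MRC_MODES[mode],
                             count=nx * ny * nz)
    return data.reshape(nz, ny, nx)
```

The resulting array could then be written out with, e.g., `tifffile.imwrite("stack.tif", read_mrc("cell.mrc"))`.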

Hope you and your colleagues enjoy the code; we look forward to your work and findings :)

from dl-sr.

sbelharbi commented on August 30, 2024

hi,
thanks for your quick response.

  1. The augmented trainset and validset were both generated from the 35 groups for training (see the MATLAB codes for data augmentation), and the assessment was performed on the other 15 groups (testset) which never appear in the training process.

This wasn't clear to me.

4. Even SRCNN performs observably better than bicubic interpolation when trained in a fully supervised manner, so we think there's no need to compare with it.

Over natural-scene images, yes, deep models seem better than bicubic, but I wasn't sure about fluorescence microscopy. I say this because we have run some initial experiments on a tiny fluorescence dataset, and both the deep model and bicubic gave the same performance. I'm not sure whether that is due to the small amount of data or because bicubic simply does a good job as well; that's why we are looking for a large dataset.
It would be very helpful if you reported your results using bicubic over your dataset, at least here on GitHub.
It's fine if you don't have time; once we have your data, we will try to do the evaluation.

5. The testing data we uploaded are cell 2 for MTs and cell 4 for F-actin in BioSR.

Thanks. I don't think I have seen this information, unless I've missed it. It's very helpful.

Thanks again.


qc17-THU commented on August 30, 2024

Hi,

This wasn't clear to me.

Specifically, cells 1-15, 16-45, and 46-50 were used for testing, training, and validation, respectively.

It would be very helpful if you reported your results using bicubic over your dataset, at least here on GitHub.

It is true that bicubic is a strong competitor for unsupervised or weakly supervised DLSR models, especially for natural-image SR tasks, which I believe is your area of expertise. However, super-resolution in microscopy not only means improving the sampling rate; it also needs to enhance the optical resolution, i.e., narrowing or sharpening the structures. That is quite different from natural-image SR. To this end, DLSR networks trained in a supervised manner outperform bicubic by a large margin in this area.


sbelharbi commented on August 30, 2024

Do you have the performance of bicubic vs. your deep model on your dataset? Can you report it over the test set?
Thanks


qc17-THU commented on August 30, 2024

Bicubic is not comparable with DLSR models for the microscopic image SR task; I have explained the reason above. You could also refer to Supplementary Note 1 of our Nat. Methods paper (https://doi.org/10.1038/s41592-020-01048-5) for more on this.


sbelharbi commented on August 30, 2024

Here are two hypotheses:

  1. Your hypothesis h1: "DLSR networks trained in a supervised manner outperform bicubic by a large margin in this area."
  2. My hypothesis h2: "Your DLSR model might have performance only as good as the bicubic method."

You are asking me, and everyone else, to accept your h1 without any empirical evidence. That's not how things go: you need to provide evidence to support your hypothesis, which would also be good for your paper.

Can you convince me that your hypothesis h1 is right, and mine (h2) is wrong, by comparing your deep model vs. bicubic interpolation head to head over the test set, using the same metrics presented in your paper? That would settle this discussion once and for all.

It is really strange to say that "bicubic is not comparable with DLSR models for the microscopic image SR task" and use that as an argument to dismiss the comparison. They are two valid methods commonly used (bicubic more so, before deep models) to increase the resolution of images by a factor x, independently of the domain. What if h1 is wrong? I say this because I've seen a deep-model method yield similar performance to bicubic over our microscopy dataset.

You already have all the tools (code, data) to do the comparison; you just need to replace the model call with a bicubic call. I think this is a reasonable request. If I were you, this is the first thing I would have done, to establish a baseline before going any further with deep models.
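The swap described above can be sketched in a few lines (hypothetical names throughout; this assumes (low-res, ground-truth) test pairs, and uses scipy's cubic-spline zoom as a stand-in for classic bicubic):

```python
import numpy as np
from scipy.ndimage import zoom

def evaluate(pairs, predict, metric):
    """Mean score of any predictor over (low-res, ground-truth) pairs."""
    return float(np.mean([metric(gt, predict(lr)) for lr, gt in pairs]))

# The "model call" and its bicubic replacement are interchangeable here:
bicubic = lambda lr: zoom(lr, 2.0, order=3)   # cubic-spline 2x upsample
rmse = lambda gt, pred: float(np.sqrt(np.mean((gt - pred) ** 2)))

# evaluate(test_pairs, model_forward, rmse)   # hypothetical model call
# evaluate(test_pairs, bicubic, rmse)         # baseline with the same metric
```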
Also, your Supplementary Note 1 does not say much about this, and certainly does not justify dismissing the bicubic method.

thanks


qc17-THU commented on August 30, 2024

I think you have mistaken the essential difference between super-resolution microscopy and image super-resolution in the field of computer vision, and that's why I suggested you read the supplementary material of my paper.

For microscopic images, the resolution, also called optical resolution, is limited by the optical transfer function (OTF) of the imaging system, not by the spatial sampling rate. In fact, we can realize any sampling rate by modifying the optical system. But for so-called "image super-resolution" in the field of natural image processing (which you may be more familiar with), the ultimate goal is just upsampling, which is what bicubic interpolation does.

Let me show you a case to make it clearer.
Here is the WF image of 128×128 pixels, i.e., at the diffraction-limited resolution.
[image: Img_WF_128]

Then upsample this image to 256×256 by bicubic interpolation:
[image: Img_WF_256]
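For reference, this upsampling step takes two lines (a sketch using scipy's cubic-spline zoom, order=3, as a stand-in for classic bicubic; the added pixels carry no information beyond the optical cutoff):

```python
import numpy as np
from scipy.ndimage import zoom

wf_128 = np.random.rand(128, 128).astype(np.float32)  # stand-in WF frame
wf_256 = zoom(wf_128, 2.0, order=3)                   # 256x256, no new detail
```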

The SIM super-resolution image (256×256) of the same ROI (the target, GT-SIM) is shown below:
[image: Img_SIM_256]

Now can you understand my statement that "bicubic is not comparable here"?


sbelharbi commented on August 30, 2024

I understand that:

  • bicubic or any other upsampling method does not solve the optical-resolution issue, which makes it a poor method;
  • people do not use it in practice, probably because of its poor and limited expected performance;
  • you showed above that the interpolated image is bad compared to the ground truth.

There is no doubt about any of these points.

However, this does not mean we should exclude it from the comparison; it is quite the opposite.
Unless you have another simple baseline different from interpolation? Please let me know.

All these drawbacks are actually arguments for using it as a simple baseline, even if we know beforehand that it is poor. Why? To show that it is worth using your method, going through all the trouble of collecting data (low-res, high-res), and training your model, which comes at an expensive cost, while interpolation techniques are simple and come at almost no cost.

This will show that learning-based methods are beneficial to your application compared to simple, naive methods such as interpolation. This strategy has its merits in a research setup to justify using a learning setup like yours. If a deep model yields the same or worse performance than interpolation, what is the point of developing the deep-model method?

Right now, you don't know whether your model is better than bicubic or any other fancy interpolation method, and I find it really strange that you are not even considering the idea, or even curious to know.

My request is to measure how good interpolation is in your setup, to contrast the benefit of your method.
If you are not convinced or willing to do the experiment, I'll do it once you release the script that generates the test set.

In the natural-images application, the issue is identical to yours, except that we don't have "true" low-resolution images as you do; the application is exactly the same. Currently, the solution is to simulate low resolution from high resolution using interpolation, which makes the model learn the inverse of bicubic. This is a limitation of the experimental protocol, caused by what is available as data, and it is an ongoing issue in the field: https://arxiv.org/pdf/1904.00523.pdf.
We don't want to merely increase the number of pixels but to restore the details. You are using the same metrics as in natural images, where the evaluation is done at the pixel level.
So, basically, there is nothing really new in your application except that you have true low-res samples instead of simulated samples as in natural data. You then applied a learning method, a deep model in your case, to learn the mapping low-res → high-res, which is identical to what is done over natural-scene data.
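For reference, the two pixel-level metrics discussed in this thread (PSNR and NRMSE) can be sketched in a few lines (a minimal NumPy version; the paper's exact normalization choices may differ):

```python
import numpy as np

def psnr(gt, pred, data_range=None):
    """Peak signal-to-noise ratio in dB; peak defaults to the GT range."""
    if data_range is None:
        data_range = float(gt.max() - gt.min())
    mse = np.mean((np.asarray(gt, float) - np.asarray(pred, float)) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def nrmse(gt, pred):
    """RMSE normalized by the ground-truth intensity range."""
    err = np.sqrt(np.mean((np.asarray(gt, float) - np.asarray(pred, float)) ** 2))
    return err / float(gt.max() - gt.min())
```

The normalization in NRMSE is what the maintainer refers to above as making it more robust across fluorescent images with very different intensity ranges.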

Even when we know the degradation process, which is bicubic in that case, and when trained on large datasets of up to a million samples, deep methods over natural images are not far ahead of the bicubic method, and their performance degrades as the scale factor increases. So imagine the situation in your data, where the degradation process is unknown.

thanks.


qc17-THU commented on August 30, 2024
  1. The core problem that we considered in our NM manuscript is super-resolution microscopy and enhancing the optical resolution under low- or medium-SNR conditions. As I have repeated many times, bicubic and other interpolation methods fall outside this scope. Our baseline method is conventional SIM, which is quite evident in the manuscript. I really suggest you learn more about super-resolution microscopy (especially SIM) before you use our dataset.

  2. Of course, you can test how bicubic interpolation performs on this problem yourself. Here, I also recommend you consider the training-free RL deconvolution or the recently published sparse deconvolution (NBT, 2021) as candidate baseline methods.

If you would like to discuss this further, I am happy to arrange a Zoom meeting with you soon.


sbelharbi commented on August 30, 2024
  • Yes, bicubic won't enhance the optical resolution, but that does not justify discarding it from the comparison.

In your paper, some CNN predictions look as blurry as bicubic predictions, especially for dense structures such as F-actin, where the net failed to recover the details. Completely rejecting bicubic as a baseline is what I do not agree with. Saying that you have a better baseline is fine (even though conventional SIM is really bad in low fluorescence), but saying it is out of scope, or that the two are not comparable, is not really fair.

Yes, I am not an expert in microscopy, and I am learning about this field, but simply reading the manuscript does not really confirm that learning-based methods are worth it, since they are the only ones evaluated. How would one know that simple deconvolution, fancy interpolation, and other combinations of image-processing methods won't outperform these CNNs? These nets will learn anything you throw at them to some accuracy, even random noise, given a large amount of data. So of course they will learn the mapping WF → GT-SIM to some accuracy. But how their accuracy compares to a learning-free method is the blind spot in your paper. You may think that your trained CNN is very good, but some tweaking of a deconvolution method could perhaps yield roughly the same performance.

Also, try dropping the number of training samples from 20k to 1k or 500 samples, and watch the performance fall; my guess is that the models will become as bad as simple interpolation. My whole question is: how good is the approximation of interpolation compared to a trained CNN? We know it is bad, but how bad? You can forget your paper for a second, because you seem to be tied to its setup. In an application like yours, why would someone completely reject the idea of approximating the mapping WF → GT-SIM via interpolation? This is what I am trying to understand. Throughout this discussion, you gave different answers:
1. Deep SR methods outperform bicubic, so there is no need to compare. 2. The two are not comparable. 3. Interpolation falls outside the scope of the paper. 4. Then, later, you said it is OK for me to do the comparison.

Initially, you somehow gave me the impression that using bicubic in this application is inconceivable and that it is out of the question to look at its performance. Then, later, you said it is OK for me to do it, which gave a different impression.
If you were too busy or didn't want to do it, you could have just said so.
I discussed this with an expert. They said that, indeed, bicubic/interpolation are not used to enhance optical resolution in this field, because they won't enhance the optical resolution; but it is fine to use them as a (poor) baseline in a research setup for comparison, which is my whole point. Again, I know you didn't do it in your paper and you have a specific setup. My question was: can you do it now, obviously without being tied to your protocol (after your paper has already been published)? You could have saved me all this time by saying no, since now you are saying it is OK for me to do it. Which is it? We can't compare at all? Or we can, but you can't do it?

P.S.
In addition, the CNN-based methods you are using were designed and validated for the job of interpolation over natural images. If you are saying it is out of the question to compare to bicubic, why use these methods in the first place? Why did you expect them to do a good job at enhancing the optical resolution when they weren't designed for that?
A simple answer is that they will try to approximate your function, the same way bicubic does, except that with supervision these DL methods are more likely to do a better job than bicubic, which is unsupervised and does not look at your GT-SIM. So the comparison is always possible, and DL methods are likely to outperform bicubic; this I understand. But it is not a reason to exclude it.

  • Did you compare numerically to the learning-free methods you are suggesting (deconvolution, sparse deconvolution)? If not, why not? If yes, what are the results?
  • Yes, please set up a Zoom meeting to discuss all of the above. My email: [email protected]
    Thanks.


qc17-THU commented on August 30, 2024

I'm not a native English speaker, and maybe there are some cultural differences in the wording above that misled you.
I'd like to clarify again that, in my opinion, bicubic is applicable for upsampling the WF images, but there is no enhancement of super-resolution information (i.e., information lying beyond the cutoff frequency of the optical system) in the upscaled images. However, retrieving this high-resolution information is the key point of SR microscopy. That's why I said: 1. bicubic is not comparable here; 2. interpolation falls outside the scope of SRM; 3. DLSR methods outperform bicubic (i.e., using bicubic to upsample the images and calculating PSNR with GT-SIM); 4. we didn't compare it in our NM paper.

In addition, I have some ongoing work in which we compare our DLSR methods to RL deconvolution and sparse deconvolution; I can share some results with you in the Zoom meeting. You could also say directly what project you are working on and what problems you're dealing with. I'm pleased to help, and I'll send you the Zoom meeting info via email.

