
sharif-apu / bjdd_cvpr21

Repository size: 73.92 MB

This is the official implementation of Beyond Joint Demosaicking and Denoising from CVPRW21.

Languages: Python 99.82%, Shell 0.18%
Topics: bayercfa, cvpr2021, deep-learning, demosaicking, denoising, quadbayercfa

bjdd_cvpr21's People

Contributors

boogerlad, sharif-apu


bjdd_cvpr21's Issues

Bug issue

  1. When I run inference, I need to change "binningFactor" in config.json to "binnigFactor"; otherwise it throws an error (a possible workaround is sketched after this list).
  2. Line 83 in processDataset.py should be changed to "elif self.gridSze == 2:".
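
A minimal sketch of the workaround for item 1, assuming config.json is plain JSON; the key names come from the issue, and the variable name binning_factor is only illustrative:

    import json

    # Accept either spelling of the binning-factor key, so the same config.json
    # works whether or not the typo in the loading code has been fixed.
    with open("config.json") as f:
        config = json.load(f)

    binning_factor = config.get("binningFactor", config.get("binnigFactor"))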

Color Loss using normalized input

Hey, I believe that the color loss might be a bit off. The input is a normalized image, and the CIEDE2000 color difference is in the range 0 to 100, I think? Something like:

imageNP = (image * 0.5 + 0.5).permute(1,2,0).detach().cpu().numpy() * 255

or use the UnNormalize in dataTools... and to normalize the color difference:

deltaE /= 100.0

Normally I would do a PR, but my fork diverges quite a bit at the moment. Side note: your model also works great on 12-channel mosaicked images (with some trivial alterations)!

[edit: reshape becomes permute, going from C,H,W to H,W,C]
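
Putting the two suggestions together, a minimal sketch of a normalized CIEDE2000 color penalty, assuming inputs normalized with mean 0.5 and std 0.5 and using scikit-image for the color difference (the function name color_penalty and the batch loop are illustrative, not the repository's actual implementation):

    import numpy as np
    from skimage.color import rgb2lab, deltaE_ciede2000

    def color_penalty(fake, real):
        # fake, real: batches of CHW torch tensors normalized to [-1, 1].
        scores = []
        for f, r in zip(fake, real):
            # Undo the normalization and move to HWC for scikit-image.
            f_np = (f * 0.5 + 0.5).permute(1, 2, 0).detach().cpu().numpy()
            r_np = (r * 0.5 + 0.5).permute(1, 2, 0).detach().cpu().numpy()
            # CIEDE2000 differences are roughly on a 0-100 scale.
            delta_e = deltaE_ciede2000(rgb2lab(f_np), rgb2lab(r_np))
            scores.append(delta_e.mean() / 100.0)
        return float(np.mean(scores))

Note that, because of the detach() calls, this quantity is not differentiable, which is exactly the concern raised in the next issue.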

How to use this with RAW files?

I'm really impressed with your work. Also, how well does this compare with other algorithms for traditional Bayer sensors?

Why can the color loss optimize the generator?

Dear author, I'm confused about the purpose of the color loss function. I see that you detach the generated tensor from the forward graph and convert it to a NumPy array. However, after these operations, gradients cannot backpropagate to the network. So why can the color loss optimize the parameters of the network?
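
A minimal sketch of the underlying point, assuming standard PyTorch autograd behaviour (the variable names are illustrative):

    import torch

    x = torch.randn(3, requires_grad=True)

    attached = (x * 2).sum()           # stays in the autograd graph
    detached = (x * 2).detach().sum()  # detach() cuts the graph

    attached.backward()
    print(x.grad)                  # tensor([2., 2., 2.]): gradients flow
    print(detached.requires_grad)  # False: no gradient can flow through it
    # Calling detached.backward() would raise an error, since it has no grad_fn,
    # so a loss computed only from detached (or NumPy) values cannot update weights.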

test on McM

Hi, this work is fantastic and interesting.
I have some questions about using your code to test the Bayer mode on McM.
I would like to reproduce the results of Table 2 in your paper.
I used your code and the pretrained Bayer model, and the resulting images are 512x512.
However, the ground truth of McM is smaller (500x500).
How do you evaluate the metrics (PSNR/SSIM)? (A possible evaluation sketch follows this message.)
There are two main questions:

  1. Line 26 of inferenceUtils.py,
    sigma = self.noiseLevel/100.
    maybe should be
    sigma = self.noiseLevel/255.?
  2. Lines 64-75 of inferenceUtils.py,
    resizeDimension = (512, 512)
    img = img.resize(resizeDimension)
    Why should the image be resized to (512, 512) before testing?

BR,
Wenzhu
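
Regarding the evaluation question above, one common approach is to bring the prediction back to the ground-truth size before computing the metrics. A minimal sketch, assuming a recent scikit-image and that the 512x512 output was obtained by padding rather than stretching (the function name evaluate_pair is illustrative):

    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_pair(pred, gt):
        # pred, gt: H x W x 3 uint8 arrays; crop the padded 512x512 prediction
        # back to the ground-truth size (500x500 for McM) before scoring.
        h, w = gt.shape[:2]
        pred = pred[:h, :w]
        psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
        ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
        return psnr, ssim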

PIPNet training and noise

In dataTools/customTransform.py, line 13, you have the line

noisyTensor = tensor + torch.randn(tensor.size()).uniform_(0, 1.) * sigma  + self.mean

This looks like it adds uniform random noise (U(0, sigma)) instead of the Gaussian noise (N(0, sigma^2)) described in the paper, since uniform_ overwrites the values produced by randn (see the PyTorch docs). This function is used in dataTools/customDataloader.py (customDatasetReader), which is then used in the main training code (mainModule/BJDD.py, lines 124 and 91). Are the numbers reported in the paper derived from this code?
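
A minimal sketch of the corruption the paper appears to describe, assuming additive Gaussian noise N(mean, sigma^2) (the class name AddGaussianNoise is illustrative, not the repository's transform):

    import torch

    class AddGaussianNoise:
        def __init__(self, mean=0.0, sigma=0.1):
            self.mean = mean
            self.sigma = sigma

        def __call__(self, tensor):
            # torch.randn_like already samples from N(0, 1), so scaling by sigma
            # and shifting by mean gives N(mean, sigma^2); the extra .uniform_(0, 1.)
            # in the original line replaces those samples with U(0, 1) draws.
            return tensor + torch.randn_like(tensor) * self.sigma + self.mean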

Another question: are the numbers reported in the paper from a single PIPNet model trained over a range of noise levels, or from multiple models trained at each level? If it's a single model, what was the training noise-level range?

Thanks,
Nikola

Results with pre-trained weights

I have read your interesting paper and appreciate that you made your code available! Unfortunately, the results using the pretrained weights (Bayer or Quad Bayer) are not satisfactory. Maybe this is related to #4.

Using the provided weights gives the following results (output images attached):
Noise: 5 sigma (01690_sigma_5_PIPNet)
Noise: 10 sigma (01690_sigma_10_PIPNet)

Any input on what the reason could be and what I might be doing wrong would be highly appreciated.

How are the training samples generated

Hi! In the paper, you say that you extracted 741,968 non-overlapping image patches of dimension 128x128 from the DIV2K and Flickr2K datasets. In the code, however, when doing data sampling ("python main.py -ds"), the 128x128 images are produced by directly resizing the original DIV2K images. I am wondering which of the two approaches you used to generate the training samples for the released pretrained weights. Thanks a lot!
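
For reference, a minimal sketch of the two sampling strategies being contrasted, assuming PIL is used for image handling (the function names are illustrative):

    from PIL import Image

    def extract_patches(image_path, patch_size=128):
        # Non-overlapping 128x128 crops, as described in the paper.
        img = Image.open(image_path).convert("RGB")
        w, h = img.size
        return [img.crop((x, y, x + patch_size, y + patch_size))
                for y in range(0, h - patch_size + 1, patch_size)
                for x in range(0, w - patch_size + 1, patch_size)]

    def resize_sample(image_path, size=128):
        # Whole-image resize to 128x128, as the data-sampling code appears to do.
        return Image.open(image_path).convert("RGB").resize((size, size))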
