jinyeying / night-enhancement

[ECCV2022] "Unsupervised Night Image Enhancement: When Layer Decomposition Meets Light-Effects Suppression", https://arxiv.org/abs/2207.10564

Home Page: https://github.com/jinyeying/night-enhancement

License: MIT License

MATLAB 0.23% Python 1.07% HTML 51.71% Jupyter Notebook 47.00%
deep-learning light-effects low-level-vision low-light low-light-image-enhancement night nighttime nighttime-lights flare glare

night-enhancement's Introduction

night_enhancement (ECCV'2022)

Introduction

This is an implementation of the following paper.

Unsupervised Night Image Enhancement: When Layer Decomposition Meets Light-Effects Suppression
European Conference on Computer Vision (ECCV'2022)

Yeying Jin, Wenhan Yang and Robby T. Tan

[Paper] [Supplementary] arXiv [Poster] [Slides] [Link]


Datasets

1. Light-Effects Suppression on Night Data

  1. Light-effects data [Dropbox] | [BaiduPan (code:self)]
    The light-effects data was collected from Flickr and by ourselves, and covers multiple light colors in various scenes.

  2. LED data [Dropbox] | [BaiduPan (code:ledl)]
    We captured images under dimmer light as the reference images.

  3. GTA5 nighttime fog [Dropbox] | [BaiduPan (code:67ml)]
    Synthetic GTA5 nighttime-fog data.

  4. Syn-light-effects [Dropbox] | [BaiduPan (code:synt)]
    The synthetic-light-effects data is generated following the paper:
  • ICCV 2007, "A New Convolution Kernel for Atmospheric Point Spread Function Applied to Computer Vision" [Paper]
    Run the MATLAB code to generate the Syn-light-effects data:
glow_rendering_code/repro_ICCV2007_Fig5.m
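If you prefer to launch the rendering non-interactively, here is a minimal sketch, assuming MATLAB R2019a or newer (which provides the -batch flag) is on your PATH and the script needs no extra arguments:

```python
# Minimal sketch: run the MATLAB rendering script non-interactively.
# Assumes MATLAB R2019a+ (for the -batch flag) is on PATH and the script
# needs no extra arguments; adapt paths to your checkout.
import subprocess

subprocess.run(
    ["matlab", "-batch", "run('glow_rendering_code/repro_ICCV2007_Fig5.m')"],
    check=True,
)
```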

2. Low-Light Enhancement Data

  1. LOL dataset
    "Deep Retinex Decomposition for Low-Light Enhancement", BMVC, 2018. [Baiduyun (code:sdd0)] | [Google Drive]

  2. LOL-Real dataset
    "Sparse Gradient Regularized Deep Retinex Network for Robust Low-Light Image Enhancement", TIP, 2021. [Baiduyun (code:l9xm)] | [Google Drive]

3. Low-Light Enhancement Results:

Pre-trained Model

  1. Download the pre-trained LOL model [Dropbox] | [BaiduPan (code:lol2)] and put it in ./results/LOL/model/
  2. Put the test images in ./LOL/
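Before running the test, a quick sanity-check sketch (paths taken from the two steps above) can confirm the folders are in place:

```python
# Quick sanity check of the expected layout (paths from the steps above).
import os

for path in ("./results/LOL/model/", "./LOL/"):
    if os.path.isdir(path):
        print(f"{path}: {len(os.listdir(path))} file(s)")
    else:
        print(f"{path}: MISSING -- create it and add the files described above")
```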

Low-light Enhancement Test

🔥Replicate🔥 Online test: https://replicate.com/cjwbw/night-enhancement

python main.py
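As an alternative to running locally, the online demo above can be called through the Replicate Python client; this is a minimal sketch, assuming the model's input field is named "image" (check the model page for the exact schema) and that REPLICATE_API_TOKEN is set:

```python
# Minimal sketch of calling the hosted demo via the Replicate Python client.
# Assumes `pip install replicate`, REPLICATE_API_TOKEN is set, and that the
# model's input field is named "image" (check the model page for the schema).
import replicate

with open("night_photo.png", "rb") as f:
    output = replicate.run("cjwbw/night-enhancement", input={"image": f})
print(output)  # URL(s) of the enhanced result
```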

Results

  1. LOL-test Results (15 test images) [Dropbox] | [BaiduPan (code:lol1)]

These results reproduce Table 3 of the main paper on the LOL test set.

Learning | Method | PSNR | SSIM
Unsupervised Learning | Ours | 21.521 | 0.7647
N/A | Input | 7.773 | 0.1259

  2. LOL-Real Results (100 test images) [Dropbox] | [BaiduPan (code:lolc)]

These results reproduce Table 4 of the main paper on the LOL-Real test set.

Learning | Method | PSNR | SSIM
Unsupervised Learning | Ours | 25.51 | 0.8015
N/A | Input | 9.72 | 0.1752

Re-trained (from scratch) on LOL_V2_real (698 training images) and tested on LOL_V2_real [Dropbox] | [BaiduPan (code:lol2)]:
PSNR 20.85 (vs. EnlightenGAN's 18.23), SSIM 0.7243 (vs. EnlightenGAN's 0.61).
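For reference, the PSNR/SSIM numbers above are standard full-reference metrics; below is a minimal sketch of computing them with scikit-image (not the authors' evaluation script; it assumes paired result/ground-truth images and scikit-image >= 0.19 for the channel_axis argument):

```python
# Minimal sketch: compute PSNR/SSIM for one result / ground-truth pair.
# Not the authors' evaluation script; assumes scikit-image >= 0.19.
import numpy as np
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

pred = io.imread("enhanced.png").astype(np.float64) / 255.0
gt   = io.imread("ground_truth.png").astype(np.float64) / 255.0

psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
print(f"PSNR: {psnr:.3f}  SSIM: {ssim:.4f}")
```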

4. Light-Effects Suppression Results:

Pre-trained Model

  1. Download the pre-trained de-light-effects model [Dropbox] | [BaiduPan (code:dele)] and put it in ./results/delighteffects/model/
  2. Put the test images in ./light-effects/

Light-effects Suppression Test

python main_delighteffects.py

Inputs are read from ./light-effects/; outputs are written to ./light-effects-output/.
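To eyeball the results, the small sketch below pastes each input next to its output (it assumes the output files keep the same names as the inputs, which may not hold for your setup, and requires Pillow):

```python
# Small sketch: paste each input next to its output for visual comparison.
# Assumes outputs keep the same filenames as the inputs (adjust if they differ).
import os
from PIL import Image

in_dir, out_dir, cmp_dir = "./light-effects/", "./light-effects-output/", "./comparisons/"
os.makedirs(cmp_dir, exist_ok=True)

for name in sorted(os.listdir(in_dir)):
    out_path = os.path.join(out_dir, name)
    if not os.path.isfile(out_path):
        continue
    a = Image.open(os.path.join(in_dir, name)).convert("RGB")
    b = Image.open(out_path).convert("RGB").resize(a.size)
    canvas = Image.new("RGB", (a.width * 2, a.height))
    canvas.paste(a, (0, 0))
    canvas.paste(b, (a.width, 0))
    canvas.save(os.path.join(cmp_dir, name))
```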

demo_all.ipynb
python demo.py

demo_decomposition.m
Initial Background Results: [Dropbox] | [BaiduPan (code:jjjj)]
Light-Effects Results: [Dropbox] | [BaiduPan (code:lele)]
Shading Results: [Dropbox] | [BaiduPan (code:llll)]

Feature Results:

  1. Run the MATLAB code to adaptively fuse the three color channels, and output I_gray.
checkGrayMerge.m

  2. Download the fine-tuned VGG model [Dropbox] | [BaiduPan (code:dark)] (fine-tuned on ExDark) and put it in ./VGG_code/ckpts/vgg16_featureextractFalse_ExDark/nets/model_best.tar

  3. Obtain structure features:

python test_VGGfeatures.py
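For orientation, below is a minimal sketch of loading a fine-tuned VGG-16 checkpoint and extracting mid-level (structure) features with torchvision; it is not a substitute for test_VGGfeatures.py, and the checkpoint key "state_dict", the input file name, and the chosen layer are assumptions:

```python
# Minimal sketch: load a fine-tuned VGG-16 checkpoint and extract mid-level features.
# Not a substitute for test_VGGfeatures.py; the checkpoint key "state_dict", the input
# file name, and the chosen layer are assumptions. Requires torchvision >= 0.13.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

vgg = models.vgg16(weights=None)
ckpt = torch.load(
    "./VGG_code/ckpts/vgg16_featureextractFalse_ExDark/nets/model_best.tar",
    map_location="cpu",
)
vgg.load_state_dict(ckpt.get("state_dict", ckpt), strict=False)
vgg.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
x = preprocess(Image.open("I_gray.png").convert("RGB")).unsqueeze(0)  # hypothetical input

with torch.no_grad():
    feats = vgg.features[:16](x)  # features up to relu3_3 (an arbitrary choice here)
print(feats.shape)
```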

Summary of Comparisons:

License

The code and models in this repository are licensed under the MIT License for academic and other non-commercial uses.
For commercial use of the code and models, a separate commercial license is available; please contact the authors.

Citations

If this work is useful for your research, please cite our paper.

@inproceedings{jin2022unsupervised,
  title={Unsupervised night image enhancement: When layer decomposition meets light-effects suppression},
  author={Jin, Yeying and Yang, Wenhan and Tan, Robby T},
  booktitle={European Conference on Computer Vision},
  pages={404--421},
  year={2022},
  organization={Springer}
}

@inproceedings{jin2023enhancing,
  title={Enhancing visibility in nighttime haze images using guided apsf and gradient adaptive convolution},
  author={Jin, Yeying and Lin, Beibei and Yan, Wending and Yuan, Yuan and Ye, Wei and Tan, Robby T},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  pages={2446--2457},
  year={2023}
}

If the light-effects data is useful for your research, please also cite:

@inproceedings{sharma2021nighttime,
	title={Nighttime Visibility Enhancement by Increasing the Dynamic Range and Suppression of Light Effects},
	author={Sharma, Aashish and Tan, Robby T},
	booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
	pages={11977--11986},
	year={2021}
}

night-enhancement's People

Contributors

chenxwh, jinyeying, tumuyan


night-enhancement's Issues

Save the glow layer image and the low-light image after light effect suppression?

Excellent work, and thank you very much for your outstanding contribution!

I have a question and would appreciate your help. When testing on my own dataset, I would like to save the glow-layer images obtained from the decomposition as well as the low-light images after light-effects suppression. How can I do that?

Training

Can you share the training code, please?

How to train?

Sorry to bother you. Could you tell me how to train this model? I only see the test code.

The process concerning Jinit and G

Hi! Sorry to bother you, and thanks for your interesting work.
I have been doing related work recently. Since I cannot get access to the implementation of 'Single Image Layer Separation using Relative Smoothness', I was wondering if you could upload the processing concerning Jinit and G.
Best wishes :)

train in LOL test in LOL_real

The LOL dataset has three versions: LOL_V1 contains 485 training images and 15 test images; LOL_V2_real contains 689 training images and 100 test images; and LOL_V2_syn contains synthetic low-light images. However, LOL_V1 and LOL_V2 are different splits of the same data, which means LOL_V1's training images contain LOL_V2_real's test images. If you want to evaluate on the LOL_V2_real test images, I think you should re-train on the LOL_V2_real training images instead of training on LOL_V1 and testing on LOL_V2_real.
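A quick way to check this overlap (the directory names below are assumptions about the usual dataset layout; adjust them to your local copies):

```python
# Quick check of filename overlap between LOL_V1 training images and
# LOL_V2_real test images. Directory names are assumptions about the usual
# dataset layout -- adjust them to your local copies.
import os

v1_train = set(os.listdir("LOL_V1/our485/low"))
v2_test  = set(os.listdir("LOL_V2_real/Test/Low"))
overlap = v1_train & v2_test
print(f"{len(overlap)} of {len(v2_test)} LOL_V2_real test images appear in LOL_V1 training")
```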

Image decomposition

Sorry to bother you. Where in the code are the three networks used in the image decomposition? Thank you!

No training.py?

Sorry to bother you, I just want to know how to train this model.

How to get the illumination effect map Gi?

I'm sorry to bother you. I don't know how to obtain the illumination-effects map Gi from Section 3.1 of your paper. If possible, would it be convenient for you to share this part of the code?

It seems there is no demo.py file

1. Syntax error in ENHANCEMENT.py at line 28: "self.dom_weight = args.dom_weight". When I change it to "self.dom_weight = args.atten_weight", there is no error. 2. After fixing the bug in question 1, when I test with the pre-trained LOL model, it seems to have no effect on images with light from the LED data. Why does this happen? Is this pre-trained model not the model in the paper?

A small and already-fixed bug.
There are two tasks in our paper: one is low-light enhancement (LOL) and the other is light-effects suppression (LED data); they use different training data and checkpoints.
Thank you very much. As you said, to get the light-effects suppression results one should run "python demo.py", but I cannot find demo.py. Has this file not been uploaded? Thanks.

How to get light-effects map?

Thank you for presenting such interesting work. I have some questions about how to obtain the light-effects map. Gi in Equation 2 of the paper is obtained by the relative-smoothness technique. May I ask whether the septRelSmo function provided by Yu Li is used when calculating Gi? And do you need to multiply the decomposed image by 2 to enlarge the pixel values? Thank you!

Training code for image decomposition

Hi, may I ask how the first half of the layer-decomposition network is implemented? From the figure in your paper, it looks as if it comes from training a deep network, but you released MATLAB code for direct decomposition, which seems inconsistent. Could you share the full code of the layer-decomposition network?

I encountered some bugs when running "python main.py"

1. Syntax error in ENHANCEMENT.py at line 28: "self.dom_weight = args.dom_weight". When I change it to "self.dom_weight = args.atten_weight", there is no error.
2. After fixing the bug in question 1, when I test with the pre-trained LOL model, it seems to have no effect on images with light from the LED data. Why does this happen? Is this pre-trained model not the model in the paper?

Cannot guide light-effects suppression with results_G

The Python code for the light-effects suppression network (main_delighteffects.py) only takes one image as input.
How do I guide the light-effects suppression with the light-effects map generated by the MATLAB code (in results_G)?
Is there an additional Python script for this that you could release?

I cannot find demo.py

Thank you for your great work!

I want to test the light-effects suppression module from your paper, but I cannot find demo.py. Where can I get it?
Also, while testing test_VGGfeatures.py, I cannot generate Jrefine.png with the MATLAB code. How can I generate it?

Thanks!

Light suppression

Hi, sorry to bother you. It looks like you did not publish demo.py, so I would like to ask whether your light-suppression method can also do a good job of suppressing glare in daytime scenes.

CAM attention map

The paper says the attention map can be obtained by averaging the feature maps, but the code uses torch.sum. Why is that?
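For what it is worth, a channel-wise mean and a channel-wise sum differ only by a constant factor (the number of channels), so after per-map normalization the attention map is the same either way; a tiny illustrative sketch (not the repository's code):

```python
# Tiny sketch: a channel-wise mean and a channel-wise sum of a feature map differ
# only by a constant factor (the channel count), so a normalized attention map is
# identical either way. Illustrative only, not the repository's code.
import torch

feat = torch.rand(1, 64, 32, 32)           # (batch, channels, height, width)
attn_sum  = feat.sum(dim=1, keepdim=True)
attn_mean = feat.mean(dim=1, keepdim=True)

def normalize(a):
    return (a - a.min()) / (a.max() - a.min() + 1e-8)

print(torch.allclose(attn_sum, attn_mean * feat.shape[1]))        # True
print(torch.allclose(normalize(attn_sum), normalize(attn_mean)))  # True
```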
