sidgan's Introduction

SIDGAN: Low Light Video Enhancement using Synthetic Data Produced with an Intermediate Domain Mapping (ECCV 2020)

Danai Triantafyllidou, Sean Moran, Steven McDonagh, Sarah Parisot, Greg Slabaugh

Huawei Noah's Ark Lab

Author's personal repository for the ECCV 2020 paper SIDGAN: Low Light Video Enhancement using Synthetic Data Produced with an Intermediate Domain Mapping. Here you will find a link to the code, pre-trained models and information on the datasets. Please raise a GitHub issue if you need assistance or have any questions about the research. The official Huawei repository for SIDGAN is located here.

Installation

1. Clone the code from Github

Clone the code and place it under $PATH_TO_CODE:

git clone https://github.com/huawei-noah/noah-research.git
cd noah-research/SIDGAN

2. Install dependencies

Python: see requirement.txt for the complete list of required packages. We recommend a clean installation of the requirements into an isolated environment (the commands below use conda):

conda create -n sidganenv python=3.5
source activate sidganenv
pip install -r requirement.txt

If you don't want to do the above clean installation in an isolated environment, you can also install the requirements directly:

pip install -r requirement.txt --no-index

Running the code

1. Download the data

Please download the SID Motion and Vimeo-90k datasets and create the following folder structure under $PATH_TO_DATA:

.
└── data
    ├── SID_long
    ├── VBM4D_rawRGB
    └── vimeo
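
The layout above can be created with a short shell snippet before downloading. The default root below is hypothetical; the dataset contents themselves must still be downloaded into the three directories separately:

```shell
# Create the expected dataset layout under $PATH_TO_DATA.
# Directory names come from the tree above; $HOME/sidgan_data is only a
# placeholder default -- point PATH_TO_DATA at whatever root you use.
PATH_TO_DATA="${PATH_TO_DATA:-$HOME/sidgan_data}"
mkdir -p "$PATH_TO_DATA/data/SID_long" \
         "$PATH_TO_DATA/data/VBM4D_rawRGB" \
         "$PATH_TO_DATA/data/vimeo"
ls "$PATH_TO_DATA/data"
```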

2. Running the training code

To train a CycleGAN model that maps Vimeo videos (domain A) into the SID Motion long exposure domain (domain B), run:

python train_cyclegan_a2b.py --data_root $PATH_TO_DATA --project_root $PATH_TO_CODE --name $EXP_NAME

To train a CycleGAN model that maps SID Motion long exposure (domain B) into SID Motion short exposure (domain C), run:

python train_cyclegan_b2c.py --data_root $PATH_TO_DATA --project_root $PATH_TO_CODE --name $EXP_NAME

Training results will be saved in $PATH_TO_CODE/experiments/$EXP_NAME

.
├── images
├── log
├── meta_data.json
└── saved_models
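
Putting the steps together, a full two-stage run might look like the sketch below. The paths and experiment names are hypothetical, and the training commands are prefixed with `echo` so the snippet is safe to paste; drop the `echo` to actually launch training:

```shell
# Hypothetical paths -- adjust to your own checkout and data root.
export PATH_TO_CODE="${PATH_TO_CODE:-$HOME/noah-research/SIDGAN}"
export PATH_TO_DATA="${PATH_TO_DATA:-$HOME/sidgan_data}"
export EXP_NAME="${EXP_NAME:-sidgan_baseline}"

# Stage 1: Vimeo (domain A) -> SID Motion long exposure (domain B)
echo python train_cyclegan_a2b.py --data_root "$PATH_TO_DATA" \
     --project_root "$PATH_TO_CODE" --name "${EXP_NAME}_a2b"

# Stage 2: SID Motion long exposure (B) -> SID Motion short exposure (C)
echo python train_cyclegan_b2c.py --data_root "$PATH_TO_DATA" \
     --project_root "$PATH_TO_CODE" --name "${EXP_NAME}_b2c"

# Results for each stage are written under $PATH_TO_CODE/experiments/<name>.
```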

Bibtex

If you find the code useful, please cite this paper:

@inproceedings{triantafyllidou2020low,
  title={Low Light Video Enhancement using Synthetic Data Produced with an Intermediate Domain Mapping},
  author={Triantafyllidou, Danai and Moran, Sean and McDonagh, Steven and Parisot, Sarah and Slabaugh, Gregory},
  booktitle={European Conference on Computer Vision},
  pages={103--119},
  year={2020},
  organization={Springer}
}

License

BSD-0-Clause License

Contributions

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions or extensions to the core, please first open an issue and discuss the feature with us. Sending a PR without discussion may result in a rejected PR, because we might be taking the core in a different direction than you are aware of.


