
Local Relighting of Real Scenes

Audrey Cui, Ali Jahanian, Agata Lapedriza, Antonio Torralba, Shahin Mahdizadehaghdam, Rohit Kumar, David Bau
https://arxiv.org/abs/2207.02774

[Teaser image]

Can we use deep generative models to edit real scenes “in the wild”? We introduce the task of local relighting, which changes a photograph of a scene by switching on and off the light sources that are visible within the image. This new task differs from the traditional image relighting problem, as it introduces the challenge of detecting light sources and inferring the pattern of light that emanates from them.

We propose using a pretrained generator model to generate an unbounded dataset of paired images, which we use to train an image-to-image translation model to relight visible light sources in real scenes. In the examples above, the original photo is highlighted with a red border. We show that our unsupervised method is able to turn on light sources present in the training domain (i.e., lamps) in (a). Additionally, our method can detect and adjust prelit light sources that are far outside its training domain, such as fire fans (b), traffic lights (c), and road signs (d).
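To make the paired-data idea concrete, here is a minimal sketch of rendering an (off, on) image pair from a single latent code by pushing a hypothetical "light" direction in style space. A dummy generator stands in for the pretrained model, and every name and shape below is illustrative rather than the repo's actual API:

import torch
import torch.nn as nn

# Dummy stand-in for a pretrained StyleGAN2-style generator; the real
# pipeline uses an actual pretrained model, so treat this as illustration.
class DummyGenerator(nn.Module):
    def __init__(self, z_dim=512, img_size=64):
        super().__init__()
        self.mapping = nn.Linear(z_dim, z_dim)                      # "mapping network"
        self.synthesis = nn.Linear(z_dim, 3 * img_size * img_size)  # "synthesis network"
        self.img_size = img_size

    def render(self, w):
        return self.synthesis(w).view(-1, 3, self.img_size, self.img_size)

def make_pair(gen, z, light_dir, alpha=1.0):
    """Render an (off, on) training pair from one latent code."""
    w = gen.mapping(z)                          # latent -> style code
    img_off = gen.render(w)                     # unmodified sample
    img_on = gen.render(w + alpha * light_dir)  # style pushed along a "light" direction
    return img_off, img_on

gen = DummyGenerator()
z = torch.randn(1, 512)
light_dir = torch.randn(512)    # hypothetical "light on" direction in style space
img_off, img_on = make_pair(gen, z, light_dir)  # one paired training example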

Setting up the relight_env environment

To run our scripts or notebooks, first create a conda environment by running the following:

conda env create --file=relight_env.yml
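Then activate it before running any of the scripts (the environment is named relight_env, per the yml file):

conda activate relight_env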

Relighting Scripts

To relight a folder of images using our unsupervised method, run

python test.py --name unsupervised --netG modulated --no_instance --input_nc 3 --label_nc 0 --dataroot [PATH/TO/DATA] --which_epoch 200 
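For example, with your test images in a folder such as ./test_images (the path is illustrative):

python test.py --name unsupervised --netG modulated --no_instance --input_nc 3 --label_nc 0 --dataroot ./test_images --which_epoch 200

Following the pix2pixHD convention, the relit outputs should then appear under ./results/unsupervised/test_200/.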

Interactive notebooks

  • unsupervised.ipynb: contains an interactive demo for our unsupervised method. Add your own test images to test_images and change the image path in the notebook to run our unsupervised method on your image.
  • user_selective.ipynb: contains an interactive demo for our user selective method. Likewise, you may add your own test images.
  • light_finder.ipynb: demonstrates our method for identifying the light channel in StyleSpace.
  • stylespace_decoupling.ipynb: demonstrates, step by step, our method for creating a spatially masked style vector based on light source location; a toy sketch of the masking arithmetic follows this list.
  • eval_metrics.ipynb contains the evaluation metrics reported in our paper.
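As a toy illustration of the stylespace_decoupling idea (NumPy only; the shapes and names are invented, and the real notebook operates on StyleGAN2 StyleSpace channels), the spatial masking amounts to blending an edited style map into the original one only where the light-source mask is active:

import numpy as np

def spatially_masked_style(style_orig, style_edit, mask):
    # style_orig, style_edit: (C, H, W) per-location style/modulation maps.
    # mask: (H, W) array in [0, 1], 1 where a light source is located.
    # Shapes are illustrative, not the repo's actual data layout.
    return mask[None] * style_edit + (1.0 - mask[None]) * style_orig

C, H, W = 512, 16, 16
s_off = np.random.randn(C, H, W)          # original style ("light off")
s_on = np.random.randn(C, H, W)           # edited style ("light on")
m = np.zeros((H, W)); m[4:8, 4:8] = 1.0   # mask over a lamp region
s_mixed = spatially_masked_style(s_off, s_on, m)  # edit applied only under the mask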

Training

To train our modified version of pix2pixHD, run

python train.py --name [NAME] --netG modulated --batchSize 8 --max_dataset_size 2000 --no_instance --generated true --label_nc 0 --niter 200 --alternate_train true

  • --name: Name of the folder this model is saved to (or loaded from)
  • --netG: Type of generator. modulated is our version for relighting. global is the default from the original pix2pixHD paper.
  • --no_instance: Include this flag if instance maps (see original pix2pixHD code) are not being used. During training of our user selective method, the mask is treated as an instance map and this flag is not used. In all other experiments, including our unsupervised method, this flag is used.
  • --generated: Include this flag if using a generated dataset; otherwise, use --dataroot to specify the path to the real images.
  • --alternate_train: Additionally trains on reversed sample pairs with negated modulation, which improves the model's ability to turn lights off. See the paper for more details.
  • --dataroot: Include the path to the data if using real data for training/testing. This is not needed when using generated data.
  • See base_options.py for more general options and train_options.py for more training-specific options.
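For example, a run on real paired data rather than generated data might look like the following (dataset path is a placeholder; --generated is dropped and --dataroot added, per the flag descriptions above):

python train.py --name [NAME] --netG modulated --batchSize 8 --max_dataset_size 2000 --no_instance --label_nc 0 --niter 200 --alternate_train true --dataroot [PATH/TO/DATA]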

Acknowledgments

  • Our code borrows heavily from pix2pixHD for its pix2pix architecture.
  • Our code borrows from rewriting for its utility functions.
  • We thank the authors of StyleGAN2, StyleGAN2-ADA, encoder4editing, and LPIPS.
  • We thank Daksha Yadav for her insights, encouragement, and valuable discussions.
  • We are grateful for the support of DARPA XAI (FA8750-18-C-0004), the Spanish Ministry of Science, Innovation and Universities (RTI2018-095232-B-C22), and Signify Lighting Research.
