shinywings / hdr-map-reconstruction-from-a-single-ldr-sky-panoramic-image-for-outdoor-illumination-estimation

Reconstructed HDR maps from a single LDR sky panoramic image that considers the sun and sky regions separately and accounts for various weather conditions.

License: MIT License

Python 100.00%
computer-vision hdr-imaging deep-learning tensorflow

Introduction

HDR Map Reconstruction From a Single LDR Sky Panoramic Image For Outdoor Illumination Estimation [Paper]

[Figure: result1]

A multi-faceted approach to reconstructing HDR maps from a single LDR sky panoramic image that considers the sun and sky regions separately and accounts for various weather conditions.

(This paper has been published in IEEE Access.)

Results

  1. HDR maps reconstructed using the proposed method under outdoor environmental conditions achieved better HDR image scores than maps reconstructed using existing methods.

  2. The proposed method directly estimates the lighting of the sun and its surrounding area, overcoming a performance limit of existing HDR reconstruction methods caused by overexposed areas in input LDR images under various outdoor weather conditions.
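To illustrate the overexposure problem the second point refers to, here is a minimal, purely illustrative sketch (not code from this repository) of how sensor clipping destroys sun-radiance information in an LDR capture:

```python
def simulate_ldr_capture(radiance, exposure=1.0, gamma=2.2):
    """Toy model of LDR capture: scale by exposure, clip at the sensor's
    maximum, then gamma-encode for display. All radiances above the
    clipping point map to the same white pixel, so the sun's true
    intensity cannot be recovered from the LDR image alone."""
    clipped = min(radiance * exposure, 1.0)   # sensor saturation
    return clipped ** (1.0 / gamma)           # display gamma encoding

# Two very different sun radiances produce identical LDR pixel values:
print(simulate_ldr_capture(5_000.0) == simulate_ldr_capture(50_000.0))  # True
```

This is why the paper estimates sun luminance directly instead of relying only on the clipped pixel values.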

Architecture

[Figure: arch - network architecture]

Requirements

  • tensorflow >= 2.4
  • tensorflow-addons (pip install tensorflow-addons)
  • opencv >= 4 (conda install -c conda-forge opencv)
  • pandas
  • tqdm
  • matplotlib

DataGeneration

  • Generate input training & test data from the Laval HDR dataset.
    (Redistribution of the Laval HDR dataset is not permitted. Please contact Jean-Francois Lalonde at jflalonde at gel dot ulaval dot ca to obtain the dataset.)

    python datasetGenerator.py
    
    (Optional)
    --dir :
        Absolute path of your dataset directory to save. (default : current working directory)
     
    --imheight :
        Vertical size of the output panoramic image (default : 32)
    
    --imwidth :
        Horizontal size of the output panoramic image (default : 128)
    
  • To use your own input data,

    1. Make sure your input images conform to the sky-dome image format.

      Only sky-dome images converted from fisheye-lens images are supported as input. (A sky-dome image is a panoramic image that covers the sky over 0°-90° in elevation and 0°-360° in azimuth.)

    2. Your dataset must conform to the TFRecord format described below.

      The input data is encoded as TFRecords to reduce data-loading overhead during training.

      feature_description = {
          'image': _bytes_feature(image),
          'azimuth': _float_feature(azimuth),
          'elevation': _float_feature(elevation),
      }
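As a sketch of the sky-dome convention described above (my own illustrative mapping, assuming an equirectangular layout with the zenith at row 0 and azimuth increasing with column; the repository's exact convention may differ):

```python
def pixel_to_angles(row, col, height=32, width=128):
    """Map a pixel in a height x width equirectangular sky-dome image to
    (azimuth, elevation) in degrees, sampling at pixel centers.

    Assumed convention (hypothetical): row 0 is the zenith (90 deg
    elevation), the bottom row is the horizon (0 deg), and column 0
    corresponds to 0 deg azimuth."""
    elevation = 90.0 * (1.0 - (row + 0.5) / height)
    azimuth = 360.0 * (col + 0.5) / width
    return azimuth, elevation
```

With the default 32x128 size, each pixel spans about 2.8° in both axes, which matches the low-resolution panoramas the scripts default to.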
      

Training

  1. Download the pre-trained weights of VGG16. (These weights are provided by SingleHDR.)

  2. Pretrain a sun luminance estimator.

    python pretrain_sun.py --dir "your/dir/path" --train True --inference_img_dir "your/dir/path"
    
    
    --dir :
        Absolute path of your dataset directory to train.
    
    --train :
        If "False", evaluate the sun luminance estimator that you trained instead of training.
    
    --inference_img_dir :
        Absolute path of your input LDR directory for evaluating the sun luminance estimator. (Used only when --train False)
    
    (Optional)
        --dorf :
            Absolute path of DoRF file. (provided in git repository)
        
        --lr :
            Learning rate (default : 1e-4)
    
        --batchsize :
            Batch size (default : 32)
    
        --epochs :
            Number of training epochs (default : 1000)
        
        --imheight :
            Vertical size of the input panoramic image (default : 32)
    
        --imwidth :
            Horizontal size of the input panoramic image (default : 128)
    

    Please make sure your dataset conforms to our input format
    (see item 2 in DataGeneration).

  3. Train the main model

    python train.py --dir "/your/dir/path" --sky "/sky/preweight/path" --sun "sun/preweight/path" --dorf="/txt/path" --vgg "/npy/path"
    
        --dir :
            Absolute path of your dataset directory to train.
    
        --sun :
            Absolute path of the sun luminance estimator weights file pretrained in the previous step (step 2).
    
        --vgg :
            Absolute path of the pretrained VGG16 weights file (see step 1).
    
    (Optional)
        --sky :
            Absolute path of your sky luminance estimator weights file that you pretrained.
    
        --dorf :
            Absolute path of DoRF file. (provided in git repository)
        
        --lr :
            Learning rate (default : 1e-4)
    
        --batchsize :
            Batch size (default : 32)
    
        --epochs :
            Number of training epochs (default : 1000)
        
        --imheight :
            Vertical size of the input panoramic image (default : 32)
    
        --imwidth :
            Horizontal size of the input panoramic image (default : 128)
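The flags above can be mirrored with a small argparse sketch (a hypothetical re-creation based only on this README's flag list and defaults; the actual train.py may define its CLI differently):

```python
import argparse

def build_train_parser():
    """Hypothetical CLI mirroring the train.py flags documented above."""
    p = argparse.ArgumentParser(description="Train the HDR map reconstruction model")
    p.add_argument("--dir", required=True, help="absolute path of the training dataset directory")
    p.add_argument("--sun", required=True, help="pretrained sun luminance estimator weights (step 2)")
    p.add_argument("--vgg", required=True, help="pretrained VGG16 weights file (step 1)")
    p.add_argument("--sky", default=None, help="pretrained sky luminance estimator weights")
    p.add_argument("--dorf", default=None, help="DoRF file (provided in the git repository)")
    p.add_argument("--lr", type=float, default=1e-4, help="learning rate")
    p.add_argument("--batchsize", type=int, default=32, help="batch size")
    p.add_argument("--epochs", type=int, default=1000, help="number of training epochs")
    p.add_argument("--imheight", type=int, default=32, help="input panorama height")
    p.add_argument("--imwidth", type=int, default=128, help="input panorama width")
    return p
```

Only --dir, --sun, and --vgg are required; everything else falls back to the defaults listed above.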
    

The pre-trained weight file can be downloaded from here.

Inference

We evaluated our model on two datasets (the Laval dataset and the CAU dataset).

python inference.py --indir "abs/path" --outdir "name"

--indir :
    Absolute path of your dataset directory to run inference on.

--outdir :
    Specifies the directory name of the output inference image.

(Optional)
    --sky :
        Absolute path of your sky luminance estimator weights file that you pretrained.
        (default : os.path.join(CURRENT_WORKINGDIR, "checkpoints/SKY"))

    --sun :
        Absolute path of your sun luminance estimator weights file that you pretrained.
        (default : os.path.join(CURRENT_WORKINGDIR, "checkpoints/SUN"))
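To preview a reconstructed HDR map as a displayable LDR image, a common general-purpose approach (not specific to this repository) is a simple global Reinhard tone-mapping operator:

```python
def reinhard_tonemap(hdr_value, gamma=2.2):
    """Compress a non-negative HDR radiance value into [0, 1) with the
    global Reinhard operator x / (1 + x), then gamma-encode for display."""
    compressed = hdr_value / (1.0 + hdr_value)
    return compressed ** (1.0 / gamma)
```

Applied per channel to the inference output, this keeps the bright sun region from saturating the whole preview image.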

The CAU dataset can be downloaded from here.

Citation

@article{shin2023hdr,
  title={HDR Map Reconstruction From a Single LDR Sky Panoramic Image For Outdoor Illumination Estimation},
  author={Shin, Gyeongik and Yu, Kyeongmin and Mark, Mpabulungi and Hong, Hyunki},
  journal={IEEE Access},
  year={2023},
  publisher={IEEE}
}


Issues

Error while running

I am really sorry to keep bugging you again and again, but I really wanna request you to help me get it running.

    Traceback (most recent call last):
      File "C:\Users\MyPC\Downloads\HDR\inference.py", line 151, in <module>
        pred_hdr = inference(ldr_val, args)
      File "C:\Users\MyPC\Downloads\HDR\inference.py", line 117, in inference
        pred = generator_in_step(ldr, training=False)
      File "C:\Users\MyPC\AppData\Local\Programs\Python\Python311\Lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
        raise e.with_traceback(filtered_tb) from None
      File "C:\Users\MyPC\AppData\Local\Programs\Python\Python311\Lib\site-packages\tensorflow\python\framework\func_graph.py", line 1200, in autograph_handler
        raise e.ag_error_metadata.to_exception(e)
    tensorflow.python.framework.errors_impl.ResourceExhaustedError: in user code:

        File "C:\Users\MyPC\Downloads\HDR\inference.py", line 88, in generator_in_step  *
            sunpose_cmf, sunpose_actvMaps = _sun.sunposeEstimation(ldr, training=training)
        File "C:\Users\MyPC\Downloads\HDR\sunpose_net.py", line 65, in sunposeEstimation  *
            fc1 = self.fc1(flat)
        File "C:\Users\MyPC\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler  **
            raise e.with_traceback(filtered_tb) from None
        File "C:\Users\MyPC\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\backend.py", line 2101, in random_uniform
            return tf.random.stateless_uniform(

        ResourceExhaustedError: {{function_node __wrapped__StatelessRandomUniformV2_device_/job:localhost/replica:0/task:0/device:CPU:0}} OOM when allocating tensor with shape[37748736,4096] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu [Op:StatelessRandomUniformV2]
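The tensor shape in the error message suggests a likely cause (my own back-of-the-envelope diagnosis, not an official answer): the flattened length 37748736 factors exactly as 4096 x 3072 x 3, i.e. a full-resolution RGB panorama rather than the expected 32x128 input, so the first fully connected layer tries to allocate a weight tensor far larger than available memory:

```python
# Flattened input length reported in the error message:
flat_len = 37_748_736
# It factors exactly as a full-resolution 4096x3072 RGB image:
assert flat_len == 4096 * 3072 * 3

# The dense layer's weight tensor would be [flat_len, 4096] in float32:
weight_bytes = flat_len * 4096 * 4
print(weight_bytes / 2**30)  # 576.0 GiB -- hence the OOM

# The expected 32x128x3 input needs a far smaller weight tensor:
expected_bytes = 32 * 128 * 3 * 4096 * 4
print(expected_bytes / 2**20)  # 192.0 MiB
```

If this diagnosis is right, resizing the input panoramas to the 32x128 size the scripts default to should avoid the allocation.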
