
CDAN: Convolutional Dense Attention-guided Network for Low-light Image Enhancement

Hossein Shakibania, Sina Raoufi, and Hassan Khotanlou

Abstract: Low-light images, characterized by inadequate illumination, pose challenges of diminished clarity, muted colors, and reduced details. Low-light image enhancement, an essential task in computer vision, aims to rectify these issues by improving brightness, contrast, and overall perceptual quality, thereby facilitating accurate analysis and interpretation. This paper introduces the Convolutional Dense Attention-guided Network (CDAN), a novel solution for enhancing low-light images. CDAN integrates an autoencoder-based architecture with convolutional and dense blocks, complemented by an attention mechanism and skip connections. This architecture ensures efficient information propagation and feature learning. Furthermore, a dedicated post-processing phase refines color balance and contrast. Our approach demonstrates notable progress compared to state-of-the-art results in low-light image enhancement, showcasing its robustness across a wide range of challenging scenarios. Our model performs remarkably on benchmark datasets, effectively mitigating under-exposure and proficiently restoring textures and colors in diverse low-light scenarios. This achievement underscores CDAN's potential for diverse computer vision tasks, notably enabling robust object detection and recognition in challenging low-light conditions.

Figure 1: The overall structure of the proposed model.
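For readers who prefer code to diagrams, the sketch below mirrors the high-level idea described in the abstract: an autoencoder whose bottleneck is re-weighted by an attention module, with a skip connection from input to output. This is an illustrative toy, not the published CDAN architecture; the layer sizes and the squeeze-and-excitation-style attention are assumptions made for the example.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (an illustrative
    choice, not necessarily the attention block used in the paper)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight each channel by a learned global importance score
        return x * self.fc(self.pool(x))

class ToyAttentionAutoencoder(nn.Module):
    """Structural sketch only: encoder -> attention -> decoder + skip."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.attention = ChannelAttention(64)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        out = self.decoder(self.attention(self.encoder(x)))
        # Global skip connection: predict a correction over the dark input
        return torch.sigmoid(out + x)

# Shape check on a 200x200 input (the INPUT_SIZE used in the .env example below)
y = ToyAttentionAutoencoder()(torch.rand(1, 3, 200, 200))
print(y.shape)  # torch.Size([1, 3, 200, 200])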

Experimental Results

In this section, we present the experimental results obtained by training our CDAN model using the LOw-Light (LOL) dataset and evaluating its performance on multiple benchmark datasets. The purpose of this evaluation is to assess the robustness of our model across a spectrum of challenging lighting conditions.

Datasets

| Dataset | No. of Images | Paired | Characteristics                  |
|---------|---------------|--------|----------------------------------|
| LOL     | 500           | Yes    | Indoor                           |
| ExDark  | 7363          | No     | Extremely dark, indoor, outdoor  |
| DICM    | 69            | No     | Indoor, outdoor                  |
| VV      | 24            | No     | Severely under/overexposed areas |

Quantitative Evaluation

| Learning method | Method       | Avg. PSNR ↑ | Avg. SSIM ↑ | Avg. LPIPS ↓ |
|-----------------|--------------|-------------|-------------|--------------|
| Supervised      | LLNet        | 17.959      | 0.713       | 0.360        |
| Supervised      | LightenNet   | 10.301      | 0.402       | 0.394        |
| Supervised      | MBLLEN       | 17.902      | 0.715       | 0.247        |
| Supervised      | Retinex-Net  | 16.774      | 0.462       | 0.474        |
| Supervised      | KinD         | 17.648      | 0.779       | 0.175        |
| Supervised      | KinD++       | 17.752      | 0.760       | 0.198        |
| Supervised      | TBEFN        | 17.351      | 0.786       | 0.210        |
| Supervised      | DSLR         | 15.050      | 0.597       | 0.337        |
| Supervised      | LAU-Net      | 21.513      | 0.805       | 0.273        |
| Semi-supervised | DRBN         | 15.125      | 0.472       | 0.316        |
| Unsupervised    | EnlightenGAN | 17.483      | 0.677       | 0.322        |
| Zero-shot       | ExCNet       | 15.783      | 0.515       | 0.373        |
| Zero-shot       | Zero-DCE     | 14.861      | 0.589       | 0.335        |
| Zero-shot       | RRDNet       | 11.392      | 0.468       | 0.361        |
| Proposed        | CDAN         | 20.102      | 0.816       | 0.167        |
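For reference, the three metrics above can be computed with off-the-shelf implementations. The sketch below uses torchmetrics; this is an assumption for illustration and the authors' evaluation scripts may differ.

import torch
from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity

psnr = PeakSignalNoiseRatio(data_range=1.0)
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
lpips = LearnedPerceptualImagePatchSimilarity(net_type="alex")

pred = torch.rand(1, 3, 256, 256)    # enhanced image, values in [0, 1]
target = torch.rand(1, 3, 256, 256)  # ground-truth normal-light image

print(psnr(pred, target))   # higher is better
print(ssim(pred, target))   # higher is better
print(lpips(pred * 2 - 1, target * 2 - 1))  # lower is better; LPIPS expects [-1, 1]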

Qualitative Evaluation

Figure 2: Visual comparison of state-of-the-art models on the ExDark dataset.

Figure 3: Visual comparison of state-of-the-art models on the DICM dataset.

Installation

To get started with the CDAN project, follow these steps:

1. Clone the Repository

You can clone the repository using Git. Open your terminal and run the following command:

git clone git@github.com:SinaRaoufi/CDAN.git

2. Configure Environmental Variables

After cloning, navigate to the project directory and locate the .env file. This file contains important hyperparameter values and configurations for the CDAN model. You can customize these variables according to your requirements.

Open the .env file using a text editor of your choice and modify the values as needed:

# Example .env file

# Directory paths
DATASET_DIR_ROOT=/path/to/your/dataset/directory
SAVE_DIR_ROOT=/path/to/your/saving/model/directory
MODEL_NAME=model

# Hyperparameters
INPUT_SIZE=200
BATCH_SIZE=32
EPOCHS=80
LEARNING_RATE=0.001
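As a rough sketch of how these values might be read at runtime, assuming the project loads the .env file with python-dotenv (the variable names come from the example above; everything else here is illustrative):

import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # populate os.environ from the .env file in the working directory

dataset_root = os.getenv("DATASET_DIR_ROOT")
save_root = os.getenv("SAVE_DIR_ROOT")
input_size = int(os.getenv("INPUT_SIZE", "200"))
batch_size = int(os.getenv("BATCH_SIZE", "32"))
epochs = int(os.getenv("EPOCHS", "80"))
learning_rate = float(os.getenv("LEARNING_RATE", "0.001"))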

3. Install Dependencies

You can install project dependencies using pip:

pip install -r requirements.txt

4. Run the Project

You are now ready to run the CDAN project. To start the training, use the following command:

python train.py

To test the trained model, run:

python test.py --datasetPath "path/to/the/dataset" --modelPath "path/to/the/saved/model" --isPaired "True/False"
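For example, evaluating on a paired dataset such as LOL might look like the following (both paths are placeholders):

python test.py --datasetPath ./data/LOL/eval --modelPath ./checkpoints/model.pt --isPaired True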

Requirements

The following hardware and software were used for training the model:

  • GPU: NVIDIA GeForce RTX 3090
  • RAM: 24 GB
  • Operating System: Ubuntu 22.04.2 LTS
  • Python version: 3.9.15
  • PyTorch version: 2.0.1
  • PyTorch CUDA version: 11.7
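A quick sanity check that the installed PyTorch build matches this setup (generic PyTorch calls, not project code):

import torch
print(torch.__version__)          # expect 2.0.1
print(torch.version.cuda)         # expect 11.7
print(torch.cuda.is_available())  # True if the GPU is visible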

Citation

@misc{shakibania2023cdan,
      title={CDAN: Convolutional Dense Attention-guided Network for Low-light Image Enhancement}, 
      author={Hossein Shakibania and Sina Raoufi and Hassan Khotanlou},
      year={2023},
      eprint={2308.12902},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
