
Feature Aggregation and Propagation Network for Camouflaged Object Detection

Authors: Tao Zhou, Yi Zhou, Chen Gong, Jian Yang, and Yu Zhang.

1. Preface

  • This repository provides the code for "Feature Aggregation and Propagation Network for Camouflaged Object Detection", IEEE TIP 2022 (Paper, Arxiv Page).

2. Overview

2.1. Introduction

Camouflaged object detection (COD) aims to detect/segment camouflaged objects embedded in the environment, which has attracted increasing attention over the past decades. Although several COD methods have been developed, they still suffer from unsatisfactory performance due to the intrinsic similarities between the foreground objects and background surroundings. In this paper, we propose a novel Feature Aggregation and Propagation Network (FAP-Net) for camouflaged object detection. Specifically, we propose a Boundary Guidance Module (BGM) to explicitly model the boundary characteristic, which can provide boundary-enhanced features to boost the COD performance. To capture the scale variations of the camouflaged objects, we propose a Multi-scale Feature Aggregation Module (MFAM) to characterize the multi-scale information from each layer and obtain the aggregated feature representations. Furthermore, we propose a Cross-level Fusion and Propagation Module (CFPM). In the CFPM, the feature fusion part can effectively integrate the features from adjacent layers to exploit the cross-level correlations, and the feature propagation part can transmit valuable context information from the encoder to the decoder network via a gate unit. Finally, we formulate a unified and end-to-end trainable framework where cross-level features can be effectively fused and propagated for capturing rich context information. Extensive experiments on three benchmark camouflaged datasets demonstrate that our FAP-Net outperforms other state-of-the-art COD models. Moreover, our model can be extended to the polyp segmentation task, and the comparison results further validate the effectiveness of the proposed model in segmenting polyps.
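The gate unit in the CFPM controls how much encoder context flows into the decoder. The paper's exact formulation is not reproduced here; the following is a minimal NumPy sketch of one common form of a gated skip connection, where `gate_weight` is a hypothetical per-channel stand-in for a learned 1x1 convolution.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_propagation(encoder_feat, decoder_feat, gate_weight):
    """Gate an encoder feature before adding it to the decoder path.

    encoder_feat, decoder_feat: (C, H, W) feature maps.
    gate_weight: (C,) per-channel logits standing in for a learned 1x1 conv.
    """
    # Per-channel gate in [0, 1]: channels with large negative logits are
    # suppressed, so only valuable encoder context reaches the decoder.
    gate = sigmoid(gate_weight)[:, None, None]
    return decoder_feat + gate * encoder_feat
```

With a strongly positive logit the encoder channel passes through almost unchanged; with a strongly negative one it is effectively blocked, which is the behavior the propagation part relies on.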

2.2. Framework Overview


Figure 1: The overall architecture of the proposed FAP-Net, consisting of three key components, i.e., boundary guidance module, multi-scale feature aggregation module, and cross-level fusion and propagation module.

2.3. Qualitative Results


Figure 2: Qualitative Results.

3. Proposed Method

3.1. Training/Testing

The training and testing experiments are conducted using PyTorch with one NVIDIA Tesla P40 GPU of 24 GB Memory.

  1. Configuring your environment (Prerequisites):

    • Installing necessary packages: pip install -r requirements.txt.
  2. Downloading necessary data:

    • Download the training dataset and move it into ./data/; it can be found on Google Drive or Baidu Drive (extraction code: fapn).

    • Download the testing dataset and move it into ./data/; it can be found on Google Drive or Baidu Drive (extraction code: fapn).

    • Download our weights and move them to ./checkpoints/FAPNet.pth; they can be found on Google Drive or Baidu Drive (extraction code: fapn).

    • Download the Res2Net weights and move them to ./lib/res2net50_v1b_26w_4s-3cf99910.pth; they can be found on Google Drive or Baidu Drive (extraction code: fapn).

  3. Training Configuration:

    • After downloading the training dataset, just run train.py to train our model.
  4. Testing Configuration:

    • After downloading the pre-trained model and the testing dataset, just run test.py to generate the final prediction maps.

    • You can also download the prediction maps for CHAMELEON, CAMO, and COD10K from Google Drive or Baidu Drive (extraction code: fapn).

    • You can also download the prediction maps for NC4K from Google Drive or Baidu Drive (extraction code: fapn).
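Before training or testing, it can help to confirm that the downloads above landed in the expected locations. This is a small hypothetical helper (not part of the repository) using the paths named in the setup steps:

```python
import os

# Paths taken from the setup steps above; adjust if your layout differs.
REQUIRED = [
    "./data",
    "./checkpoints/FAPNet.pth",
    "./lib/res2net50_v1b_26w_4s-3cf99910.pth",
]

def missing_files(paths=REQUIRED):
    """Return the subset of required paths that do not exist yet."""
    return [p for p in paths if not os.path.exists(p)]

if __name__ == "__main__":
    for p in missing_files():
        print(f"missing: {p}")
```

An empty result means all required files are in place and train.py / test.py should find them.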

3.2. Evaluating your trained model

The evaluation is implemented in MATLAB (link); follow the instructions in ./eval/main.m and run it to generate the evaluation results.
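If MATLAB is not available, the simplest of the standard COD metrics, mean absolute error (MAE) between a prediction map and the ground-truth mask, can be sketched in Python. This is an illustrative reimplementation, not the repository's official MATLAB evaluation:

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a prediction map and a GT mask.

    Maps stored as 8-bit images (values in 0..255) are normalized
    to [0, 1] before comparison.
    """
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    if pred.max() > 1:
        pred = pred / 255.0
    if gt.max() > 1:
        gt = gt / 255.0
    return float(np.abs(pred - gt).mean())
```

For the full metric suite (S-measure, E-measure, weighted F-measure) used in the paper, rely on ./eval/main.m so the numbers stay comparable with the reported results.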

4. Citation

Please cite our paper if you find the work useful, thanks!

@article{zhou2022feature,
   title={Feature Aggregation and Propagation Network for Camouflaged Object Detection},
   author={Zhou, Tao and Zhou, Yi and Gong, Chen and Yang, Jian and Zhang, Yu},
   journal={IEEE Transactions on Image Processing},
   volume={31},
   pages={7036--7047},
   year={2022},
   publisher={IEEE}
}

⬆ back to top
