
ASLFeat implementation

Framework

TensorFlow implementation of ASLFeat for the CVPR'20 paper "ASLFeat: Learning Local Features of Accurate Shape and Localization", by Zixin Luo, Lei Zhou, Xuyang Bai, Hongkai Chen, Jiahui Zhang, Yao Yao, Shiwei Li, Tian Fang and Long Quan.

This paper presents a joint learning framework of local feature detectors and descriptors. Two aspects are addressed to learn a powerful feature: 1) shape-awareness of feature points, and 2) the localization accuracy of keypoints. If you find this project useful, please cite:

@article{luo2020aslfeat,
  title={ASLFeat: Learning Local Features of Accurate Shape and Localization},
  author={Luo, Zixin and Zhou, Lei and Bai, Xuyang and Chen, Hongkai and Zhang, Jiahui and Yao, Yao and Li, Shiwei and Fang, Tian and Quan, Long},
  journal={Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}

Requirements

Please use Python 3.7, and install NumPy, OpenCV (3.4.2) and TensorFlow (1.15.2). Refer to requirements.txt for the other dependencies.
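As a quick sanity check before running anything, the snippet below (not part of the repo) verifies the Python version and reports which of the required packages fail to import:

```python
# Environment sanity check (a sketch, not part of the ASLFeat repo):
# verifies the Python version and lists required packages that are missing.
import importlib.util
import sys

assert sys.version_info[:2] >= (3, 7), "ASLFeat targets Python 3.7"

missing = [pkg for pkg in ("numpy", "cv2", "tensorflow")
           if importlib.util.find_spec(pkg) is None]
print("missing packages:", missing or "none")
```

If any package is reported missing, re-run `pip install -r requirements.txt` inside the activated environment.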

If you are using conda, you may configure ASLFeat as:

conda create --name aslfeat python=3.7 -y && \
conda activate aslfeat && \
pip install -r requirements.txt

Get started

Clone the repo and download the pretrained model:

git clone https://github.com/lzx551402/aslfeat.git && \
cd aslfeat/pretrained && \
wget https://research.altizure.com/data/aslfeat_models/aslfeat.tar && \
tar -xvf aslfeat.tar

A quick example for image matching can be called by:

cd /local/aslfeat && python image_matching.py --config configs/matching_eval.yaml

You will be able to see the matching results by opening disp.jpg.

You may configure configs/matching_eval.yaml to test images of your own.
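The shipped configs/matching_eval.yaml defines the actual keys; the fragment below is purely illustrative (the field names here are assumptions, not copied from the repo) and only shows the kind of edit involved in pointing the script at your own image pair:

```yaml
# Illustrative sketch only -- consult configs/matching_eval.yaml for the
# real key names; the paths below are placeholders for your own images.
data:
  img1_path: /path/to/your/first_image.jpg
  img2_path: /path/to/your/second_image.jpg
model_path: pretrained/aslfeat
```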

Evaluation scripts

1. Benchmark on HPatches dataset

TODO

2. Benchmark on FM-Bench

Download the (customized) evaluation pipeline, and follow the instructions to download the testing data:

git clone https://github.com/lzx551402/FM-Bench.git

Configure configs/fmbench_eval.yaml and call:

cd /local/aslfeat && python evaluations.py --config configs/fmbench_eval.yaml

The extracted features will be stored in FM-Bench/Features_aslfeat. Use Matlab to run Pipeline/Pipeline_Demo.m and then Evaluation/Evaluate.m to obtain the results.

3. Benchmark on visual localization

Download the Aachen Day-Night dataset and follow the instructions to configure the evaluation.

Configure data_root in configs/aachen_eval.yaml, and call:

cd /local/aslfeat && python evaluations.py --config configs/aachen_eval.yaml

The extracted features will be saved alongside their corresponding images, e.g., the features for image /local/Aachen_Day-Night/images/images_upright/db/1000.jpg will be in the file /local/Aachen_Day-Night/images/images_upright/db/1000.jpg.aslfeat (the method name here is aslfeat).
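The naming convention above can be captured in a small helper; this is a sketch based only on the README's example (the feature-file format itself is defined by the repo, not shown here):

```python
# Sketch: map an image path to the feature file ASLFeat writes next to it.
# Convention taken from the README example (image path + "." + method name).
from pathlib import Path


def feature_path(image_path: str, method: str = "aslfeat") -> Path:
    """Return the expected feature-file path for an image."""
    return Path(image_path + "." + method)


p = feature_path("/local/Aachen_Day-Night/images/images_upright/db/1000.jpg")
print(p)  # /local/Aachen_Day-Night/images/images_upright/db/1000.jpg.aslfeat
```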

Finally, refer to the evaluation script to generate and submit the results to the challenge website.

4. Benchmark on Oxford Buildings dataset for image retrieval

Take the Oxford Buildings dataset as an example. First, download the evaluation data and the (parsed) groundtruth files:

mkdir Oxford5k && \
cd Oxford5k && \
mkdir images && \
wget https://www.robots.ox.ac.uk/~vgg/data/oxbuildings/oxbuild_images.tgz && \
tar -xvf oxbuild_images.tgz -C images && \
wget https://research.altizure.com/data/aslfeat_models/oxford5k_gt_files.tar && \
tar -xvf oxford5k_gt_files.tar

This script also allows for evaluating the Paris dataset. The (parsed) groundtruth files can be found here. Note that you should delete the corrupted images listed by the dataset, and put the remaining images under the same folder.

Next, configure configs/oxford_eval.yaml, and extract the features by:

cd /local/aslfeat && python evaluations.py --config configs/oxford_eval.yaml

We use the Bag-of-Words (BoW) method for image retrieval. To do so, clone and compile libvot:

cd Oxford5k && \
git clone https://github.com/hlzz/libvot.git && \
cd libvot && \
mkdir build && \
cd build && \
cmake -DLIBVOT_BUILD_TESTS=OFF -DLIBVOT_USE_OPENCV=OFF .. && \
make

and the mAP can be obtained by:

cd Oxford5k && \
python benchmark.py --method_name aslfeat_ms

Please cite libvot if you find it useful.
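For reference, the mAP reported above is the mean over queries of average precision on the ranked retrieval list. The snippet below is a textbook illustration of that metric on toy data, not the repo's or libvot's evaluation code:

```python
# Average precision over a ranked retrieval list (standard non-interpolated
# definition; for illustration only, not the benchmark's own scoring code).
def average_precision(ranked_ids, relevant_ids):
    relevant = set(relevant_ids)
    hits, precision_sum = 0, 0.0
    for rank, item in enumerate(ranked_ids, start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / max(len(relevant), 1)


# Toy example: two of three relevant images retrieved, at ranks 1 and 3.
ap = average_precision(["a", "x", "b", "y"], ["a", "b", "c"])
print(round(ap, 3))  # 0.556  (= (1/1 + 2/3) / 3)
```

mAP is then simply the mean of this value over all query images.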

5. Benchmark on ETH dataset

TODO

6. Benchmark on IMW2020

Download the data (validation/test) from the link, then configure configs/imw2020_eval.yaml, and finally call:

cd /local/aslfeat && python evaluations.py --config configs/imw2020_eval.yaml
