
cvlab-epfl / log-polar-descriptors


Public implementation of "Beyond Cartesian Representations for Local Descriptors", ICCV 2019

License: Apache License 2.0


log-polar-descriptors's Introduction

Summary

This repository provides a reference implementation for the paper "Beyond Cartesian Representations for Local Descriptors" (link). If you use it, please cite the paper:

@inproceedings{Ebel19,
   author = {Patrick Ebel and Anastasiia Mishchuk and Kwang Moo Yi and Pascal Fua and Eduard Trulls},
   title = {{Beyond Cartesian Representations for Local Descriptors}},
   booktitle = {Proc. of ICCV},
   year = 2019,
}

Please consider also citing the paper upon which the network architecture is based:

@inproceedings{Mishchuk17,
   author = {Anastasiya Mishchuk and Dmytro Mishkin and Filip Radenovic and Jiri Matas},
   title = {{Working hard to know your neighbor's margins: Local descriptor learning loss}},
   booktitle = {Proc. of NIPS},
   year = 2017,
}

Poster link

Setting up your environment

Our code relies on PyTorch. Please see system/log-polar.yml for the list of dependencies. You can create an environment including all of them with miniconda: conda env create -f system/log-polar.yml.

Inference

We provide two scripts to extract descriptors from an image. See the notebook example.ipynb for a demo where you can visualize the log-polar patches. You can also run example.py, a command-line script that extracts features and saves them to an HDF5 file; run python example.py --help for the available options. Keypoints are extracted with SIFT via OpenCV.
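As a rough illustration of this pipeline (not the repository's exact API; the file names, dataset names, and placeholder descriptors below are assumptions), extraction boils down to detecting SIFT keypoints with OpenCV and writing the results to HDF5. This sketch assumes OpenCV >= 4.4, where SIFT lives in the main module:

import cv2
import h5py
import numpy as np

# Detect SIFT keypoints, as example.py does via OpenCV.
img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints = sift.detect(img, None)

# The log-polar descriptor network would run here; we store zeros as
# placeholder descriptors for illustration only.
descriptors = np.zeros((len(keypoints), 128), dtype=np.float32)

# Save keypoint locations and descriptors to HDF5. The dataset names
# are assumptions, not the actual output format of example.py.
with h5py.File("features.h5", "w") as f:
    f.create_dataset("keypoints",
                     data=np.array([kp.pt for kp in keypoints], dtype=np.float32))
    f.create_dataset("descriptors", data=descriptors)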

Training

Download the data

We rely on the data provided by the 2019 Image Matching Workshop challenge (2020 edition here). You will need to download the following:

  • Images (a): available here. We use the following sequences for training: brandenburg_gate, grand_place_brussels, pantheon_exterior, taj_mahal, buckingham, notre_dame_front_facade, sacre_coeur, temple_nara_japan, colosseum_exterior, palace_of_westminster, st_peters_square. You only need the images; feel free to delete the other files.
  • Ground truth match data for training (b): available here.

We generate valid matches from the calibration data and depth maps available in the IMW dataset; please refer to the paper for details. We do not provide the preprocessing code, as we are currently refactoring it.
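While the preprocessing code is not included, the underlying operation is standard multi-view geometry: back-project a keypoint using its depth value and camera intrinsics, move it into the second camera's frame with the relative pose, and reproject it. A minimal sketch, assuming a pinhole model without lens distortion (all names here are hypothetical):

import numpy as np

def reproject(kp_xy, depth, K1, K2, R, t):
    # kp_xy: (x, y) pixel in image 1; depth: depth at that pixel in
    # camera-1 coordinates; K1, K2: 3x3 intrinsic matrices; R, t:
    # relative pose from camera 1 to camera 2.
    xy1 = np.array([kp_xy[0], kp_xy[1], 1.0])
    X1 = depth * (np.linalg.inv(K1) @ xy1)   # back-project to 3D
    X2 = R @ X1 + t                          # move to camera-2 frame
    uv = K2 @ X2                             # project into image 2
    return uv[:2] / uv[2]

A keypoint pair would then be kept as a valid match when the reprojection lands within a small pixel threshold of a detected keypoint in the other image; the folder name patchesScale16_network_1.5px suggests a 1.5 px threshold, but refer to the paper for the actual procedure.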

This data should go into the dl folder, which contains colmap (a) and patchesScale16_network_1.5px (b).
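The expected layout is therefore:

dl/
  colmap/                          # (a) images
  patchesScale16_network_1.5px/    # (b) ground-truth match data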

Train

Configuration files are stored under configs; see init.yml for an example, which is also the default configuration. You can specify a different one with:

$ python modules/hardnet/hardnet.py --config_file configs/init.yml

Please refer to the code for the different parameters.
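Since the parameters are documented in the code rather than here, a quick way to see what a configuration file exposes is to load it with PyYAML (a minimal sketch; it assumes PyYAML is available, which the conda environment should provide):

import yaml

# Print the top-level sections and values of the default config to see
# which parameters can be overridden.
with open("configs/init.yml") as f:
    cfg = yaml.safe_load(f)
for section, value in cfg.items():
    print(section, "->", value)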

Evaluation


You can evaluate performance on the AMOS and HPatches datasets. First, clone the dependencies with git submodule update --init, and download the weights for GeoDesc following their instructions. You can then run the following scripts, which download and extract the data in the appropriate format:

$ sh eval_amos.sh
$ sh eval_amos_other_baselines.sh

$ sh eval_hpatches.sh
$ sh eval_hpatches_other_baselines.sh

log-polar-descriptors's People

Contributors

dagnyt, etrulls

log-polar-descriptors's Issues

Checking Validation Loss

Hi,

I am getting 0.083 FPR95 on the validation set for PTN and 0.54 for STN, using the trained models you provide on the website. Are these results expected?
I also noticed that the STN config file sets
VAL_SPLIT: 'dl/patchesScale16_network_1.5px/PTN/val/'
Is it intentional that the STN model is validated on the PTN split?

Train Data

The training data link is broken. Could you update it?
