This repository contains the PyTorch implementation of the project: Joint-PL - Joint point and line detection and description. This project was conducted as an ETH Zürich Master semester project under the supervision of Rémi Pautrat.
Joint-PL is the first deep network for joint line and point detection and description. In this work, we construct point and line descriptors from a shared dense description map. The proposed model achieves performance comparable to the state of the art with lower inference time.
Point detection, line detection, and matching:
Line matching under extreme viewpoint and scale changes:
We recommend using this code in a Python environment (e.g. venv or conda). The following script installs the necessary requirements with pip:
pip install -r requirements.txt
Set your dataset and experiment paths by modifying the variables "EXPER_PATH" and "DATA_PATH" in the file JointPL/settings.py.
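As a reference, JointPL/settings.py might look like the sketch below; the directory values are placeholders for illustration only, to be replaced with your own paths:

```python
# Example JointPL/settings.py (the paths below are placeholders; point them
# at your own dataset and experiment directories).
import os

# Root directory holding the datasets (Wireframe, HPatches, York Urban).
DATA_PATH = os.path.expanduser("~/data/JointPL")

# Root directory where experiments dump checkpoints, logs, and summaries.
EXPER_PATH = os.path.expanduser("~/experiments/JointPL")
```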
Install the Python package:
pip install -e .
We used the Wireframe dataset as processed in the F-Clip work to train and test our model. You can download this dataset from this link.
To check the generalization ability of our model, we also test it on the HPatches dataset and on the York Urban dataset as processed in the F-Clip work. You can download them separately via link1 and link2.
To train a model, execute:
python3 -m JointPL.train experiment_name --conf JointPL/configs/config_name.yaml
This creates a new directory experiment_name/ in EXPER_PATH and dumps the configuration, model checkpoints, stdout logs, and TensorBoard summaries there.
We provide the checkpoints of two pretrained models:
- checkpoint_best.tar: Joint-PL model that achieves the best performance on the validation set.
- checkpoint_34.tar: Joint-PL model from the last training epoch.
Note that you do not need to untar the models; you can place them directly in /output/joint_pl_pretrained_model.
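The placement step can be sketched as follows; the `touch` lines merely stand in for the downloaded checkpoint files, so replace them with your actual downloads:

```shell
# Sketch: the evaluation scripts expect the (still-tarred) checkpoints in
# output/joint_pl_pretrained_model. The touch lines are stand-ins for the
# real downloads of checkpoint_best.tar and checkpoint_34.tar.
touch checkpoint_best.tar checkpoint_34.tar
mkdir -p output/joint_pl_pretrained_model
mv checkpoint_best.tar checkpoint_34.tar output/joint_pl_pretrained_model/
```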
To evaluate the point detector and descriptor performance, execute:
python3 -m JointPL.evaluation.evaluate_point --model joint_model --dataset Wireframe/HPatches
To evaluate the line detector performance, first generate predictions of line locations in an .npz file by running:
python3 -m JointPL.evaluation.validate_line --model joint_pl_pretrained_model --dataset Wireframe/York --file best/last
then compute the sAP value of the trained model by running:
python3 -m JointPL.evaluation.evaluate_line --model joint_pl_pretrained_model --dataset Wireframe/York --file best/last
You can directly run the scripts in JointPL/visualization to visualize detected points and lines as well as matched lines. In particular, JointPL/visualization/vis_line_match.py can be run locally on a CPU. The line detection and description file it uses can be generated by running JointPL/visualization/line_match_process.py.
This work was supervised by Rémi Pautrat; sincere thanks to him for his help and guidance. The model and evaluation code is borrowed or adapted from the KeyPointNet and F-Clip works, and the line matching part is based on the SOLD² work. Many thanks for their wonderful works and repos.