lzx551402 / aslfeat
Implementation of CVPR'20 paper - ASLFeat: Learning Local Features of Accurate Shape and Localization
License: MIT License
Hi, @zjhthu Thanks for your great work!
I used your pretrained model to evaluate on the HPatches dataset, but the results are much worse than reported in the paper. The evaluation results of the pretrained model are as follows:
----------i_eval_stats----------
avg_n_feat 4492
avg_rep 0.5117718
avg_precision 0.5868967
avg_matching_score 0.310993
avg_recall 0.56954247
avg_MMA 0.58678484
avg_homography_accuracy 0.8846155
----------v_eval_stats----------
avg_n_feat 4967
avg_rep 0.49724704
avg_precision 0.53153855
avg_matching_score 0.23847558
avg_recall 0.45807076
avg_MMA 0.48269215
avg_homography_accuracy 0.46428576
----------all_eval_stats----------
avg_n_feat 4738
avg_rep 0.5042403
avg_precision 0.55819243
avg_matching_score 0.2733914
avg_recall 0.5117423
avg_MMA 0.53281087
avg_homography_accuracy 0.6666667
The model I trained myself and the post-CVPR updated model also perform poorly. Do I need to adjust certain parameters, or is there a problem with the evaluation scripts? Looking forward to your suggestions.
Thanks.
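For context on the avg_MMA numbers above: MMA (mean matching accuracy) is commonly computed as the fraction of putative matches whose reprojection error under the ground-truth homography falls below a pixel threshold. A minimal sketch of that common definition (my own illustration, not the repo's evaluation script):

```python
import numpy as np

def mean_matching_accuracy(kpts1, kpts2, H, thresholds=(1, 2, 3)):
    """Fraction of matches whose reprojection error under the ground-truth
    homography H is below each pixel threshold.

    kpts1, kpts2: (N, 2) arrays of matched (x, y) coordinates.
    H: 3x3 homography mapping image-1 coordinates into image 2.
    """
    # Project image-1 keypoints into image 2 via homogeneous coordinates.
    pts = np.hstack([kpts1, np.ones((len(kpts1), 1))])
    proj = pts @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    err = np.linalg.norm(proj - kpts2, axis=1)
    return [float((err <= t).mean()) for t in thresholds]

# Perfect matches under the identity homography give MMA = 1.0 everywhere.
kpts = np.array([[10.0, 10.0], [20.0, 30.0], [5.0, 5.0]])
mma = mean_matching_accuracy(kpts, kpts, np.eye(3))
```

The reported avg_MMA then averages this quantity over image pairs (and typically over thresholds).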
Hi! Thank you for the great work and for sharing! ASLFeat is a very competitive method and currently one of the best in MMA.
I have been interested in ASLFeat's performance on 3D reconstruction, as it seems to do very well at registering many images and producing sparse points.
So I ran a small test using the output from your package and integrated it into the ETH benchmark evaluation on Herzjesu.
Could you advise whether I should set anything in addition?
Here are the results of ASLFeat.
Also, just to give a reference, here are the results of SIFT. The other methods seem to do OK too.
I also attached my settings in the config here.
If I did not set something correctly, please let me know:
data_name: 'eth'
data_split: ['Herzjesu', 'Fountain', 'South-Building'] # ['Gendarmenmarkt', 'Madrid_Metropolis', 'Tower_of_London']
data_root: '/mnt/HDD4TB1/local-feature-evaluation/datasets'
dump_root:
truncate: [0, null]
model_path: 'pretrained/aslfeat/model.ckpt-380000' # 'pretrained/aslfeat/model.ckpt-60000'
overwrite: true
net:
  max_dim: 2048 # 1600
  config:
    kpt_n: 20000
    kpt_refinement: true
    deform_desc: 1
    score_thld: 0.5
    edge_thld: 10
    multi_scale: true
    multi_level: true
    nms_size: 3
    eof_mask: 5
    need_norm: true
    use_peakiness: true
post_format:
  suffix: '_aslfeat20K2048_380'
Could I have the .meta file to load the model?
Hi,
The ASLFeat paper says that it can output affine frames, similar to AffNet; however, the code outputs keypoint locations only. Is it possible to get the full frame?
Best, Dmytro
Hello, how should I calculate the time taken for a simple point of interest extraction? In the code, interest points and descriptors are calculated simultaneously:
desc, kpt, _ = model.run_test_data(gray_img)
def run_test_data(self, data):
    """"""
    out_data = self._run(data)
    return out_data

def _run(self, data):
    raise NotImplementedError
This makes it unclear how to measure the time consumed by keypoint extraction alone. I look forward to your reply, and thank you for your excellent work!
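Since detection and description share one forward pass in ASLFeat, the simplest measurement is to time the whole call; isolating detection alone would require fetching only the score-map/keypoint tensors inside `_run`. A sketch of whole-call timing (the `run_test_data` body here is a hypothetical stand-in for the real model, used so the snippet is self-contained):

```python
import time

def run_test_data(gray_img):
    # Hypothetical stand-in for ASLFeat's model.run_test_data();
    # substitute the real model object when profiling.
    return [[0.0] * 128], [[0.0, 0.0]], None

gray_img = None  # placeholder for a loaded grayscale image

# Warm-up call: exclude one-off graph construction / initialization cost.
run_test_data(gray_img)

start = time.perf_counter()
desc, kpt, _ = run_test_data(gray_img)
elapsed = time.perf_counter() - start
print("detection + description: %.2f ms" % (elapsed * 1000))
```

Averaging over many images (after the warm-up) gives a more stable figure than a single call.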
Hi, dear author,
I noticed that you use need_norm in our_score():
https://github.com/lzx551402/ASLFeat/blob/master/models/cnn_wrapper/aslfeat.py#L106
However, you do not discuss this normalization.
Could you tell me how it affects the performance?
I want to reproduce the results of ASLFeat on image retrieval, but I can't find the script named benchmark.py. I think the script is important: it may contain (1) how to use libvot with ASLFeat, and (2) the parameters of the vocabulary tree. Can you provide this script? My email is "[email protected]". Thank you very much.
network.py
I met an error in the core, at line 485:
feat_h, feat_w = [i for i in feature_map_size[0: 2]]
x, y = tf.meshgrid(tf.range(feat_w), tf.range(feat_h))
feat_h and feat_w cannot be converted to float. How do I fix it? Can you help me? Thanks!
The traceback ends with:
NotImplementedError: Cannot convert a symbolic Tensor (meshgrid/Size_1:0) to a numpy array.
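This particular error is typically a NumPy >= 1.20 incompatibility with the TF 1.x version the repo targets, so pinning numpy<1.20 usually resolves it without code changes. An alternative is to keep the height/width symbolic instead of forcing them into Python scalars; a sketch of that pattern (written against TF 2 eager mode for illustration, not the repo's TF 1 code path):

```python
import tensorflow as tf

# tf.shape() yields a symbolic/dynamic shape tensor, so tf.range and
# tf.meshgrid never attempt a numpy conversion of an unresolved dimension.
feature_map = tf.zeros([1, 30, 40, 128])  # stand-in NHWC feature map
shape = tf.shape(feature_map)
feat_h, feat_w = shape[1], shape[2]
x, y = tf.meshgrid(tf.range(feat_w), tf.range(feat_h))
```

With default 'xy' indexing, both grids have shape (height, width).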
Thanks for your remarkable work! I followed your instructions to evaluate the local features on the Aachen day-night benchmark using the evaluation script mentioned. However, the results are much higher than the ones provided in your paper (for D2-Net, R2D2, SuperPoint, etc.). I have not fine-tuned the setup in the evaluation script, and I cannot find the reason for such a gap. Could you give me some advice on reproducing the correct results? Thanks :)
As a reference, my results are:
SuperPoint: 70.4/79.6/88.8
D2-Net: 74.5/85.7/99.0
R2D2: 74.5/84.7/98.0
which seems unreasonable ...
Hello,
Thank you for your work!
I have noticed from your 'datasets/base_dataset.py' file that there are cases where your network accepts RGB inputs (if self.config['stage'] == 'reg'). However, when looking at all your evaluations, you are always feeding grayscale images to the network. I was wondering in which cases you would prefer RGB images, if any at all. In general, did you notice an increase in performance by using grayscale input instead of RGB?
Best,
Ali
Hi there, I'm doing some tests on your detector & descriptor and I find it very interesting. I noticed that if I try to evaluate the correspondences between an image and the same image rotated by 90 degrees, ASLFeat does not find any matches. If I instead rotate the image by a small angle, the network is able to find matches up to a limit of about 20 degrees. This happens because the descriptor is not invariant to rotations, is that correct? Or am I missing something? Thank you very much!
Hi,
The download links for the pretrained models are not accessible to me. Can you double-check that? Thanks!
Hi @zjhthu, @lzx551402, @vdvchen,
While reviewing your code I came across the step where points near the edge of the frame are removed. This is something I have also seen done in SuperPoint. I am curious why you do this, and what, if any, theoretical need there is for this step.
Best,
Patrick
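On the border-removal question: the config earlier in this thread exposes this as eof_mask (5 px). The usual rationale, as commonly argued in the literature rather than a statement of the authors' design, is that descriptors near the boundary are computed over zero-padded receptive fields and keypoint localization can drift there, so such detections tend to be unreliable. A sketch of what the suppression amounts to:

```python
import numpy as np

def mask_border_scores(score_map, border=5):
    """Zero out detector scores within `border` px of the image boundary.

    Border descriptors see zero-padded context and localize poorly, so
    removing them usually improves matching precision (common rationale,
    not an authoritative account of ASLFeat's implementation).
    """
    masked = score_map.copy()
    masked[:border, :] = 0
    masked[-border:, :] = 0
    masked[:, :border] = 0
    masked[:, -border:] = 0
    return masked

scores = np.random.rand(480, 640)
masked = mask_border_scores(scores, border=5)
```

With non-maximum suppression applied afterwards, no keypoint can then fall inside the masked band.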
In Multi-level keypoint detection (MulDet) of Section 3.3, Fig. 5a (b, c, d) should be Fig. 2a (b, c, d).
Any advice will be helpful! Thanks!
Hello! I also want to reproduce the results of ASLFeat on image retrieval, but I can't find the script named benchmark.py. Can you offer the script? My email is "[email protected]". Thank you very much!
I extracted the FM-Bench features using python evaluations.py --config configs/fmbench_eval.yaml, but I find that some of the keypoint and descriptor files are symbolic links. So when I run the FM-Bench Pipeline_Demo, the fid returned by fopen('.keypoints') is -1, and I get "Invalid file identifier. Use fopen to generate a valid file identifier."
Here is the result when I run file 0006* in the features folder:
0006_l.descriptors: symbolic link to /data_ssd/FM-Bench/Features_aslfeat_ms/TUM/0005_l.descriptors
0006_l.keypoints: symbolic link to /data_ssd/FM-Bench/Features_aslfeat_ms/TUM/0005_l.keypoints
0006_r.descriptors: data
0006_r.keypoints: data