ASLFeat's Issues

Problem in 3D reconstruction on ETH: points shifted from reference images

Hi! Thank you for the great work and for sharing! ASLFeat is a very competitive method and is currently one of the best in MMA.

I have been interested in ASLFeat's performance on 3D reconstruction, as it seems to do very well in producing a high number of registered images and sparse points.

So, I ran a small test using the output from your package and integrated it into the ETH benchmark evaluation on Herzjesu.

  • The problem is that I got the following results: the sparse points projected onto the images are shifted from where they are supposed to be (as shown in image number 8 on the right).

Could you advise whether I should set anything in addition?

Here are the results of ASLFeat.

Screenshot from 2020-09-18 02-27-54

Also, just to give a reference, here are the results of SIFT. The other methods seem to do OK too.

Screenshot from 2020-09-18 02-47-16

Also, I attached my settings in the config here.
If I did not set something right, please let me know:

data_name: 'eth'
data_split: ['Herzjesu', 'Fountain', 'South-Building']  #['Gendarmenmarkt', 'Madrid_Metropolis', 'Tower_of_London']
data_root: '/mnt/HDD4TB1/local-feature-evaluation/datasets'
dump_root: 
truncate: [0, null]
model_path: 'pretrained/aslfeat/model.ckpt-380000'  # or 'pretrained/aslfeat/model.ckpt-60000'
overwrite: true
net:
  max_dim: 2048  # or 1600
  config:
    kpt_n: 20000
    kpt_refinement: true
    deform_desc: 1
    score_thld: 0.5
    edge_thld: 10
    multi_scale: true
    multi_level: true
    nms_size: 3
    eof_mask: 5
    need_norm: true
    use_peakiness: true
post_format:
  suffix: '_aslfeat20K2048_380'

Benchmark on HPatches dataset

Hi @zjhthu, thanks for your great work!
I used your pretrained model to evaluate on the HPatches dataset, but the results are much worse than those reported in the paper. The evaluation results of the pretrained model are as follows:

----------i_eval_stats----------
avg_n_feat 4492
avg_rep 0.5117718
avg_precision 0.5868967
avg_matching_score 0.310993
avg_recall 0.56954247
avg_MMA 0.58678484
avg_homography_accuracy 0.8846155
----------v_eval_stats----------
avg_n_feat 4967
avg_rep 0.49724704
avg_precision 0.53153855
avg_matching_score 0.23847558
avg_recall 0.45807076
avg_MMA 0.48269215
avg_homography_accuracy 0.46428576
----------all_eval_stats----------
avg_n_feat 4738
avg_rep 0.5042403
avg_precision 0.55819243
avg_matching_score 0.2733914
avg_recall 0.5117423
avg_MMA 0.53281087
avg_homography_accuracy 0.6666667

The model I trained myself and the post-CVPR update model also perform poorly. Do I need to adjust certain parameters? Or are there problems with the evaluation scripts? Looking forward to your suggestions.
Thanks.

FM-Bench features are symbolic links?

I extracted the FM-Bench features using python evaluations.py --config configs/fmbench_eval.yaml, but I found that the kpts and descs are symbolic links. So when I run the FM-Bench Pipeline_Demo, the fid returned by fopen('.keypoints') is -1, and I get "Invalid file identifier. Use fopen to generate a valid file identifier."
Here is the result when I run file 0006* in the features folder:
0006_l.descriptors: symbolic link to /data_ssd/FM-Bench/Features_aslfeat_ms/TUM/0005_l.descriptors
0006_l.keypoints: symbolic link to /data_ssd/FM-Bench/Features_aslfeat_ms/TUM/0005_l.keypoints
0006_r.descriptors: data
0006_r.keypoints: data
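If the links are correct but MATLAB cannot open the files through them, one workaround is to replace every symbolic link with a real copy of its target before running the MATLAB pipeline. A minimal sketch (the helper name `materialize_symlinks` is hypothetical, and this assumes the link targets exist and are valid feature files):

```python
import os
import shutil

def materialize_symlinks(feature_dir):
    """Replace each symbolic link in feature_dir with a real copy
    of its target, so readers that cannot follow links still work."""
    for name in os.listdir(feature_dir):
        path = os.path.join(feature_dir, name)
        if os.path.islink(path):
            target = os.path.realpath(path)   # resolve the link chain
            os.remove(path)                   # drop the link itself
            shutil.copyfile(target, path)     # put real data in its place
```

Running this over each scene folder (e.g. the TUM directory above) before the MATLAB demo should make fopen succeed, at the cost of some duplicated disk space.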

Calculate the time taken for interest point extraction

Hello, how should I measure the time taken by interest point extraction alone? In the code, interest points and descriptors are computed simultaneously:
desc, kpt, _ = model.run_test_data(gray_img)

    def run_test_data(self, data):
        """"""
        out_data = self._run(data)
        return out_data

    def _run(self, data):
        raise NotImplementedError

This really confuses me: how can I measure the time consumed by interest point extraction on its own? I look forward to your reply, and thank you for your excellent work!
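Since run_test_data returns keypoints and descriptors together, the simplest measurement is the end-to-end wall time of that call; separating detection from description would require timing inside the network graph itself. A hedged sketch of such a wrapper (timed_extraction is a hypothetical helper, not part of the repo):

```python
import time

def timed_extraction(model, gray_img, n_warmup=3, n_runs=10):
    """Average end-to-end extraction time over several runs.
    Warm-up runs exclude one-off graph construction / allocation cost."""
    for _ in range(n_warmup):
        model.run_test_data(gray_img)
    t0 = time.perf_counter()
    for _ in range(n_runs):
        desc, kpt, _ = model.run_test_data(gray_img)
    elapsed = (time.perf_counter() - t0) / n_runs  # seconds per image
    return desc, kpt, elapsed
```

Note that on a GPU backend the first call is usually much slower than steady state, which is why the warm-up loop matters.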

Grayscale vs RGB input

Hello,
Thank you for your work!
I have noticed in your 'datasets/base_dataset.py' file that there are cases where your network accepts RGB inputs (if self.config['stage'] == 'reg'). However, all your evaluations feed grayscale images to the network. I was wondering in which cases you would prefer RGB images, if any. In general, did you notice a performance increase from using grayscale input instead of RGB?

Best,
Ali
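For reference, the grayscale conversion being discussed is typically a fixed luma weighting of the color channels. A minimal NumPy stand-in (this mirrors the ITU-R BT.601 weights that cv2.cvtColor uses for RGB-to-gray, and is only an assumption about what the repo's loader does):

```python
import numpy as np

def to_grayscale(rgb):
    """BT.601 luma conversion: 0.299 R + 0.587 G + 0.114 B,
    rounded back to the input dtype."""
    w = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    return np.round(rgb.astype(np.float32) @ w).astype(rgb.dtype)
```

Because the weighting is fixed, a grayscale-trained network sees a single well-defined channel, whereas an RGB-trained one must learn any color invariance itself.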

EOF Keypoint Removal

Hi @zjhthu, @lzx551402, @vdvchen,

While reviewing your code, I came across the step where points near the edge of the frame are removed. I have seen this done in SuperPoint as well. I am curious why you do this, and what, if any, the theoretical need for this step is.

Best,
Patrick

Rotational invariance

Hi there, I'm doing some tests on your detector and descriptor and I find it very interesting. I noticed that if I evaluate correspondences between an image and the same image rotated by 90 degrees, ASLFeat does not find any matches. If instead I rotate the image by a small angle, the network is able to find matches up to a limit angle of about 20 degrees. This happens because the descriptor is not invariant to rotations, correct? Or am I missing something? Thank you very much!
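One way to separate detector effects from descriptor effects in such a test is to use an exact 90-degree rotation, where the ground-truth coordinate mapping is known in closed form, and check detector repeatability independently of matching. A small NumPy sketch of that mapping (the helper name is hypothetical; keypoints are assumed to be (x, y) pixel coordinates):

```python
import numpy as np

def rot90_ccw_coords(kpts_xy, img_w):
    """Map (x, y) keypoints under a 90-degree CCW rotation of a
    (H, W) image, matching np.rot90(img): (x, y) -> (y, W - 1 - x)."""
    x, y = kpts_xy[:, 0], kpts_xy[:, 1]
    return np.stack([y, img_w - 1 - x], axis=1)

# Sanity check: mark one pixel, rotate, and confirm the mapped
# coordinate lands on the same marker in the rotated image.
img = np.zeros((4, 6), dtype=np.uint8)   # H=4, W=6
img[1, 2] = 255                          # marker at x=2, y=1
rot = np.rot90(img)                      # CCW rotation, shape (6, 4)
(xr, yr), = rot90_ccw_coords(np.array([[2, 1]]), img_w=6)
assert rot[yr, xr] == 255
```

If the detector re-fires at the mapped locations but the descriptors still fail to match, that isolates the descriptor's lack of rotation invariance as the cause.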

Question about Affine feature output

Hi,
The ASLFeat paper says that it can output affine frames, similar to AffNet; however, the code outputs keypoint locations only. Is it possible to get the full frame?

Best, Dmytro

Evaluation on Aachen day-and-night benchmark

Thanks for your remarkable work! I followed your instructions to evaluate local features on the Aachen day-and-night benchmark using the evaluation script mentioned. However, the results are much higher than the ones provided in your paper (for D2-Net, R2D2, SuperPoint, etc.). I have not fine-tuned the setup in the evaluation script, and I cannot find the cause of this gap. Could you give me some advice on reproducing the correct results? Thanks :)

As a reference, my results are:
SuperPoint: 70.4/79.6/88.8
D2-Net: 74.5/85.7/99.0
R2D2: 74.5/84.7/98.0
which seem unreasonably high ...

runtime error

Sorry to bother you. When I try to run the command python image_matching.py --config configs/matching_eval.yaml, I get the following error:
Screenshot from 2021-04-07 14-14-19
