Comments (13)
Depth data is only necessary when training (which we can easily get from SfM). For testing, we used the depth to verify if the matches are correct or not (they are treated as GT signals).
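For reference, the idea of using depth as a GT signal can be sketched in a few lines of numpy: lift a keypoint from image 1 to 3D with its depth, project it into image 2 with the relative pose, and accept the match if the matched keypoint lands nearby. This is a hypothetical illustration, not the repository's actual evaluation code; the function names and the 5-pixel threshold are assumptions.

```python
import numpy as np

def reproject(kpt, depth, K1, K2, R, t):
    """Lift a keypoint (x, y) in image 1 to 3D using its depth,
    then project it into image 2 with the relative pose (R, t)."""
    x, y = kpt
    # Back-project to a 3D point in camera-1 coordinates.
    p1 = depth * np.linalg.inv(K1) @ np.array([x, y, 1.0])
    # Transform into camera-2 coordinates and project.
    p2 = K2 @ (R @ p1 + t)
    return p2[:2] / p2[2]

def is_correct_match(kpt1, kpt2, depth1, K1, K2, R, t, thresh=5.0):
    """Treat the depth reprojection as the ground-truth signal:
    a match is correct if kpt2 lies within `thresh` pixels of the
    reprojection of kpt1."""
    proj = reproject(kpt1, depth1, K1, K2, R, t)
    return np.linalg.norm(proj - np.asarray(kpt2, dtype=float)) < thresh
```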
However, I tried to extract keypoints and descriptors for two similar images from the outdoors dataset without additional data, and tried the brute-force and FLANN matchers. Both give incorrect results.
There's too little information here for me to answer your question.
from lf-net-release.
I tried two approaches with two images from outdoor_examples
1)
nn_dist, nn_inds2, _, _, _ = nearest_neighbors(desc_feats1, desc_feats2)
kpts2_corr = tf.Session().run(tf.cast(tf.gather(kpts2, nn_inds2), tf.float32))
And then draw the nearest corresponding keypoints
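The `tf.Session().run(...)` call there only evaluates the gather. If it helps to see the logic without a TF1 session, the same step can be written in plain numpy; this is a hypothetical equivalent for illustration, not the repository's `nearest_neighbors` implementation.

```python
import numpy as np

def gather_corresponding(kpts2, desc1, desc2):
    """For each descriptor in image 1, find its nearest neighbor in
    image 2 and gather the corresponding keypoint coordinates."""
    # Pairwise L2 distances between all descriptor pairs.
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nn_inds2 = d.argmin(axis=1)  # nearest-neighbor index per query
    nn_dist = d[np.arange(len(desc1)), nn_inds2]
    # Equivalent of tf.gather(kpts2, nn_inds2) cast to float32.
    return nn_dist, kpts2[nn_inds2].astype(np.float32)
```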
2)
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(desc_feats1, desc_feats2)
matches = sorted(matches, key=lambda x: x.distance)
Then draw the best 10-20. Most of them are incorrect.
The procedure of desc_feats calculation was taken from run_lfnet.py
Have you tried doing the same thing with let's say SIFT keypoints? Maybe they are just very hard images. One way to easily check is to use some sample images in the dataset to try.
For the BFMatcher, you might want to disable cross check.
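One common alternative to cross-checking is Lowe's ratio test. Below is a minimal numpy sketch, equivalent in spirit to `bf.knnMatch(desc_feats1, desc_feats2, k=2)` followed by the ratio test; the 0.8 threshold and the function name are assumptions, not anything from the repository.

```python
import numpy as np

def match_ratio_test(desc1, desc2, ratio=0.8):
    """Nearest-neighbor matching without cross-check, filtered by
    Lowe's ratio test (requires at least 2 descriptors in desc2)."""
    # Pairwise L2 distances between all descriptors.
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(d):
        order = np.argsort(row)
        best, second = order[0], order[1]
        # Keep the match only if the best distance is clearly smaller
        # than the second-best (ambiguous matches are discarded).
        if row[best] < ratio * row[second]:
            matches.append((i, best, row[best]))
    # Sort by distance so the "best" matches come first, as with
    # sorted(matches, key=lambda m: m.distance) in OpenCV.
    return sorted(matches, key=lambda m: m[2])
```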
I tried on the sample images with London Bridge.
This is the first method result
And the second one (top-10 without cross-check)
An easier example gives slightly better results. However, on the sample images the results are as shown above.
So if I take
Try to compare your results with e.g. DELF. This would be honest to other descriptors:)
Also from my point of view you compare methods with different training base. Yours is trained with additional depth data, others are not trainable at all.
However on this pair of images SIFT shows better results for matching (took 50 best matches to compare with the first method above) and this is not an easier example.
This would be honest to other descriptors:)
This actually hurts my feelings :-( I disagree with you on this. What we claim is that under the setup that we tested on, we can get good results. We are not trying to sell you a final product. What we want to show is the potential of our method.
Yours is trained with additional depth data, others are not trainable at all.
I also tend to disagree with this. The depth data is obtained from the images themselves, without any additional data source (at least for the outdoors model). Since we use an off-the-shelf method, we did not put too much emphasis on this.
However on this pair of images SIFT shows better results for matching (took 50 best matches to compare with the first method above) and this is not an easier example.
How is this comparable? You extract 10 from one, and 50 from the other. It is only comparable when only 50 keypoints are extracted per image and matched together for both methods. I would not be surprised if our method performs worse than SIFT in this setup as we trained for a higher number of keypoints.
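A fair comparison at a fixed budget could be sketched like this. The helper is hypothetical and assumes each method exposes a per-keypoint confidence score (e.g. the detector response); applying it to both methods before matching keeps the keypoint counts equal.

```python
import numpy as np

def top_k(kpts, desc, scores, k=50):
    """Keep only the k highest-scoring keypoints so that both methods
    are compared with the same keypoint budget."""
    order = np.argsort(-np.asarray(scores, dtype=float))[:k]
    return kpts[order], desc[order]
```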
Try to compare your results with e.g. DELF
In our paper, we compare also with SuperPoint and our prior work on LIFT, which are both trained methods. DELF is aimed at image retrieval, so it would actually be unfair to compare against it in terms of image matching.
If you want good keypoints with just a few keypoints, I'd say you can also try to train our method in that scenario, and maybe you can get better results. This however is something that I can only guess, so please let us know how it turns out.
I see. My bad there. I looked at the few lines in the example above compared to the one below and jumped to conclusions.
In this case I guess the method breaks down at these low feature counts. I guess we definitely need to improve on that part.
Thank you very much for the report though. I appreciate it.
However, as you can see in the code, the features displayed are the best in terms of the kNN search, so the following keypoints should only be worse. Otherwise, what can we say about the end results: the images are matching or not?
Glad I helped you.
end results: the images are matching or not.
Ooh, yeah. These keypoints are really not the best for image retrieval. We are mostly aiming for camera pose estimation. I see why you mentioned DELF.
In the training setup there's really nothing encouraging/discouraging matches between matching and non-matching images. The setup assumes that the images match, and tries to be as invariant as possible to camera pose/scene changes. For image-level matching, I don't think our method would be the method to use.
Hi! I am looking to test the matching on a pair of my own pictures. I have the GroundTruth Depth of both of them. Could you explain what is needed to edit in the notebook in order to make it work on my data?
Hi, I'm going to close this thread. @IddoBotzer Can you please create it as a separate issue? This one seems irrelevant to your question.
Thanks,
Kwang