vcg-uvic / lf-net-release
Code Release for LF-Net: Learning Local Features from Images
License: Other
At line 702 of train_lfnet there is a call to a function named 'euclidean_augmentation'; however, I can't find its definition.
Could you please tell me where this function is defined?
Hello,
I am trying to freeze your model in order to use it with the OpenCV dnn module in my application.
I froze the model using these lines:
```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

input_graph_def = sess.graph.as_graph_def()

# Names of the output nodes to keep in the frozen graph.
output_node_names = ",".join(ops[k].op.name for k in
    ['kpts', 'feats', 'scale_maps', 'kpts_scale', 'degree_maps', 'kpts_ori'])

# Replace all variables with constants so the graph is self-contained.
output_graph_def = graph_util.convert_variables_to_constants(
    sess,                           # the session
    input_graph_def,                # graph definition holding the nodes
    output_node_names.split(","))   # output node names

output_graph = "export/frozen.pb"
with tf.gfile.GFile(output_graph, "wb") as f:
    f.write(output_graph_def.SerializeToString())
tf.train.write_graph(output_graph_def, 'export/', 'frozentxt.pbtxt', as_text=True)
```
Then I use these lines to optimize the network:
```python
from tensorflow.python.tools import optimize_for_inference_lib
from tensorflow.tools.graph_transforms import TransformGraph

with tf.gfile.FastGFile(output_graph, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Dump the graph so it can be inspected in TensorBoard.
tf.summary.FileWriter('logs', graph_def)

inp_node = 'Placeholder'
out_node = output_node_names.split(",")
graph_def = optimize_for_inference_lib.optimize_for_inference(
    graph_def, [inp_node], out_node, tf.float32.as_datatype_enum)
graph_def = TransformGraph(graph_def, [inp_node], out_node,
                           ["sort_by_execution_order"])
tf.train.write_graph(graph_def, "export/", 'graph_opt.pbtxt', as_text=True)
```
but I get some warnings:
```
WARNING:tensorflow:Didn't find expected Conv2D input to 'MSDeepDet/ConvOnlyResNet/block-1/pre-bn/FusedBatchNorm'
WARNING:tensorflow:Didn't find expected Conv2D input to 'MSDeepDet/ConvOnlyResNet/block-1/mid-bn/FusedBatchNorm'
WARNING:tensorflow:Didn't find expected Conv2D input to 'MSDeepDet/ConvOnlyResNet/block-2/pre-bn/FusedBatchNorm'
WARNING:tensorflow:Didn't find expected Conv2D input to 'MSDeepDet/ConvOnlyResNet/block-2/mid-bn/FusedBatchNorm'
WARNING:tensorflow:Didn't find expected Conv2D input to 'MSDeepDet/ConvOnlyResNet/block-3/pre-bn/FusedBatchNorm'
WARNING:tensorflow:Didn't find expected Conv2D input to 'MSDeepDet/ConvOnlyResNet/block-3/mid-bn/FusedBatchNorm'
WARNING:tensorflow:Didn't find expected Conv2D input to 'MSDeepDet/ConvOnlyResNet/fin-bn/FusedBatchNorm'
WARNING:tensorflow:Didn't find expected Conv2D input to 'SimpleDesc/bn1/FusedBatchNorm'
WARNING:tensorflow:Didn't find expected Conv2D input to 'SimpleDesc/bn2/FusedBatchNorm'
WARNING:tensorflow:Didn't find expected Conv2D input to 'SimpleDesc/bn3/FusedBatchNorm'
```
The nodes that generate these warnings are the ones with ConvOnlyResNet as input.
That block has no direct connection to the placeholder node, so how does it get its input blob?
There is something I'm not understanding; could you help me, please?
Kind regards
Hi, thanks for your great work on this paper! TensorFlow is really hard, so could you provide a PyTorch version? Thanks.
Hi, the paper says that the number of detected feature points is capped at 500 due to memory constraints, but that this threshold can be raised at test time. Where can this setting be changed? In practical use I find there are no keypoints in places where there clearly should be some, and I suspect it is related to this threshold.
Thank you!!
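In case it helps others: the 500-point cap is, in effect, a top-K selection over the detector scores, and the test script exposes it as a config value (the flag appears to be named `top_k`; please check run_lfnet.py for the actual name and default). A minimal illustrative sketch of the mechanism, not the repo's actual code:

```python
import numpy as np

# Illustrative only: the cap amounts to keeping the K highest-scoring
# keypoints, so raising K admits more (weaker) detections.
def keep_top_k(kpts, scores, k=500):
    order = np.argsort(-scores)[:k]      # indices of the k largest scores
    return kpts[order], scores[order]

kpts = np.random.rand(2000, 2)           # dummy keypoint coordinates
scores = np.random.rand(2000)            # dummy detector scores
kpts_1k, scores_1k = keep_top_k(kpts, scores, k=1000)   # raised cap
```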
I am doing research on image retrieval and find your method really innovative. But when I evaluate the released model on the ROxford5k benchmark (Medium protocol), extracting 1000 keypoints and applying RANSAC after matching, the metrics are very low: mAP is only 26.7 and mP@10 is 50.14.
Is this a reasonable result, or did I do something wrong?
Easier queries seem right, but slightly harder queries with viewpoint changes turn out wrong.
Thanks~
I understand that support is not provided for training; however, this is more an implementation query than a training query.
In building the network, here:
Line 246 in 12ba6eb
what are the last four parameters, thetas1, thetas2, inv_thetas1, and inv_thetas2? According to the SfM data loader, as shown here:
lf-net-release/mydatasets/sfmdataset.py
Line 590 in 12ba6eb
it does not return any values such as thetas1. Quite simply, what are those parameters, and where is the code that produces them?
Thanks
Line 197 in ab43651
In the code, the gradients of the top-K pixels are stopped, but the paper states the opposite.
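For readers following this issue, "stopping the gradients of the top-K pixels" typically looks like the sketch below (my own illustration, not the repo's code); the question is whether `tf.stop_gradient` should wrap the selected top-K responses, as the code does, or the non-selected ones, as the paper describes:

```python
import tensorflow as tf

score_map = tf.random.uniform([1, 64, 64, 1])  # dummy detector scores
flat = tf.reshape(score_map, [1, -1])

# Binary mask over the K highest-scoring pixels.
_, topk_idx = tf.nn.top_k(flat, k=500)
mask = tf.reduce_sum(tf.one_hot(topk_idx, tf.shape(flat)[1]), axis=1)

# The variant the issue describes: gradients are blocked through the
# top-K scores, so only the non-top-K pixels receive gradient.
out = tf.stop_gradient(flat * mask) + flat * (1.0 - mask)
```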
When I ran run_lfnet.py as specified in the GitHub instructions, I got the following error:
```
Found 1179 images...
2%|██▊ | 20/1179 [01:32<1:29:44, 4.65s/it]
Traceback (most recent call last):
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,412,464] = 360 is not in [0, 360)
[[{{node GatherV2}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "run_lfnet.py", line 227, in <module>
main(config)
File "run_lfnet.py", line 151, in main
outs = sess.run(fetch_dict, feed_dict=feed_dict)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,412,464] = 360 is not in [0, 360)
[[node GatherV2 (defined at /data/userdata/u.rajkumar/lf-net-release/det_tools.py:156) ]]

Caused by op 'GatherV2', defined at:
File "run_lfnet.py", line 227, in <module>
main(config)
File "run_lfnet.py", line 82, in main
ops = build_networks(config, photo_ph, is_training)
File "run_lfnet.py", line 55, in build_networks
degree_maps, _ = get_degree_maps(ori_maps) # degree (rgb psuedo color code)
File "/data/userdata/u.rajkumar/lf-net-release/det_tools.py", line 156, in get_degree_maps
degree_maps = tf.gather(angle2rgb, degree_maps[...,0])
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/util/dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 3273, in gather
return gen_array_ops.gather_v2(params, indices, axis, name=name)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 3748, in gather_v2
"GatherV2", params=params, indices=indices, axis=axis, name=name)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): indices[0,412,464] = 360 is not in [0, 360)
[[node GatherV2 (defined at /data/userdata/u.rajkumar/lf-net-release/det_tools.py:156) ]]
```
Everything seems to start correctly: the code finds all the images and makes progress to 2%, then it aborts with the error above. The dataset is the sacre_coeur dataset, downloaded as-is with no modifications.
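If it is useful to others hitting this: the crash looks like an off-by-one at the orientation wrap-around, where a degree index of exactly 360 falls outside the 360-entry lookup table in `get_degree_maps` (det_tools.py:156). A sketch of a possible workaround under that reading (my guess, not an official fix), wrapping the index back into range before the gather:

```python
import tensorflow as tf

# Standalone illustration: an index of exactly 360 is out of range
# for a 360-row lookup table, which is what the error reports.
angle2rgb = tf.random.uniform([360, 3])      # stand-in for the color table
degree_idx = tf.constant([[0, 359, 360]])    # 360 reproduces the failure

# Workaround: wrap indices back into [0, 360) before gathering.
safe_idx = tf.math.floormod(degree_idx, 360)
rgb = tf.gather(angle2rgb, safe_idx)         # no InvalidArgumentError
```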
Sorry, but I ran into a problem when running the code in Docker:
no file is found under the directory "/home".
Looking forward to your reply. Thanks.
I downloaded the pretrained models and put them in /release/models/.
However, when running run_lfnet.py, /release/models/outdoor/config.pkl cannot be opened correctly; see the following:
```
Connected to pydev debugger (build 171.3780.115)
Traceback (most recent call last):
File "/home/jue/pycharm-community-2017.1/helpers/pydev/pydevd.py", line 1578, in
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/jue/pycharm-community-2017.1/helpers/pydev/pydevd.py", line 1015, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/jue/Documents/LFNet_mine/run_lfnet.py", line 214, in
raise ValueError('Fail to open {}'.format(config_path))
ValueError: Fail to open ./release/models/outdoor/config.pkl
```
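The "Fail to open" message is raised by run_lfnet.py itself (line 214 in the trace), so the underlying exception is hidden. A small diagnostic sketch of my own, not part of the repo, that surfaces the real error; a wrong relative path or a Python 2 vs. 3 pickle mismatch are the usual suspects:

```python
import os
import pickle

config_path = './release/models/outdoor/config.pkl'
print('exists:', os.path.exists(config_path))   # rule out a bad relative path

try:
    with open(config_path, 'rb') as f:
        config = pickle.load(f)                 # surfaces the real exception
    print(vars(config))
except Exception as e:
    print(type(e).__name__, e)                  # a UnicodeDecodeError hints at
                                                # a Py2-written pickle; retry
                                                # with encoding='latin1'
```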
On the first call to inverse_warp_view_2_to_1, I visualized heatmaps1w and it is all black, while heatmaps2w is normal.
I would like to know whether there is a bug here, because there is also an inverse_warp_view_1_to_2, but it is commented out.
Has anyone met this problem? I will read the code carefully.
I am trying to extract features from a PNG image, but I got this error:
`raise ValueError('Miss finding argument: unparsed={}\n'.format(unparsed))`
After getting past that, I hit a new problem: I always get the same number of keypoints, 500, for different images.
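For context, that ValueError is the script rejecting command-line flags it does not recognize, a common argparse pattern; the sketch below assumes run_lfnet.py does something similar (the `--in_dir` flag here is illustrative). The fixed 500 keypoints afterwards is likely just the default top-K cap discussed in other issues.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--in_dir', type=str, default='images')  # illustrative flag

# parse_known_args() returns (known, leftovers); the script refuses
# to run if any flag was left unparsed, e.g. a misspelled option.
config, unparsed = parser.parse_known_args()
if len(unparsed) > 0:
    raise ValueError('Miss finding argument: unparsed={}\n'.format(unparsed))
```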
Hi,
We are unable to download your pre-trained model and sequences.
Thanks for releasing the code for this amazing work!
I have a query regarding the execution time on the test images: how long does the code take to detect and describe features for a single image? The paper claims that feature extraction can be performed at 25 fps on VGA frames.
I tried running the code on Google Colaboratory (due to lack of access to a GPU at this time) on the given examples, as instructed in the README, and each image takes about 1.7 seconds to process. Admittedly, the GPU provided is not as good as the Titan X Pascal mentioned in the paper, but is such a huge difference to be expected?
Is the quoted FPS for feature extraction alone, not extraction plus description? If so, what FPS do you expect for the entire pipeline on a single image?
Thanks!
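For anyone measuring this themselves, a minimal timing sketch (my own, reusing the `sess`/`fetch_dict`/`feed_dict` names that appear in run_lfnet.py's main loop): discard the first run, since graph finalization and CUDA initialization dominate it, and average over many frames.

```python
import time

# Warm-up: the first sess.run pays one-off graph/CUDA setup costs.
sess.run(fetch_dict, feed_dict=feed_dict)

n = 100
t0 = time.perf_counter()
for _ in range(n):
    sess.run(fetch_dict, feed_dict=feed_dict)  # extraction + description
dt = (time.perf_counter() - t0) / n
print('%.1f ms/frame (%.1f fps)' % (1000 * dt, 1.0 / dt))
```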
Can you please provide a framework or a code snippet that would allow one to benchmark LF-Net on HPatches? For example, how does one feed patches directly into the descriptor portion of LF-Net, in particular to recreate some of the results presented in the paper?
We are currently not returning the keypoint scores, which can be annoying when plugging LF-Net into other benchmarks. Can we also return them? I've seen in
Line 203 in 53c579c
that we might want to get them there as well.
Hi @kmyid,
Thanks for sharing your excellent work!
It seems that the links to the two important data files mentioned in the caption are dead. Could you please fix them?
Thanks!
Hello, I ran your code successfully, even on my own datasets, and obtained many keypoints and their corresponding descriptors. As in your paper, I want to establish correspondences between two images, but you emphasize that the ratio test cannot be used. So I want to ask: is there another way to match the descriptors, i.e., how do you filter good matches?
Thanks
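A common alternative when the ratio test is off the table is mutual nearest-neighbor matching, optionally followed by RANSAC. A minimal OpenCV sketch, assuming `desc1`/`desc2` are the float32 descriptor arrays LF-Net writes out (a generic recipe, not the authors' own matcher; the random arrays are stand-ins):

```python
import numpy as np
import cv2

# desc1, desc2: (N, D) float32 LF-Net descriptors for the two images.
desc1 = np.random.rand(500, 256).astype(np.float32)   # stand-in data
desc2 = np.random.rand(500, 256).astype(np.float32)

# crossCheck=True keeps only mutual nearest neighbors: i matches j
# only if j's nearest descriptor is also i.
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(desc1, desc2)
matches = sorted(matches, key=lambda m: m.distance)
print(len(matches), 'mutual-NN matches')
```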
Hi,
I am aware that the authors have explicitly confirmed that no support is provided for training.
I am just reaching out to others who are also looking to train LF-Net.
Anyone who can share their experience with training the network would be a great help.
Thank you.
Hi,
I was able to run the test and extract features from my images; now I would like to do matching. Is running the demo notebook the only way to do so?
If so, what do I need to change in order to run it on my own data? (I have depth images as well.)
Thank you
How can we find matches if there is no depth data? I analyzed demo.ipynb, and it seems matching can be done only if we have depth data.
However, I tried extracting keypoints and descriptors for two similar images from the outdoors dataset, without any additional data, and ran both brute-force and FLANN matchers. Both give incorrect results.
Am I right about the depth-data requirement, or am I doing something wrong?
Thanks!
```
NameError                                 Traceback (most recent call last)
<ipython-input> in <module>()
      1 tf.reset_default_graph()
      2 batch_size = 1 # fixed
----> 3 data_loader = RawSfMDataset(longer_edge=640)
      4
      5 dataset = data_loader.get_dataset(config.dpt_dir, config.img_dir,

NameError: name 'RawSfMDataset' is not defined
```
I checked demo.ipynb, and there is no cell In [4]. How can I solve this problem? Thanks.