imatge-upc / retrieval-2016-deepvision
Faster R-CNN features for Instance Search
Home Page: http://imatge-upc.github.io/retrieval-2016-deepvision/
License: MIT License
In many places in the code, normalize(feats) is called but the return value is never used, so the feats array remains un-normalized: the function does not operate in place as it is currently called.
Use feats = normalize(feats) instead.
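A minimal sketch of the behavior being reported: sklearn's normalize returns a new, row-wise L2-normalized copy and leaves its argument untouched, so the return value must be assigned back.

```python
import numpy as np
from sklearn.preprocessing import normalize

feats = np.array([[3.0, 4.0], [6.0, 8.0]])

# Calling normalize() on its own does nothing useful: feats is unchanged.
normalize(feats)
print(np.linalg.norm(feats, axis=1))   # still [5., 10.]

# Assigning the return value is what actually normalizes the features.
feats = normalize(feats)
print(np.linalg.norm(feats, axis=1))   # now [1., 1.]
```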
Hi!
I'm interested in your excellent work.
I'm trying to list your method as a comparison in my paper. However, the download link provided in fetch_model.sh seems to be invalid now. Could you please update it? Thanks a lot!
i.e. when self.stage is 'rerank2nd', the vector returned by self.get_query_local_feat(frames_sorted[i_qe], locations_sorted[i_qe]) must be reshaped to (-1,1) before it is added to query_feats.
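A small numpy sketch of why the reshape matters (the (512, 1) accumulator shape here is an assumption for illustration): adding a flat (512,) vector to a column-shaped array broadcasts to a (512, 512) matrix instead of accumulating.

```python
import numpy as np

# Hypothetical shapes: query_feats accumulates one 512-d descriptor per
# column, while the local-feature helper returns a flat (512,) vector.
query_feats = np.zeros((512, 1))
local_feat = np.random.rand(512)

# Adding the flat vector broadcasts to (512, 512) -- not an accumulation.
wrong = query_feats + local_feat
print(wrong.shape)                     # (512, 512)

# Reshaping to a column first keeps the accumulator's shape.
query_feats += local_feat.reshape(-1, 1)
print(query_feats.shape)               # (512, 1)
```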
Hi
Do you have an example of what the params.py file should look like for the ZF model, rather than the VGG model? I could probably track down the appropriate prototxt file myself, but I want to make sure I'm not making some mistake.
Thanks!
Ben
Dear Amaia,
I read your paper “Faster R-CNN Features for Instance Search” and find it great work. We are interested in trying out the method and applying it in our research.
I downloaded the code from GitHub but can only reproduce the best performance in Table 1 for IPA-max (55.9%). We cannot reproduce the fine-tuned Faster R-CNN model; would you please share your fine-tuned model, or the model pre-trained on the Microsoft COCO dataset, so that we can fine-tune it ourselves?
Many thanks,
Lianli Gao
Could I use my own dataset to extract features?
Hi, while running features.py I keep getting this error:
File "/home/Athma/Downloads/InstanceSearch/retrieval-2016-deepvision/test.py", line 76, in _get_rois_blob
rois, levels = _project_im_rois(im_rois, im_scale_factors)
File "/home/Athma/Downloads/InstanceSearch/retrieval-2016-deepvision/test.py", line 91, in _project_im_rois
im_rois = im_rois.astype(np.float, copy=False)
AttributeError: 'NoneType' object has no attribute 'astype'
My Faster R-CNN Caffe installation seems to be working fine, and the network loads in Caffe. This is where the issue appears:
I0509 13:05:38.588491 22199 net.cpp:270] This network produces output bbox_pred
I0509 13:05:38.588496 22199 net.cpp:270] This network produces output cls_prob
I0509 13:05:38.588518 22199 net.cpp:283] Network initialization done.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 548317115
I0509 13:05:39.171497 22199 net.cpp:816] Ignoring source layer data
I0509 13:05:39.247697 22199 net.cpp:816] Ignoring source layer loss_cls
I0509 13:05:39.247720 22199 net.cpp:816] Ignoring source layer loss_bbox
I0509 13:05:39.249155 22199 net.cpp:816] Ignoring source layer silence_rpn_cls_score
I0509 13:05:39.249172 22199 net.cpp:816] Ignoring source layer silence_rpn_bbox_pred
Extracting database features...
--Traceback ERROR---
Hi, when I run the .py file with params['dataset'] = 'oxford' set in my params.py, I get the error message below. How can I fix it? Thanks:
Traceback (most recent call last):
File "ranker.py", line 169, in
R.rank()
File "ranker.py", line 149, in rank
self.get_query_vectors()
File "ranker.py", line 74, in get_query_vectors
self.query_feats[i,:] = self.db_feats[np.where(np.array(self.database_list) == query_file)]
ValueError: could not broadcast input array from shape (0,512) into shape (512)
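For what it's worth, this error typically means that np.where found no entry of database_list equal to query_file, so the indexed slice is empty. A small sketch with made-up file names reproduces the shape mismatch:

```python
import numpy as np

# Illustrative stand-ins: db_feats holds one 512-d row per database image.
database_list = ['img_001.jpg', 'img_002.jpg']
db_feats = np.random.rand(2, 512)
query_file = 'img_999.jpg'   # absent, e.g. because a download failed

matches = np.where(np.array(database_list) == query_file)[0]
# An empty match yields a (0, 512) slice, which cannot be broadcast into
# the (512,) row of query_feats -- hence the ValueError above.
print(db_feats[matches].shape)   # (0, 512)
```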
I have successfully run the following:
But when I run features.py, I get the following error:
Traceback (most recent call last):
File "features.py", line 119, in <module>
learn_transform(params,feats)
File "features.py", line 23, in learn_transform
feats = normalize(feats)
File "/usr/lib/python2.7/dist-packages/sklearn/preprocessing/data.py", line 1280, in normalize
estimator='the normalize function', dtype=FLOAT_DTYPES)
File "/usr/lib/python2.7/dist-packages/sklearn/utils/validation.py", line 407, in check_array
context))
ValueError: Found array with 0 sample(s) (shape=(0, 512)) while a minimum of 1 is required by the normalize function.
I was curious if one of the creators might be able to give a little insight into the choice of features used.
It's my understanding that the Faster R-CNN roi_pooling layer max-pools the previous conv_* layer over a 7x7 grid, yielding a (n_boxes, 512, 7, 7) output. You take this output and either max- or sum-pool the 7x7 grid to get a 512-d vector.
I was wondering if you had tried your current approaches using the fc6 layer that maps (n_boxes, 512, 7, 7) -> (n_boxes, 4096). Or, alternatively, flattening (n_boxes, 512, 7, 7) -> (n_boxes, 512 * 7 * 7) -- this would be a big vector, but the 7x7 grid is an roi_pooling parameter that could be reduced to 2x2 or 3x3.
Just curious to see if you (or others) had experimented at all with these different featurizations, or if there are reasons to think they would not perform well.
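For reference, the aggregations described above can be sketched in a few lines of numpy (shapes follow the question; the box count of 10 is arbitrary):

```python
import numpy as np

# RoI-pooled conv5 activations: (n_boxes, 512, 7, 7).
roi_feats = np.random.rand(10, 512, 7, 7)

# Max-pooling the 7x7 grid gives one 512-d descriptor per box.
max_pooled = roi_feats.max(axis=(2, 3))
print(max_pooled.shape)   # (10, 512)

# Sum-pooling is the alternative aggregation over the same grid.
sum_pooled = roi_feats.sum(axis=(2, 3))
print(sum_pooled.shape)   # (10, 512)

# The flattened alternative would be 512 * 7 * 7 = 25088-d per box.
flat = roi_feats.reshape(roi_feats.shape[0], -1)
print(flat.shape)         # (10, 25088)
```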
Thanks!
Ben
# PCA MODEL - use paris for oxford data and vice versa
if self.dataset is 'paris':
self.pca = pickle.load(open(params['pca_model'] + '_oxford.pkl', 'rb'))
elif self.dataset is 'oxford':
self.pca = pickle.load(open(params['pca_model'] + '_paris.pkl', 'rb'))
The dataset names appear to be exchanged here.
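However the naming question is resolved, comparing strings with "is" relies on CPython interning and can silently fail; a sketch of the same branch using == instead (load_pca is a hypothetical helper that keeps the cross-dataset loading exactly as written in the snippet above):

```python
import pickle

def load_pca(dataset, pca_model_path):
    # PCA MODEL - use paris for oxford data and vice versa, as in the
    # original comment. Uses == rather than "is": identity comparison
    # on strings is an implementation detail, not a guarantee.
    other = 'oxford' if dataset == 'paris' else 'paris'
    with open(pca_model_path + '_' + other + '.pkl', 'rb') as f:
        return pickle.load(f)
```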
Hello~
When the database is paris and I run ranker.py, I get the following error:
wh@rsliu-X10DAi:~/FR-for-instance/retrieval-2016-deepvision-master$ python ranker.py
Applying PCA
Traceback (most recent call last):
File "ranker.py", line 165, in
R.rank()
File "ranker.py", line 146, in rank
self.get_query_vectors()
File "ranker.py", line 71, in get_query_vectors
self.query_feats[i,:] = self.db_feats[np.where(np.array(self.database_list) == query_file)]
ValueError: could not broadcast input array from shape (0,512) into shape (512)
It is strange because when the database is oxford, it works fine.
I don't know how to fix it; please help me.
Many thanks.
I ran the scripts get_oxford.sh and get_paris.sh to download the datasets, but I found only 1059 images for oxford and 2794 images for paris, while your paper reports 5063 images for oxford and 6412 for paris.
By the way, when I run ranker.py I get ValueError: could not broadcast input array from shape (0,512) into shape (512); I guess it is caused by the missing images.
Thanks.
Dear Amaia and Xavier, I am very interested in your work “Faster R-CNN Features for Instance Search”, but downloading the Faster R-CNN models by running "data/models/fetch_models.sh" (https://github.com/imatge-upc/retrieval-2016-deepvision) fails. Could you please provide a new download link? Thank you very much.
Thanks for your great work. I have a question to ask.
It seems that the datasets used for fine-tuning (oxford/paris/ins2013) are also the datasets you query on. Am I right? I'm new to image retrieval and not sure whether this is a convention, but it seems unfair if the "training set" and the "test set" are the same.