abbypa / NNProject_DeepMask
Deep Neural Network for object segmentation.
I am running on Ubuntu 16.04. I get the following error when using your CreateVggGraphWeights.py helper script to convert the weight file to a Graph model:
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2840)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2798)
File "/home/senthil/envs/dp_mask/local/lib/python2.7/site-packages/h5py/_hl/attrs.py", line 58, in __getitem__
attr = h5a.open(self._id, self._e(name))
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2840)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2798)
File "h5py/h5a.pyx", line 77, in h5py.h5a.open (/tmp/pip-nCYoKW-build/h5py/h5a.c:2337)
KeyError: "Can't open attribute (Can't locate attribute: 'nb_layers')"
I searched for two days and found that it has something to do with a compatibility issue between Keras 0.3.1 and the provided weight file. I tried all the VGG weight files for Theano from this link:
https://github.com/fchollet/deep-learning-models/releases
Still, I wasn't able to resolve the issue.
I would be thankful if you could help me through this.
I have attached the pip freeze output of my environment for your perusal.
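For anyone hitting the same KeyError: a quick way to check which layout a weight file uses is to look for the root-level 'nb_layers' attribute that old-style Keras weight files carry. A diagnostic sketch, assuming h5py is installed (the function name is mine, not from the repo):

```python
import h5py

# Old-style Keras VGG weight files store an 'nb_layers' attribute at
# the HDF5 root; files from the deep-learning-models releases use a
# newer layout without it, which is exactly what produces the
# KeyError above.
def has_old_keras_format(path):
    with h5py.File(path, 'r') as f:
        return 'nb_layers' in f.attrs
```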
I am doing research on semantic segmentation using convolutional neural networks. After reading your paper, I think your code is very important for my work, but my computer runs macOS. Do you know how to make the code run on macOS?
Hi,
All the input images identified result in segments that are either too small or too big for the image. None of the image/segmentation pairs gets past that check.
Output of stats:
imgs found: 164
imgs with illegal annotations: 12
imgs with legal annotations: 152
seg too big: 385
seg too small: 495
seg too close to the edges: 2
seg success: 0
everything else is 0...
What could be the issue?
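To make the counts above concrete: the filter boils down to a strict size check, where a segment only counts as a success if its larger bounding-box dimension is exactly the canonical size. A minimal sketch of that logic (the function name and structure are illustrative, not the repo's actual code):

```python
# Illustrative sketch of the strict size filter: a segment whose
# larger dimension is not exactly the canonical 128 px is rejected as
# 'too big' or 'too small', which can easily drive 'seg success' to
# zero on a small image set.
def classify_segment(bbox_w, bbox_h, canonical_size=128):
    max_dim = max(bbox_w, bbox_h)
    if max_dim > canonical_size:
        return 'too big'
    if max_dim < canonical_size:
        return 'too small'
    return 'success'
```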
python CreateVggGraphWeights.py
results in the following error:
Traceback (most recent call last):
File "CreateVggGraphWeights.py", line 144, in <module>
graph = VGG_16_graph()
File "CreateVggGraphWeights.py", line 99, in VGG_16_graph
model.add_node(Flatten(), name='27', input='26')
File "/home/immanuel/programs/python2-packages/keras/keras/legacy/models.py", line 169, in add_node
layer.add_inbound_node(self._graph_nodes[input])
File "/home/immanuel/programs/python2-packages/keras/keras/engine/topology.py", line 572, in add_inbound_node
Node.create_node(self, inbound_layers, node_indices, tensor_indices)
File "/home/immanuel/programs/python2-packages/keras/keras/engine/topology.py", line 152, in create_node
output_shapes = to_list(outbound_layer.get_output_shape_for(input_shapes[0]))
File "/home/immanuel/programs/python2-packages/keras/keras/layers/core.py", line 402, in get_output_shape_for
'(got ' + str(input_shape[1:]) + '. '
Exception: The shape of the input to "Flatten" is not fully defined (got (0, 7, 512). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model.
Hi,
I tried to first run EndToEnd.py to see how the project works, but there are many preset paths, like Resources and Predictions, containing files that are not originally in the repository. Does that mean I have to generate those paths and files in advance myself?
For example, to obtain 'Resources/vgg16_graph_weights.h5', I need to run HelperScripts/CreateVggGraphWeights.py. However, I hit a bug there saying "'Graph' object has no attribute 'layers'", occurring at line 150 (graph.set_weights(model.get_weights())).
Can anyone help? Thanks~
"downsample module has been moved to the theano.tensor.signal.pool module.")
creating graph model...
creating sequential model...
Traceback (most recent call last):
File "CreateVggGraphWeights.py", line 147, in
model = VGG_16('..\Resources\vgg16_weights.h5')
File "CreateVggGraphWeights.py", line 55, in VGG_16
model.load_weights(weights_path)
File "build/bdist.linux-x86_64/egg/keras/models.py", line 781, in load_weights
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (-------src-dir-------/h5py/_objects.c:2582)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (-------src-dir-------/h5py/_objects.c:2541)
File "/home/sjtu/anaconda2/lib/python2.7/site-packages/h5py/_hl/attrs.py", line 58, in __getitem__
attr = h5a.open(self._id, self._e(name))
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (-------src-dir-------/h5py/_objects.c:2582)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (-------src-dir-------/h5py/_objects.c:2541)
File "h5py/h5a.pyx", line 77, in h5py.h5a.open (-------src-dir-------/h5py/h5a.c:2086)
KeyError: "Can't open attribute (Can't locate attribute: 'nb_layers')"
I'm getting error - Unable to open object (Object 'graph' doesn't exist)
$ python EndToEnd.py
Using Theano backend.
/usr/local/lib/python2.7/site-packages/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
"downsample module has been moved to the theano.tensor.signal.pool module.")
2016-10-14 12:53:30.489447: creating net...
Traceback (most recent call last):
File "EndToEnd.py", line 208, in <module>
main()
File "EndToEnd.py", line 172, in main
graph = create_net()
File "EndToEnd.py", line 101, in create_net
net = net_generator.create_full_net()
File "/Users/skurilyak/Documents/dev/testing/abbypa/NNProject_DeepMask/FullNetGenerator.py", line 12, in create_full_net
net = vgg_provider.get_vgg_partial_graph(weights_path=self.weights_path, with_output=False)
File "/Users/skurilyak/Documents/dev/testing/abbypa/NNProject_DeepMask/VggDNetGraphProvider.py", line 60, in get_vgg_partial_graph
model = self.get_vgg_full_graph(weights_path, False)
File "/Users/skurilyak/Documents/dev/testing/abbypa/NNProject_DeepMask/VggDNetGraphProvider.py", line 56, in get_vgg_full_graph
model.load_weights(weights_path)
File "/usr/local/lib/python2.7/site-packages/keras/models.py", line 1230, in load_weights
g = f['graph']
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2687)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2645)
File "/usr/local/lib/python2.7/site-packages/h5py/_hl/group.py", line 166, in __getitem__
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2687)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2645)
File "h5py/h5o.pyx", line 190, in h5py.h5o.open (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/h5o.c:3573)
KeyError: "Unable to open object (Object 'graph' doesn't exist)"
Any ideas?
Maybe it's related to a Keras issue: loading the weights of a Sequential model into a Graph model?
It looks like only masks whose larger dimension is exactly 128 (as they originally exist in COCO) are taken as canonical positive examples: https://github.com/abbypa/NNProject_DeepMask/blob/master/ExamplesGenerator.py#L157
When I run it, this results in under 30K positive examples. Given 80K COCO images, each with many segments, this seems like less data than I'd expect.
Looking at the original DeepMask data sampler, https://github.com/facebookresearch/deepmask/blob/master/DataSampler.lua#L80, it looks like they choose canonicalized versions of objects that are scaled appropriately.
(P.S. I realize that the paper reads "During training, an input patch x_k is considered to contain a 'canonical' positive example if an object is precisely centered in the patch and has maximal dimension equal to exactly 128 pixels", but it fails to mention whether objects of a different original size are canonicalized. Given that at inference they seem to pass many scales of the same image, https://github.com/facebookresearch/deepmask/blob/master/InferDeepMask.lua#L59, it seems likely this is so that e.g. a 64 px object can be recognized in its canonicalized form when the image is upsampled.)
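To sketch what canonicalization would mean here (this is my reading of the original sampler, not code from this repo): instead of keeping only objects that are natively 128 px, rescale every object so its larger dimension becomes exactly 128.

```python
# Hypothetical canonicalization sketch: compute the scale factor that
# maps an object's larger bounding-box dimension onto the canonical
# 128 px, so e.g. a 64 px object is upsampled 2x rather than discarded.
def canonical_scale(bbox_w, bbox_h, target=128):
    return float(target) / max(bbox_w, bbox_h)
```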
Can I get the final weights of the model, so that I can test the complete system without training?
Any chance you could give this a license? One good option would be the MIT license (https://tldrlegal.com/license/mit-license), which is what Keras itself uses. In fact, they might be interested in a pull request of this code to the official keras-contrib repository: https://github.com/farizrahman4u/keras-contrib
Also, what kind of IoU results did you get with the models you define in this repository?
Thanks!
Hi,
Is there a way we can get the intermediate output of the net which shows the masks on the original image representing object proposals? This would help me generate object proposals and I would like to use this on my own classifier net, trained for specific objects.
Thanks.
Hi,
Two questions,
How can I interpret the results?
How can I see which category the network assigns to a given input?
Thanks,
ImportError: No module named CocoUtils
There is no cocoApi 1.0.1 version.
How can I fine-tune your model?
I don't have sufficient data to retrain your model from scratch.
I want to fine-tune your model on my data, which has only two classes.
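The usual recipe for this kind of fine-tuning is to load the pre-trained weights, freeze the shared trunk, and retrain only the output heads on the new data. A toy sketch of the freeze/unfreeze bookkeeping (plain stand-in objects with illustrative names, not this repo's actual Keras API; real Keras layers expose a `trainable` flag in the same spirit):

```python
# Minimal stand-in for a layer with a trainable flag.
class Layer(object):
    def __init__(self, name):
        self.name = name
        self.trainable = True

# Hypothetical layer list: a 13-conv trunk plus two output heads.
layers = [Layer('conv%d' % i) for i in range(1, 14)]
layers += [Layer('seg_head'), Layer('score_head')]

for layer in layers:
    layer.trainable = False       # freeze the pre-trained trunk
for layer in layers[-2:]:
    layer.trainable = True        # retrain only the two heads
```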
Hi,
My last question is how to show the detection results on the image; the raw scores are not a visual way to present them.
How can I use predictions['seg_output'] to draw the output image?
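One way to visualize a prediction, as a sketch: threshold the mask scores and blend a color into the image where the mask is positive. I'm assuming here that predictions['seg_output'] is a flat vector of per-pixel scores for a square mask and that you resize the binary mask to the image size first; the names, shape, and threshold are guesses, not the repo's documented API.

```python
import numpy as np

def mask_to_binary(seg_output, side, threshold=0.0):
    # Reshape the flat score vector into a square mask and threshold it.
    scores = np.asarray(seg_output, dtype=float).reshape(side, side)
    return (scores > threshold).astype(np.uint8)

def overlay(image_rgb, mask, color=(255, 0, 0), alpha=0.5):
    # Blend `color` into the image wherever mask == 1.
    out = image_rgb.astype(float).copy()
    for c in range(3):
        out[..., c] = np.where(mask == 1,
                               (1 - alpha) * out[..., c] + alpha * color[c],
                               out[..., c])
    return out.astype(np.uint8)
```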
What would be needed to use this code with a tensorflow backend?
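For reference, the backend itself is selected in Keras's config file; the harder part is that the provided weights are stored in Theano dimension ordering, so that setting would have to stay Theano-style even with a TensorFlow backend. A sketch of the relevant ~/.keras/keras.json fields for Keras 1.x (treat the exact keys as version-dependent):

```json
{
    "backend": "tensorflow",
    "image_dim_ordering": "th"
}
```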
I am running FullNetGenerator to test model loading, and I run into the following problem:
Using gpu device 1: Tesla K80 (CNMeM is disabled, cuDNN 4007)
Traceback (most recent call last):
File "FullNetGenerator.py", line 40, in
fn = fng.create_full_net()
File "FullNetGenerator.py", line 12, in create_full_net
net = vgg_provider.get_vgg_partial_graph(weights_path=self.weights_path, with_output=False)
File "/home/zaikun/dl/statefarm/NNProject_DeepMask/VggDNetGraphProvider.py", line 60, in get_vgg_partial_graph
model = self.get_vgg_full_graph(weights_path, False)
File "/home/zaikun/dl/statefarm/NNProject_DeepMask/VggDNetGraphProvider.py", line 56, in get_vgg_full_graph
model.load_weights(weights_path)
File "/users/zaikun/anaconda2/lib/python2.7/site-packages/keras/legacy/models.py", line 775, in load_weights
super(Graph, self).load_weights(fname)
File "/users/zaikun/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 2309, in load_weights
str(len(flattened_layers)) + '.')
Exception: You are trying to load a weight file containing 37 layers into a model with 0.
Question: could I run it on Ubuntu?
Hi guys,
I got this error when I try to do the training. The traceback is shown below:
Error allocating 411041792 bytes of device memory < out of memory >. Driver report 169512960 bytes free and 2147483648 bytes total
Have you encountered this problem before? May I ask what size your GPU memory is?
Regards,
Zhengxu
Besides reading the source code, are there convenient instructions for both using the model and training on custom data?
Thank you !
I followed the steps given in the readme and got some examples generated through ExamplesGenerator.py. However, it seems that EndToEnd.py can't find them. Sifting through the code, I could gather that ExamplesGenerator.py is writing to Results/pos-train and Results/neg-train, whereas EndToEnd.py is trying to read from Predictions/train. Is there a missing step in the readme where we need to move files to the correct folder, or is there some other error?
(For details, the error I'm getting in EndToEnd.py is an empty list when trying to stack images.)
P.S. Thanks a lot for putting in the effort to make this code available and for pointing to the pre-trained models.
I tried to run your code. However, when running CreateVggGraphWeights.py, I got the error "AttributeError: 'Graph' object has no attribute 'layers'" at the line `for layer in self.layers:`.
Try to run python HelperScripts/CreateVggGraphWeights.py
Got error:
Traceback (most recent call last):
File "HelperScripts/CreateVggGraphWeights.py", line 150, in
graph.set_weights(model.get_weights())
File "/users/zaikun/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 841, in set_weights
params = self.trainable_weights + self.non_trainable_weights
File "/users/zaikun/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 1840, in trainable_weights
for layer in self.layers: