
nnproject_deepmask's Issues

I followed all your installation requirements, but I am not able to load the vgg_16 model through the Keras API.

I am running on Ubuntu 16.04. I am getting the following error when using your CreateVggGraphWeights.py helper script to convert the weight file to a Graph model.

File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2840)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2798)
File "/home/senthil/envs/dp_mask/local/lib/python2.7/site-packages/h5py/_hl/attrs.py", line 58, in getitem
attr = h5a.open(self._id, self._e(name))
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2840)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2798)
File "h5py/h5a.pyx", line 77, in h5py.h5a.open (/tmp/pip-nCYoKW-build/h5py/h5a.c:2337)
KeyError: "Can't open attribute (Can't locate attribute: 'nb_layers')"

I searched for two days and learned that it has something to do with a compatibility problem between Keras 0.3.1 and the provided weight file. I tried all the VGG weight files for Theano from this link:
https://github.com/fchollet/deep-learning-models/releases
Still, I wasn't able to resolve the issue.
I would be thankful if you could help me through this.
I have attached the pip freeze of my environment for your perusal.
[screenshot from 2017-06-22 22-14-04: pip freeze output]
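
A quick way to check which format a weights file uses is to inspect its top-level HDF5 attributes. Below is a minimal diagnostic sketch (the file path is a placeholder): Keras 0.3.x's Sequential loader reads an nb_layers file attribute, while files saved by Keras 1.x, including the deep-learning-models releases linked above, carry a layer_names attribute instead, which is exactly why the loader raises KeyError on 'nb_layers'.

    # Diagnostic sketch: which Keras weight-file format is this? (h5py assumed)
    import h5py

    with h5py.File('vgg16_weights.h5', 'r') as f:  # placeholder path
        print(dict(f.attrs))
        print("old format (Keras 0.3.x, 'nb_layers'):", 'nb_layers' in f.attrs)
        print("new format (Keras 1.x, 'layer_names'):", 'layer_names' in f.attrs)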

How to run on Mac OS?

I am doing research on semantic segmentation using convolutional neural networks. After reading your paper, I think your code is very important for my work, but my computer runs Mac OS. Do you know how to make the code run on Mac OS?

ExamplesGenerator not generating examples

Hi,
All of the identified input images result in segments that are either too small or too big for the image. No image/segmentation pair gets past that check.
Output of stats:
imgs found: 164
imgs with illegal annotations: 12
imgs with legal annotations: 152
seg too big: 385
seg too small: 495
seg too close to the edges: 2
seg success: 0

Everything else is 0.
What could be the issue?
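
These counters suggest every segment is failing the size filter. As a hedged reconstruction (not the repo's exact code; see also the "Data preprocessing and object size" issue below), the check counts a segment as a success only when its larger bounding-box side is exactly the canonical 128 px, which makes 'too big' and 'too small' the overwhelmingly likely outcomes:

    # Illustrative reconstruction of the size filter; names are hypothetical.
    def classify_segment(width, height, canonical=128):
        largest = max(width, height)
        if largest > canonical:
            return 'seg too big'
        if largest < canonical:
            return 'seg too small'
        return 'seg success'

    print(classify_segment(200, 90))   # 'seg too big'
    print(classify_segment(60, 128))   # 'seg success'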

Error in creating VGG weights (Python 2.7)

Running python CreateVggGraphWeights.py results in the following error:

Traceback (most recent call last):
  File "CreateVggGraphWeights.py", line 144, in <module>
    graph = VGG_16_graph()
  File "CreateVggGraphWeights.py", line 99, in VGG_16_graph
    model.add_node(Flatten(), name='27', input='26')
  File "/home/immanuel/programs/python2-packages/keras/keras/legacy/models.py", line 169, in add_node
    layer.add_inbound_node(self._graph_nodes[input])
  File "/home/immanuel/programs/python2-packages/keras/keras/engine/topology.py", line 572, in add_inbound_node
    Node.create_node(self, inbound_layers, node_indices, tensor_indices)
  File "/home/immanuel/programs/python2-packages/keras/keras/engine/topology.py", line 152, in create_node
    output_shapes = to_list(outbound_layer.get_output_shape_for(input_shapes[0]))
  File "/home/immanuel/programs/python2-packages/keras/keras/layers/core.py", line 402, in get_output_shape_for
    '(got ' + str(input_shape[1:]) + '. '
Exception: The shape of the input to "Flatten" is not fully defined (got (0, 7, 512). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model.
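
One thing worth checking first (a sketch, assuming Keras 1.x, which the keras/legacy/ paths in the traceback indicate): the reported shape (0, 7, 512) looks channels-last, while this VGG graph expects Theano-style channels-first tensors, so a 'tf' image dim ordering in ~/.keras/keras.json could produce exactly this underdefined Flatten input.

    # Quick backend/ordering check, assuming Keras 1.x.
    from keras import backend as K

    print(K.backend())              # expected: 'theano'
    print(K.image_dim_ordering())   # expected: 'th', i.e. inputs are (3, 224, 224)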

Is EndToEnd.py the entry point for the whole project?

Hi,
I tried to run EndToEnd.py first to see how the project works, but there are many pre-set paths, like Resources and Predictions, containing files that are not originally in the repository. Does that mean I have to generate those paths and files in advance by myself?

For example, to obtain 'Resources/vgg16_graph_weights.h5', I need to run HelperScripts/CreateVggGraphWeights.py. However, I get a bug there saying "'Graph' object has no attribute 'layers'", occurring at line 150 (graph.set_weights(model.get_weights())).

Can anyone help? Thanks!
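
Before digging further, it is worth confirming the Keras version: the readme pins Keras 0.3.1, and "'Graph' object has no attribute 'layers'" is what the legacy Graph wrapper in Keras 1.x raises when set_weights walks Model.layers. A one-line check:

    # This repo targets Keras 0.3.x; 1.x's legacy Graph lacks a `layers` attribute.
    import keras
    print(keras.__version__)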

CreateVggGraphWeights.py error (not graph error)

"downsample module has been moved to the theano.tensor.signal.pool module.")
creating graph model...
creating sequential model...
Traceback (most recent call last):
  File "CreateVggGraphWeights.py", line 147, in <module>
    model = VGG_16('..\Resources\vgg16_weights.h5')
  File "CreateVggGraphWeights.py", line 55, in VGG_16
    model.load_weights(weights_path)
  File "build/bdist.linux-x86_64/egg/keras/models.py", line 781, in load_weights
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (-------src-dir-------/h5py/_objects.c:2582)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (-------src-dir-------/h5py/_objects.c:2541)
  File "/home/sjtu/anaconda2/lib/python2.7/site-packages/h5py/_hl/attrs.py", line 58, in __getitem__
    attr = h5a.open(self._id, self._e(name))
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (-------src-dir-------/h5py/_objects.c:2582)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (-------src-dir-------/h5py/_objects.c:2541)
  File "h5py/h5a.pyx", line 77, in h5py.h5a.open (-------src-dir-------/h5py/h5a.c:2086)
KeyError: "Can't open attribute (Can't locate attribute: 'nb_layers')"

Unable to open object (Object 'graph' doesn't exist)

I'm getting the error: Unable to open object (Object 'graph' doesn't exist)

$ python EndToEnd.py
Using Theano backend.
/usr/local/lib/python2.7/site-packages/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool module.")
2016-10-14 12:53:30.489447: creating net...
Traceback (most recent call last):
  File "EndToEnd.py", line 208, in <module>
    main()
  File "EndToEnd.py", line 172, in main
    graph = create_net()
  File "EndToEnd.py", line 101, in create_net
    net = net_generator.create_full_net()
  File "/Users/skurilyak/Documents/dev/testing/abbypa/NNProject_DeepMask/FullNetGenerator.py", line 12, in create_full_net
    net = vgg_provider.get_vgg_partial_graph(weights_path=self.weights_path, with_output=False)
  File "/Users/skurilyak/Documents/dev/testing/abbypa/NNProject_DeepMask/VggDNetGraphProvider.py", line 60, in get_vgg_partial_graph
    model = self.get_vgg_full_graph(weights_path, False)
  File "/Users/skurilyak/Documents/dev/testing/abbypa/NNProject_DeepMask/VggDNetGraphProvider.py", line 56, in get_vgg_full_graph
    model.load_weights(weights_path)
  File "/usr/local/lib/python2.7/site-packages/keras/models.py", line 1230, in load_weights
    g = f['graph']
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2687)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2645)
  File "/usr/local/lib/python2.7/site-packages/h5py/_hl/group.py", line 166, in __getitem__
    oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2687)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2645)
  File "h5py/h5o.pyx", line 190, in h5py.h5o.open (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/h5o.c:3573)
KeyError: "Unable to open object (Object 'graph' doesn't exist)"

Any ideas?

Maybe it's related to a known Keras issue: loading the weights of a sequential model into a graph model?
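
That guess is consistent with the traceback: the Graph loader reads f['graph'] (models.py line 1230 above), whereas a file saved from a Sequential model stores layer_0, layer_1, ... groups at the top level. A minimal sketch to see which layout the file actually has (the path is assumed from the project layout):

    # List the top-level HDF5 groups: a Graph-saved file has a 'graph' group,
    # a Sequential-saved one has 'layer_0', 'layer_1', ... instead.
    import h5py

    with h5py.File('Resources/vgg16_graph_weights.h5', 'r') as f:
        print(list(f.keys()))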

Train loss and test loss are both NaN

  • Firstly, I downloaded 500 pictures to run ExamplesGenerator.py, then generated the images and masks and copied them into the Predictions test and train directories. When EndToEnd.py finishes its task, train_loss and test_loss are both NaN. Where does this NaN come from: a numeric computation error or some settings error?
  • Secondly, the MSCOCO image set has been downloaded onto the server, but when I run EndToEnd.py, it rapidly uses 80% of the memory and is killed by the system. Is there still some settings error?
  • The server runs CentOS 7.1.1503. MSCOCO has 82783 images in total. ExamplesGenerator.py generates 49740 images in neg-train and 56956 images in pos-train. I copied them all into the directory Predictions/train.
  • Thanks for your code!
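
On the first bullet, a generic sanity check can localize the NaN before blaming the optimizer; the helper below is a sketch (numpy assumed, names hypothetical), since NaN losses commonly come from NaN/Inf pixels, unscaled inputs, or labels outside the expected range. On the second bullet, loading all ~107K generated examples into memory at once would explain the 80% usage; testing on a much smaller subset is a cheap way to confirm.

    import numpy as np

    def check_batch(x, name='batch'):
        # Hypothetical helper: flags the usual culprits behind NaN losses.
        print('%s shape=%s min=%.3f max=%.3f NaNs=%d Infs=%d' % (
            name, x.shape, x.min(), x.max(),
            np.isnan(x).sum(), np.isinf(x).sum()))

    # Usage: run it on whatever arrays are fed to the net, e.g.
    # check_batch(train_images, 'images'); check_batch(train_masks, 'masks')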

Data preprocessing and object size

It looks like only masks whose larger dimension is exactly 128 (as they originally exist in COCO) are taken as canonical positive examples: https://github.com/abbypa/NNProject_DeepMask/blob/master/ExamplesGenerator.py#L157
When I run it, this results in under 30K positive examples. Given 80K COCO images, each with many segments, this seems like less data than I'd expect.

Looking at the original deepmask data sampler https://github.com/facebookresearch/deepmask/blob/master/DataSampler.lua#L80 it looks like they're choosing canonicalized versions of objects that are scaled appropriately.

(PS I realize that the paper reads "During training, an input patch x_k is considered to contain a ‘canonical’ positive example if an object is precisely centered in the patch and has maximal dimension equal to exactly 128 pixels", but it fails to mention whether objects of different original size are canonicalized. Given that at inference it seems they pass many scales of the same image, https://github.com/facebookresearch/deepmask/blob/master/InferDeepMask.lua#L59, it seems likely this is for recognizing e.g. a 64px object in its canonicalized form when it is upsampled.)
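
A sketch of the canonicalization idea described above (not the repo's code; Pillow assumed, and the bbox convention is hypothetical): instead of keeping only objects whose max dimension is already 128 px, rescale the image so the object's larger side becomes exactly 128, then crop around the object's center; the mask must be scaled identically.

    from PIL import Image

    CANONICAL = 128  # target max object dimension, per the DeepMask paper

    def canonicalize(img, bbox):
        # bbox = (x, y, w, h) of the object in `img` (a PIL Image).
        x, y, w, h = bbox
        scale = float(CANONICAL) / max(w, h)
        img_w, img_h = img.size
        resized = img.resize((int(round(img_w * scale)), int(round(img_h * scale))),
                             Image.BILINEAR)
        return resized, scale  # scale the mask and bbox by the same factor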

Final Weights

Can I get the final weights of the model, so that I can test the complete system without training?

How to get the masked image showing masks for object proposals?

Hi,
Is there a way to get the intermediate output of the net that shows the masks on the original image, representing the object proposals? This would help me generate object proposals, which I would like to use with my own classifier net, trained for specific objects.
Thanks.
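
The repo doesn't appear to expose such an output directly, but given the net's per-patch mask prediction, a generic overlay is easy to produce. A sketch (numpy and matplotlib assumed; img is an HxWx3 uint8 array, mask is HxW with values in [0, 1]):

    import numpy as np
    import matplotlib.pyplot as plt

    def overlay_mask(img, mask, alpha=0.5, thresh=0.5, color=(255, 0, 0)):
        # Blend a thresholded mask onto the image as a translucent color.
        out = img.astype(np.float32).copy()
        sel = mask > thresh
        out[sel] = (1 - alpha) * out[sel] + alpha * np.asarray(color, np.float32)
        plt.imshow(out.astype(np.uint8))
        plt.axis('off')
        plt.show()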

Interpret the results

Hi,

Two questions:

How can I interpret the results?

How can I see which category the network assigns to a given input?

Thanks,

Fine Tuning

How can I fine-tune your model?
I don't have sufficient data to retrain your model from scratch.
I want to fine-tune your model on my own data, which has only two classes.
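
A generic Keras fine-tuning sketch, not this repo's documented procedure (the stand-in model, the split point, the loss, and the optimizer are all assumptions, written in Keras 1.x style for a model that exposes .layers): freeze the pretrained trunk and retrain only the head on the new two-class data, which needs far less data than training from scratch.

    from keras.models import Sequential
    from keras.layers import Dense

    # Stand-in model; in practice `net` would be the DeepMask network.
    net = Sequential([Dense(64, input_dim=128, activation='relu'),
                      Dense(1, activation='sigmoid')])

    # Freeze everything except the head, then recompile and retrain.
    for layer in net.layers[:-1]:
        layer.trainable = False
    net.compile(optimizer='sgd', loss='binary_crossentropy')
    # net.fit(x_small, y_small, nb_epoch=10, batch_size=32)  # your two-class data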

tensorflow

What would be needed to use this code with a tensorflow backend?
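
At minimum two things, both sketched below with caveats: switch the backend in ~/.keras/keras.json, and convert the pretrained convolution kernels, since Theano and TensorFlow historically used different kernel layouts (Keras 1.x ships keras.utils.layer_utils.convert_all_kernels_in_model for this). Beyond that, the code's channels-first shape assumptions would need auditing, so treat this as a starting point, not a recipe.

    # Configuration side: the relevant ~/.keras/keras.json fields.
    import json

    cfg = {
        "backend": "tensorflow",
        "image_dim_ordering": "th",  # keep 'th' so existing (3, H, W) shapes hold
        "floatx": "float32",
        "epsilon": 1e-07,
    }
    print(json.dumps(cfg, indent=2))

    # Weight side (Keras 1.x utility), after loading the Theano-trained model:
    # from keras.utils.layer_utils import convert_all_kernels_in_model
    # convert_all_kernels_in_model(model)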

Loading model problem

I am running FullNetGenerator to test the model loading, and it fails with the following problem:

Using gpu device 1: Tesla K80 (CNMeM is disabled, cuDNN 4007)
Traceback (most recent call last):
  File "FullNetGenerator.py", line 40, in <module>
    fn = fng.create_full_net()
  File "FullNetGenerator.py", line 12, in create_full_net
    net = vgg_provider.get_vgg_partial_graph(weights_path=self.weights_path, with_output=False)
  File "/home/zaikun/dl/statefarm/NNProject_DeepMask/VggDNetGraphProvider.py", line 60, in get_vgg_partial_graph
    model = self.get_vgg_full_graph(weights_path, False)
  File "/home/zaikun/dl/statefarm/NNProject_DeepMask/VggDNetGraphProvider.py", line 56, in get_vgg_full_graph
    model.load_weights(weights_path)
  File "/users/zaikun/anaconda2/lib/python2.7/site-packages/keras/legacy/models.py", line 775, in load_weights
    super(Graph, self).load_weights(fname)
  File "/users/zaikun/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 2309, in load_weights
    str(len(flattened_layers)) + '.')
Exception: You are trying to load a weight file containing 37 layers into a model with 0.

Error allocating device memory

Hi guys,

I got this error when trying to do the training. The traceback is shown below:
Error allocating 411041792 bytes of device memory < out of memory >. Driver report 169512960 bytes free and 2147483648 bytes total
Have you encountered this problem before? May I ask what size your GPU memory is?

Regards,
Zhengxu
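
For scale, the numbers in the message already tell most of the story (a worked check, not repo code): the failing allocation is about 392 MB while the driver reports only about 162 MB free on a 2 GB card, so at this batch size a 2 GB GPU is simply too small; reducing the batch size shrinks the failing buffer roughly proportionally.

    # Rough arithmetic on the figures from the error message above.
    MB = 2.0 ** 20
    print('requested: %.0f MB' % (411041792 / MB))   # ~392 MB
    print('free:      %.0f MB' % (169512960 / MB))   # ~162 MB
    print('total:     %.0f MB' % (2147483648 / MB))  # 2048 MB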

EndToEnd.py can't find images

I followed the steps given in the readme and got some examples generated through ExamplesGenerator.py. However, it seems that EndToEnd.py can't find them. Sifting through the code, I could gather that ExamplesGenerator.py writes to Results/pos-train and Results/neg-train, whereas EndToEnd.py tries to read from Predictions/train. Is there a missing step in the readme where we need to move files to the correct folder, or is there some other error?

(For details, the error I'm getting in EndToEnd.py is an empty list when trying to stack images.)

p.s. Thanks a lot for putting in the effort to make this code available and for pointing to the pre-trained models.
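
Until the readme spells this step out, a stopgap is to copy the generated examples to where EndToEnd.py looks. A sketch using the paths named above (taken from this issue, not independently verified):

    import glob
    import os
    import shutil

    dst = 'Predictions/train'
    if not os.path.isdir(dst):
        os.makedirs(dst)
    for src_dir in ('Results/pos-train', 'Results/neg-train'):
        for path in glob.glob(os.path.join(src_dir, '*')):
            shutil.copy(path, dst)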

CreateVggGraphWeights.py error

I tried to run your code. However, when running CreateVggGraphWeights.py I got an "AttributeError: 'Graph' object has no attribute 'layers'" error at the line "for layer in self.layers:".

Cannot load weights for graph

Trying to run python HelperScripts/CreateVggGraphWeights.py
I got this error:
Traceback (most recent call last):
  File "HelperScripts/CreateVggGraphWeights.py", line 150, in <module>
    graph.set_weights(model.get_weights())
  File "/users/zaikun/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 841, in set_weights
    params = self.trainable_weights + self.non_trainable_weights
  File "/users/zaikun/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 1840, in trainable_weights
    for layer in self.layers:
AttributeError: 'Graph' object has no attribute 'layers'
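
The usual fix is matching the Keras 0.3.1 the readme pins (see the version check in the EndToEnd.py issue above). If you instead want to transplant the weights manually rather than via graph.set_weights(model.get_weights()), here is a rough workaround sketch, emphatically not the repo's method; it assumes a Keras 0.3.x Graph whose nodes dict maps the script's numeric node names to layers, both assumptions worth verifying:

    # Hypothetical workaround: pair weight-holding Sequential layers with
    # weight-holding graph nodes, in node-name order.
    src_weights = [l.get_weights() for l in model.layers if l.get_weights()]
    names = sorted(graph.nodes, key=int)  # assumes purely numeric node names
    dst_layers = [graph.nodes[n] for n in names if graph.nodes[n].get_weights()]
    assert len(src_weights) == len(dst_layers)
    for weights, layer in zip(src_weights, dst_layers):
        layer.set_weights(weights)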
