moabitcoin / holy-edge
Holistically-Nested Edge Detection
Home Page: https://arxiv.org/pdf/1504.06375.pdf
License: GNU General Public License v3.0
Thanks for sharing. I want to test some of my own images, so how do I run the pretrained model on a single image of my own and save the output?
Hi,
I was playing around with different image_width and image_height values for training, and even after setting them back to 480 and 480, training no longer works. I am getting the following error.
[10 Jan 2018 10h30m46s][INFO] Model weights loaded from vgg16.npy
[10 Jan 2018 10h30m46s][INFO] Added CONV-BLOCK-1+SIDE-1
[10 Jan 2018 10h30m46s][INFO] Added CONV-BLOCK-2+SIDE-2
[10 Jan 2018 10h30m46s][INFO] Added CONV-BLOCK-3+SIDE-3
[10 Jan 2018 10h30m46s][INFO] Added CONV-BLOCK-4+SIDE-4
[10 Jan 2018 10h30m46s][INFO] Added CONV-BLOCK-5+SIDE-5
[10 Jan 2018 10h30m46s][INFO] Added FUSE layer
[10 Jan 2018 10h30m46s][INFO] Build model finished: 0.1343s
[10 Jan 2018 10h30m46s][INFO] Done initializing VGG-16 model
[10 Jan 2018 10h30m47s][INFO] Training data set-up from /home/pchaudha/hed/hed-data/HED-BSDS/train_pair.lst
[10 Jan 2018 10h30m47s][INFO] Training samples 23040
[10 Jan 2018 10h30m47s][INFO] Validation samples 5760
[10 Jan 2018 10h30m47s][WARNING] Deep supervision application set to True
Traceback (most recent call last):
File "run-hed.py", line 64, in <module>
main(args)
File "run-hed.py", line 38, in main
trainer.run(session)
File "/home/pchaudha/hed/hed/train.py", line 69, in run
run_metadata=run_metadata)
File "/home/pchaudha/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File "/home/pchaudha/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1093, in _run
np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
File "/home/pchaudha/.local/lib/python2.7/site-packages/numpy/core/numeric.py", line 531, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence.
Does anyone know why this is happening?
Thank you!
train.py
train = opt.minimize(self.model.loss)
but the loss in vgg16.py is not a Tensor!
Does it work well?
2018-05-06 17:38:44.599331: W tensorflow/core/framework/op_kernel.cc:1152] Invalid argument: ConcatOp : Dimensions of inputs should match: shape[0] = [1,400,600,1] vs. shape[4] = [1,400,608,1]
[[Node: concat = ConcatV2[N=5, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](side_1/conv2d_transpose, side_2/conv2d_transpose, side_3/conv2d_transpose, side_4/conv2d_transpose, side_5/conv2d_transpose, concat/axis)]]
Traceback (most recent call last):
File "run-hed.py", line 64, in <module>
main(args)
File "run-hed.py", line 44, in main
tester.run(session)
File "/home/mnadeem/research/holy-edge/hed/test.py", line 68, in run
edgemap = session.run(self.model.predictions, feed_dict={self.model.images: [im]})
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 778, in run
run_metadata_ptr)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 982, in _run
feed_dict_string, options, run_metadata)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1032, in _do_run
target_list, options, run_metadata)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1052, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,400,600,1] vs. shape[4] = [1,400,608,1]
[[Node: concat = ConcatV2[N=5, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](side_1/conv2d_transpose, side_2/conv2d_transpose, side_3/conv2d_transpose, side_4/conv2d_transpose, side_5/conv2d_transpose, concat/axis)]]
Caused by op u'concat', defined at:
File "run-hed.py", line 64, in <module>
main(args)
File "run-hed.py", line 43, in main
tester.setup(session)
File "/home/mnadeem/research/holy-edge/hed/test.py", line 37, in setup
self.model = Vgg16(self.cfgs, run='testing')
File "/home/mnadeem/research/holy-edge/hed/models/vgg16.py", line 30, in __init__
self.define_model()
File "/home/mnadeem/research/holy-edge/hed/models/vgg16.py", line 81, in define_model
self.fuse = self.conv_layer(tf.concat(self.side_outputs, axis=3),
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1034, in concat
name=name)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 519, in _concat_v2
name=name)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
op_def=op_def)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1228, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): ConcatOp : Dimensions of inputs should match: shape[0] = [1,400,600,1] vs. shape[4] = [1,400,608,1]
[[Node: concat = ConcatV2[N=5, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](side_1/conv2d_transpose, side_2/conv2d_transpose, side_3/conv2d_transpose, side_4/conv2d_transpose, side_5/conv2d_transpose, concat/axis)]]
training:
dir: HED-BSDS
list: HED-BSDS/train_pair.lst
#
image_width: 480
image_height: 480
n_channels: 3
testing:
dir: mrl_database
list: mrl_database/files.lst
#
image_width: 600
image_height: 400
n_channels: 3
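A plausible explanation of the ConcatOp mismatch above (my reading of the error, not a confirmed diagnosis): the test image_width of 600 is not a multiple of 16. With SAME padding each pooling stage rounds up, so side output 5 (stride 16) upsamples back to a slightly larger width (608) than side output 1 (600). The arithmetic can be sketched as:

```python
import math

def side_output_size(dim, stride):
    # SAME-padded pooling halves the dimension with ceil at each stage;
    # the transposed convolution then multiplies back by the full stride.
    pooled = dim
    for _ in range(int(math.log2(stride))):
        pooled = math.ceil(pooled / 2)
    return pooled * stride

print(side_output_size(600, 1))   # side 1 -> 600
print(side_output_size(600, 16))  # side 5 -> 608, hence the mismatch
```

If this is the cause, picking image_width and image_height that are multiples of 16 (e.g. 480 or 608) should make all five side outputs agree.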
I just want to use the pre-trained weights.
weighted_cross_entropy_with_logits(
targets,
logits,
pos_weight,
name=None
)
expects logits, NOT probabilities (i.e. not sigmoid(logits))
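The point above is that tf.nn.weighted_cross_entropy_with_logits applies the sigmoid internally, so feeding sigmoid(logits) double-applies it. For reference, a NumPy sketch of the numerically stable formula TensorFlow documents for this op:

```python
import numpy as np

def weighted_ce_with_logits(targets, logits, pos_weight):
    # TensorFlow's stable form:
    # (1 - z) * x + l * (log(1 + exp(-|x|)) + max(-x, 0)),  l = 1 + (q - 1) * z
    z, x, q = targets, logits, pos_weight
    l = 1 + (q - 1) * z
    return (1 - z) * x + l * (np.log1p(np.exp(-np.abs(x))) + np.maximum(-x, 0))
```

For example, with target 1, logit 0 and pos_weight 2 this gives 2·log 2, i.e. twice the unweighted cross-entropy at probability 0.5.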
Hi
I got the following error when trying to run the command from "Training data & Models":
user@user-desktop:~/DL/holy-edge$ sudo git lfs fetch && git lfs pull
Fetching master
Git LFS: (0 of 2 files) 0 B / 585.83 MB
batch response: This repository is over its data quota. Purchase more data packs to restore access.
error: failed to fetch some objects from 'https://github.com/harsimrat-eyeem/holy-edge.git/info/lfs'
Free memory: 10.76GiB
2018-10-21 22:18:39.485611: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2018-10-21 22:18:39.485616: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y
2018-10-21 22:18:39.485627: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:17:00.0)
[21 Oct 2018 22h18m40s][INFO] Model weights loaded from vgg16.npy
[21 Oct 2018 22h18m40s][INFO] Added CONV-BLOCK-1+SIDE-1
[21 Oct 2018 22h18m40s][INFO] Added CONV-BLOCK-2+SIDE-2
[21 Oct 2018 22h18m40s][INFO] Added CONV-BLOCK-3+SIDE-3
[21 Oct 2018 22h18m40s][INFO] Added CONV-BLOCK-4+SIDE-4
[21 Oct 2018 22h18m40s][INFO] Added CONV-BLOCK-5+SIDE-5
[21 Oct 2018 22h18m40s][INFO] Added FUSE layer
[21 Oct 2018 22h18m40s][INFO] Build model finished: 0.1324s
[21 Oct 2018 22h18m40s][INFO] Done initializing VGG-16 model
[21 Oct 2018 22h18m40s][ERROR] Error setting up VGG-16 model, [Errno 13] Permission denied: '/home/code'
Hello! It seems that the weight decay (0.0002) from the HED paper only appears in holy-edge/hed/configs/hed.yaml and is not actually used during training. Is that the case? Looking forward to your reply.
In the original paper:
beta = |Y−| / |Y|
"|Y−| and |Y+| denote the edge and non-edge ground truth label sets, respectively"
This definition is really counter-intuitive to me.
In "sigmoid_cross_entropy_balanced":
y = tf.cast(label, tf.float32)
count_neg = tf.reduce_sum(1. - y)
count_pos = tf.reduce_sum(y)
# Equation [2]
beta = count_neg / (count_neg + count_pos)
# Equation [2] divide by 1 - beta
pos_weight = beta / (1 - beta)
It seems that "sigmoid_cross_entropy_balanced" function in "losses.py" is wrong.
Hi, when I run git lfs fetch && git lfs pull
,
it outputs
Git LFS: (0 of 2 files) 0 B / 585.83 MB
batch response: This repository is over its data quota. Purchase more data packs to restore access.
Could you upload the model elsewhere?
In data_parser.py I found 'im -= self.cfgs['mean_pixel_value']', with
mean_pixel_value: [103.939, 116.779, 123.68]
I don't understand what this op means. Is it for normalization?
Can I use tf.image.per_image_standardization() instead?
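For what it's worth (my understanding, not the author's reply): this op is plain mean subtraction — zero-centering each BGR channel by the ImageNet mean — not full normalization. The pretrained VGG-16 weights were trained with exactly this preprocessing, so tf.image.per_image_standardization(), which subtracts a per-image mean and divides by a per-image stddev, would likely mismatch the pretrained weights unless you retrain. A minimal sketch:

```python
import numpy as np

# BGR channel means from the config (ImageNet/VGG convention)
MEAN_PIXEL = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess(im):
    """Zero-center an HxWx3 image by the dataset mean; no scaling."""
    return im.astype(np.float32) - MEAN_PIXEL
```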
I'm trying to install the requirements, I'm facing the following problem while installing functools32:
complete output from command python setup.py egg_info:
This backport is for Python 2.7 only
I tried to download a version other than the one in requirements.txt, but I'm getting more problems with a lot of packages. Can you please provide me with your environment details?
For me, I'm using Windows 10 x64, conda, Python 3.7.
I just want to run inference on some images, but I can't open the download link and I'm running into problems with git lfs.
As others have pointed out before, the git lfs data quota is exhausted for the pretrained model files.
If someone could re-upload them somewhere, I will gladly open a mirror to download them as well.
Hi, it seems that the hed-model-5000.meta cannot be cloned because of data quota exceeded. Could you provide a new link for this file? Thanks!
The current content is just a placeholder:
version https://git-lfs.github.com/spec/v1 oid sha256:71ef9f6fb10c25654e3f16708da7efc55bdbcc02cae75b84134a7ce051f728f9 size 60853279
Hi,
Thank you very much for the brilliant work. :)
I just wonder why you set the parameters of VGG16 as constants:
Line 163 in 98cf525
According to the paper, the objective function minimizes both W and w, but it seems like you only minimize w.
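The distinction being asked about: a weight stored as a tf.constant receives no gradient updates, while a tf.Variable does, so freezing the VGG-16 kernels as constants means only the side/fuse weights w are trained. A toy NumPy illustration of a frozen vs. trainable parameter (not the repo's code, just the optimization behavior):

```python
# Toy model y = W*x + w fit to a target by gradient descent.
# W is frozen (like tf.constant); w is trainable (like tf.Variable).
W = 2.0            # frozen "backbone" weight, never updated
w = 0.0            # trainable "head" weight
x, target = 1.0, 5.0
lr = 0.1
for _ in range(100):
    pred = W * x + w
    grad_w = 2 * (pred - target)   # d/dw of the squared error
    w -= lr * grad_w               # only w moves; W stays at 2.0
print(W, round(w, 3))  # W unchanged, w converges to ~3.0
```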
Hi, everyone. I read the code in data_parser.py and found that there is an option for the input labels, "target regression". If we choose this option, the loaded ground truth will be a matrix of real numbers from 0 to 1 rather than a binary matrix. After that, I checked losses.py and found these two lines of code:
"count_neg = tf.reduce_sum(1. - y)"
"count_pos = tf.reduce_sum(y)"
These two lines of code seem to work well for a binary ground truth, but do they also work for a ground truth consisting of real numbers from 0 to 1? I am looking forward to your answers.
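One observation (my reading, not the author's): tf.reduce_sum does not require binary labels. With soft labels in [0, 1] the sums become expected ("soft") counts, so beta stays well defined between 0 and 1, though it no longer matches the paper's set-cardinality definition exactly. A sketch:

```python
import numpy as np

def balance_beta(y):
    # With soft labels, these sums are expected counts rather than
    # exact cardinalities of the positive/negative label sets.
    count_neg = np.sum(1.0 - y)
    count_pos = np.sum(y)
    return count_neg / (count_neg + count_pos)

binary = np.array([0.0, 0.0, 0.0, 1.0])
soft = np.array([0.1, 0.0, 0.2, 0.9])
print(balance_beta(binary))  # 0.75
print(balance_beta(soft))    # (0.9 + 1.0 + 0.8 + 0.1) / 4 = 0.7
```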
help!!!
From the HED paper I understood that we don't need to resize images, as the network doesn't have any fully connected layers. So for my own dataset I wanted to change your code to remove this step, so that it takes an image of any size and also produces an edge map of the same size as the input.
But just removing these lines
im = im.resize((self.cfgs['training']['image_width'], self.cfgs['training']['image_height']))
em = em.resize((self.cfgs['training']['image_width'], self.cfgs['training']['image_height']))
is giving an error.
Is it possible to do this with your code?
Any hint or pointer would be appreciated. Thank you.
Hi, I read your notes in 'paper-notes.md' — have you ever tried the ideas in the notes? In particular, side_output_5 is not as good as in the paper.
[12 Nov 2017 06h20m27s][ERROR] Error setting up VGG-16 model, 'Tensor' object has no attribute 'shape'
[08 May 2018 21h43m00s][INFO] [7428/100000] TRAINING loss : 0.1480998396873474
[08 May 2018 21h43m02s][INFO] [7429/100000] TRAINING loss : 0.16046211123466492
[08 May 2018 21h43m03s][INFO] [7430/100000] TRAINING loss : 0.15747885406017303
[08 May 2018 21h43m03s][INFO] [7430/100000] VALIDATION error : 0.2499309927225113
[08 May 2018 21h43m04s][INFO] [7431/100000] TRAINING loss : 0.14023230969905853
[08 May 2018 21h43m06s][INFO] [7432/100000] TRAINING loss : 0.15643279254436493
[08 May 2018 21h43m07s][INFO] [7433/100000] TRAINING loss : 0.1568005532026291
[08 May 2018 21h43m09s][INFO] [7434/100000] TRAINING loss : 0.13042421638965607
[08 May 2018 21h43m10s][INFO] [7435/100000] TRAINING loss : 0.13672024011611938
[08 May 2018 21h43m11s][INFO] [7436/100000] TRAINING loss : 0.16531457006931305
[08 May 2018 21h43m12s][INFO] [7437/100000] TRAINING loss : 0.1498943716287613
[08 May 2018 21h43m14s][INFO] [7438/100000] TRAINING loss : 0.1395827680826187
[08 May 2018 21h43m15s][INFO] [7439/100000] TRAINING loss : 0.16227483749389648
[08 May 2018 21h43m16s][INFO] [7440/100000] TRAINING loss : 0.14770230650901794
[08 May 2018 21h43m17s][INFO] [7440/100000] VALIDATION error : 0.24861328303813934
[08 May 2018 21h43m19s][INFO] [7441/100000] TRAINING loss : 0.13362529873847961
[08 May 2018 21h43m20s][INFO] [7442/100000] TRAINING loss : 0.1269095093011856
[08 May 2018 21h43m21s][INFO] [7443/100000] TRAINING loss : 0.15405525267124176
[08 May 2018 21h43m23s][INFO] [7444/100000] TRAINING loss : 0.1538567692041397
[08 May 2018 21h43m25s][INFO] [7445/100000] TRAINING loss : 0.16362397372722626
[08 May 2018 21h43m26s][INFO] [7446/100000] TRAINING loss : 0.15053629875183105
[08 May 2018 21h43m28s][INFO] [7447/100000] TRAINING loss : 0.1497960239648819
[08 May 2018 21h43m29s][INFO] [7448/100000] TRAINING loss : 0.14233416318893433
[08 May 2018 21h43m30s][INFO] [7449/100000] TRAINING loss : 0.13796669244766235
[08 May 2018 21h43m32s][INFO] [7450/100000] TRAINING loss : 0.13414344191551208
[08 May 2018 21h43m32s][INFO] [7450/100000] VALIDATION error : 0.2629392445087433
[08 May 2018 21h43m34s][INFO] [7451/100000] TRAINING loss : 0.16665637493133545
[08 May 2018 21h43m35s][INFO] [7452/100000] TRAINING loss : 0.14615492522716522
[08 May 2018 21h43m37s][INFO] [7453/100000] TRAINING loss : 0.09095730632543564
[08 May 2018 21h43m38s][INFO] [7454/100000] TRAINING loss : 0.15941087901592255
[08 May 2018 21h43m40s][INFO] [7455/100000] TRAINING loss : 0.15737438201904297
[08 May 2018 21h43m42s][INFO] [7456/100000] TRAINING loss : 0.15094870328903198
[08 May 2018 21h43m43s][INFO] [7457/100000] TRAINING loss : 0.1470448523759842
[08 May 2018 21h43m44s][INFO] [7458/100000] TRAINING loss : 0.15349626541137695
[08 May 2018 21h43m46s][INFO] [7459/100000] TRAINING loss : 0.12288802117109299
[08 May 2018 21h43m47s][INFO] [7460/100000] TRAINING loss : 0.1600235551595688
[08 May 2018 21h43m48s][INFO] [7460/100000] VALIDATION error : 0.2703090310096741
[08 May 2018 21h43m49s][INFO] [7461/100000] TRAINING loss : 0.13551127910614014
[08 May 2018 21h43m51s][INFO] [7462/100000] TRAINING loss : 0.15077351033687592
[08 May 2018 21h43m52s][INFO] [7463/100000] TRAINING loss : 0.13134542107582092
[08 May 2018 21h43m53s][INFO] [7464/100000] TRAINING loss : 0.14744725823402405
[08 May 2018 21h43m55s][INFO] [7465/100000] TRAINING loss : 0.14557607471942902
[08 May 2018 21h43m56s][INFO] [7466/100000] TRAINING loss : 0.1406044214963913
[08 May 2018 21h43m58s][INFO] [7467/100000] TRAINING loss : 0.1485893577337265
[08 May 2018 21h43m59s][INFO] [7468/100000] TRAINING loss : 0.1595202535390854
[08 May 2018 21h44m00s][INFO] [7469/100000] TRAINING loss : 0.16567926108837128
[08 May 2018 21h44m01s][INFO] [7470/100000] TRAINING loss : 0.15977224707603455
[08 May 2018 21h44m02s][INFO] [7470/100000] VALIDATION error : 0.19350607693195343
Any idea?
Hi,
I followed the steps in the description and successfully trained a new model with my images. I would like to convert the output model to a TensorFlow Lite model for mobile usage. I followed the steps here and was able to freeze the HED model to a GraphDef (.pb) with output_node_names=predictions. But I don't know how to continue converting the GraphDef model to a TensorFlow Lite model using toco, because I don't know where to get some parameters: input_arrays, output_arrays, input_shapes, output_node_names. I also ran the tensorboard command and saw some graphs, but I don't see information about those parameters.
Can you please share me know how to export the model and use it with the Tensorflow Lite?
Thanks,
Duc
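For anyone stuck at the same step: with a frozen GraphDef, the converter needs the names of the graph's input placeholder and output node, which you can read off TensorBoard's graph view or by iterating graph.get_operations(). A sketch of the TF 1.x command-line converter — the file names, the Placeholder name, and the 1,400,600,3 shape below are assumptions; substitute the names and shape from your own graph:

```shell
# File and node names here are assumptions -- inspect your frozen graph
# for the real input/output names before running.
tflite_convert \
  --graph_def_file=hed_frozen.pb \
  --output_file=hed.tflite \
  --input_arrays=Placeholder \
  --output_arrays=predictions \
  --input_shapes=1,400,600,3
```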
Hi, it seems the requirements were automatically upgraded to TensorFlow 2. The code is written in Python 2, so the requirements overall seem to be incompatible with the code. Is this repo being maintained?
How do I get output like in the article? Is your output a side-output?
Hi. It is great to find your code; I have been searching for HED articles and code for many days.
But when I clone from GitHub, I get the message: "This repository is over its data quota. Purchase more data packs to restore access."
I have no idea how to fix it. Could you give me some advice?
Hello, I am a student. Recently I have been reading the HED paper. I found your code very interesting, but an error occurred when I tried to train the model with my own data. The error message shows a memory error. My computer has 32 GB of built-in memory; could you give me some suggestions? Thank you very much.
How do i reduce the iteration count?
I didn't find any module named 'wget'.
I think it's used to download the HED data?
Where can I find the module?
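Most likely (an assumption based on the import name) this refers to the third-party `wget` package on PyPI, installable with pip:

```shell
pip install wget
```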