
torrvision / crfasrnn


This repository contains the source code for the semantic image segmentation method described in the ICCV 2015 paper: Conditional Random Fields as Recurrent Neural Networks. http://crfasrnn.torr.vision/

License: Other

Languages: MATLAB 56.66%, Python 42.55%, Shell 0.79%

crfasrnn's People

Contributors

anuragarnab, bittnt, kant, sadeepj


crfasrnn's Issues

CUDA toolkit version

Which CUDA version does crfasrnn require? I am trying to run the code under CUDA Toolkit 7.5, but I get the error "Check failed: error == cudaSuccess (8 vs. 0) invalid device function".

Is this because of the newer CUDA version?

Train crfasrnn with 2 classes

Hi all,
I am trying to train my own images with crfasrnn. My classes are background and people.
But my training does not converge; the loss values do not decrease.
And the prediction output of my network is only background (class 0) for all images.
I think I have made a mistake in my training setup.
Have you ever encountered a similar case?
Could you give me any suggestions?

Thank you,
Thuan

Ask for training code

Hi Zhengshuai,

I am a first-year PhD student at Stanford, Jingwei Huang. I am very interested in your work and want to use your architecture for further research. Could you tell me how I can train on the data?

Ask the brief steps on pixel-wise labeling prediction

Hi, Sadeep Jayasumana,

I have already installed crfasrnn.
Given the trained model (TVG_CRFRNN_COCO_VOC.caffemodel), how can I perform pixel-wise label prediction on a test image? Could you list the brief steps?

Thanks in advance!
Milton
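
For reference, here is a rough sketch of the steps, adapted from python-scripts/crfasrnn_demo.py. The file names, the 500x500 fixed input size, and the exact preprocessing order are assumptions; check the demo script and the deploy prototxt in the repository.

# Rough sketch of pixel-wise prediction, following python-scripts/crfasrnn_demo.py.
import numpy as np
from PIL import Image as PILImage
import caffe   # the bundled caffe-crfrnn python must be on PYTHONPATH

net = caffe.Segmenter('TVG_CRFRNN_COCO_VOC.prototxt', 'TVG_CRFRNN_COCO_VOC.caffemodel')

im = np.asarray(PILImage.open('input.jpg'), dtype=np.float32)
im = im[:, :, ::-1]                                    # RGB -> BGR
im = im - np.array([103.939, 116.779, 123.68])         # fixed per-channel mean
cur_h, cur_w = im.shape[:2]                            # both assumed <= 500 here
padded = np.zeros((500, 500, 3), dtype=np.float32)     # pad to the fixed input size
padded[:cur_h, :cur_w, :] = im

labels = net.predict([padded])                         # 500x500 array of class indices
labels = labels[:cur_h, :cur_w]                        # crop back to the original size
PILImage.fromarray(labels.astype(np.uint8)).save('labels.png')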

error encountered when running demo in python script

I can get through the "make" step, with the model downloaded. When running the demo with:

python crfasrnn_demo.py

I encounter the following error. Please tell me how I can fix it.

Traceback (most recent call last):
  File "crfasrnn_demo.py", line 38, in <module>
    net = caffe.Segmenter(MODEL_FILE, PRETRAINED)
  File "../caffe-crfrnn/python/caffe/segmenter.py", line 19, in __init__
    caffe.Net.__init__(self, model_file, pretrained_file)
Boost.Python.ArgumentError: Python argument types in
    Net.__init__(Segmenter, str, str)
did not match C++ signature:
    __init__(_object*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)
    __init__(_object*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)

Could I run crfasrnn with GPU?

After uncommenting 'caffe.set_mode_gpu()' in the Python demo script, it complains that caffe has no member called set_mode_gpu.

I also tried to include this function in __init__.py of caffe-crfrnn/python, but it still failed.

So is there any way to use the GPU for this project? It just takes too long for one picture (~70 s), unlike the web demo.

Thx in advance
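
A sketch of the usual GPU switch follows; in this older Caffe fork the call may be exposed on the Net/Segmenter object rather than on the caffe module (compare the set_mode_gpu replacement mentioned in the AWS issue further down), so treat the exact API as an assumption to verify against the caffe-crfrnn python sources.

# Sketch only: try the module-level switch, then fall back to the net-level call
# that some older pycaffe builds expose.
import caffe

MODEL_FILE = 'TVG_CRFRNN_COCO_VOC.prototxt'
PRETRAINED = 'TVG_CRFRNN_COCO_VOC.caffemodel'
net = caffe.Segmenter(MODEL_FILE, PRETRAINED)   # as in crfasrnn_demo.py

try:
    caffe.set_mode_gpu()     # newer pycaffe API
except AttributeError:
    net.set_mode_gpu()       # some older builds expose the switch on the net object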

How low should a test loss be?

Hi!

I tried to train the CRF-RNN network with 3 classes: bird (801 images), bottle (747 images) and chair (1071 images). 90% of the data was used for training and the remaining 10% for testing. After 90 thousand iterations, the test-phase loss had decreased to approximately 82 thousand (and seemed to stagnate for a relatively long time), and the output of the trained network is still very poor.

Questions

  • How low should the test loss become in order to get good segmentation results? Is 80 thousand still too high?
  • Is it possible that the network performs poorly because the selected class labels often occupy only a small part of the image, so the background class is preferred?
  • If the loss is not yet low enough to give reasonable segmentation results, is it possible to train segmentation successfully with the number of training images mentioned above?

Thank you!

Martin

How to work with higher resolution images?

I am getting a runtime error when I use higher resolution images.

self.blobs[in_].data[...] = blob
ValueError: could not broadcast input array from shape (1,3,700,700) into shape (1,3,500,500)

I am having trouble figuring out where the self.blobs[in_].data[...] numpy shape is initialized.
Need suggestions for debugging.
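
The (1,3,500,500) shape comes from the fixed input dimensions of the deploy prototxt, so either those input_dim values have to be raised (memory permitting) or the image has to be downscaled to fit. A minimal downscaling sketch, assuming PIL:

# Downscale so the longer side fits the network's fixed input size.
# MAX_DIM mirrors the input_dim values in the deploy prototxt (an assumption
# worth checking against the file you are actually using).
from PIL import Image

MAX_DIM = 500

im = Image.open('big_image.jpg')
scale = min(1.0, float(MAX_DIM) / max(im.size))
if scale < 1.0:
    im = im.resize((int(im.size[0] * scale), int(im.size[1] * scale)), Image.BILINEAR)
im.save('resized.jpg')   # feed this to the demo's usual preprocessing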

Fine Tuning failed

Hi, everyone! Thank you for reading my issue.
I'm trying to retrain CRFasRNN using other data instead of VOC. I modified TVG_CRFRNN_new_traintest.prototxt, and it works well when I train the model with all parameters randomly initialized. However, when I try to fine-tune from fcn-8s-pascal.caffemodel, it fails.
Here is some of the log:
I0618 10:43:15.192327 567 caffe.cpp:128] Finetuning from ./models/fcn-8s-pascal.caffemodel
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 537962613
I0618 10:43:16.506494 567 upgrade_proto.cpp:620] Attempting to upgrade input file specified using deprecated V1LayerParameter: ./models/fcn-8s-pascal.caffemodel
I0618 10:43:17.094655 567 upgrade_proto.cpp:628] Successfully upgraded file specified using deprecated V1LayerParameter
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 537962613
I0618 10:43:18.547567 567 upgrade_proto.cpp:620] Attempting to upgrade input file specified using deprecated V1LayerParameter: ./models/fcn-8s-pascal.caffemodel
I0618 10:43:19.128402 567 upgrade_proto.cpp:628] Successfully upgraded file specified using deprecated V1LayerParameter
I0618 10:43:19.280107 567 caffe.cpp:211] Starting Optimization
I0618 10:43:19.280191 567 solver.cpp:293] Solving New_DATA_TRAIN
I0618 10:43:19.280205 567 solver.cpp:294] Learning Rate Policy: fixed
*** Aborted at 1466217832 (unix time) try "date -d @1466217832" if you are using GNU date ***
PC: @ 0x7f614c91f528 caffe::SoftmaxWithLossLayer<>::Backward_cpu()
* SIGSEGV (@0x250db0ac) received by PID 567 (TID 0x7f614d0ba780) from PID 621654188; stack trace: ***
@ 0x7f614b3a9cb0 (unknown)
@ 0x7f614c91f528 caffe::SoftmaxWithLossLayer<>::Backward_cpu()
@ 0x7f614c991dc9 caffe::Net<>::BackwardFromTo()
@ 0x7f614c991ea1 caffe::Net<>::Backward()
@ 0x7f614c8a89e1 caffe::Solver<>::Step()
@ 0x7f614c8a9225 caffe::Solver<>::Solve()
@ 0x408edb train()
@ 0x4068d1 main
@ 0x7f614b394f45 (unknown)
@ 0x406fbd (unknown)
@ 0x0 (unknown)

Does anyone know how to solve this problem? Thanks a lot!

network saving only default parameter values

Hi,

Thank you for sharing the code; the idea of the CRF as RNN is great. I have a question regarding training the model from scratch on a different type of images.

I ran into a problem when saving the parameters to a .caffemodel file. The training log shows that the network learns, judging by the softmax loss going down, but when I load the snapshot, the parameters are always at their default values.

Have you or anyone else experienced anything similar? I tried to train the basic LeNet example on MNIST and there was no problem.

Thanks in advance.

Adrian Lisko

How to install dependencies on Mac OS?

Ubuntu 14.04 runs in my Parallels virtual machine. My Mac's GPU is NVIDIA, so I have to run the code on my Mac, but I don't know how to install the dependencies. Could anyone help me?

How to work with video?

I am trying to run the demo on a video file using OpenCV's cv2.VideoCapture, so that the demo analyses each frame separately.
How can I do that and run the demo for each frame?
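
A minimal frame-by-frame sketch follows. It assumes net is the Segmenter created as in crfasrnn_demo.py, and preprocess() is a hypothetical helper wrapping the demo's resizing, BGR conversion, mean subtraction and padding.

# Frame-by-frame segmentation sketch. `net` is assumed to be the Segmenter from
# crfasrnn_demo.py; `preprocess` is a hypothetical helper applying the demo's
# preprocessing to a raw frame.
import cv2

cap = cv2.VideoCapture('input.mp4')
idx = 0
while True:
    ok, frame = cap.read()                     # frame is a BGR uint8 array
    if not ok:
        break
    labels = net.predict([preprocess(frame)])  # one label map per frame
    cv2.imwrite('labels_%06d.png' % idx, labels.astype('uint8'))
    idx += 1
cap.release()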

opencv v3.1.0

I've recently installed a new version of cv2, 3.1.0. Now I have problems installing CRFasRNN:

In window_data_layer.cpp I commented out

#if CV_VERSION_MAJOR == 3
const int CV_LOAD_IMAGE_COLOR = cv::IMREAD_COLOR;
#endif

which solved the problem of ‘const int CV_LOAD_IMAGE_COLOR’ redeclared as different kind of symbol.

Now when processing compute_image_mean.o I keep getting

undefined reference to `cv::imread(cv::String const&, int)'

Can I use opencv v3 with CRF at all, or should I roll back to 2.4? It seems there's some format incompatibility.

cheers

Object delineation: edges between segmented objects

I was able to extract some objects from a challenging image (http://datascience.stackexchange.com/questions/10809/image-segmentation-with-a-challenging-background): lots of shadows, object/background colours are indistinguishable, and so on. Some cows in the image are occluded: there is one cow behind another, although both are well identified.

Here is my question: how do I delineate these 2 objects? If crf-rnn recognizes that they are separate, this should somehow be reflected. I believe that if they were objects of different classes, the pixels would have different labels. But what if the occluded object is of the same class?

thanks

Object discrimination

If I understand correctly, the algorithm does not distinguish between two objects of the same class (e.g. 2 cows get the same colour). How do I get the number of objects and the edges, i.e. the border where one object ends and the other begins?
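
The released model does semantic, not instance, segmentation: all pixels of one class share the same label. A common post-processing workaround (not part of this repository) is to split each class mask into connected components, which gives an object count and boundaries only when same-class objects do not touch. A sketch assuming scipy:

# Post-processing sketch (not part of crfasrnn): split a class mask into
# connected components to count blobs and extract their boundaries. Touching
# or occluding objects of the same class still form a single component.
import numpy as np
from scipy import ndimage

def count_instances(label_map, class_id):
    mask = (label_map == class_id)
    components, n = ndimage.label(mask)      # 0 = background, 1..n = blobs
    return components, n

def blob_edges(components):
    edges = np.zeros(components.shape, dtype=bool)
    for i in range(1, components.max() + 1):
        blob = (components == i)
        edges |= blob & ~ndimage.binary_erosion(blob)   # 1-pixel-wide contour
    return edges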

Compatibility with AWS / nvidia Grid GPU

Have you or anyone else tested this code with AWS G2 instances?

I tried testing, and even after compiling your version of caffe and running make runtest, the code did not work with the GPU (it failed with a core dump).

I also replaced the commented-out line in the Python code to enable GPU mode:

#caffe.set_mode_gpu()
net.set_mode_gpu()

The CPU code seemed to work, but threw an error when trying to show the image, due to the lack of a display.
Is this happening due to lack of memory, since AWS GPUs have only 4 GB of memory?

Thanks,
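
Regarding the display error rather than the memory question: on a headless instance the usual workaround is to force a non-interactive matplotlib backend and write the result to disk, as sketched below (the demo already calls plt.savefig, so only the backend selection is new).

# Headless workaround sketch: select the Agg backend before pyplot is imported,
# then save the figure instead of showing it.
import matplotlib
matplotlib.use('Agg')              # must run before importing matplotlib.pyplot
import matplotlib.pyplot as plt

plt.imshow(output_im)              # output_im as produced in crfasrnn_demo.py
plt.savefig('output.png')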

Classification performance is very slow (about half a minute) on an AWS G2 instance?

I looked through the paper but could not find any speed benchmark to compare with. I tried crfasrnn_demo.py on a single g2.2xlarge Amazon AWS instance and got an unexpectedly slow classification speed, and I am wondering whether something is wrong.

The code below is the part of crfasrnn_demo.py that I modified to time the prediction:

import time   # added for the timing below

# Get predictions
start = time.time()
segmentation = net.predict([im])
end = time.time()
segmentation2 = segmentation[0:cur_h, 0:cur_w]
output_im = PILImage.fromarray(segmentation2)
output_im.putpalette(pallete)

plt.imshow(output_im)
plt.savefig('output.png')
print(end-start)

The result is 29.7011039257 seconds. What is the average time required for this demo?

Error on building custom caffe version

Hey, I get this error:

$ sudo make
NVCC src/caffe/util/math_functions.cu
src/caffe/util/math_functions.cu(159): error: kernel launches from templates are not allowed in system files

How can I fix this?
I'm running Ubuntu 14.04 with ATLAS and Python.

Unknown enumeration value of "MULTI_STAGE_MEANFIELD" for field "type"

After running solve.py from https://github.com/martinkersner/train-CRF-RNN,
the following error pops up:

I0428 23:40:04.491811 27105 caffe.cpp:183] Using GPUs 0
I0428 23:40:04.587026 27105 solver.cpp:54] Initializing solver from parameters:
test_iter: 261
test_interval: 1333
base_lr: 1e-13
display: 50
max_iter: 100000
lr_policy: "fixed"
momentum: 0.99
weight_decay: 0.0005
snapshot: 1000
snapshot_prefix: "/home/snake/caffe_crfrnn/train-CRF-RNN-master/snapshot/train"
solver_mode: GPU
device_id: 0
net: "/home/snake/caffe_crfrnn/train-CRF-RNN-master/TVG_CRFRNN_COCO_VOC_TRAIN_3_CLASSES.prototxt"
test_initialization: false
I0428 23:40:04.587134 27105 solver.cpp:96] Creating training net from net file: /home/snake/caffe_crfrnn/train-CRF-RNN-master/TVG_CRFRNN_COCO_VOC_TRAIN_3_CLASSES.prototxt
[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 213:3: Unknown enumeration value of "MULTI_STAGE_MEANFIELD" for field "type".
F0428 23:40:04.587803 27105 upgrade_proto.cpp:932] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/snake/caffe_crfrnn/train-CRF-RNN-master/TVG_CRFRNN_COCO_VOC_TRAIN_3_CLASSES.prototxt

The MULTI_STAGE_MEANFIELD layer was built, but there were some warnings:
CXX src/caffe/layers/eltwise_layer.cpp
src/caffe/layers/multi_stage_meanfield.cpp: In instantiation of 'void caffe::MultiStageMeanfieldLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]':
src/caffe/layers/multi_stage_meanfield.cpp:258:1: required from here
src/caffe/layers/multi_stage_meanfield.cpp:72:83: warning: format '%lf' expects argument of type 'double*', but argument 3 has type 'float*' [-Wformat=]
fscanf(pFile, "%lf", &this->blobs_[0]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp:79:83: warning: format '%lf' expects argument of type 'double*', but argument 3 has type 'float*' [-Wformat=]
fscanf(pFile, "%lf", &this->blobs_[1]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp: In member function 'void caffe::MultiStageMeanfieldLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]':
src/caffe/layers/multi_stage_meanfield.cpp:72:7: warning: ignoring return value of 'int fscanf(FILE*, const char*, ...)', declared with attribute warn_unused_result [-Wunused-result]
fscanf(pFile, "%lf", &this->blobs_[0]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp:79:7: warning: ignoring return value of 'int fscanf(FILE*, const char*, ...)', declared with attribute warn_unused_result [-Wunused-result]
fscanf(pFile, "%lf", &this->blobs_[1]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp: In member function 'void caffe::MultiStageMeanfieldLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = double]':
src/caffe/layers/multi_stage_meanfield.cpp:72:7: warning: ignoring return value of 'int fscanf(FILE*, const char*, ...)', declared with attribute warn_unused_result [-Wunused-result]
fscanf(pFile, "%lf", &this->blobs_[0]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp:79:7: warning: ignoring return value of 'int fscanf(FILE*, const char*, ...)', declared with attribute warn_unused_result [-Wunused-result]
fscanf(pFile, "%lf", &this->blobs_[1]->mutable_cpu_data()[i * channels_ + i]);
^

Any idea how to solve it?
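
An unknown MULTI_STAGE_MEANFIELD type usually means the prototxt is being parsed by a Caffe build that does not contain this repository's custom layer, for example a stock Caffe found first on the path. A small diagnostic sketch, assuming the usual pycaffe layout:

# Diagnostic sketch: confirm that the imported pycaffe comes from the bundled
# caffe-crfrnn fork rather than a stock Caffe elsewhere on PYTHONPATH.
import caffe
print(caffe.__file__)   # expected to point inside .../caffe-crfrnn/python/caffe/

The same applies to the caffe binary invoked during training: it needs to be the one built from this repository's caffe-crfrnn tree.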

Training images

Hi,

I have installed the crfasrnn demo and the code works well.

What image database was used to train the model? Did you use a popular image database like ImageNet, or your own data source?
Can I have access to this database?

Thanks

Question about Installation

Dear sir, if I want to use your code, apart from MKL, CUDA and so on, do I need to install other tools such as protobuf? Thank you!

How do I train a model using my own data

I use the NYU V2 dataset as my training set. The labels are created as follows. I first create a picture in which each pixel corresponds to its label, and create labels_lmdb from these pictures with convert_imageset in the directory caffe-root/tools. images_lmdb is created directly from the images in the dataset with the same convert_imageset tool.

I have trained my model on a Tesla K20m, but the loss of the neural network keeps oscillating after 400 iterations. The whole process of 400 iterations takes about 10 hours. I think there is something wrong with my configuration. I am wondering whether you could help me find the problem.

I modified the file "TVG_CRFRNN_COCO_VOC.prototxt" provided by the CRFasRNN project in the directory crfasrnn/python-scripts. Details follow.

I added the data layers at the beginning of that prototxt file to replace the original input part (lines 3-8):

layers {
  name: "data"
  type: DATA
  top: "data"
  include {
    phase: TRAIN
  }
  transform_param {
    mean_value: 130.4265
    mean_value: 111.4584
    mean_value: 103.3727
  }
  data_param {
    source: "./indoor_images_lmdb"
    batch_size: 1
    backend: LMDB
  }
}
layers {
  name: "label"
  type: DATA
  top: "label"
  include {
    phase: TRAIN
  }
  data_param {
    source: "./indoor_labels_lmdb"
    batch_size: 1
    backend: LMDB
  }
}

At the end of that file, I added a SOFTMAX_LOSS layer:

layers {
  type: SOFTMAX_LOSS
  name: 'loss'
  top: 'loss'
  bottom: 'pred'
  bottom: 'label'
  loss_param { normalize: false }
}

The file "solver.prototxt" was created as follows:

net: "CRFasRNN_train.prototxt"
test_iter: 500
test_interval: 100000
display: 250
lr_policy: "fixed"
base_lr: 1e-13
momentum: 0.99
weight_decay: 0.0016
max_iter: 10000
snapshot: 1000
snapshot_prefix: "train"
test_initialization: false
solver_mode: GPU

The file "solve.py" was created as follows:

caffe_root = '../caffe-crfrnn/'
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe
import numpy as np
base_weights = 'TVG_CRFRNN_COCO_VOC.caffemodel'
solver = caffe.SGDSolver('solver.prototxt')
solver.net.copy_from(base_weights)
solver.step(10000)

The file "TVG_CRFRNN_COCO_VOC.caffemodel" is also provided in the directory crfasrnn/python-scripts.

cudnn v2 required?

After looking up the compile error
src/caffe/layers/cudnn_conv_layer.cu(67): error: argument of type "cudnnAddMode_t" is incompatible with parameter of type "const void *"
it seems to result from using cuDNN v4 instead of v2. Is it the case that this code requires cuDNN v2?

How to train the model by myself ?

I want to train the crfasrnn model on my own data, but there is no network definition for training. Should I just add a data layer to provide the labels, and a softmax loss layer to compute the loss? If not, please give me some suggestions. Thank you.

GPU out of memory

I am trying to run the Python demo with GPU mode enabled.
I seem to be running out of memory (I have a 2 GB GPU and CUDA 6.5).

What is the minimum memory requirement for this demo? Is there any (simple) way of lowering the requirements?

What is the expected run-time on GPU (on my CPU it takes 7-8 seconds)?

Thanks.

Error when using 'make'

I get this error when I run make

CXX src/caffe/net.cpp
In file included from src/caffe/net.cpp:13:0:
./include/caffe/util/io.hpp:12:18: fatal error: hdf5.h: No such file or directory
#include "hdf5.h"
^
compilation terminated.
Makefile:469: recipe for target '.build_release/src/caffe/net.o' failed
make: *** [.build_release/src/caffe/net.o] Error 1

I did check for the hdf5.h file and found it in this location:

/usr/include/hdf5/serial/hdf5.h
/usr/include/opencv2/flann/hdf5.h

I get the same error when I run 'sudo make'.
What could be the issue? Has it got something to do with the Makefile.config file?

Any help would be appreciated. Thanks

MultiStageMeanField GPU implementation seg faults

The filter expects an input image of size Nx3xHxW. When a grayscale/monochrome image is provided, the GPU implementation segfaults (the CPU one does not). I guess there is an automatic allocation with zero filling when using the CPU, which does not exist in the CUDA code.

Workaround: concatenate the grayscale image with itself to get three channels. See the config below:

layer {
  name: "rgb"
  bottom: "data"
  bottom: "data"
  bottom: "data"
  top: "rgb"
  type: "Concat"
  concat_param {
    axis: 1
  }
}
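
An input-side alternative (a sketch, not taken from this repository) is to replicate the single channel in numpy before the image reaches the network:

# Sketch: turn an HxW grayscale array into HxWx3 by repeating the channel,
# so the Nx3xHxW input expectation is met without a Concat layer.
import numpy as np

def gray_to_three_channels(gray):
    if gray.ndim == 2:
        gray = gray[:, :, np.newaxis]
    return np.repeat(gray, 3, axis=2)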

Getting tags of the segments

Hello. I compiled this code and successfully ran the demo, but I have a question: for arbitrary images, how can I get the class tags of the segments?
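
For the public TVG_CRFRNN_COCO_VOC model, the label indices in the output map follow the 21 PASCAL VOC classes, so a rough translation looks like the sketch below (the class order is an assumption worth double-checking against the training prototxt).

# Sketch: map the label indices in the demo's output to PASCAL VOC class names.
import numpy as np

VOC_CLASSES = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle',
               'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
               'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa',
               'train', 'tvmonitor']

labels = np.array(output_im)          # output_im is the palette image from crfasrnn_demo.py
for idx in np.unique(labels):
    print(idx, VOC_CLASSES[idx])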

Summarize the changes made to the Caffe code?

Thanks for the paper! Any chance you could add a summary of the changes made to the Caffe code? That, and/or the commit hash where you started the fork, would be great for figuring out where the relevant changes are.

Error when building caffe: Unknown CMake command "caffe_set_caffe_link"

I downloaded caffe and put it in the crfasrnn folder. After that, I went to the caffe folder and ran:

mkdir build
cd build
cmake ..

I got the error as follows:

mjohn@mjohn:~/crfasrnn/caffe/build$ cmake ..
-- Boost version: 1.54.0
-- Found the following Boost libraries:
--   system
--   thread
-- Found gflags  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libgflags.so)
-- Found glog    (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libglog.so)
-- Found PROTOBUF Compiler: /usr/bin/protoc
-- Found lmdb    (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/liblmdb.so)
-- Found LevelDB (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libleveldb.so)
-- Found Snappy  (include: /usr/include, library: /usr/lib/libsnappy.so)
-- CUDA detected: 7.5
-- Added CUDA NVCC flags for: sm_50
-- OpenCV found (/usr/local/share/OpenCV)
-- Found Atlas (include: /usr/include, library: /usr/lib/libatlas.so)
-- NumPy ver. 1.8.2 found (include: /usr/lib/python2.7/dist-packages/numpy/core/include)
-- Boost version: 1.54.0
-- Found the following Boost libraries:
--   python
CMake Error at CMakeLists.txt:50 (caffe_set_caffe_link):
  Unknown CMake command "caffe_set_caffe_link".


-- Configuring incomplete, errors occurred!
See also "/home/mjohn/crfasrnn/caffe/build/CMakeFiles/CMakeOutput.log".
See also "/home/mjohn/crfasrnn/caffe/build/CMakeFiles/CMakeError.log".

Could you help me solve it? I set the path for pycaffe beforehand. Note that if I use the original caffe from http://caffe.berkeleyvision.org/installation.html, it works.
export PYTHONPATH=${HOME}/crfasrnn/caffe/python:$PYTHONPATH

/multi_stage_meanfield.cpp

Hello ,

I am trying to build caffe with GPU support. Initially, while compiling, an error was displayed saying "could not find spatial.par ....", so I copied "spatial.par" to the path where it is expected, and added "bilateral.par" as well when it threw an error on that. I took these files from the python-scripts directory provided for Python users.

When I try to compile using "make all", I get the following warnings, but all other files compile cleanly.

src/caffe/layers/multi_stage_meanfield.cpp: In instantiation of 'void caffe::MultiStageMeanfieldLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]':
src/caffe/layers/multi_stage_meanfield.cpp:254:1: required from here
src/caffe/layers/multi_stage_meanfield.cpp:68:83: warning: format '%lf' expects argument of type 'double*', but argument 3 has type 'float*' [-Wformat=]
fscanf(pFile, "%lf", &this->blobs_[0]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp:75:83: warning: format '%lf' expects argument of type 'double*', but argument 3 has type 'float*' [-Wformat=]
fscanf(pFile, "%lf", &this->blobs_[1]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp: In member function 'void caffe::MultiStageMeanfieldLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]':
src/caffe/layers/multi_stage_meanfield.cpp:68:7: warning: ignoring return value of 'int fscanf(FILE*, const char*, ...)', declared with attribute warn_unused_result [-Wunused-result]
fscanf(pFile, "%lf", &this->blobs_[0]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp:75:7: warning: ignoring return value of 'int fscanf(FILE*, const char*, ...)', declared with attribute warn_unused_result [-Wunused-result]
fscanf(pFile, "%lf", &this->blobs_[1]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp: In member function 'void caffe::MultiStageMeanfieldLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = double]':
src/caffe/layers/multi_stage_meanfield.cpp:68:7: warning: ignoring return value of 'int fscanf(FILE*, const char*, ...)', declared with attribute warn_unused_result [-Wunused-result]
fscanf(pFile, "%lf", &this->blobs_[0]->mutable_cpu_data()[i * channels_ + i]);
^
src/caffe/layers/multi_stage_meanfield.cpp:75:7: warning: ignoring return value of 'int fscanf(FILE*, const char*, ...)', declared with attribute warn_unused_result [-Wunused-result]
fscanf(pFile, "%lf", &this->blobs_[1]->mutable_cpu_data()[i * channels_ + i]);

Also during "make runtest" it takes long time to test the file at ,

1 test from MultiStageMeanfieldLayerTest/2, where TypeParam = caffe::FloatGPU
[ RUN ] MultiStageMeanfieldLayerTest/2.TestGradient

Hence I stopped the test.

Once I have built this, I will try to use the model in Torch 7 via torch-caffe-binding.

Sorry for the long post

Regards
srikanth

undefined reference to testing error

Hi, I am having trouble compiling the custom caffe code. I keep getting these errors:

../lib/libcaffe.so: undefined reference to `testing::internal::MakeAndRegisterTestInfo(char const*, char const*, char const*, char const*, void const*, void (*)(), void (*)(), testing::internal::TestFactoryBase*)'
../lib/libcaffe.so: undefined reference to `testing::AssertionSuccess()'
../lib/libcaffe.so: undefined reference to `testing::Test::~Test()'
../lib/libcaffe.so: undefined reference to `testing::internal::IsTrue(bool)'
../lib/libcaffe.so: undefined reference to `typeinfo for testing::Test'
../lib/libcaffe.so: undefined reference to `testing::Test::SetUp()'
../lib/libcaffe.so: undefined reference to `testing::Test::TearDown()'
../lib/libcaffe.so: undefined reference to `testing::Test::Test()'
../lib/libcaffe.so: undefined reference to `testing::internal::String::Format(char const*, ...)'
../lib/libcaffe.so: undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
../lib/libcaffe.so: undefined reference to `testing::internal::AssertHelper::operator=(testing::Message const&) const'
../lib/libcaffe.so: undefined reference to `testing::internal::AssertHelper::AssertHelper(testing::TestPartResult::Type, char const*, int, char const*)'
../lib/libcaffe.so: undefined reference to `testing::internal::EqFailure(char const*, char const*, testing::internal::String const&, testing::internal::String const&, bool)'
collect2: error: ld returned 1 exit status
tools/CMakeFiles/extract_features.dir/build.make:120: recipe for target 'tools/extract_features' failed
make[2]: *** [tools/extract_features] Error 1
CMakeFiles/Makefile2:586: recipe for target 'tools/CMakeFiles/extract_features.dir/all' failed
make[1]: *** [tools/CMakeFiles/extract_features.dir/all] Error 2
Makefile:116: recipe for target 'all' failed
make: *** [all] Error 2

Could you explain why a custom Caffe was used?

Neither in the code nor in the paper can I find any information about which feature missing from standard Caffe made you build a custom version. Could you explain a bit here?

Thank you!

Out of memory when using batch_size higher than 3 during training

Hi!

I am trying to train CRF-RNN using part of the VOC 2012 data, but when I set batch_size higher than 3, I receive "error == cudaSuccess (2 vs. 0) out of memory". This is quite surprising to me because I am using an NVIDIA Tesla K40, so there should be enough memory. Moreover, the log does not report more required memory than I have. I can still train with only 3 images per batch, but I expect that the loss will then drop very slowly.

Questions

  • Did somebody encounter a similar issue, or is it just a problem on my side?
  • How large a batch size was used to create the publicly available model? (I did not find any mention of it in the paper.)

Thank you!

Martin

How to train my own model using my own data

Hi bittnt,

I have already installed crfasrnn and the code works well.
But the code you provided only contains the part that uses the trained model (TVG_CRFRNN_COCO_VOC.caffemodel) to get the result. How can I train a new model using my own data? Could you provide a pipeline for training a new model, and the prototxt file used in training?
Any response will be appreciated.

Best wishes,
Huayong

Failed to parse NetParameter file: TVG_CRFRNN_COCO_VOC.caffemodel

I am trying to run the python demo, but got the following error:

I1201 20:32:24.720659 13679 net.cpp:221] Network initialization done.
I1201 20:32:24.720666 13679 net.cpp:222] Memory required for data: 1287634208
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 537967400
F1201 20:32:25.981808 13679 upgrade_proto.cpp:638] Check failed: ReadProtoFromBinaryFile(param_file, param) Failed to parse NetParameter file: TVG_CRFRNN_COCO_VOC.caffemodel
*** Check failure stack trace: ***
[1]    13679 abort (core dumped)  python crfasrnn_demo.py

Updating weights with caffe solver

Hello,
Thanks for the source code and the demo; I've successfully downloaded, compiled, and run it (the tests also passed).
Now I want to start updating the weights stored in the blobs of the provided caffemodel, but when I load the model prototxt with the Caffe solver:
solver = caffe.SGDSolver(MODEL_FILE)
instead of loading the network for use it:
net = caffe.Segmenter(MODEL_FILE, PRETRAINED)
I have a libprotobuf error:
[libprotobuf ERROR google/protobuf/text_format.cc:290] Error parsing text-format caffe.SolverParameter: 1:6: Message type "caffe.SolverParameter" has no field named "input".
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1122 00:58:02.739593 28227 io.hpp:57] Check failed: ReadProtoFromTextFile(filename, proto)
*** Check failure stack trace: ***
Aborted (core dumped)

Can you give me some advice on what to do in order to load a solver for the model and update the weights?

Thank you very much,

Sebastian
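
For context, caffe.SGDSolver expects a solver prototxt (which in turn points at a train/test net with data and loss layers), not the deploy prototxt that caffe.Segmenter loads. A sketch mirroring the solve.py shown in the NYU training issue above; the file names are assumptions:

# Sketch: load a solver definition, pull in the released weights, run SGD steps.
# 'solver.prototxt' must reference a training net (e.g. TVG_CRFRNN_new_traintest.prototxt)
# with data and loss layers; the deploy prototxt will not parse as a SolverParameter.
import caffe

solver = caffe.SGDSolver('solver.prototxt')
solver.net.copy_from('TVG_CRFRNN_COCO_VOC.caffemodel')   # start from the released weights
solver.step(1000)                                        # run 1000 SGD iterations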

Determining the inference layer parameters

Hello,

In the crfasrnn paper, Section 6:

The compatibility transform parameters of the CRF-RNN were initialized using the Potts model, and kernel width and weight parameters were obtained from a cross-validation process

Can you please shed some more light on the cross-validation process?

Thanks
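
The paper does not spell the procedure out, but cross-validating kernel widths and weights generally amounts to a grid search scored on a held-out split, roughly as in the sketch below. build_net_with_params and mean_iou_on are hypothetical helpers, and the candidate values are placeholders, not the authors' settings.

# Illustration only, not the authors' actual procedure: grid-search the CRF
# kernel width/weight settings and keep the one with the best validation mIoU.
# `build_net_with_params` and `mean_iou_on` are hypothetical helpers.
import itertools

widths = [1.0, 3.0, 5.0]         # candidate kernel widths (placeholders)
weights = [1.0, 3.0, 10.0]       # candidate kernel weights (placeholders)

best = None
for w, wt in itertools.product(widths, weights):
    net = build_net_with_params(kernel_width=w, kernel_weight=wt)
    score = mean_iou_on(net, validation_set)
    if best is None or score > best[0]:
        best = (score, w, wt)
print('best mIoU %.3f at width=%.1f, weight=%.1f' % best)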

mean_vec values

mean_vec = np.array([103.939, 116.779, 123.68], dtype=np.float32); this is from the demo. Is each value in the array the mean of the corresponding channel of the specific input image?
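
For context, the demo treats mean_vec as a fixed per-channel (BGR) mean subtracted from every input image, not a statistic computed from the image itself. A minimal sketch of how it is applied:

# Sketch of how mean_vec is used in the demo: a fixed per-channel (BGR) mean,
# broadcast over the whole image, rather than a per-image statistic.
import numpy as np
from PIL import Image as PILImage

mean_vec = np.array([103.939, 116.779, 123.68], dtype=np.float32)

im = np.asarray(PILImage.open('input.jpg'), dtype=np.float32)
im = im[:, :, ::-1]                    # RGB -> BGR to match the mean's channel order
im = im - mean_vec.reshape(1, 1, 3)    # subtract the same mean from every pixel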

Cuda versions

Are there CUDA versions of this code? I can only see the forward and backward CPU versions and no .cu files. The runtime is very slow.

Instance Segmentation

As I understand from the paper and the video lecture (http://www.robots.ox.ac.uk/~szheng/CRFasRNN.html),
the demo segmentation can separate multiple objects close to each other (e.g. separate 3 close boats instead of recognizing them as a single big boat), and it can also add a ground-truth layer which gives a more accurate result and draws the contour of each object.

When I run the demo I can't see these abilities. How can I add them?

thanks

Error while running crfasrnn_demo.py - No module named loggingnet

Hi, when I run crfasrnn_demo.py, I get the following error:

File "crfasrnn-master/python-scripts/crfasrnn_demo.py", line 22, in <module>
import loggingnet
ImportError: No module named loggingnet

I am new to caffe, but I tried the default caffe demo (which comes with the caffe environment) and it worked correctly, and all the requirements are installed (I use Anaconda 2 for Python).

Maybe this error occurs due to wrong environment settings?
My Environment Variables are:

  1. PYTHONPATH=/home/limor/anaconda2/lib/python2.7:/home/limor/caffe-master/python:
  2. LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:
  3. PATH=/usr/local/cuda-7.5/bin:/home/limor/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
  4. CAFFE_ROOT=/home/limor/caffe-master
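
One thing worth checking, independent of the missing loggingnet module itself: the PYTHONPATH above points at caffe-master, while the scripts in this repository insert the bundled fork's python directory themselves (see the solve.py excerpt in the NYU training issue). A sketch of that path setup:

# Sketch of the path setup used by the repository's python-scripts: make sure the
# bundled caffe-crfrnn python package is the one imported, not a stock caffe-master
# installation listed earlier on PYTHONPATH.
import sys
sys.path.insert(0, '../caffe-crfrnn/python')   # relative to python-scripts/, as in solve.py
import caffe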

Attempt at reconciling caffe-crfrnn with upstream caffe

I am mostly trying to compile this on a Mac, with CPU_ONLY,
so I thought it would be easier to add the differences on top of upstream caffe.

Here is my fork https://github.com/mtourne/crfasrnn, where caffe is a submodule that points here :
https://github.com/mtourne/caffe/tree/crfasrnn
(this makes it a lot easier to see what diverges if caffe-crfrnn stems off from the actual caffe git).

Everything builds and the tests pass (including the meanfield layer one), but importing caffe in Python segfaults.

Would you have any ideas so I can take this further?
Thank you

Error while compiling crfasrnn code

Hi,

I am trying to compile the code, but get the following error:

../lib/libcaffe.so: undefined reference to `testing::internal::MakeAndRegisterTestInfo(char const*, char const*, char const*, char const*, void const*, void (*)(), void (*)(), testing::internal::TestFactoryBase*)'
../lib/libcaffe.so: undefined reference to `testing::AssertionSuccess()'
../lib/libcaffe.so: undefined reference to `testing::Test::~Test()'
../lib/libcaffe.so: undefined reference to `testing::internal::IsTrue(bool)'
../lib/libcaffe.so: undefined reference to `typeinfo for testing::Test'
../lib/libcaffe.so: undefined reference to `testing::Test::SetUp()'
../lib/libcaffe.so: undefined reference to `testing::Test::TearDown()'
../lib/libcaffe.so: undefined reference to `testing::Test::Test()'
../lib/libcaffe.so: undefined reference to `testing::internal::String::Format(char const*, ...)'
../lib/libcaffe.so: undefined reference to `testing::internal::AssertHelper::~AssertHelper()'
../lib/libcaffe.so: undefined reference to `testing::internal::AssertHelper::operator=(testing::Message const&) const'
../lib/libcaffe.so: undefined reference to `testing::internal::AssertHelper::AssertHelper(testing::TestPartResult::Type, char const*, int, char const*)'
../lib/libcaffe.so: undefined reference to `testing::internal::EqFailure(char const*, char const*, testing::internal::String const&, testing::internal::String const&, bool)'
collect2: error: ld returned 1 exit status
make[2]: *** [tools/caffe] Error 1
make[1]: *** [tools/CMakeFiles/caffe.bin.dir/all] Error 2
make: *** [all] Error 2

I am using Ubuntu 14.04 and followed the steps mentioned on the web page during installation.

It would be great if you could help me resolve this error.

Thanks.

P.S.: I tried to compile caffe and it worked successfully.

Unable to import caffe

When attempting to run the Python demo script I get the following error. I am having trouble identifying what is missing; I think it could be an environment variable. Any suggestions on what could be causing this?

File "/var/www/crfasrnn_original/python-scripts/demo_test.py",

line 29, in import caffe File "../caffe-crfrnn/python/caffe/init.py",

line 1, in from .pycaffe import Net, SGDSolver File "../caffe-crfrnn/python/caffe/pycaffe.py",

line 10, in from ._caffe import Net, SGDSolver ImportError: libcudart.so.6.5: cannot open shared object file: No such file or directory

Out of memory with a 2 GB GPU while the network needs only 0.5 GB

Hi,
I receive an error telling me that my memory is not enough. I have already reduced the image size to 228x304 with only 10 labels, and batch size = 1. The log says "Memory required for data: 528391700". I am using a 680 and have 2 GB of video RAM. It still gives this error. Any idea why this is happening?

I0429 19:27:37.009716 3199 net.cpp:298] Network initialization done.
I0429 19:27:37.009724 3199 net.cpp:299] Memory required for data: 528391700
I0429 19:27:37.009837 3199 solver.cpp:65] Solver scaffolding done.
I0429 19:27:37.009905 3199 caffe.cpp:128] Finetuning from /home/snake/caffe_crfrnn/crfasrnn-master/python-scripts/TVG_CRFRNN_COCO_VOC.caffemodel
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 537968303
I0429 19:27:37.648407 3199 upgrade_proto.cpp:620] Attempting to upgrade input file specified using deprecated V1LayerParameter: /home/snake/caffe_crfrnn/crfasrnn-master/python-scripts/TVG_CRFRNN_COCO_VOC.caffemodel
I0429 19:27:37.919209 3199 upgrade_proto.cpp:628] Successfully upgraded file specified using deprecated V1LayerParameter
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 537968303
I0429 19:27:38.654881 3199 upgrade_proto.cpp:620] Attempting to upgrade input file specified using deprecated V1LayerParameter: /home/snake/caffe_crfrnn/crfasrnn-master/python-scripts/TVG_CRFRNN_COCO_VOC.caffemodel
I0429 19:27:38.930379 3199 upgrade_proto.cpp:628] Successfully upgraded file specified using deprecated V1LayerParameter
I0429 19:27:39.024951 3199 caffe.cpp:211] Starting Optimization
I0429 19:27:39.025022 3199 solver.cpp:293] Solving TVG_CRF_RNN_COCO_VOC_TRAIN_3_CLASSES
I0429 19:27:39.025032 3199 solver.cpp:294] Learning Rate Policy: fixed
F0429 19:27:39.100859 3199 syncedmem.cpp:58] Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***
@ 0x7f0134bd7daa (unknown)
@ 0x7f0134bd7ce4 (unknown)
@ 0x7f0134bd76e6 (unknown)
@ 0x7f0134bda687 (unknown)
@ 0x7f01352ef9e1 caffe::SyncedMemory::to_gpu()
@ 0x7f01352eed69 caffe::SyncedMemory::mutable_gpu_data()
@ 0x7f01351e2472 caffe::Blob<>::mutable_gpu_data()
@ 0x7f013523a070 caffe::BaseConvolutionLayer<>::forward_gpu_gemm()
@ 0x7f01352f9f21 caffe::ConvolutionLayer<>::Forward_gpu()
@ 0x7f01351f4751 caffe::Net<>::ForwardFromTo()
@ 0x7f01351f4ac7 caffe::Net<>::ForwardPrefilled()
@ 0x7f013521b279 caffe::Solver<>::Step()
@ 0x7f013521bac5 caffe::Solver<>::Solve()
@ 0x408f3b train()
@ 0x406931 main
@ 0x7f01340e9ec5 (unknown)
@ 0x40701d (unknown)
@ (nil) (unknown)
