
vision_visp's Introduction

ViSP stack for ROS

GPL-2

1. Introduction

ROS 2 vision_visp contains packages to interface ROS 2 with ViSP, a library designed for visual-servoing and visual tracking applications. This repository contains:

  • visp_bridge: Bridge between ROS 2 image and geometry messages and ViSP image and 3D transformation representation.
  • visp_tracker: ViSP model-based tracker interfaced in ROS 2 and initialized from a client that requires user interaction.
  • visp_auto_tracker: ViSP model-based tracker interfaced in ROS 2 and initialized thanks to a marker (AprilTag, QRcode, flashcode). Recovers when tracking fails.
  • visp_camera_calibration: ViSP based tool to calibrate camera intrinsic parameters.
  • visp_handeye_calibration: ViSP based tool to estimate the robot end-effector to camera geometric transformation.

2. Install dependencies

2.1. Install ROS 2

First, this assumes that the ROS 2 core has already been installed; please refer to the ROS 2 installation documentation to get started.

2.2. Install ViSP

Please refer to the official installation guide from ViSP installation tutorials.

3. Build vision_visp

Fetch the latest code and build

$ cd <YOUR_ROS2_WORKSPACE>/src
$ git clone https://github.com/lagadic/vision_visp.git -b rolling
$ cd ..
$ colcon build --symlink-install

If ViSP is not found, use the VISP_DIR CMake variable to point to your $VISP_WS/visp-build folder, for example:

$ colcon build --symlink-install --cmake-args -DVISP_DIR=$VISP_WS/visp-build

4. Usage

  • To run visp_auto_tracker launch:

    $ ros2 launch visp_auto_tracker tutorial_launch.xml
    
  • To run visp_tracker launch:

    $ ros2 launch visp_tracker tutorial_launch.xml
    

vision_visp's People

Contributors

filipnovotny, fspindle, marcoesposito1988, nickvaras, thomas-moulard, vrabaud


vision_visp's Issues

Run visp_tracker tutorial.launch problem

Hello,

When I tried to run visp_tracker tutorial.launch on Ubuntu 12.04 & ROS Fuerte, I got the following error:

"cannot launch node of type [rosbag/rosbag]: can't locate node [rosbag] in package [rosbag]"

Then I found one solution:
In the launch file

it seems to work now as:

Is this the only solution for this error?

Thank you

visp_auto_tracker does not output pose data and unable to track properly

visp_auto_tracker was working perfectly fine for me until today. After I ran 'sudo apt-get upgrade' and started visp_auto_tracker, I got the message below repeatedly as the webcam tracked the QR code.

*********** Parsing XML for Mb Edge Tracker ************
ecm : mask : size : 5
ecm : mask : nb_mask : 180
ecm : range : tracking : 10
ecm : contrast : threshold 5000
ecm : contrast : mu1 0.5
ecm : contrast : mu2 0.5
sample : sample_step : 4
sample : n_total_sample : 250
klt : Mask Border : 0
klt : Max Features : 10000
klt : Windows Size : 5
klt : Quality : 0.01
klt : Min Distance : 20
klt : Harris Parameter : 0.01
klt : Block Size : 3
klt : Pyramid Levels : 3
face : Angle Appear : 75
face : Angle Disappear : 75
camera : u0 : 325.03 (default)
camera : v0 : 219.983 (default)
camera : px : 523.553 (default)
camera : py : 520.561 (default)
(L0) !! /tmp/buildd/ros-hydro-visp-2.9.0-4precise-20140617-1429/src/camera/vpCameraParameters.cpp: get_K(#403) :
getting K matrix in the case of projection with distortion has no sense
Tracking failed
getting K matrix in the case of projection with distortion has no sense
Tracking done in 33.239 ms

When I checked "rostopic echo /visp_auto_tracker/object_position", I saw that I was no longer getting any pose data from the webcam. The tracking markers that were supposed to show up on the debug display window were no longer there either; I noticed there were only 8 markers at the corners of the QR code, labelled mi1, mi2, mi3, mi4, mo1, mo2, mo3, mo4.
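A note on the camera lines in this log: the four values printed (u0, v0, px, py) are ViSP's pinhole intrinsics, and the get_K warning appears because the K matrix is only defined for the projection model without distortion. A minimal sketch of how those four values project a 3D point to pixel coordinates, using the "(default)" values from the log above (pure Python, for illustration only):

```python
def project(point_3d, px, py, u0, v0):
    """Pinhole projection: u = px * X/Z + u0, v = py * Y/Z + v0,
    for a point (X, Y, Z) expressed in the camera frame in metres."""
    x, y, z = point_3d
    return (px * x / z + u0, py * y / z + v0)

# The "(default)" intrinsics printed by the tracker above.
px, py, u0, v0 = 523.553, 520.561, 325.03, 219.983

# A point 1 m ahead on the optical axis projects to the principal point.
u, v = project((0.0, 0.0, 1.0), px, py, u0, v0)
print(u, v)  # -> 325.03 219.983
```

This model ignores distortion entirely, which is exactly why ViSP refuses to build K when the with-distortion projection model is selected.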

visp_tracker reconfigure produces an OpenCV error

When running

% roslaunch visp_tracker tutorial.launch

and then trying to modify the tracker parameters (like sample_step) using

% rosrun rqt_reconfigure rqt_reconfigure

you will get the following OpenCV error:

OpenCV Error: Assertion failed ((npoints = prevPtsMat.checkVector(2, CV_32F, true)) >= 0) in calcOpticalFlowPyrLK, file /build/opencv-SviWsf/opencv-2.4.9.1+dfsg/modules/video/src/lkpyramid.cpp, line 845

visp_tracker Tutorial Broken?

Hello,

I'm following the Getting started tutorial of the visp_tracker package, and I believe I'm experiencing some unusual behavior when setting the initial pose by clicking in the UI client. Upon the second left mouse click, for specifying the second vertex in the image, the GUI suddenly closes and reopens, after which a new image view renders a flickering pose estimate, presumably resulting from the unfinished pose initialization. The behavior is recorded in the GIF below:
anim

System info:

  • OS: Ubuntu 14.04 LTS
  • Distro: ROS Indigo
  • Package: ros-indigo-visp-tracker
    • Version: 0.8.1-0trusty-20150806-2245-+0000

vision_visp not building with OpenCV 2.4

Sorry, we had to change some API to make SURF/SIFT part of a non-free module, but the changes are fairly minor:

diff ViSP-2.6.1/include/visp//vpFernClassifier.h ./ViSP-2.6.1New/include/visp//vpFernClassifier.h
51c51,55
< #if (VISP_HAVE_OPENCV_VERSION >= 0x020101) // Require opencv >= 2.1.1

---
> #if (VISP_HAVE_OPENCV_VERSION >= 0x020400) // Require opencv >= 2.4.0
> #  include <opencv2/imgproc/imgproc.hpp>
> #  include <opencv2/features2d/features2d.hpp>
> #  include <opencv2/legacy/legacy.hpp>
> #elif (VISP_HAVE_OPENCV_VERSION >= 0x020101) // Require opencv >= 2.1.1
Only in ./ViSP-2.6.1New/include/visp/: vpFernClassifier.h~
diff ViSP-2.6.1/include/visp//vpKeyPointSurf.h ./ViSP-2.6.1New/include/visp//vpKeyPointSurf.h
64c64,67
< #if (VISP_HAVE_OPENCV_VERSION >= 0x020101) // Require opencv >= 2.1.1

---
> #if (VISP_HAVE_OPENCV_VERSION >= 0x020400) // Require opencv >= 2.4.0
> #  include <opencv2/features2d/features2d.hpp>
> #  include <opencv2/legacy/compat.hpp>
> #elif (VISP_HAVE_OPENCV_VERSION >= 0x020101) // Require opencv >= 2.1.1
Only in ./ViSP-2.6.1New/include/visp/: vpKeyPointSurf.h~
diff ViSP-2.6.1/include/visp//vpPlanarObjectDetector.h ./ViSP-2.6.1New/include/visp//vpPlanarObjectDetector.h
50c50,55
< #if (VISP_HAVE_OPENCV_VERSION >= 0x020101) // Require opencv >= 2.1.1

---
> #if (VISP_HAVE_OPENCV_VERSION >= 0x020400) // Require opencv >= 2.4.0
> #  include <opencv2/imgproc/imgproc.hpp>
> #  include <opencv2/features2d/features2d.hpp>
> #  include <opencv2/calib3d/calib3d.hpp>
> #  include <opencv2/legacy/legacy.hpp>
> #elif (VISP_HAVE_OPENCV_VERSION >= 0x020101) // Require opencv >= 2.1.1

Please re-release on fuerte.

bug report: visp_auto_tracker/src/node.cpp:135:40: error: ‘vpDetectorQRCode’ does not name a type

Environment: x86_64, Ubuntu 16.04, ROS Kinetic
I'll try to fix it.

[ 97%] Linking CXX shared library /home/rapyuta/rapyuta/rapyuta_ws/devel/lib/libauto_tracker.so
[ 97%] Built target auto_tracker
[ 97%] Building CXX object vision_visp/visp_auto_tracker/CMakeFiles/visp_auto_tracker.dir/src/node.cpp.o
[ 98%] Building CXX object vision_visp/visp_auto_tracker/CMakeFiles/visp_auto_tracker.dir/src/names.cpp.o
[ 98%] Building CXX object vision_visp/visp_auto_tracker/CMakeFiles/visp_auto_tracker.dir/src/main.cpp.o
/home/rapyuta/rapyuta/rapyuta_ws/src/vision_visp/visp_auto_tracker/src/node.cpp: In member function ‘void visp_auto_tracker::Node::spin()’:
/home/rapyuta/rapyuta/rapyuta_ws/src/vision_visp/visp_auto_tracker/src/node.cpp:135:40: error: ‘vpDetectorQRCode’ does not name a type
                         detector = new vpDetectorQRCode;
                                        ^
/home/rapyuta/rapyuta/rapyuta_ws/src/vision_visp/visp_auto_tracker/src/node.cpp:137:40: error: ‘vpDetectorDataMatrixCode’ does not name a type
                         detector = new vpDetectorDataMatrixCode;
                                        ^
vision_visp/visp_auto_tracker/CMakeFiles/visp_auto_tracker.dir/build.make:110: recipe for target 'vision_visp/visp_auto_tracker/CMakeFiles/visp_auto_tracker.dir/src/node.cpp.o' failed
make[2]: *** [vision_visp/visp_auto_tracker/CMakeFiles/visp_auto_tracker.dir/src/node.cpp.o] Error 1
CMakeFiles/Makefile2:5702: recipe for target 'vision_visp/visp_auto_tracker/CMakeFiles/visp_auto_tracker.dir/all' failed
make[1]: *** [vision_visp/visp_auto_tracker/CMakeFiles/visp_auto_tracker.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j4 -l4" failed

edit 2017.01.11
[solved]
Yes, I missed $ sudo apt-get install libzbar-dev; after installing it, a fresh build with a cleaned cache fixed the error.
I should have read this carefully: http://visp-doc.inria.fr/doxygen/visp-daily/tutorial-install-ubuntu.html
Anyway, thanks to @fspindle for the patience and time. YOLO!

Catkin / Hydro Plans?

Hi, I have a team interested in possibly using ViSP with Baxter in ROS Hydro, and I'm just wondering whether any work has already been done on catkinizing this stack and whether there are any obstacles to doing so. Thanks!

Error with BOOST_FILESYSTEM_VERSION

While trying to package vision_visp for Arch Linux (Boost 1.55), I had the following problem with visp_tracker:

/usr/include/boost/filesystem/config.hpp:16:5: error: #error Compiling Filesystem version 3 file with BOOST_FILESYSTEM_VERSION defined != 3
 #   error Compiling Filesystem version 3 file with BOOST_FILESYSTEM_VERSION defined != 3
     ^

This is caused by this line:

# Make sure Boost.Filesystem v2 is used.
add_definitions(-DBOOST_FILESYSTEM_VERSION=2)

AFAIK, V2 support was removed in Boost 1.50, so this may be worth fixing, even for Ubuntu users.

Some functions were also removed: http://www.boost.org/doc/libs/1_45_0/libs/filesystem/v3/doc/deprecated.html

RGB to BGR

Hi,

Currently I am using the auto tracker. It is working fine, except that the output has red and blue swapped.

I am using
roslaunch uvc_cam test_uvc.launch
and I can verify that the image is in the right format: red is red, blue is blue. The image is published under /camera/image_raw. Then I call visp_auto_tracker with the following launch file:

<launch>
  <!-- Launch the tracking node -->
  <node pkg="visp_auto_tracker" type="visp_auto_tracker" name="visp_auto_tracker" output="screen">
    <param name="model_path" type="string" value="$(find visp_auto_tracker)/models" />
    <param name="model_name" type="string" value="pattern" />
    <param name="debug_display" type="bool" value="True" />
    <remap from="/visp_auto_tracker/camera_info" to="/camera/camera_info"/>
    <remap from="/visp_auto_tracker/image_raw" to="/camera/image_raw"/>
  </node>
</launch>

I know this is a simple problem. I would be glad if you could point me in the right direction so that I can solve it. Thanks in advance.

visp_tracker (tracker_viewer) doesn't work with klt

Hi there,

So I was testing visp_tracker: when I use the default method (mbt) everything works, but not when I pass the parameter tracker_type as klt through the launch file, as follows:

 <node pkg="nodelet" type="nodelet" name="visp_tracker_client" output="screen"
        args="load visp_tracker/TrackerClient $(arg manager)">
    <param name="model_path" value="package://visp_tracker/models" />
    <param name="tracker_type" value="klt" />
    <param name="model_name" value="laas-box" />

    <!-- Load recommended settings for tracking initialization. They
      will be automatically forwarded to the tracking node if the
      initialization succeed. -->
    <rosparam file="$(find visp_tracker)/models/laas-box/tracker.yaml" />
  </node>

The node gets stuck in the waitForImage() method with the periodic message:

waiting for a rectified image...

Any idea why? A quick look at the code shows that in both cases the image subscriber subscribes to the same topic, so I don't really see why this is happening.
Thanks in advance for your feedback
Cheers!

No bag file

I've installed the visp_auto_tracker package and there is no .bag file for testing with ROS:
roslaunch visp_auto_tracker tutorial.launch

visp_hand2eye_calibration documentation -- correct the message types taken in

Hi, quick and easy fix --

The documentation for visp_hand2eye_calibration states that the message types that are subscribed to for world_effector and camera_object are visp_hand2eye_calibration/TransformArray. They are actually geometry_msgs::Transform. This threw me for a bit of a loop, wondering why my node publishers weren't matching up.

I'd like to update this wiki myself but need to wait to get whitelisted for editing ROS wikis. I'll close this issue if/when I change it.

Thanks for making visp_hand2eye_calibration!

/object_position dimension

Hello there,
First of all, I want to thank you for the effort of providing this powerful tool for robotics projects.
I am having a minor issue: I want to control the head motion of my robot using the information provided by the visp_tracker node, with a certain .launch file I have in order to get images from my robot's cameras.
The point is, how can I convert the values given by the /object_position topic to real-world values?
I am not sure if I missed something, but I have been reading several pages over the last few days and I haven't found the answer to this, which is the main issue for me.
Thanks for your attention.
Best regards.
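One point that may help: /object_position carries a translation plus a unit quaternion expressed in the camera frame, and the translation is already metric as long as the model file (.wrl/.cao) is specified in metres. So "converting to real-world values" mostly means applying that pose, i.e. rotating by the quaternion and adding the translation. A minimal pure-Python sketch (no ROS dependencies; the numbers are illustrative, not from a real tracker run):

```python
def quat_to_matrix(qx, qy, qz, qw):
    """Unit quaternion -> 3x3 rotation matrix (row-major nested lists)."""
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]

def object_point_in_camera_frame(translation, quaternion, point):
    """Express a point given in the object frame in the camera frame,
    using a pose like the one published on /object_position."""
    R = quat_to_matrix(*quaternion)
    return tuple(
        sum(R[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

# Identity orientation, object 0.5 m in front of the camera (illustrative):
p = object_point_in_camera_frame((0.0, 0.0, 0.5),
                                 (0.0, 0.0, 0.0, 1.0),
                                 (0.1, 0.0, 0.0))
print(p)  # -> (0.1, 0.0, 0.5)
```

If the pose looks plausible but the scale is wrong, the usual culprit is a model file whose dimensions are not in metres.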

Is there any way to speed up visp_auto_tracker?

I am using the indigo branch to track a QR code, but the detection and redetection process takes a lot of time. Is there any way to optimize the code, for example with a multi-threaded program? And how could that be done? I am sorry for my poor English, and thanks in advance.

Unable to find visp dependency with ViSP 2.7.1

Hi Thomas,

When I build the vision_visp/visp package with the ViSP 2.7.1 release candidate, other packages are unable to find the ViSP headers and libraries.

$ rosmake visp
ok
$ rosmake visp_tracker
Building CXX object CMakeFiles/visp_tracker.dir/src/libvisp_tracker/tracker.o
/home/fspindle/fuerte_workspace/vision_visp/visp_tracker/src/libvisp_tracker/conversion.cpp:14:26: fatal error: visp/vpImage.h: No such file or directory
compilation terminated.

The issue is related to the changes done in visp-config, which now uses pkg-config visp --cflags. In our case visp.pc is not in the PKG_CONFIG_PATH environment variable:
$ pkg-config visp --cflags
No package 'visp' found

To reproduce the error, edit vision_visp/visp/Makefile and put

VERSION = 2.7.1-rc1
TARBALL = build/ViSP-$(VERSION).tar.gz
TARBALL_URL =
https://gforge.inria.fr/frs/download.php/32728/ViSP-2.7.1-rc1.zip
UNPACK_CMD = unzip
SOURCE_DIR = build/ViSP-$(VERSION)

MD5SUM_FILE = ViSP-$(VERSION).tar.gz.md5sum

roscd visp
make wipe
rosdep install visp
rosmake visp

A workaround is to set PKG_CONFIG_PATH (in my case):
export PKG_CONFIG_PATH=$HOME/fuerte_workspace/vision_visp/visp/install/lib/pkgconfig

then remove the visp_tracker/build folder and run rosmake vision_visp.

I don't know how to fix this issue properly. I could revert the changes done in visp-config, but I don't think that is the best solution.

Any ideas?

Fabien

Missing zbar/libdmtx dependencies in visp_auto_tracker

While packaging visp_auto_tracker for Arch Linux, I found missing dependencies. Indeed, the following lines can be found in package.xml:

<buildtool_depend>catkin</buildtool_depend>
<build_depend>visp_bridge</build_depend>
<build_depend>visp_tracker</build_depend>
<!-- to fix: add zbar and libdmtx-dev dependencies -->
<!-- do it manually using: sudo apt-get install libzbar-dev libdmtx-dev -->

Why was this just commented out and not fixed? It seems that rosdep is up to date.

Using live camera stream as Input data for visp_tracker

Greetings,

I am trying to run visp_tracker with my own camera and it fails: the GUI does not start.
visp_auto_tracker works with my camera.
I exchanged the bag stream for a live camera stream in visp_tracker using remap in tutorial.launch with the following lines, and removed the playback of the bag file:

<!-- Launch the tracking node -->
<node pkg="visp_tracker" type="tracker" name="tracker_mbt">
  <param name="camera_prefix" value="/wide_left/camera" />

  <remap from="/wide_left/camera/image_rect" to="/camera/acA2000_50gc/image_raw" />
  <remap from="/wide_left/camera/camera_info" to="/camera/acA2000_50gc/camera_info" />

  <param name="tracker_type" value="mbt+klt" />
</node>

When I start the launch file the processes are starting but no GUI appears:

process[rosout-1]: started with pid [14521]
started core service [/rosout]
process[tracker_mbt-2]: started with pid [14539]
process[tracker_mbt_client-3]: started with pid [14540]
process[tracker_mbt_viewer-4]: started with pid [14541]
process[base_link_to_camera-5]: started with pid [14542]
process[tf_localization-6]: started with pid [14550]

Does anyone know what the problem could be? I changed nothing other than the launch file. The camera is also running under the new topic, since I can echo it. Could it be a problem with the calibration?

Kind regards
Lyon

How to publish QR-code as TF in rviz

Hi

I can detect the QR-code when I run roslaunch visp_auto_tracker tracklive_usb.launch,
and I can check the QR-code's pose with rostopic echo /visp_auto_tracker/object_position.

But how can I see the QR-code in rviz as a TF frame? The reason is that I want to do visual servoing with a UR5 arm: I already have the TF of the UR5's end-effector in rviz, and I am planning to do object tracking somehow.

Thanks
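To get the code into TF you would republish the pose from /visp_auto_tracker/object_position as a transform whose parent is the camera frame and whose child is e.g. qr_code; rviz then chains it with the transforms you already have for the UR5. The chaining itself is just a product of homogeneous matrices, base_T_object = base_T_camera * camera_T_object. A pure-Python illustration under made-up transforms (in a real setup tf performs this chaining for you; the frame values below are assumptions, not tracker output):

```python
def mat_mul(A, B):
    """Product of two 4x4 homogeneous transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """Pure-translation homogeneous transform."""
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

# base_T_camera: camera mounted 1 m above the robot base (made up).
base_T_camera = translation(0.0, 0.0, 1.0)
# camera_T_object: QR code reported 0.5 m in front of the camera (made up).
camera_T_object = translation(0.0, 0.0, 0.5)

base_T_object = mat_mul(base_T_camera, camera_T_object)
print(base_T_object[2][3])  # -> 1.5
```

With real data the rotation blocks are not identity, but the composition rule is the same one tf applies when both transforms are in the tree.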

[head_camera] does not match name /camera/image_raw

I have the following launch file to use visp_auto_tracker with two cameras:

    <launch>	
      <!-- Launch the tracking node  -->
      <node pkg="visp_auto_tracker" type="visp_auto_tracker" name="visp_auto_tracker" output="screen">
        <param name="model_path" value="$(find visp_auto_tracker)/models" />
        <param name="model_name" value="pattern" />
        <param name="debug_display" value="True" />
      </node>
    
    <!-- camera transforms -->
    	<node pkg="tf" type="static_transform_publisher" 
    		name="left_cam_tf" args="0 0 1.0 -1.570796 0 -1.570796 /map /left_camera 100"/>
    	<node pkg="tf" type="static_transform_publisher" 
    		name="right_cam_tf" args="0.0 -0.25 1.0 -1.570796 0 -1.570796 /map /right_camera 100"/>
    
     <!-- left cam-->
      <!-- Launch the usb camera acquisition node -->
    <group ns="left_cam">
      <node name="usb_cam_left" pkg="usb_cam" type="usb_cam_node"  output="screen">      
        <param name="image_width" value="640" />
        <param name="image_height" value="480" />
        <param name="video_device" value="/dev/video1" />      
        <param name="pixel_format" value="yuyv" />
        <param name="framerate" value="30" />
        <param name="camera_frame_id" value="left_camera" />
        <param name="camera_info_url" value="package://visp_auto_tracker/models/calibration_left.ini" type="string" />
      </node>
      <node pkg="visp_auto_tracker" type="visp_auto_tracker" name="visp_auto_tracker" output="screen">
        <param name="model_path" value="$(find visp_auto_tracker)/models" />
        <param name="model_name" value="pattern" />
        <param name="debug_display" value="True" />
    	<remap from="/visp_auto_tracker/image_raw" to="/left_cam/usb_cam_left/image_raw"/>
        	<param name="autosize" value="true" />
      	<remap from="/visp_auto_tracker/camera_info" to="/left_cam/usb_cam_left/camera_info"/>
      </node>
     </group>
    
    
    <!-- Right cam-->
      <!-- Launch the usb camera acquisition node -->
    <group ns="right_cam">
      <node name="usb_cam_right" pkg="usb_cam" type="usb_cam_node"  output="screen">           
        <param name="image_width" value="640" />
        <param name="image_height" value="480" />
        <param name="video_device" value="/dev/video2" />      
        <param name="pixel_format" value="yuyv" />
        <param name="framerate" value="30" />
        <param name="camera_frame_id" value="right_camera" />
        <param name="camera_info_url" value="package://visp_auto_tracker/models/calibration_right.ini" type="string" />
      </node>
      <node pkg="visp_auto_tracker" type="visp_auto_tracker" name="visp_auto_tracker" output="screen">
        <param name="model_path" value="$(find visp_auto_tracker)/models" />
        <param name="model_name" value="pattern" />
        <param name="debug_display" value="True" />
    	<remap from="/visp_auto_tracker/image_raw" to="/right_cam/usb_cam_right/image_raw"/>
        	<param name="autosize" value="true" />
      	<remap from="/visp_auto_tracker/camera_info" to="/right_cam/usb_cam_right/camera_info"/>
      </node>
    </group>
    
    <!-- Launch visualizer -->
    	<node name="rviz_node" pkg="rviz" type="rviz" />
      
    </launch>

However, when I run this, I get the warnings:

[ INFO] [1515587518.127209475]: camera calibration URL: package://visp_auto_tracker/models/calibration_right.ini
  [ WARN] [1515587518.127562984]: [head_camera] does not match name /camera/image_raw in file /home/jeroen/catkin_ws/src/vision_visp/visp_auto_tracker/models/calibration_right.ini
  [ INFO] [1515587518.127615190]: Starting 'head_camera' (/dev/video2) at 640x480 via mmap (yuyv) at 30 FPS
  [ INFO] [1515587518.217836910]: camera calibration URL: package://visp_auto_tracker/models/calibration_left.ini
  [ WARN] [1515587518.218137831]: [head_camera] does not match name /camera/image_raw in file /home/jeroen/catkin_ws/src/vision_visp/visp_auto_tracker/models/calibration_left.ini
  [ INFO] [1515587518.218179513]: Starting 'head_camera' (/dev/video1) at 640x480 via mmap (yuyv) at 30 FPS

Illegal instruction when launching visp_hand2eye_calibration_calibrator

I don't know if this is the right place to report this bug, so excuse me if I am wrong. This applies to the .deb variant, not the git one (I will try that one next).

I try to run the demo for visp_hand2eye_calibration_calibrator, but I get an illegal-instruction error. With gdb:

Starting program: /opt/ros/fuerte/stacks/vision_visp/visp_hand2eye_calibration/bin/visp_hand2eye_calibration_client
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/i386-linux-gnu/libthread_db.so.1".
[New Thread 0xb0225b40 (LWP 3551)]
[New Thread 0xafa24b40 (LWP 3552)]
[New Thread 0xaf0ffb40 (LWP 3553)]
[New Thread 0xae4ffb40 (LWP 3558)]
[ INFO] [1370699784.837251767]: Waiting for topics...

Program received signal SIGILL, Illegal instruction.
0xb7eb0a4c in vpRotationVector::init (this=0xbfffe948, size=3)
at /tmp/buildd/ros-fuerte-vision-visp-0.5.0/debian/ros-fuerte-vision-visp/opt/ros/fuerte/stacks/vision_visp/visp/build/ViSP-2.6.2/src/math/transformation/vpRotationVector.cpp:117
117 /tmp/buildd/ros-fuerte-vision-visp-0.5.0/debian/ros-fuerte-vision-visp/opt/ros/fuerte/stacks/vision_visp/visp/build/ViSP-2.6.2/src/math/transformation/vpRotationVector.cpp: No such file or directory.

visp_auto_tracker cannot open model file when I run with external camera

Hi,
I'm trying to use visp_auto_tracker but I'm stuck on this issue:
It worked when I launched roslaunch visp_auto_tracker tracklive_usb.launch with my computer's camera, but I ran into problems with the external camera of a MAV.

When I roslaunch visp_auto_tracker tracklive_usb.launch, there are always errors like:

... logging to /home/ubuntu/.ros/log/fb150e98-36e3-11e6-a395-1dc4271d6692/roslaunch-tegra-ubuntu-7507.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://tegra-ubuntu:44260/

SUMMARY

PARAMETERS

  • /dji_sdk_read_cam/camera_info_url: package://visp_au...
  • /dji_sdk_read_cam/camera_name: /camera/image_raw
  • /dji_sdk_read_cam/image_height: 480
  • /dji_sdk_read_cam/image_width: 640
  • /dji_sdk_read_cam/pixel_format: yuyv
  • /dji_sdk_read_cam/video_device: /dev/video0
  • /rosdistro: indigo
  • /rosversion: 1.11.16
  • /visp_auto_tracker/debug_display: True
  • /visp_auto_tracker/model_name: pattern
  • /visp_auto_tracker/model_path: /home/ubuntu/dutR...

NODES
/
dji_sdk_read_cam (dji_sdk_read_cam/dji_sdk_read_cam)
visp_auto_tracker (visp_auto_tracker/visp_auto_tracker)

auto-starting new master
process[master]: started with pid [7518]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to fb150e98-36e3-11e6-a395-1dc4271d6692
process[rosout-1]: started with pid [7531]
started core service [/rosout]
/opt/ros/indigo/lib/python2.7/dist-packages/roslib/packages.py:447: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
if resource_name in files:
process[visp_auto_tracker-2]: started with pid [7548]
process[dji_sdk_read_cam-3]: started with pid [7549]
0
[ INFO] [1466426349.298328831]: model full path=/home/ubuntu/dutRacing/src/vision_visp/visp_auto_tracker/models/pattern
[ INFO] [1466426349.302621247]: Model content=#VRML V2.0 utf8

DEF fst_0 Group {
children [

Object "cube"

Shape {

geometry DEF cube IndexedFaceSet {

coord Coordinate {
point [
-0.0765 -0.0765 0.000
0.0765 -0.0765 0.000
0.0765 0.0765 0.000
-0.0765 0.0765 0.000
-0.03825 -0.03825 0.000
0.03825 -0.03825 0.000
0.03825 0.03825 0.000
-0.03825 0.03825 0.000
]
}

coordIndex [
0,1,2,3,-1,
4,5,6,7,-1,
]}
}

]
}

starting tracker
*********** Parsing XML for Mb Edge Tracker ***********
ecm : mask : size : 5
ecm : mask : nb_mask : 180
ecm : range : tracking : 10
ecm : contrast : threshold 5000
ecm : contrast : mu1 0.5
ecm : contrast : mu2 0.5
sample : sample_step : 4
sample : n_total_sample : 250
klt : Mask Border : 0
klt : Max Features : 10000
klt : Windows Size : 5
klt : Quality : 0.05
klt : Min Distance : 20
klt : Harris Parameter : 0.01
klt : Block Size : 3
klt : Pyramid Levels : 3
face : Angle Appear : 75
face : Angle Disappear : 75
camera : u0 : 192 (default)
camera : v0 : 144 (default)
camera : px : 600 (default)
camera : py : 600 (default)
lod : use lod : 0 (default)
lod : min line length threshold : 50 (default)
lod : min polygon area threshold : 2500 (default)
(L0) !! /ViSP/src/tracking/mbt/vpMbTracker.cpp: loadVRMLModel(#1115) : coin not detected with ViSP, cannot load model : /home/ubuntu/dutRacing/src/vision_visp/visp_auto_tracker/models/pattern.wrl
terminate called after throwing an instance of 'vpException'
what(): coin not detected with ViSP, cannot load model
[visp_auto_tracker-2] process has died [pid 7548, exit code -6, cmd /home/ubuntu/dutRacing/devel/lib/visp_auto_tracker/visp_auto_tracker /visp_auto_tracker/camera_info:=/dji_sdk/camera_info /visp_auto_tracker/image_raw:=/dji_sdk/image_raw __name:=visp_auto_tracker __log:=/home/ubuntu/.ros/log/fb150e98-36e3-11e6-a395-1dc4271d6692/visp_auto_tracker-2.log].
log file: /home/ubuntu/.ros/log/fb150e98-36e3-11e6-a395-1dc4271d6692/visp_auto_tracker-2
.log

In addition, I installed ViSP from source.

groovy builds are failing

One of the Jenkins jobs shows the build error:
http://jenkins.ros.org/view/GbinP32/job/ros-groovy-vision-visp_binarydeb_precise_amd64/

It looks like a previously existing file which is downloaded in the build process is gone:
https://github.com/downloads/laas/visp_tracker/tutorial-static-box.bag

Please address the problem soon to avoid the existing Debian package being removed from the public repo with the next sync (around the middle of the week):
http://www.ros.org/debbuild/groovy.html?q=regression

very sensitive tracking

So I did one test with a Kinect v2: I made some changes in the launch file and the tracker code, besides adding my own model to track; you can see all those modifications here. I then tested the whole thing and it starts fine, but after moving the object around the tracker starts to get lost and then it crashes, rather quickly. You can check the video to assess the performance.

So my question is: is that normal? Or do I need to tune some parameters to get better, more robust tracking?
Cheers!

tracker runs slow

Hi
I have successfully run the auto tracker, but it seems to run much slower than your example videos shown on the ROS wiki. I uploaded my video so you can see:
https://youtu.be/8PRvELsXeYI

The frequency of /image_raw is 20 Hz, and image_height x image_width is 640x480. When I run

rostopic hz /visp_auto_tracker/code_message

I get about 8 Hz, and when I increase the height and width of the image, the code_message publish frequency decreases. Is my CPU not powerful enough (an i7, but running Ubuntu 14.04 in VMware)? I thought 640x480 was a very small image which could be processed very fast.

Also, when I echo /visp_auto_tracker/code_message the output looks strange to me (is that normal?).

Many thanks if you can give some suggestions!
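As a rough sanity check on the numbers reported above: a 20 Hz camera leaves 50 ms per frame, while an 8 Hz code_message rate implies about 125 ms spent per processed frame, and the cost grows roughly with the pixel count, which matches the rate dropping at higher resolutions. A quick back-of-the-envelope calculation:

```python
def frame_budget_ms(rate_hz):
    """Time per frame, in milliseconds, at a given rate."""
    return 1000.0 / rate_hz

camera_ms = frame_budget_ms(20.0)   # /image_raw publishes at 20 Hz
tracker_ms = frame_budget_ms(8.0)   # observed code_message rate

# Work scales roughly with pixel count: doubling both dimensions ~4x cost.
scale = (1280 * 960) / (640 * 480)

print(camera_ms, tracker_ms, scale)  # -> 50.0 125.0 4.0
```

Running inside a VM adds its own overhead, so testing natively on the i7 (outside VMware) would isolate how much of the ~125 ms is the tracker itself.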

object_position not published

Hi
I am trying to estimate the pose of an object using visp_tracker. When I run tutorial.launch and echo the object_position topic I can see the estimated pose, but when I try to do the same using my own webcam I don't see anything being published on the topic; tf shows only the transform between the camera and base_link.

I am running vision_visp as compiled from the ROS package, on Ubuntu 16.04 with ROS Kinetic.

ros@ros-e7470:~$ roslaunch visp_tracker tutorial-test.launch
... logging to /home/ros/.ros/log/98874c36-3bf6-11e8-87cf-34f39aca2f65/roslaunch-ros-e7470-15997.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://ros-e7470:40513/

SUMMARY

PARAMETERS

  • /rosdistro: kinetic
  • /rosversion: 1.12.13
  • /tf_localization/object_translation_qw: 0.87906866
  • /tf_localization/object_translation_qx: 0.04655744
  • /tf_localization/object_translation_qy: -0.12974845
  • /tf_localization/object_translation_qz: -0.45632887
  • /tf_localization/object_translation_x: 1.84421063
  • /tf_localization/object_translation_y: -0.00836844
  • /tf_localization/object_translation_z: 0.52310595
  • /tracker_mbt/camera_prefix: /camera
  • /tracker_mbt/tracker_type: mbt+klt
  • /tracker_mbt_client/mask_size: 3
  • /tracker_mbt_client/model_name: box
  • /tracker_mbt_client/model_path: file:///home/ros/...
  • /tracker_mbt_client/mu1: 0.5
  • /tracker_mbt_client/mu2: 0.5
  • /tracker_mbt_client/n_mask: 180
  • /tracker_mbt_client/ntotal_sample: 800
  • /tracker_mbt_client/range: 10
  • /tracker_mbt_client/sample_step: 3.0
  • /tracker_mbt_client/threshold: 2000.0
  • /tracker_mbt_client/tracker_type: mbt+klt
  • /use_sim_time: False

NODES
/
base_link_to_camera (tf/static_transform_publisher)
tf_localization (visp_tracker/tf_localization.py)
tracker_mbt (visp_tracker/tracker)
tracker_mbt_client (visp_tracker/visp_tracker_client)
tracker_mbt_viewer (visp_tracker/visp_tracker_viewer)

ROS_MASTER_URI=http://localhost:11311

process[tracker_mbt-1]: started with pid [16017]
process[tracker_mbt_client-2]: started with pid [16018]
process[tracker_mbt_viewer-3]: started with pid [16019]
process[base_link_to_camera-4]: started with pid [16020]
process[tf_localization-5]: started with pid [16032]
Cannot get the number of circles. Defaulting to zero.

visp_auto_tracker crashes on detection

I'm trying to start using visp_auto_tracker to track a simple QR code (the one in the wiki). I can start it using roslaunch visp_auto_tracker tracklive_usb.launch, everything loads correctly and I can see a window with the video of /image_raw.

The problem is that when I show the QR code to the webcam, everything crashes with the following error:

[visp_auto_tracker-2] process has died [pid 12529, exit code -11, cmd /home/albert/catkin_ws/devel/lib/visp_auto_tracker/visp_auto_tracker /visp_auto_tracker/camera_info:=/usb_cam/camera_info /visp_auto_tracker/image_raw:=/usb_cam/image_raw __name:=visp_auto_tracker __log:=/home/albert/.ros/log/a987a6aa-4cee-11e6-87fa-b8ee651e75ac/visp_auto_tracker-2.log].
log file: /home/albert/.ros/log/a987a6aa-4cee-11e6-87fa-b8ee651e75ac/visp_auto_tracker-2*.log

Any help?
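Exit code -11 means the node died with SIGSEGV. One way to narrow this down is to relaunch the node under gdb and capture a backtrace. A hypothetical launch-file sketch — the package, type, and remappings are copied from the crash message above, but the node declaration itself is assumed rather than taken from tracklive_usb.launch:

```xml
<!-- Sketch: run visp_auto_tracker under gdb to get a backtrace on the crash.
     The remappings mirror those visible in the "process has died" message. -->
<node pkg="visp_auto_tracker" type="visp_auto_tracker" name="visp_auto_tracker"
      output="screen" launch-prefix="gdb -ex run --args">
  <remap from="/visp_auto_tracker/camera_info" to="/usb_cam/camera_info"/>
  <remap from="/visp_auto_tracker/image_raw" to="/usb_cam/image_raw"/>
</node>
```

When the node crashes, typing `bt` at the gdb prompt prints the backtrace, which usually points at the offending ViSP or tracker call.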

visp_auto_tracker - distance issue and multi qr code detection

I tried using a different QR code than the one on the git page and the pose is incorrect. I get correct values if I use the QR code from the git page, but the range is too small. I see that pattern.xml has something to do with configuring the QR code. How do I detect a different QR code properly?

Also, does the marker need to have the black padding?

Also, is there a way to detect multiple qr codes?

Thanks in advance.

Tracker Viewer is very slow

My external camera is publishing at 20 fps but the tracker viewer runs at less than 1 fps. Besides this, I get a warning on launch (see below). My system is Ubuntu 16.04 with ROS Kinetic. Thanks for any clue!

Best, Sebastian

--
SUMMARY

PARAMETERS

  • /rosdistro: kinetic
  • /rosversion: 1.12.7
  • /tracker_mbt_viewer/camera_prefix: /nerian_sp1
  • /tracker_mbt_viewer/frame_size: 0.1

NODES
/
tracker_mbt_viewer (visp_tracker/visp_tracker_viewer)

ROS_MASTER_URI=http://localhost:11311

core service [/rosout] found
process[tracker_mbt_viewer-1]: started with pid [30631]
[ INFO] [1500902279.955658100]: Initializing nodelet with 4 worker threads.
[ INFO] [1500902281.062193557]: Model loaded from the parameter server.
[ WARN] [1500902281.066428453]: The input topic '/object_position' is not yet advertised
[ INFO] [1500902281.107535683]: waiting for a rectified image...
[ INFO] [1500902282.107593573]: waiting for a rectified image...
[ INFO] [1500902283.207609391]: waiting for a rectified image...
[ INFO] [1500902284.307594005]: waiting for a rectified image...
[ INFO] [1500902284.463450238]: dst is 0x0 but src size is 800x592, resizing.
[ WARN] [1500902284.509482919]: No tracker has been found with the default name value "tracker_mbt/angle_appear".
Tracker name parameter (tracker_name) should be provided for this node (tracker_viewer).
Polygon visibility might not work well in the viewer window.
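For the warning, a hedged sketch of giving the viewer the tracker's name explicitly — the value tracker_mbt is an assumption, matching the default tracker node name seen in the first log on this page:

```xml
<node pkg="visp_tracker" type="visp_tracker_viewer" name="tracker_mbt_viewer">
  <param name="tracker_name" value="tracker_mbt"/>
  <param name="camera_prefix" value="/nerian_sp1"/>
  <param name="frame_size" value="0.1"/>
</node>
```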

catkin building error for groovy-dev branch?

Hello,

Thank you for great work and devotions guys.

I am trying to build the groovy-dev version of vision_visp using catkin_make but encountered the following error.

It seems to be an issue with my build environment; it is looking for the math library (-lm).

I could build my src directory without vision_visp, though.

Any recommendations would be great.

Regards,

CMake Error at /home/enddl22/workspace/groovy_catkin_ws/devel/share/visp_bridge/cmake/visp_bridgeConfig.cmake:127 (message):
  Project 'visp_camera_calibration' tried to find library '-lm'. The library
  is neither a target nor built/installed properly. Did you compile project
  'visp_bridge'? Did you find_package() it before the subdirectory containing
  its code is included?
Call Stack (most recent call first):
  /opt/ros/groovy/share/catkin/cmake/catkinConfig.cmake:72 (find_package)
  vision_visp/visp_camera_calibration/CMakeLists.txt:4 (find_package)

Ball Detection!

Hello everyone !
Can someone please help me design an init file to detect a ball? I have my .cao file ready but my initialization fails.

Thank you !

Size of QR Code

Hey there,
I am planning to use the ViSP Auto Tracker to detect a printed circuit board in a CPU chassis. But my problem is that the QR code is too big. Is it possible to somehow reduce the size of the QR code to, say, 2x2 cm and still be able to detect it using the same code?
I am using ROS Hydro. Please advise.

vision_visp build error

Hi, I use Ubuntu 16.04 and ROS Kinetic.
I tried to install and build the whole vision_visp meta package
(I followed the instructions at http://wiki.ros.org/vision_visp).

And when I do
$ catkin_make -j4 -DCMAKE_BUILD_TYPE=Release
my computer can't go any further after
[ 93%] Built target visp_hand2eye_calibration_client

It just stops. I waited for quite a long time but it doesn't seem to get better.
Is there anything that I have to do more?
Thank you very much in advance!

juna@juna-800G5M-800G5W:~/catkin_ws$ catkin_make -j4 -DCMAKE_BUILD_TYPE=Release
Base path: /home/juna/catkin_ws
Source space: /home/juna/catkin_ws/src
Build space: /home/juna/catkin_ws/build
Devel space: /home/juna/catkin_ws/devel
Install space: /home/juna/catkin_ws/install

Running command: "cmake /home/juna/catkin_ws/src -DCMAKE_BUILD_TYPE=Release -DCATKIN_DEVEL_PREFIX=/home/juna/catkin_ws/devel -DCMAKE_INSTALL_PREFIX=/home/juna/catkin_ws/install -G Unix Makefiles" in "/home/juna/catkin_ws/build"

-- Using CATKIN_DEVEL_PREFIX: /home/juna/catkin_ws/devel
-- Using CMAKE_PREFIX_PATH: /home/juna/catkin_ws/devel;/opt/ros/kinetic
-- This workspace overlays: /home/juna/catkin_ws/devel;/opt/ros/kinetic
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/juna/catkin_ws/build/test_results
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.7.6
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 34 packages in topological order:
-- ~~ - dynamixel_driver
-- ~~ - dynamixel_motor (metapackage)
-- ~~ - dynamixel_tutorials
-- ~~ - hector_slam (metapackage)
-- ~~ - hector_slam_launch
-- ~~ - joy_teleop
-- ~~ - dynamixel_controllers
-- ~~ - mouse_teleop
-- ~~ - dynamixel_msgs
-- ~~ - key_teleop
-- ~~ - hector_map_tools
-- ~~ - hector_nav_msgs
-- ~~ - teleop_tools (metapackage)
-- ~~ - teleop_tools_msgs
-- ~~ - vision_visp (metapackage)
-- ~~ - dynamixel_sdk
-- ~~ - hector_geotiff
-- ~~ - hector_geotiff_plugins
-- ~~ - hector_marker_drawing
-- ~~ - ros_tutorials_service
-- ~~ - ros_tutorials_topic
-- ~~ - hector_compressed_map_transport
-- ~~ - rplidar_ros
-- ~~ - my_dynamixel_tutorial
-- ~~ - hector_imu_attitude_to_tf
-- ~~ - hector_imu_tools
-- ~~ - hector_map_server
-- ~~ - hector_trajectory_server
-- ~~ - hector_mapping
-- ~~ - visp_bridge
-- ~~ - visp_camera_calibration
-- ~~ - visp_hand2eye_calibration
-- ~~ - visp_tracker
-- ~~ - visp_auto_tracker
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'dynamixel_driver'
-- ==> add_subdirectory(dynamixel_motor/dynamixel_driver)
-- +++ processing catkin metapackage: 'dynamixel_motor'
-- ==> add_subdirectory(dynamixel_motor/dynamixel_motor)
-- +++ processing catkin package: 'dynamixel_tutorials'
-- ==> add_subdirectory(dynamixel_motor/dynamixel_tutorials)
-- +++ processing catkin metapackage: 'hector_slam'
-- ==> add_subdirectory(hector_slam/hector_slam)
-- +++ processing catkin package: 'hector_slam_launch'
-- ==> add_subdirectory(hector_slam/hector_slam_launch)
-- +++ processing catkin package: 'joy_teleop'
-- ==> add_subdirectory(teleop_tools/joy_teleop)
-- +++ processing catkin package: 'dynamixel_controllers'
-- ==> add_subdirectory(dynamixel_motor/dynamixel_controllers)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- dynamixel_controllers: 0 messages, 9 services
-- +++ processing catkin package: 'mouse_teleop'
-- ==> add_subdirectory(teleop_tools/mouse_teleop)
-- +++ processing catkin package: 'dynamixel_msgs'
-- ==> add_subdirectory(dynamixel_motor/dynamixel_msgs)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- dynamixel_msgs: 3 messages, 0 services
-- +++ processing catkin package: 'key_teleop'
-- ==> add_subdirectory(teleop_tools/key_teleop)
-- +++ processing catkin package: 'hector_map_tools'
-- ==> add_subdirectory(hector_slam/hector_map_tools)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
CMake Warning at /opt/ros/kinetic/share/catkin/cmake/catkin_package.cmake:166 (message):
catkin_package() DEPENDS on 'Eigen' but neither 'Eigen_INCLUDE_DIRS' nor
'Eigen_LIBRARIES' is defined.
Call Stack (most recent call first):
/opt/ros/kinetic/share/catkin/cmake/catkin_package.cmake:102 (_catkin_package)
hector_slam/hector_map_tools/CMakeLists.txt:51 (catkin_package)

-- +++ processing catkin package: 'hector_nav_msgs'
-- ==> add_subdirectory(hector_slam/hector_nav_msgs)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- hector_nav_msgs: 0 messages, 5 services
-- +++ processing catkin metapackage: 'teleop_tools'
-- ==> add_subdirectory(teleop_tools/teleop_tools)
-- +++ processing catkin package: 'teleop_tools_msgs'
-- ==> add_subdirectory(teleop_tools/teleop_tools_msgs)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- Generating .msg files for action teleop_tools_msgs/Increment /home/juna/catkin_ws/src/teleop_tools/teleop_tools_msgs/action/Increment.action
-- teleop_tools_msgs: 7 messages, 0 services
-- +++ processing catkin metapackage: 'vision_visp'
-- ==> add_subdirectory(vision_visp/vision_visp)
-- +++ processing catkin package: 'dynamixel_sdk'
-- ==> add_subdirectory(dynamixel_sdk)
-- +++ processing catkin package: 'hector_geotiff'
-- ==> add_subdirectory(hector_slam/hector_geotiff)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'hector_geotiff_plugins'
-- ==> add_subdirectory(hector_slam/hector_geotiff_plugins)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'hector_marker_drawing'
-- ==> add_subdirectory(hector_slam/hector_marker_drawing)
-- +++ processing catkin package: 'ros_tutorials_service'
-- ==> add_subdirectory(ros_tutorials_service)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- ros_tutorials_service: 0 messages, 1 services
-- +++ processing catkin package: 'ros_tutorials_topic'
-- ==> add_subdirectory(ros_tutorials_topic)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- ros_tutorials_topic: 1 messages, 0 services
-- +++ processing catkin package: 'hector_compressed_map_transport'
-- ==> add_subdirectory(hector_slam/hector_compressed_map_transport)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'rplidar_ros'
-- ==> add_subdirectory(rplidar_ros)
-- +++ processing catkin package: 'my_dynamixel_tutorial'
-- ==> add_subdirectory(my_dynamixel_tutorial)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'hector_imu_attitude_to_tf'
-- ==> add_subdirectory(hector_slam/hector_imu_attitude_to_tf)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'hector_imu_tools'
-- ==> add_subdirectory(hector_slam/hector_imu_tools)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'hector_map_server'
-- ==> add_subdirectory(hector_slam/hector_map_server)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'hector_trajectory_server'
-- ==> add_subdirectory(hector_slam/hector_trajectory_server)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- +++ processing catkin package: 'hector_mapping'
-- ==> add_subdirectory(hector_slam/hector_mapping)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- thread
-- signals
-- chrono
-- system
-- date_time
-- atomic
-- hector_mapping: 2 messages, 0 services
-- +++ processing catkin package: 'visp_bridge'
-- ==> add_subdirectory(vision_visp/visp_bridge)
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- filesystem
-- program_options
-- system
-- +++ processing catkin package: 'visp_camera_calibration'
-- ==> add_subdirectory(vision_visp/visp_camera_calibration)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- visp_camera_calibration: 4 messages, 1 services
-- +++ processing catkin package: 'visp_hand2eye_calibration'
-- ==> add_subdirectory(vision_visp/visp_hand2eye_calibration)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- visp_hand2eye_calibration: 1 messages, 3 services
-- +++ processing catkin package: 'visp_tracker'
-- ==> add_subdirectory(vision_visp/visp_tracker)
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- filesystem
-- thread
-- system
-- chrono
-- date_time
-- atomic
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
-- visp_tracker: 7 messages, 1 services
-- +++ processing catkin package: 'visp_auto_tracker'
-- ==> add_subdirectory(vision_visp/visp_auto_tracker)
-- Boost version: 1.58.0
-- Found the following Boost libraries:
-- filesystem
-- system
-- signals
-- regex
-- date_time
-- program_options
-- thread
-- chrono
-- atomic
-- Configuring done
-- Generating done
-- Build files have been written to: /home/juna/catkin_ws/build

Running command: "make -j4" in "/home/juna/catkin_ws/build"

[ 0%] Built target _dynamixel_controllers_generate_messages_check_deps_SetComplianceSlope
[ 0%] Built target _dynamixel_controllers_generate_messages_check_deps_SetSpeed
[ 0%] Built target _dynamixel_controllers_generate_messages_check_deps_StopController
[ 0%] Built target _dynamixel_controllers_generate_messages_check_deps_SetCompliancePunch
[ 0%] Built target _dynamixel_controllers_generate_messages_check_deps_StartController
[ 0%] Built target _dynamixel_controllers_generate_messages_check_deps_TorqueEnable
[ 0%] Built target _dynamixel_controllers_generate_messages_check_deps_SetTorqueLimit
[ 0%] Built target _dynamixel_controllers_generate_messages_check_deps_SetComplianceMargin
[ 0%] Built target std_msgs_generate_messages_nodejs
[ 0%] Built target _dynamixel_controllers_generate_messages_check_deps_RestartController
[ 0%] Built target _dynamixel_msgs_generate_messages_check_deps_MotorStateList
[ 0%] Built target _dynamixel_msgs_generate_messages_check_deps_JointState
[ 0%] Built target std_msgs_generate_messages_cpp
[ 0%] Built target std_msgs_generate_messages_py
[ 0%] Built target std_msgs_generate_messages_eus
[ 0%] Built target std_msgs_generate_messages_lisp
[ 0%] Built target _dynamixel_msgs_generate_messages_check_deps_MotorState
[ 0%] Built target nav_msgs_generate_messages_py
[ 0%] Built target _hector_nav_msgs_generate_messages_check_deps_GetRobotTrajectory
[ 0%] Built target _hector_nav_msgs_generate_messages_check_deps_GetDistanceToObstacle
[ 0%] Built target _hector_nav_msgs_generate_messages_check_deps_GetRecoveryInfo
[ 0%] Built target nav_msgs_generate_messages_lisp
[ 0%] Built target nav_msgs_generate_messages_eus
[ 0%] Built target _hector_nav_msgs_generate_messages_check_deps_GetNormal
[ 0%] Built target nav_msgs_generate_messages_cpp
[ 0%] Built target _hector_nav_msgs_generate_messages_check_deps_GetSearchPosition
[ 0%] Built target nav_msgs_generate_messages_nodejs
[ 0%] Built target actionlib_msgs_generate_messages_cpp
[ 0%] Built target _teleop_tools_msgs_generate_messages_check_deps_IncrementGoal
[ 0%] Built target _teleop_tools_msgs_generate_messages_check_deps_IncrementFeedback
[ 0%] Built target _teleop_tools_msgs_generate_messages_check_deps_IncrementAction
[ 0%] Built target _teleop_tools_msgs_generate_messages_check_deps_IncrementActionResult
[ 0%] Built target actionlib_msgs_generate_messages_nodejs
[ 0%] Built target _teleop_tools_msgs_generate_messages_check_deps_IncrementActionFeedback
[ 0%] Built target _teleop_tools_msgs_generate_messages_check_deps_IncrementResult
[ 0%] Built target actionlib_msgs_generate_messages_lisp
[ 0%] Built target _teleop_tools_msgs_generate_messages_check_deps_IncrementActionGoal
[ 0%] Built target actionlib_msgs_generate_messages_py
[ 0%] Built target actionlib_msgs_generate_messages_eus
[ 0%] Built target rosgraph_msgs_generate_messages_py
[ 0%] Built target roscpp_generate_messages_lisp
[ 0%] Built target roscpp_generate_messages_nodejs
[ 0%] Built target roscpp_generate_messages_eus
[ 0%] Built target roscpp_generate_messages_cpp
[ 0%] Built target rosgraph_msgs_generate_messages_nodejs
[ 0%] Built target roscpp_generate_messages_py
[ 0%] Built target rosgraph_msgs_generate_messages_eus
[ 0%] Built target rosgraph_msgs_generate_messages_cpp
[ 0%] Built target rosgraph_msgs_generate_messages_lisp
[ 0%] Built target _catkin_empty_exported_target
[ 0%] Built target geometry_msgs_generate_messages_eus
[ 0%] Built target geometry_msgs_generate_messages_cpp
[ 0%] Built target geometry_msgs_generate_messages_lisp
[ 0%] Built target geometry_msgs_generate_messages_nodejs
[ 0%] Built target geometry_msgs_generate_messages_py
[ 2%] Built target rplidarNode
[ 2%] Built target map_to_image_node
[ 2%] Built target _ros_tutorials_service_generate_messages_check_deps_SrvTutorial
[ 2%] Built target _ros_tutorials_topic_generate_messages_check_deps_MsgTutorial
[ 2%] Built target rplidarNodeClient
[ 2%] Built target tf2_msgs_generate_messages_py
[ 3%] Built target imu_attitude_to_tf_node
[ 4%] Built target pose_and_orientation_to_imu_node
[ 4%] Built target visualization_msgs_generate_messages_eus
[ 4%] Built target visualization_msgs_generate_messages_lisp
[ 4%] Built target visualization_msgs_generate_messages_nodejs
[ 4%] Built target visualization_msgs_generate_messages_cpp
[ 4%] Built target visualization_msgs_generate_messages_py
[ 4%] Built target sensor_msgs_generate_messages_eus
[ 4%] Built target sensor_msgs_generate_messages_py
[ 4%] Built target sensor_msgs_generate_messages_lisp
[ 4%] Built target actionlib_generate_messages_nodejs
[ 4%] Built target sensor_msgs_generate_messages_nodejs
[ 4%] Built target sensor_msgs_generate_messages_cpp
[ 4%] Built target actionlib_generate_messages_cpp
[ 4%] Built target actionlib_generate_messages_lisp
[ 4%] Built target actionlib_generate_messages_py
[ 4%] Built target actionlib_generate_messages_eus
[ 4%] Built target tf_generate_messages_nodejs
[ 4%] Built target tf_generate_messages_lisp
[ 4%] Built target tf_generate_messages_py
[ 4%] Built target tf_generate_messages_eus
[ 4%] Built target tf2_msgs_generate_messages_eus
[ 4%] Built target tf2_msgs_generate_messages_nodejs
[ 4%] Built target tf_generate_messages_cpp
[ 4%] Built target tf2_msgs_generate_messages_cpp
[ 4%] Built target tf2_msgs_generate_messages_lisp
[ 5%] Built target _hector_mapping_generate_messages_check_deps_HectorDebugInfo
[ 5%] Built target _hector_mapping_generate_messages_check_deps_HectorIterData
[ 5%] Built target visp_bridge
[ 5%] Built target _visp_camera_calibration_generate_messages_check_deps_ImageAndPoints
[ 5%] Built target _visp_camera_calibration_generate_messages_check_deps_CalibPointArray
[ 5%] Built target _visp_camera_calibration_generate_messages_check_deps_CalibPoint
[ 5%] Built target _visp_camera_calibration_generate_messages_check_deps_ImagePoint
[ 5%] Built target visp_camera_calibration_common
[ 6%] Built target visp_hand2eye_calibration_common
[ 6%] Built target _visp_camera_calibration_generate_messages_check_deps_calibrate
[ 6%] Built target _visp_hand2eye_calibration_generate_messages_check_deps_compute_effector_camera_quick
[ 6%] Built target _visp_hand2eye_calibration_generate_messages_check_deps_reset
[ 7%] Built target visp_tracker_gencfg
[ 7%] Built target image_proc_gencfg
[ 7%] Built target _visp_hand2eye_calibration_generate_messages_check_deps_compute_effector_camera
[ 7%] Built target _visp_hand2eye_calibration_generate_messages_check_deps_TransformArray
[ 7%] Built target _visp_tracker_generate_messages_check_deps_TrackerSettings
[ 7%] Built target _visp_tracker_generate_messages_check_deps_MovingEdgeSettings
[ 7%] Built target _visp_tracker_generate_messages_check_deps_MovingEdgeSite
[ 7%] Built target _visp_tracker_generate_messages_check_deps_MovingEdgeSites
[ 7%] Built target _visp_tracker_generate_messages_check_deps_KltPoint
[ 7%] Built target _visp_tracker_generate_messages_check_deps_KltPoints
[ 7%] Built target _visp_tracker_generate_messages_check_deps_Init
[ 7%] Built target _visp_tracker_generate_messages_check_deps_KltSettings
[ 8%] Built target visp_tracker_viewer
[ 11%] Built target dynamixel_controllers_generate_messages_nodejs
Checking md5sum on /home/juna/catkin_ws/devel/share/visp_tracker/bag/tutorial-static-box.bag
WARNING: md5sum mismatch (bb7158bd50d7241daa27c9c3b3c2ba70 != 1578dedd48d3f9f5515a8737845ae882); re-downloading file /home/juna/catkin_ws/devel/share/visp_tracker/bag/tutorial-static-box.bag
Downloading https://github.com/lagadic/vision_visp/releases/download/vision_visp-0.5.0/tutorial-static-box.bag to /home/juna/catkin_ws/devel/share/visp_tracker/bag/tutorial-static-box.bag...
Checking md5sum on /home/juna/catkin_ws/devel/share/visp_auto_tracker/bag/tutorial-qrcode.bag
WARNING: md5sum mismatch (0be28bfe0383ac87e462d1d4c7906bc5 != 0f80ceea2610b8400591ca7aff764dfa); re-downloading file /home/juna/catkin_ws/devel/share/visp_auto_tracker/bag/tutorial-qrcode.bag
Downloading https://github.com/lagadic/vision_visp/releases/download/vision_visp-0.5.0/tutorial-qrcode.bag to /home/juna/catkin_ws/devel/share/visp_auto_tracker/bag/tutorial-qrcode.bag...
[ 14%] Built target dynamixel_controllers_generate_messages_cpp
[ 16%] Built target dynamixel_controllers_generate_messages_py
[ 18%] Built target dynamixel_controllers_generate_messages_lisp
[ 21%] Built target dynamixel_controllers_generate_messages_eus
[ 22%] Built target dynamixel_msgs_generate_messages_nodejs
[ 23%] Built target dynamixel_msgs_generate_messages_cpp
[ 24%] Built target dynamixel_msgs_generate_messages_py
[ 25%] Built target dynamixel_msgs_generate_messages_eus
[ 27%] Built target dynamixel_msgs_generate_messages_lisp
[ 28%] Built target hector_nav_msgs_generate_messages_py
[ 29%] Built target hector_nav_msgs_generate_messages_eus
[ 31%] Built target hector_nav_msgs_generate_messages_lisp
[ 33%] Built target hector_nav_msgs_generate_messages_cpp
[ 34%] Built target hector_nav_msgs_generate_messages_nodejs
[ 36%] Built target teleop_tools_msgs_generate_messages_nodejs
[ 38%] Built target teleop_tools_msgs_generate_messages_cpp
[ 42%] Built target teleop_tools_msgs_generate_messages_lisp
[ 42%] Built target teleop_tools_msgs_generate_messages_eus
[ 44%] Built target teleop_tools_msgs_generate_messages_py
[ 47%] Built target dynamixel_sdk
[ 48%] Built target ros_tutorials_service_generate_messages_py
[ 49%] Built target geotiff_writer
[ 50%] Built target ros_tutorials_service_generate_messages_eus
[ 50%] Built target ros_tutorials_service_generate_messages_lisp
[ 50%] Built target ros_tutorials_service_generate_messages_cpp
[ 50%] Built target ros_tutorials_service_generate_messages_nodejs
[ 51%] Built target ros_tutorials_topic_generate_messages_nodejs
[ 52%] Built target ros_tutorials_topic_generate_messages_cpp
[ 52%] Built target ros_tutorials_topic_generate_messages_py
[ 52%] Built target ros_tutorials_topic_generate_messages_lisp
[ 52%] Built target ros_tutorials_topic_generate_messages_eus
[ 53%] Built target hector_map_server
[ 53%] Built target hector_trajectory_server
[ 54%] Built target hector_mapping_generate_messages_cpp
[ 54%] Built target hector_mapping_generate_messages_nodejs
[ 55%] Built target hector_mapping_generate_messages_lisp
[ 56%] Built target hector_mapping_generate_messages_py
[ 56%] Built target hector_mapping_generate_messages_eus
[ 57%] Built target visp_bridge_convert_camera_parameters
[ 59%] Built target visp_camera_calibration_generate_messages_lisp
[ 61%] Built target visp_camera_calibration_generate_messages_eus
[ 62%] Built target visp_camera_calibration_generate_messages_cpp
[ 64%] Built target visp_camera_calibration_generate_messages_py
[ 65%] Built target visp_camera_calibration_generate_messages_nodejs
[ 65%] Built target visp_camera_calibration_gencpp
[ 66%] Built target visp_hand2eye_calibration_generate_messages_py
[ 67%] Built target visp_hand2eye_calibration_generate_messages_eus
[ 68%] Built target visp_hand2eye_calibration_generate_messages_nodejs
[ 70%] Built target visp_hand2eye_calibration_generate_messages_lisp
[ 71%] Built target visp_hand2eye_calibration_generate_messages_cpp
[ 73%] Built target visp_tracker_generate_messages_cpp
[ 75%] Built target visp_tracker_generate_messages_nodejs
[ 78%] Built target visp_tracker_generate_messages_eus
[ 81%] Built target visp_tracker_generate_messages_py
[ 83%] Built target visp_tracker_generate_messages_lisp
[ 83%] Built target dynamixel_controllers_generate_messages
[ 83%] Built target dynamixel_msgs_generate_messages
[ 83%] Built target hector_nav_msgs_generate_messages
[ 83%] Built target teleop_tools_msgs_generate_messages
[ 84%] Built target geotiff_node
[ 84%] Built target geotiff_saver
[ 84%] Built target hector_geotiff_plugins
[ 84%] Built target service_server2
[ 85%] Built target service_server
[ 85%] Built target ros_tutorials_service_generate_messages
[ 87%] Built target service_client
[ 87%] Built target topic_subscriber
[ 87%] Built target ros_tutorials_topic_generate_messages
[ 88%] Built target topic_publisher
[ 88%] Built target hector_mapping_generate_messages
[ 89%] Built target hector_mapping
[ 89%] Built target visp_camera_calibration_generate_messages
[ 90%] Built target visp_camera_calibration_calibrator
[ 91%] Built target visp_camera_calibration_image_processing
[ 91%] Built target visp_hand2eye_calibration_generate_messages
[ 92%] Built target visp_camera_calibration_camera
[ 92%] Built target visp_hand2eye_calibration_gencpp
[ 92%] Built target visp_tracker_gencpp
[ 92%] Built target visp_tracker_generate_messages
[ 92%] Built target visp_tracker_client
[ 92%] Built target tracker
[ 93%] Built target visp_hand2eye_calibration_calibrator
[ 93%] Built target visp_hand2eye_calibration_client

git submodules issue?

I tried to clone this repo using --recursive but the visp_auto_tracker module fails to clone.

$ git clone --recursive git@github.com:laas/vision_visp.git

Cloning into 'vision_visp'...
remote: Counting objects: 422, done.
remote: Compressing objects: 100% (200/200), done.
remote: Total 422 (delta 210), reused 420 (delta 209)
Receiving objects: 100% (422/422), 54.40 KiB, done.
Resolving deltas: 100% (210/210), done.
Submodule 'visp_auto_tracker' (git://github.com/lagadic/visp_auto_tracker.git) registered for path 'visp_auto_tracker'
Submodule 'visp_bridge' (git://github.com/lagadic/visp_bridge.git) registered for path 'visp_bridge'
Submodule 'visp_camera_calibration' (git://github.com/lagadic/visp_camera_calibration.git) registered for path 'visp_camera_calibration'
Submodule 'visp_hand2eye_calibration' (git://github.com/lagadic/visp_hand2eye_calibration.git) registered for path 'visp_hand2eye_calibration'
Submodule 'visp_tracker' (git://github.com/laas/visp_tracker.git) registered for path 'visp_tracker'
Cloning into 'visp_auto_tracker'...

fatal: unable to connect to github.com:
github.com[0: 192.30.252.130]: errno=Connection timed out

Clone of 'git://github.com/lagadic/visp_auto_tracker.git' into submodule path 'visp_auto_tracker' failed

How to run with Kinect One (v2) as camera stream?

Hi,
I want to use the Kinect One (v2) camera stream and camera_info with visp_auto_tracker, but I don't know the right way to do it.

  • Do I need to insert a node which starts the kinect2, or can I simply use the available streams from an already-started kinect2? There are kinect2/hd/image_color and kinect2/hd/camera_info topics I could subscribe to, but how? Remapping? From inside which node?
  • What do I need to insert into the launch file then?

If more info is needed, I will provide it.
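A minimal launch-file sketch, assuming kinect2_bridge (or similar) is already publishing the topics mentioned — the remap sources mirror the remappings visp_auto_tracker uses elsewhere on this page for a USB camera:

```xml
<node pkg="visp_auto_tracker" type="visp_auto_tracker" name="visp_auto_tracker" output="screen">
  <remap from="/visp_auto_tracker/image_raw" to="/kinect2/hd/image_color"/>
  <remap from="/visp_auto_tracker/camera_info" to="/kinect2/hd/camera_info"/>
</node>
```

So there is no need to start the Kinect from inside this launch file if the stream is already up; the remapping is done where the node is declared.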

visp_tracker viewer

When trying to view the image using:
$ rosrun visp_tracker viewer _camera_prefix:=/wide_left/camera/

the following error is obtained
fatal error: VRML model .wrl is not a regular file
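That error usually means the model path/name parameters were never set, so the viewer ends up trying to open a bare ".wrl". A hypothetical sketch — the parameter names model_path and model_name appear in the tutorial log at the top of this page, but the values below are placeholders you would adapt to your own model:

```xml
<node pkg="visp_tracker" type="visp_tracker_viewer" name="tracker_mbt_viewer">
  <param name="camera_prefix" value="/wide_left/camera"/>
  <!-- placeholder values; the first log on this page shows the same file:// scheme -->
  <param name="model_path" value="file://$(find visp_tracker)/models"/>
  <param name="model_name" value="box"/>
</node>
```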

visp_auto_tracker cannot open model file

Hi,
I'm trying to use visp_auto_tracker but I'm stuck on this issue:

Coin read error: Could not find '��u' in any of the following directories (from cwd '/home/formica/.ros'):
''
'.'

(L0) !! /tmp/buildd/ros-hydro-visp-2.9.0-4raring-20140819-1830/src/tracking/mbt/vpMbTracker.cpp: loadVRMLModel(#774) : can't open file to load model
terminate called after throwing an instance of 'vpException'
what(): can't open file to load model
[visp_auto_tracker-2] process has died [pid 27735, exit code -6, cmd /home/formica/ros/catkin_ws/devel/lib/visp_auto_tracker/visp_auto_tracker /visp_auto_tracker/camera_info:=/usb_cam/camera_info /visp_auto_tracker/image_raw:=/usb_cam/image_raw __name:=visp_auto_tracker __log:=/home/formica/.ros/log/cccbbb7e-52c2-11e4-b0ce-e811326b35e2/visp_auto_tracker-2.log].
log file: /home/formica/.ros/log/cccbbb7e-52c2-11e4-b0ce-e811326b35e2/visp_auto_tracker-2*.log

Any idea?
Thanks in advance

Hand2Eye Calibrator returns the first matrix that was sent to it

When I send data and ask the Hand2Eye calibrator to compute, it returns just the first matrix.
I also saw that the same happens when you use the client example.

Is the calibrator just an example? Can I use it as-is for computing my eye-to-hand transform?

I'm conducting an experiment where I only move in the Z direction. I place an object on the end of the arm and record it with a camera from the base.

My data as written in the record file (arm data acquired from encoders, object data acquired from the camera):

XArm YArm ZArm RXArm RYArm RZArm XObj YObj ZObj RXObj RYObj RZObj
0 1 0.050 0 0 0 0.0219116 -0.00346060 0.183634 0.02077090 3.09210 -0.00843524
0 1 0.100 0 0 0 0.0223035 -0.00149662 0.233661 0.02169330 3.09055 -0.01862820
0 1 0.150 0 0 0 0.0228440 4.04948E-05 0.283102 0.02165850 3.06555 -0.02642330
0 1 0.200 0 0 0 0.0233103 0.001705890 0.333236 0.02033780 3.07090 -0.03269140
0 1 0.250 0 0 0 0.0239772 0.003746710 0.383682 0.02088440 3.07675 -0.03525030
0 1 0.300 0 0 0 0.0245928 0.005414250 0.433303 0.01893250 3.06094 -0.04659980
0 1 0.350 0 0 0 0.0254822 0.007065170 0.483543 0.01902080 3.06146 -0.02645310
0 1 0.400 0 0 0 0.0262967 0.008849140 0.534785 0.01721250 3.07937 -0.05460800
0 1 0.450 0 0 0 0.0267376 0.010349200 0.583546 0.01583760 3.04599 -0.05025220
0 1 0.500 0 0 0 0.0277891 0.012845000 0.633612 0.01212500 3.02253 -0.04751030
0 1 0.550 0 0 0 0.0283095 0.014789700 0.685370 0.00780469 3.06154 -0.02021860
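A side note on why this motion set can defeat the calibrator: in the classic hand-eye formulation (this is the standard AX = XB derivation, not anything specific to visp_hand2eye_calibration), with relative gripper motions \(A_i\) and relative camera motions \(B_i\), the unknown transform \(X\) must satisfy

```latex
A_i X = X B_i
\quad\Longrightarrow\quad
R_{A_i} R_X = R_X R_{B_i}
```

If every motion is a pure translation, \(R_{A_i} = I\), so the rotational constraint collapses to \(R_{B_i} = I\) and leaves \(R_X\) completely undetermined; rotations about at least two non-parallel axes are required for a unique solution.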

I use ArUco to detect the position of the object, using the ArUco_Test executable.

In order to send the data, I modified the client example to load it from a file as follows:

#include "client.h"
#include <geometry_msgs/Transform.h>
#include "visp_hand2eye_calibration/TransformArray.h"
#include <visp_bridge/3dpose.h>
#include "names.h"

#include <visp/vpCalibration.h>
#include <visp/vpExponentialMap.h>

//read csv
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

namespace visp_hand2eye_calibration
{
Client::Client()
{
  camera_object_publisher_
      = n_.advertise<geometry_msgs::Transform> (visp_hand2eye_calibration::camera_object_topic, 1000);
  world_effector_publisher_
      = n_.advertise<geometry_msgs::Transform> (visp_hand2eye_calibration::world_effector_topic, 1000);

  reset_service_
      = n_.serviceClient<visp_hand2eye_calibration::reset> (visp_hand2eye_calibration::reset_service);
  compute_effector_camera_service_
      = n_.serviceClient<visp_hand2eye_calibration::compute_effector_camera> (
                                                                                      visp_hand2eye_calibration::compute_effector_camera_service);
  compute_effector_camera_quick_service_
      = n_.serviceClient<visp_hand2eye_calibration::compute_effector_camera_quick> (
                                                                                            visp_hand2eye_calibration::compute_effector_camera_quick_service);
}

void Client::initAndSimulate()
{
  ROS_INFO("Waiting for topics...");
  ros::Duration(1.).sleep();
  while(!reset_service_.call(reset_comm)){
    if(!ros::ok()) return;
    ros::Duration(1).sleep();
  }


  // We want to calibrate the hand-to-eye extrinsic camera parameters from couples of poses: cMo and wMe

  // Input: couples of poses used as input in the calibration process
  vpHomogeneousMatrix cMo; // eye (camera) to object transformation. The object frame is attached to the calibration grid
  vpHomogeneousMatrix wMe; // world to hand (end-effector) transformation
//  vpHomogeneousMatrix eMc; // hand (end-effector) to eye (camera) transformation

  // Initialize an eMc transformation used to produce the simulated input transformations cMo and wMe
//  vpTranslationVector etc(0.3, 0.06, 0.01);
//  vpThetaUVector erc;
//  erc[0] = vpMath::rad(30);
//  erc[1] = vpMath::rad(60);
//  erc[2] = vpMath::rad(90);

//  eMc.buildFrom(etc, erc);
//  ROS_INFO("1) GROUND TRUTH:");

//  ROS_INFO_STREAM("hand to eye transformation: " <<std::endl<<visp_bridge::toGeometryMsgsTransform(eMc)<<std::endl);

  ifstream CalibFile("/home/calib_info.csv");
  const int NumOfRow = 11;
  const int NumOfCol = 12;
  double CalibTextRow[NumOfCol];
  string getcell;
  string::size_type sz; // needed for String to double conversion
  getline(CalibFile,getcell); // get rid of titles line
  //each line sent to processing
  for (int RowNum = 0; RowNum < NumOfRow; RowNum++)
  {
    for (int ColNum=0; ColNum < NumOfCol; ColNum++) {
      if (ColNum<NumOfCol-1){
        getline(CalibFile, getcell, ' ');
      }
      else
      {
        getline(CalibFile, getcell, '\n'); //end of line
      }
      CalibTextRow[ColNum]= stod (getcell,&sz); // convert to double
    }

    //build poses matrix
    wMe.buildFrom(CalibTextRow[0], CalibTextRow[1], CalibTextRow[2], CalibTextRow[3], CalibTextRow[4], CalibTextRow[5]);
    cMo.buildFrom(CalibTextRow[6], CalibTextRow[7], CalibTextRow[8], CalibTextRow[9], CalibTextRow[10], CalibTextRow[11]);

    geometry_msgs::Transform pose_w_e;
    pose_w_e = visp_bridge::toGeometryMsgsTransform(wMe);
    ROS_INFO_STREAM(" world to hand transformation: " <<std::endl<<pose_w_e<<std::endl);

    geometry_msgs::Transform pose_c_o;
    pose_c_o = visp_bridge::toGeometryMsgsTransform(cMo);
    ROS_INFO_STREAM(" eye to object transformation: " <<std::endl<<pose_c_o<<std::endl);

    camera_object_publisher_.publish(pose_c_o);
    world_effector_publisher_.publish(pose_w_e);
    emc_quick_comm.request.camera_object.transforms.push_back(pose_c_o);
    emc_quick_comm.request.world_effector.transforms.push_back(pose_w_e);

  }
  ros::Duration(1.).sleep();

}

void Client::computeUsingQuickService()
{
  vpHomogeneousMatrix eMc;
  vpThetaUVector erc;
  ROS_INFO("2) QUICK SERVICE:");
  if (compute_effector_camera_quick_service_.call(emc_quick_comm))
  {
    ROS_INFO_STREAM("hand_camera: "<< std::endl << emc_quick_comm.response.effector_camera);
  }
  else
  {
    ROS_ERROR("Failed to call service");
  }
}

void Client::computeFromTopicStream()
{
  vpHomogeneousMatrix eMc;
  vpThetaUVector erc;
  ROS_INFO("3) TOPIC STREAM:");
  if (compute_effector_camera_service_.call(emc_comm))
  {
    ROS_INFO_STREAM("hand_camera: " << std::endl << emc_comm.response.effector_camera);
  }
  else
  {
    ROS_ERROR("Failed to call service");
  }

}
}

/*
 * Local variables:
 * c-basic-offset: 2
 * End:
 */

Or Hirshfeld
Control SW Engineer

/tracker_mbt/result always returns zeros

After initializing visp_tracker (with the rosbag demo, but also with a custom model), although the tracker seems to be following the object, /tracker_mbt/result always returns zeros (translation & rotation).

I didn't want to write a publisher, so I just modified client.cpp. Is that OK?

Hi,
I didn't want to write a publisher, so I just modified client.cpp. Is that OK?
// vpColVector v_c(6); // camera velocity used to produce 6 simulated poses
// for (int i = 0; i < N; i++)
// {
// v_c = 0;
// if (i == 0)
// {
// // Initialize first poses
// cMo.buildFrom(0, 0, 0.5, 0, 0, 0); // z=0.5 m
// wMe.buildFrom(0, 0, 0, 0, 0, 0); // Id
// }
// else if (i == 1)
// v_c[3] = M_PI / 8;
// else if (i == 2)
// v_c[4] = M_PI / 8;
// else if (i == 3)
// v_c[5] = M_PI / 10;
// else if (i == 4)
// v_c[0] = 0.5;
// else if (i == 5)
// v_c[1] = 0.8;

// vpHomogeneousMatrix cMc; // camera displacement
// cMc = vpExponentialMap::direct(v_c); // Compute the camera displacement due to the velocity applied to the camera
// if (i > 0)
// {
// // From the camera displacement cMc, compute the wMe and cMo matrixes
// cMo = cMc.inverse() * cMo;
// wMe = wMe * eMc * cMc * eMc.inverse();

// }
// geometry_msgs::Transform pose_c_o;
// pose_c_o = visp_bridge::toGeometryMsgsTransform(cMo);
// geometry_msgs::Transform pose_w_e;
// pose_w_e = visp_bridge::toGeometryMsgsTransform(wMe);
// camera_object_publisher_.publish(pose_c_o);
// world_effector_publisher_.publish(pose_w_e);
// emc_quick_comm.request.camera_object.transforms.push_back(pose_c_o);
// emc_quick_comm.request.world_effector.transforms.push_back(pose_w_e);

// }
geometry_msgs::Transform pose_c_o;
geometry_msgs::Transform pose_w_e;

pose_c_o.translation.x = -0.2352;
pose_c_o.translation.y = -0.4028;
pose_c_o.translation.z = 1.0600;
pose_c_o.rotation.x = 0.000990816769666054;
pose_c_o.rotation.y = -0.339129557981157;
pose_c_o.rotation.z = 0.832646383526645;
pose_c_o.rotation.w = -0.437820913190016;

pose_w_e.translation.x = -0.440312403375319;
pose_w_e.translation.y = 0.193086003862725;
pose_w_e.translation.z = -0.269996674854836;
pose_w_e.rotation.x = 0.35248;
pose_w_e.rotation.y = -0.13503;
pose_w_e.rotation.z = 0.9258;
pose_w_e.rotation.w = 0.020663;

camera_object_publisher_.publish(pose_c_o);
world_effector_publisher_.publish(pose_w_e);
emc_quick_comm.request.camera_object.transforms.push_back(pose_c_o);
emc_quick_comm.request.world_effector.transforms.push_back(pose_w_e);

pose_c_o.translation.x = 0.2019;
pose_c_o.translation.y = -0.1199;
pose_c_o.translation.z = 0.7585;
pose_c_o.rotation.x = 0.144595808799497;
pose_c_o.rotation.y = -0.160463581297842;
pose_c_o.rotation.z = 0.853932081608368;
pose_c_o.rotation.w = -0.473456857644354;

pose_w_e.translation.x = -0.595595551756106;
pose_w_e.translation.y = 0.0267552735221165;
pose_w_e.translation.z = 0.212462952686786;
pose_w_e.rotation.x = 0.33571;
pose_w_e.rotation.y = -0.31476;
pose_w_e.rotation.z = 0.87863;
pose_w_e.rotation.w = -0.12741;

camera_object_publisher_.publish(pose_c_o);
world_effector_publisher_.publish(pose_w_e);
emc_quick_comm.request.camera_object.transforms.push_back(pose_c_o);
emc_quick_comm.request.world_effector.transforms.push_back(pose_w_e);

Can not use visp properly

Hello,

I want to use ViSP to track an object; however, I have run into some problems that are driving me crazy.

  1. I downloaded the source with git https://github.com/lagadic/vision_visp.git (branch: hydro) into opt/ros/hydro/share. Then I tried to launch visp_tracker/tutorial.launch, but I always get errors like:
    ERROR: cannot launch node of type [visp_tracker/tracker]: can't locate node [tracker] in package [visp_tracker]
    ERROR: cannot launch node of type [visp_tracker/visp_tracker_client]: can't locate node [visp_tracker_client] in package [visp_tracker]
    ERROR: cannot launch node of type [visp_tracker/visp_tracker_viewer]: can't locate node [visp_tracker_viewer] in package [visp_tracker]

Even after reinstalling Linux and ROS, the result is the same.
2. I tried to build ViSP from source; when I run catkin_make on visp_auto_tracker in my catkin_ws, there are also errors like:
In file included from /home/chaoqun/catkin_ws/src/vision_visp/visp_auto_tracker/flashcode_mbt/detectors/qrcode/detector.cpp:1:0:
/home/chaoqun/catkin_ws/src/vision_visp/visp_auto_tracker/flashcode_mbt/detectors/qrcode/detector.h:9:18: fatal error: zbar.h: No such file or directory
compilation terminated.
make[2]: *** [vision_visp/visp_auto_tracker/CMakeFiles/visp_auto_tracker_qrcode_detector.dir/flashcode_mbt/detectors/qrcode/detector.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [vision_visp/visp_auto_tracker/CMakeFiles/visp_auto_tracker_datamatrix_detector.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
make[1]: *** [vision_visp/visp_auto_tracker/CMakeFiles/visp_auto_tracker_qrcode_detector.dir/all] Error 2
Linking CXX shared library /home/chaoqun/catkin_ws/devel/lib/libvisp_auto_tracker_cmd_line.so
[ 90%] Built target visp_auto_tracker_cmd_line
make: *** [all] Error 2
Invoking "make" failed

I just do not know why. I really want to use this tool; can you give me some hints? Thank you in advance!
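For what it's worth, the `fatal error: zbar.h: No such file or directory` above usually means the ZBar development headers are not installed; assuming an Ubuntu/Debian setup (an assumption, package name may differ on other distributions), one likely fix is:

```shell
# Install the ZBar development package (provides zbar.h), which
# visp_auto_tracker's QR-code detector builds against, then rebuild.
sudo apt-get install libzbar-dev
cd ~/catkin_ws
catkin_make
```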

Error process has died

This is the message I get:

[visp_auto_tracker-1] process has died [pid 3675, exit code -11, cmd /home/ruben/catkin_ws/devel/lib/visp_auto_tracker/visp_auto_tracker /visp_auto_tracker/camera_info:=/usb_cam/camera_info /visp_auto_tracker/image_raw:=/usb_cam/image_raw __name:=visp_auto_tracker __log:=/home/ruben/.ros/log/12c6b6d2-cf76-11e7-bf56-303a6428194b/visp_auto_tracker-1.log].
log file: /home/ruben/.ros/log/12c6b6d2-cf76-11e7-bf56-303a6428194b/visp_auto_tracker-1*.log

This message appears when the camera sees a QR code, and then the node stops running.
How can I solve this?
