nvidia-isaac-ros / isaac_ros_visual_slam

Visual odometry package based on the hardware-accelerated NVIDIA Elbrus library, with world-class quality and performance.

Home Page: https://developer.nvidia.com/isaac-ros-gems

License: Apache License 2.0

CMake 3.51% C++ 96.49%
ros visual-odometry perception slam robotics localization ros2-humble ros2 gpu jetson

isaac_ros_visual_slam's Introduction

Isaac ROS Visual SLAM

Hardware-accelerated, simultaneous localization and mapping (SLAM) using stereo visual inertial odometry (SVIO).


Webinar Available

Learn how to use this package by watching our on-demand webinar: Pinpoint, 250 fps, ROS 2 Localization with vSLAM on Jetson


Overview

Isaac ROS Visual SLAM provides a high-performance, best-in-class ROS 2 package for VSLAM (visual simultaneous localization and mapping). This package uses a stereo camera with an IMU to estimate odometry as an input to navigation. It is GPU accelerated to provide real-time, low-latency results in a robotics application. VSLAM provides an additional odometry source for mobile robots (ground-based) and can be the primary odometry source for drones.

VSLAM provides a method for visually estimating the position of a robot relative to its start position, known as VO (visual odometry). This is particularly useful in environments where GPS is unavailable (such as indoors) or intermittent (such as urban locations where structures block line of sight to GPS satellites). The method is designed to use left and right stereo camera frames and an IMU (inertial measurement unit) as input. It uses the input stereo image pairs to find matching key points in the left and right images; using the baseline between the left and right cameras, it can estimate the distance to each key point. Using two consecutive input stereo image pairs, VSLAM can track the 3D motion of key points between the two consecutive images to estimate the 3D motion of the camera, which is then used to compute odometry as an output to navigation. Compared to the classic approach to VSLAM, this method uses GPU acceleration to find and match more key points in real time, with fine tuning to minimize overall reprojection error.
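
To make the stereo key-point step concrete, below is a minimal sketch in Python with OpenCV. It is an illustration only, not the Elbrus/cuVSLAM implementation, and the focal length and baseline values are hypothetical placeholders.

import cv2

FX = 718.9         # focal length in pixels (hypothetical value)
BASELINE_M = 0.54  # left-right camera baseline in meters (hypothetical value)

def stereo_keypoint_depths(left_gray, right_gray):
    """Match key points across a rectified stereo pair and estimate depth."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    depths = []
    for m in matcher.match(des_l, des_r):
        (xl, yl), (xr, yr) = kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt
        disparity = xl - xr
        # In rectified images a true match lies on (nearly) the same row and
        # has positive disparity; depth then follows from the baseline.
        if disparity > 0.5 and abs(yl - yr) < 2.0:
            depths.append((kp_l[m.queryIdx].pt, FX * BASELINE_M / disparity))
    return depths

Repeating this matching across two consecutive stereo pairs, and tracking which key points reappear, is what allows the 3D camera motion to be recovered between frames.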

Key points depend on distinctive features in the left and right camera images that can be repeatedly detected despite changes in size, orientation, perspective, lighting, and image noise. In some instances, the number of key points may be limited or entirely absent; for example, if the camera's field of view covers only a large, solid-colored wall, no key points may be detected. If there are insufficient key points, this module falls back on the motion sensed with the IMU to provide an estimate for odometry. This method, known as VIO (visual-inertial odometry), improves estimation performance when the scene lacks distinctive features to track motion visually.
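
As a rough illustration of the IMU fallback, the sketch below dead-reckons position from gravity-compensated accelerometer samples. It is a naive integrator with no bias estimation or orientation filtering, not the cuVSLAM integrator, and it assumes the readings are already rotated into the world frame.

import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # assumed world-frame gravity vector

def integrate_imu(position, velocity, accel_world, dt):
    """One dead-reckoning step from a world-frame specific-force reading."""
    lin_acc = accel_world + GRAVITY      # at rest this cancels to ~zero
    velocity = velocity + lin_acc * dt   # integrate acceleration once
    position = position + velocity * dt  # and once more for position
    return position, velocity

Because errors are integrated twice, drift grows quickly, which is why the IMU serves as a short-term complement to the visual estimate rather than a replacement.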

SLAM (simultaneous localization and mapping) is built on top of VIO, creating a map of key points that can be used to determine whether an area has been seen before. When VSLAM determines that an area has been seen before, it reduces uncertainty in the map estimate; this is known as loop closure. VSLAM uses a statistical approach to loop closure that is more compute-efficient, providing a real-time solution and improving convergence in loop closure.
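
The "previously seen" test can be pictured as matching the current frame's descriptors against stored keyframes, as in the toy check below. cuVSLAM's statistical approach is far more compute-efficient; this sketch, whose thresholds are assumed, only conveys the idea.

import numpy as np

MATCH_DIST = 40    # max Hamming distance for a descriptor match (assumed)
MIN_MATCHES = 50   # matches required to declare a revisit (assumed)

def is_revisit(current_desc, keyframe_desc):
    """Both arguments are lists of binary descriptors as uint8 arrays."""
    matches = 0
    for d in current_desc:
        # Hamming distance of d against every stored keyframe descriptor.
        best = min(np.unpackbits(d ^ k).sum() for k in keyframe_desc)
        if best < MATCH_DIST:
            matches += 1
    return matches >= MIN_MATCHES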


There are multiple methods for estimating odometry as an input to navigation. None of these methods is perfect; each has limitations because of systematic flaws in the sensor providing measured observations, such as missing LIDAR returns absorbed by black surfaces, inaccurate wheel ticks when a wheel slips on the ground, or a lack of distinctive features in a scene limiting the key points in a camera image. A practical approach to tracking odometry is to use multiple sensors with diverse methods, so that systematic issues with one method can be compensated for by another. With three separate estimates of odometry, failures in a single method can be detected, allowing the multiple methods to be fused into a single, higher-quality result. VSLAM provides a vision- and IMU-based solution for estimating odometry that is different from the common practice of using LIDAR and wheel odometry. VSLAM can even be used to improve diversity, with multiple stereo cameras positioned in different directions to provide multiple, concurrent visual estimates of odometry.
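
The failure-detection idea can be sketched as a simple median cross-check over three odometry deltas; the threshold is an assumption, and a real fusion stack (for example, an EKF) is far more sophisticated.

import numpy as np

DISAGREEMENT_M = 0.05  # max allowed deviation from the median delta (assumed)

def odometry_outliers(deltas):
    """deltas maps a source name to its 3D translation delta for one interval."""
    median = np.median(np.stack(list(deltas.values())), axis=0)
    return [name for name, d in deltas.items()
            if np.linalg.norm(d - median) > DISAGREEMENT_M]

# Example: the wheel encoder slips while LIDAR and VSLAM agree.
print(odometry_outliers({
    'wheel': np.array([0.30, 0.00, 0.0]),
    'lidar': np.array([0.10, 0.01, 0.0]),
    'vslam': np.array([0.11, 0.00, 0.0]),
}))  # -> ['wheel']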

To learn more about VSLAM, refer to the cuVSLAM documentation.

Accuracy

VSLAM is a best-in-class package, with the lowest translation and rotation error of any real-time method as measured on the KITTI Visual Odometry / SLAM Evaluation 2012.

Method      Runtime   Translation   Rotation       Platform
VSLAM       0.007s    0.94%         0.0019 deg/m   Jetson AGX Xavier aarch64
ORB-SLAM2   0.06s     1.15%         0.0027 deg/m   2 cores @ >3.5 GHz x86_64

In addition to results from standard benchmarks, we test loop closure for VSLAM on sequences of over 1000 meters, with coverage for indoor and outdoor scenes.

Performance

Sample Graph       Input Size   AGX Orin          Orin NX           Orin Nano 8GB     x86_64 w/ RTX 4060 Ti
Visual SLAM Node   720p         228 fps (40 ms)   127 fps (74 ms)   113 fps (65 ms)   456 fps (37 ms)


Documentation

Please visit the Isaac ROS Documentation to learn how to use this repository.


Packages

Latest

Update 2023-10-18: Improved stability.

isaac_ros_visual_slam's People

Contributors

hemalshahnv, hguillen, jaiveersinghnv


isaac_ros_visual_slam's Issues

Could not find a package configuration file provided by "vpi"

I am using Ubuntu 20.04.4 and ROS 2 Foxy. Following the steps in the README, the build fails with this error:

colcon build --symlink-install
Starting >>> isaac_ros_common
Starting >>> isaac_ros_test
Starting >>> isaac_ros_nvengine_interfaces
Starting >>> isaac_ros_visual_slam_interfaces
Finished <<< isaac_ros_test [0.93s]                                  
--- stderr: isaac_ros_common                                         
CMake Error at CMakeLists.txt:31 (find_package):
  By not providing "Findvpi.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "vpi", but
  CMake did not find one.

  Could not find a package configuration file provided by "vpi" with any of
  the following names:

    vpiConfig.cmake
    vpi-config.cmake

  Add the installation prefix of "vpi" to CMAKE_PREFIX_PATH or set "vpi_DIR"
  to a directory containing one of the above files.  If "vpi" provides a
  separate development package or SDK, be sure it has been installed.


"Tracker is lost" on KITTI rosbag

I prepared the KITTI dataset in the same way as @DaddyWesker and ran the sample code. However, I frequently get the message "Tracker is lost". RViz confirmed that there are no sudden jumps in the image sequence.

Originally posted by @Corufa in #43 (comment)

Bag file cannot be used

Hi.

With recorded bag files including camera info topics and stereo images, the SLAM does not work.

It generates the following error:
[ERROR]: Transform is impossible. canTransform(camera_infra1_frame->camera_link) returns false

With live topics, the SLAM works super well.

Thanks.

Error on vGPU

I got the error below when I launch the vSLAM stack. My environment is a vGPU in VMware.
I tested nvblox and it works well in the same VM.
Any guess what's wrong?
The VM has:

  • 8x vCPUs
  • 64G RAM
  • 2x A10
  • 256G SSD
  • OS: Ubuntu 20.04.4 LTS
  • dGPU 470.129.06
admin@omn-u20-mkomuro:/workspaces/isaac_ros-dev/ros_ws$ ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam_isaac_sim.launch.py
[INFO] [launch]: All log files can be found below /home/admin/.ros/log/2022-06-15-06-07-12-816997-omn-u20-mkomuro-10633
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container-1]: process started with pid [10646]
[component_container-1] [INFO] [1655273233.262291079] [visual_slam_launch_container]: Load Library: /workspaces/isaac_ros-dev/ros_ws/install/isaac_ros_visual_slam/lib/libvisual_slam_node.so
[ERROR] [component_container-1]: process has died [pid 10646, exit code -4, cmd '/opt/ros/foxy/lib/rclcpp_components/component_container --ros-args -r __node:=visual_slam_launch_container -r __ns:=/'].

The code is not the head branch, but it works on my local workstation (RTX 2080 Ti with the same OS and dGPU version).

git clone -b hemalshahNV-patch-1 --recurse-submodules https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
git clone -b hotfix_1 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_pipeline
git clone -b release-ea3-hotfix1 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam

Can't build with realsense "failed to create symbolic link"

I am trying to follow the guide at https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common to configure run_dev.sh to build ROS 2 Humble and realsense-ros on my Jetson Xavier (JetPack 5.0.2). I added the file ".isaac_ros_common_config" with just the following line:

CONFIG_IMAGE_KEY = humble.nav2.realsense

This did nothing.

I then modified the run_dev.sh script, changing line 108 from "IMAGE_KEY=humble.nav2" to "IMAGE_KEY=humble.nav2.realsense".
Now librealsense builds, but when I call:

colcon build --symlink-install

I get the following error message:

--- stderr: realsense2_camera_msgs                              
failed to create symbolic link '/workspaces/isaac_ros-dev/build/realsense2_camera_msgs/ament_cmake_python/realsense2_camera_msgs/realsense2_camera_msgs' because existing path cannot be removed: Is a directory
make[2]: *** [CMakeFiles/ament_cmake_python_symlink_realsense2_camera_msgs.dir/build.make:70: CMakeFiles/ament_cmake_python_symlink_realsense2_camera_msgs] Error 1
make[1]: *** [CMakeFiles/Makefile2:421: CMakeFiles/ament_cmake_python_symlink_realsense2_camera_msgs.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
make: *** [Makefile:146: all] Error 2
---
Failed   <<< realsense2_camera_msgs [13.1s, exited with code 2]

Any solution for this? And since I'm obviously not using the ".isaac_ros_common_config" file correctly, how am I supposed to do this? All I want to do is use my D435i camera.

EDIT

Okay, so I tried just deleting the folder that was causing the problem at ~/workspaces/isaac_ros-dev/build/realsense2_camera_msgs/ament_cmake_python/realsense2_camera_msgs/realsense2_camera_msgs
and was able to build the ROS wrapper, which seems to be working as intended for now.

I still really want to understand the syntax of the ".isaac_ros_common_config" file, as that seems much smarter than my current solution of editing the run_dev.sh file every time.

Is it possible to use Multiple Cameras in Elbrus VSLAM?

Hi,
I looked through the elbrus.h file, and in the Elbrus initialization we pass the ELBRUS_CameraRig, which in turn contains the ELBRUS_Camera and the number of cameras in the rig.

/*       
 * Use this to initialize Elbrus
 *
 * ELBRUS_TrackerHandle will remember number of cameras
 * from rig. Elbrus supports only Mono and Stereo rigs.
 */
ELBRUS_API
ELBRUS_Status ELBRUS_CreateTracker(ELBRUS_TrackerHandle* tracker,
                                   const struct ELBRUS_CameraRig* rig,
                                   const struct ELBRUS_Configuration* cfg);

Also, in the code where we pass the images to start tracking, it is mentioned that we need to pass as many images as there are cameras.

 * images - is a pointer to single image in case of mono or
 * array of two images in case of stereo.
 *
 * You should use the same number of images for tracker equal to
 * rig->num_cameras in ELBRUS_CreateTracker.
 */
ELBRUS_API
ELBRUS_Status ELBRUS_Track(ELBRUS_TrackerHandle tracker,
                           const ELBRUS_Image *images,
                           const ELBRUS_Pose* predicted_pose,
                           ELBRUS_PoseEstimate* pose_estimate);
  • So, is it possible to use multiple cameras in Elbrus SLAM? If yes, how are the multiple camera streams used to build the pose graph?

IMU integration

Hello,
I'm trying to test isaac_ros_visual_slam with an IMU (the one inside the RealSense D455 module) using the RealSense launch example.

I edited realsense example like this:

Edited launch
visual_slam_node = ComposableNode(
      name='visual_slam_node',
      package='isaac_ros_visual_slam',
      plugin='isaac_ros::visual_slam::VisualSlamNode',
      parameters=[{
                  'enable_rectified_pose': True,
                  'denoise_input_images': False,
                  'rectified_images': True,
                  'enable_debug_mode': False,
                  'debug_dump_path': '/tmp/elbrus',
                  'enable_slam_visualization': True,
                  'enable_landmarks_view': True,
                  'enable_observations_view': True,
                  'map_frame': 'map',
                  'odom_frame': 'odom',
                  'base_frame': 'camera_link',
                  'enable_imu': True,
                  'input_left_camera_frame': 'camera_infra1_frame',
                  'input_right_camera_frame': 'camera_infra2_frame',
                  'input_imu_frame': 'camera_accel_frame',
                  'gravitational_force': [-0.22555294, -9.83607, 0.049]
                  }],
      remappings=[('stereo_camera/left/image', 'infra1/image_rect_raw'),
                  ('stereo_camera/left/camera_info', 'infra1/camera_info'),
                  ('stereo_camera/right/image', 'infra2/image_rect_raw'),
                  ('stereo_camera/right/camera_info', 'infra2/camera_info'),
                  ('visual_slam/imu', '/imu')]
  )
  • 'gravitational_force': [-0.22555294, -9.83607, 0.049] -> gravitational_force was changed from the default value; as stated here, [-0.22555294, -9.83607, 0.049] is the reading from the IMU at the start of trajectory estimation (a sketch for capturing such a reading follows below)

  • 'input_imu_frame': 'camera_accel_frame' -> input_imu_frame was set to an available one from the tf tree, following this

Visualized here

tf_elbrus_d455
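
As referenced in the list above, here is a rough way to capture such a reading: a small node that averages the first accelerometer samples while the camera is stationary. The /imu topic name, the sample count, and the node itself are assumptions for illustration; nothing like this ships with the package.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu

class GravityEstimator(Node):
    """Average the first N accelerometer samples while the camera is still."""

    def __init__(self, n_samples=200):
        super().__init__('gravity_estimator')
        self.samples, self.n = [], n_samples
        self.create_subscription(Imu, '/imu', self.callback, 50)

    def callback(self, msg):
        if len(self.samples) >= self.n:
            return  # done; keep spinning until the user stops the node
        a = msg.linear_acceleration
        self.samples.append((a.x, a.y, a.z))
        if len(self.samples) == self.n:
            mean = [sum(axis) / self.n for axis in zip(*self.samples)]
            self.get_logger().info(f'gravitational_force candidate: {mean}')

def main():
    rclpy.init()
    rclpy.spin(GravityEstimator())

if __name__ == '__main__':
    main()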

My issue:
Although integrator_states changes to 3 (at launch it is 2, and later 3), I cannot see any improvement in the estimated trajectory. (The IMU measurements seem to be relatively in order.)

In an experiment where I simulate loss of the image (with the device not moving), the estimated trajectory drifts: not a small drift caused by noise integration, but a big one.

Questions:
I suspect that my input_imu_frame or gravitational_force parameters are not in order; could you provide some more insight into how to set these parameters?

What should I expect from IMU integration? Overall better trajectory estimation? Or is the IMU used only in situations where the image is invalid?

Is there a simple way to test whether the IMU integration is in order?

Maximum distance to distinguishable features

Hi,

We're evaluating Visual SLAM for large warehouse areas where the distance to the nearest visible objects can be up to 100 meters.

What is the limitation on the distance of features detected on camera? In other words, how far away can a distinguishable feature be for Visual SLAM to still work?

Unfortunately, this information doesn't seem to be available anywhere.

Can't Access RealSense D435i

I am running the Docker container from isaac_ros_common on Ubuntu 20.04 on my desktop, not on a Jetson. When running the launch file, I get "failed to set power state" errors.

ros2 launch isaac_ros_visual_odometry isaac_ros_visual_odometry_realsense.launch.py
[INFO] [launch]: All log files can be found below /home/admin/.ros/log/2022-01-08-00-40-09-432131-pcmamba-2323
[INFO] [launch]: Default logging verbosity is set to INFO
/workspaces/isaac_ros-dev/install/isaac_ros_visual_odometry/share/isaac_ros_visual_odometry/launch/isaac_ros_visual_odometry_realsense.launch.py:16: UserWarning: The parameter 'node_executable' is deprecated, use 'executable' instead
realsense_camera_node = Node(
[INFO] [component_container-1]: process started with pid [2336]
[INFO] [realsense2_camera_node-2]: process started with pid [2338]
[realsense2_camera_node-2] [INFO] [1641602409.565854679] [RealSenseCameraNode]: RealSense ROS v3.2.3
[realsense2_camera_node-2] [INFO] [1641602409.565879440] [RealSenseCameraNode]: Built with LibRealSense v2.50.0
[realsense2_camera_node-2] [INFO] [1641602409.565883781] [RealSenseCameraNode]: Running with LibRealSense v2.50.0
[realsense2_camera_node-2] [WARN] [1641602409.575876964] [RealSenseCameraNode]: Device 1/1 failed with exception: failed to set power state
[realsense2_camera_node-2] [ERROR] [1641602409.575892514] [RealSenseCameraNode]: The requested device with is NOT found. Will Try again.
[component_container-1] [INFO] [1641602409.757698050] [visual_odometry_launch_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_visual_odometry/lib/libvisual_odometry_node.so
[component_container-1] [INFO] [1641602409.789745607] [visual_odometry_launch_container]: Found class: rclcpp_components::NodeFactoryTemplate<isaac_ros::visual_odometry::VisualOdometryNode>
[component_container-1] [INFO] [1641602409.789775598] [visual_odometry_launch_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<isaac_ros::visual_odometry::VisualOdometryNode>
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/visual_odometry_node' in container '/visual_odometry_launch_container'
[realsense2_camera_node-2] [WARN] [1641602415.585547627] [RealSenseCameraNode]: Device 1/1 failed with exception: failed to set power state
[realsense2_camera_node-2] [ERROR] [1641602415.585565458] [RealSenseCameraNode]: The requested device with is NOT found. Will Try again.

Working with Isaac Sim does not work well

I walked through the instructions at the following link,

but it does not work well.

ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam_isaac_sim.launch.py

The following error happened, and I could not confirm any ROS messages.

[INFO] [launch]: All log files can be found below /home/admin/.ros/log/2022-04-30-04-43-54-457934-omn-a40-ubuntu-2718564
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container-1]: process started with pid [2718584]
[component_container-1] [INFO] [1651293834.889712235] [visual_slam_launch_container]: Load Library: /workspaces/isaac_ros-dev/ros_ws2/install/isaac_ros_visual_slam/lib/libvisual_slam_node.so
[ERROR] [component_container-1]: process has died [pid 2718584, exit code -4, cmd '/opt/ros/foxy/lib/rclcpp_components/component_container --ros-args -r __node:=visual_slam_launch_container -r __ns:=/'].

Could you tell me how to solve this problem?

VSLAM gets poor performance: tracker is lost

Hi, I'm using VSLAM as the state estimator in nvblox,
with a D435i as the camera, on x86 with an NVIDIA GPU (Ubuntu 20.04.5 LTS, ROS 2 Foxy),
using the RealSense vSLAM examples in the launch folder of VSLAM and
https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/main/docs/tutorial-nvblox-vslam-realsense.md

But it seems that after the DP2.0 release update, VSLAM always shows "tracker is lost".


There are very few feature points, but when I use VSLAM release DP1.2, many feature points are output in RViz.

I've followed the troubleshooting suggestions:

Fast motion causing motion blur in the frames.
Low-lighting conditions.
The wrong camera_info being published.

My stereo input is 60 fps, I use a RealSense D435i, and I split the camera image so there are no light dots in it.
As for the camera info, I'm not sure what the "right" camera info is; all I know is that the camera info can be seen via rostopic echo.

What I want to ask is: is there any difference between DP2.0 and DP1.2 that would affect the performance?

Are there any ways to tune VSLAM aside from the troubleshooting steps? Or, for the Elbrus VSLAM, can I tune configurations such as the feature point count, as in ORB-SLAM3?

And is involving the D435i's IMU worth a try?

Thanks.

some basic questions

1. Hi, is the "odom" TF static with respect to the "base_link_smooth/rectified" TFs, like the "map" TF compared with "camera_link" on RealSense cameras?

2. Can both the "base_link_smooth" and "base_link_rectified" TFs be published and working simultaneously?
I mean, will activating the rectified one disconnect the smooth one?

3. Does the odometry topic /visual_odometry/tf_stamped always refer to the base_link_smooth TF?

4. "rectified" always appears in my TF tree in RViz2, even when set to false; is this correct?

5. Does "rectified" include, or imply, loop closure?

Thanks for sharing; it is a great piece of software, smart people.

How can I evaluate with the KITTI dataset?

So far, I have been able to try the rosbag file provided with the Quickstart.
I want to evaluate with KITTI, but I don't know how to handle it.
How can I run isaac_ros_visual_slam with KITTI?

Thank you!

"Tracker is lost" on KITTI rosbag

Alright, I was able to launch the example code using Docker. It works. But I'm still getting nowhere trying to launch one of the KITTI rosbags using the command ros2 bag play ../datasets/kitti/rosbags/ros2/kitti_2011_09_29_drive_0071_synced/ --remap /kitti/camera_gray_right/image_raw:=/stereo_camera/right/image /kitti/camera_gray_left/image_raw:=/stereo_camera/left/image /kitti/camera_gray_left/camera_info:=/camera_info_left /kitti/camera_gray_right/camera_info:=/camera_info_right. I still get the "Tracker is lost" issue.

Originally posted by @DaddyWesker in #37 (comment)

Failed <<< isaac_ros_visual_odometry

Run git lfs pull in each Isaac ROS repository you have checked out, especially isaac_ros_common, to ensure all of the large binary files have been downloaded.

That includes isaac_ros_visual_odometry.

Parameter `input_base_frame` not working: base -> camera transform not taken into account?

Hi, first of all, many thanks for sharing this great repository! Unfortunately, I am having some issues obtaining odometry messages that account for the transformation between the robot and camera coordinate frames.

I am currently working with a ground robot and trying to track its position using a RealSense camera positioned on top of the robot, looking slightly down, as so:
frames

From the documentation, I understand that the implementation is designed to track successive poses of the robot (i.e., frame base_link), despite computing these poses in the camera frame (i.e., frame front_camera_link). Namely, the visual odometry node parameter input_base_frame seems to be responsible for this:

input_base_frame: Name of the frame (baselink) to calculate transformation between the baselink and the left camera. Default is empty, which means the value of the base_frame will be used. If input_base_frame and base_frame are both empty, the left camera is assumed to be in the robot's center.

However, by launching the node with 'input_base_frame': 'base_link' as follows:

    visual_slam_node = ComposableNode(
        name='visual_slam_node',
        namespace='front_camera',
        package='isaac_ros_visual_slam',
        plugin='isaac_ros::visual_slam::VisualSlamNode',
        parameters=[{
                    'enable_rectified_pose': True,
                    'denoise_input_images': False,
                    'rectified_images': True,
                    'enable_debug_mode': False,
                    'debug_dump_path': '/tmp/elbrus',
                    'enable_slam_visualization': True,
                    'enable_landmarks_view': True,
                    'enable_observations_view': True,
                    'publish_tf': False,
                    'map_frame': 'map',
                    'odom_frame': 'odom',
                    'base_frame': 'base_link',
                    'input_base_frame': 'base_link',
                    'input_left_camera_frame': 'front_camera_infra1_frame',
                    'input_right_camera_frame': 'front_camera_infra2_frame',
                    }],
        remappings=[('stereo_camera/left/image', 'infra1/image_rect_raw'),
                    ('stereo_camera/left/camera_info', 'infra1/camera_info'),
                    ('stereo_camera/right/image', 'infra2/image_rect_raw'),
                    ('stereo_camera/right/camera_info', 'infra2/camera_info')]
    )

the camera position with respect to the robot frame does not seem to be taken into account. As a matter of fact, unless I am mistaken, the ROS parameter input_base_frame does not seem to be used at all in the source code.

Here is an example where I start VO tracking and move the hand-held camera straight forward (positive X direction in the front_camera_link frame):
vo
Overlooking the jitter caused by my hand motion, I would expect the odometry arrows to point "into the ground", i.e., to be collinear with the front_camera_link frame's X axis. Instead, the arrows seem to be directed along the X axis of the base_link frame, i.e., the VO node considers the camera to be positioned at the robot's center?

Have I misunderstood the usage of the node's parameters, or am I overlooking something? How can I track the motion of the robot instead of the camera? Any help would be greatly appreciated! Thanks in advance.

Installation on x86_64

Hi, I am trying to install the software on an x86_64 machine, but I encountered compilation errors:

--- stderr: isaac_ros_image_proc      
In file included from /home/ubuntu/isaac_perception_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:33:
/home/ubuntu/isaac_perception_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp: In function ‘cv::Mat {anonymous}::GetConvertedMat(VPIImageImpl*&, VPIImageImpl*&, VPIStreamImpl*&, const cv::Mat&, uint32_t, std::string, std::string)’:
/home/ubuntu/isaac_perception_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:68:5: error: ‘vpiImageCreateOpenCVMatWrapper’ was not declared in this scope; did you mean ‘vpiImageCreateWrapper’?
   68 |     vpiImageCreateOpenCVMatWrapper(
      |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/isaac_perception_ws/install/isaac_ros_common/include/isaac_ros_common/vpi_utilities.hpp:20:25: note: in definition of macro ‘CHECK_STATUS’
   20 |     VPIStatus status = (STMT); \
      |                         ^~~~
/home/ubuntu/isaac_perception_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:81:61: error: too many arguments to function ‘VPIStatus vpiImageLock(VPIImage, VPILockMode)’
   81 |   CHECK_STATUS(vpiImageLock(output, VPI_LOCK_READ, &out_data));
      |                                                             ^
/home/ubuntu/isaac_perception_ws/install/isaac_ros_common/include/isaac_ros_common/vpi_utilities.hpp:20:25: note: in definition of macro ‘CHECK_STATUS’
   20 |     VPIStatus status = (STMT); \
      |                         ^~~~
In file included from /home/ubuntu/isaac_perception_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:28:
/opt/nvidia/vpi2/include/vpi/Image.h:556:22: note: declared here
  556 | VPI_PUBLIC VPIStatus vpiImageLock(VPIImage img, VPILockMode mode);
      |                      ^~~~~~~~~~~~
/home/ubuntu/isaac_perception_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:82:31: error: ‘VPIImageData’ {aka ‘struct VPIImageDataRec’} has no member named ‘planes’
   82 |   cv::Mat output_mat{out_data.planes[0].height, out_data.planes[0].width,
      |                               ^~~~~~
/home/ubuntu/isaac_perception_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:82:58: error: ‘VPIImageData’ {aka ‘struct VPIImageDataRec’} has no member named ‘planes’
   82 |   cv::Mat output_mat{out_data.planes[0].height, out_data.planes[0].width,
      |                                                          ^~~~~~
/home/ubuntu/isaac_perception_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:83:54: error: ‘VPIImageData’ {aka ‘struct VPIImageDataRec’} has no member named ‘planes’
   83 |     g_str_to_channels.at(encoding_desired), out_data.planes[0].data,
      |                                                      ^~~~~~
/home/ubuntu/isaac_perception_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:84:34: error: ‘VPIImageData’ {aka ‘struct VPIImageDataRec’} has no member named ‘planes’
   84 |     static_cast<size_t>(out_data.planes[0].pitchBytes)};
      |                                  ^~~~~~
/home/ubuntu/isaac_perception_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:84:55: error: no matching function for call to ‘cv::Mat::Mat(<brace-enclosed initializer list>)’
   84 |     static_cast<size_t>(out_data.planes[0].pitchBytes)};
      |                                                       ^
In file included from /usr/include/opencv4/opencv2/core/mat.hpp:3724,
                 from /usr/include/opencv4/opencv2/core.hpp:59,
                 from /usr/include/opencv4/opencv2/core/core.hpp:48,
                 from /opt/ros/foxy/include/cv_bridge/cv_bridge.h:43,
                 from /home/ubuntu/isaac_perception_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:20:
/usr/include/opencv4/opencv2/core/mat.inl.hpp:1380:1: note: candidate: ‘cv::Mat::Mat(cv::Mat&&)’
 1380 | Mat::Mat(Mat&& m)
      | ^~~
/usr/include/opencv4/opencv2/core/mat.inl.hpp:1380:1: note:   candidate expects 1 argument, 5 provided

It seems this software depends on a specific VPI version. Could you tell me the proper way to set up the installation environment?

Currently, I am using a Dockerfile based on nvidia/cuda:11.4.0-devel-ubuntu20.04 and install the VPI libraries following https://docs.nvidia.com/vpi/installation.html

RUN apt-get update && apt-get install -y software-properties-common gnupg  && \
    apt-key adv --fetch-key https://repo.download.nvidia.com/jetson/jetson-ota-public.asc 
RUN add-apt-repository 'deb https://repo.download.nvidia.com/jetson/x86_64/focal r34.1 main' && \
    apt update && \
    apt-get install -y libnvvpi2 vpi2-dev vpi2-samples git-lfs

My NVIDIA driver version is 510, but I think this issue is not related to the GPU driver.

Unclear transformations, possibly a bug

Hello,
I am somewhat confused by your tracking messages. You provide 3 pose estimates: odometry, vo_pose, and vo_pose_covariance, but I cannot work out how these three differ. If I look at the messages with the echo command on the command line, they all give me the same position. Naturally, vo_pose does not provide covariance. But in RViz, at the beginning, the odometry arrow points out of input_left_camera_frame, while the vo_pose and vo_pose_covariance arrows point out of the base frame.

This can be seen in the following picture:
elbrus_start

However, over time the odometry arrow vanishes completely, and the vo_pose and vo_pose_covariance arrows slide elsewhere, to where the topic echo messages show they are.

To showcase this behavior, I am appending an echo output and an RViz screenshot:

header:
  stamp:
    sec: 1655833101
    nanosec: 443741631
  frame_id: map
pose:
  pose:
    position:
      x: -0.026482975110411644
      y: 1.1688748598098755
      z: -0.07859078794717789
    orientation:
      x: 0.06656732322526643
      y: -0.0011859150693369661
      z: -0.26026932096369937
      w: 0.9632379164775666
  covariance: [0.0015559596940875053, -1.4628109283876256e-06, 0.0003669109137263149, 7.136500789783895e-05, -0.002161735901609063, -0.0004413025744725019, -1.4623600463892217e-06, 0.0011519042309373617, 5.1062728744000196e-05, 0.0020212936215102673, 0.00013677265087608248, -0.0008255475549958646, 0.0003669107973109931, 5.106258686282672e-05, 0.000697773473802954, 0.0002427915169391781, -0.0004011623386759311, -0.0001721317821647972, 7.136560452636331e-05, 0.0020212936215102673, 0.00024279169156216085, 0.004053641576319933, 0.00015628525579813868, -0.00145142269320786, -0.0021617363672703505, 0.0001367734366795048, -0.0004011626588180661, 0.00015628631808795035, 0.003537524025887251, 0.0003253195609431714, -0.00044130286551080644, -0.0008255474385805428, -0.00017213186947628856, -0.0014514223439618945, 0.00032532005570828915, 0.0019263405120000243]

elbrus_weirdness
From the picture you can clearly see that the vo_pose arrow reflects the values from the echo output, but it is disconnected from the robot body. I would also like to highlight that the base link is very close to the origin, as I placed the camera back in its original position after some arbitrary moves. From this I gather that positioning works fine; however, the outputs might be somehow confused, or possibly my configuration is causing all this trouble. I would generally expect odometry to describe the position of base_link using SLAM, and the vo_pose topics to describe the same thing without loop closure. Is my assumption correct? Why is the odometry arrow pointing out of the camera frame while maintaining the values of the vo_pose topics?

My launch configuration is the following:

import launch
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    visual_slam_node = ComposableNode(
        name='visual_slam_node',
        package='isaac_ros_visual_slam',
        plugin='isaac_ros::visual_slam::VisualSlamNode',
        parameters=[{
                    'enable_rectified_pose': True,
                    'enable_localization_n_mapping': True,
                    'denoise_input_images': False,
                    'rectified_images': True,
                    'enable_debug_mode': False,
                    'debug_dump_path': '/tmp/elbrus',
                    'enable_slam_visualization': False,
                    'enable_landmarks_view': False,
                    'enable_observations_view': False,
                    'enable_imu': True,
                    'input_imu_frame': 'elbrus_mag_link',
                    'map_frame': 'map',
                    'odom_frame': 'elbrus_vis_odom',
                    'base_frame': 'base_link2',
                    'input_left_camera_frame': 'elbrus_left_camera_frame',
                    'input_right_camera_frame': 'elbrus_right_camera_frame',
                    'path_max_size': 1000
                    }],
        remappings=[('stereo_camera/left/image', '/zed2/zed_node/left/image_rect_color'),
                    ('stereo_camera/left/camera_info', '/zed2/zed_node/left/camera_info'),
                    ('stereo_camera/right/image', '/zed2/zed_node/right/image_rect_color'),
                    ('stereo_camera/right/camera_info', '/zed2/zed_node/right/camera_info'),
                    ('visual_slam/imu', '/zed2/zed_node/imu/data')]
    )

    visual_slam_launch_container = ComposableNodeContainer(
        name='visual_slam_launch_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            visual_slam_node
        ],
        output='screen'
    )

    return launch.LaunchDescription([visual_slam_launch_container])

I have a static TF tree node that builds the relation between base_link2 and the input camera frames. I have also found 'enable_rectified_pose': True in your launch files; however, it is not documented anywhere. What is its purpose?

Hardware setup

Nvidia Jetson Xavier AGX
Jetpack 4.6.1
Ubuntu 18.04
ROS2 Foxy
Elbrus version 10.0
ZED2 camera, ZED SDK 3.7
Docker image l4t-base:r32.7.1

Repository not working

ament_cmake cannot be found.
I have already tried sourcing /opt/ros/humble/setup.bash.

Screenshot from 2022-10-17 20-51-51

Does the latest release (DP) of isaac_ros_visual_slam still support Galactic, or just Humble?

Hello!

I started using the isaac_ros_visual_slam package around 2 months ago, before the latest release that supports Humble. My application still uses ROS 2 Galactic at its core, so the release I used at the time (v0.9.3-ea3) has been working well for me. Since then, I haven't had time to check whether the latest stable release also supports Galactic, and the documentation makes no mention of it. So, does it support Galactic or not?

Thank you!

Evaluating the results & GT pose

Thank you for your great work!

And then, how did you evaluate your results?
Did you get the ground-truth (GT) pose in Isaac Sim?

I want to use your package with our own dataset generated in Isaac Sim, but we cannot get the GT pose in Isaac Sim right now, because /tf & /odom are not GT (they are based on AMCL) and we couldn't find any alternatives.

If you did evaluate your algorithm, how did you get the GT pose in Isaac Sim?

Thank you!

cannot build docker image

Following the instructions in the repository to set up the environment, the following error appears:

isaac_ros_dev not specified, assuming /home/neathle/workspaces/isaac_ros-dev
Building x86_64.humble.nav2.user base as image: isaac_ros_dev-x86_64 using key x86_64.humble.nav2.user
Using base image name not specified, using ''
Using docker context dir not specified, using Dockerfile directory
Resolved the following Dockerfiles for target image: x86_64.humble.nav2.user
/home/neathle/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/../docker/Dockerfile.user
/home/neathle/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/../docker/Dockerfile.x86_64.humble.nav2
Building /home/neathle/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/../docker/Dockerfile.x86_64.humble.nav2 as image: x86_64-humble-nav2-image with base: 
[+] Building 0.4s (3/3) FINISHED                                                              
 => [internal] load build definition from Dockerfile.x86_64.humble.nav2                  0.1s
 => => transferring dockerfile: 573B                                                     0.0s
 => [internal] load .dockerignore                                                        0.1s
 => => transferring context: 2B                                                          0.0s
 => ERROR [internal] load metadata for nvcr.io/nvidia/isaac/ros:x86_64-humble-nav2_5bd6  0.2s
------
 > [internal] load metadata for nvcr.io/nvidia/isaac/ros:x86_64-humble-nav2_5bd606e569673db8cb1f78f393a5c46b:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to authorize: rpc error: code = Unknown desc = failed to fetch anonymous token: unexpected status: 401 Unauthorized
Failed to build base image: isaac_ros_dev-x86_64, aborting.
~/workspaces/isaac_ros-dev/src/isaac_ros_common

My system:

  • Ubuntu 20.04
  • Docker: 20.10.21

Note: the log was much longer the first time, but I am unable to remove the cached result, and it now fails instantly with the error above.

isaac_ros_visual_slam in simulation returns [CUDA] failure

Hi,
I am working with the isaac_ros_visual_slam package and Gazebo.
Screenshot from 2022-09-05 15-08-04

My error:
[component_container-1] [INFO] [1662379434.366503061] [visual_slam_node]: Use use_gpu: true
[component_container-1] [ERROR] 140409917632320 [CUDA] failure: 13, file: /home/akorovko/Code/elbrus/src/modules/cuda_modules/src/lk_tracker.cpp, line: 9

Also, I use a PyTorch environment with CUDA.

Thanks for the reply.

wget: unable to resolve host address 'developer.download.nvidia.com'

--2022-07-03 15:27:56-- https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
Resolving developer.download.nvidia.com (developer.download.nvidia.com)... failed: Temporary failure in name resolution.
wget: unable to resolve host address 'developer.download.nvidia.com'
The command '/bin/bash -c apt-key del 7fa2af80 && mkdir -p /tmp && cd /tmp && wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb && dpkg -i cuda-keyring_1.0-1_all.deb && rm cuda-keyring_1.0-1_all.deb && add-apt-repository --remove 'deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /'' returned a non-zero code: 4
Failed to build base image: isaac_ros_dev-x86_64, aborting.
~/your_ws/src/isaac_ros_common

Test errors

Hello. Thanks for sharing your code.

I've tried to install this Isaac SLAM repo using a ROS build. I've installed Foxy and the required components. But after colcon build, when I try to run colcon test, I get this:

Starting >>> isaac_ros_common
Starting >>> isaac_ros_test
Starting >>> isaac_ros_nvengine_interfaces
Starting >>> isaac_ros_visual_slam_interfaces
Finished <<< isaac_ros_nvengine_interfaces [2.24s]                                                                            
Finished <<< isaac_ros_visual_slam_interfaces [2.24s]
Finished <<< isaac_ros_common [2.98s]                                              
--- stderr: isaac_ros_test                     

=============================== warnings summary ===============================
../../../../.local/lib/python3.8/site-packages/_pytest/nodes.py:633
  Warning: The (fspath: py.path.local) argument to Package is deprecated. Please use the (path: pathlib.Path) argument instead.
  See https://docs.pytest.org/en/latest/deprecations.html#fspath-argument-for-node-constructors-replaced-with-pathlib-path

../../../../.local/lib/python3.8/site-packages/_pytest/nodes.py:146
../../../../.local/lib/python3.8/site-packages/_pytest/nodes.py:146
  Warning: <class 'launch_testing_ros.pytest.hooks.LaunchROSTestModule'> is not using a cooperative constructor and only takes {'parent', 'fspath'}.
  See https://docs.pytest.org/en/stable/deprecations.html#constructors-of-custom-pytest-node-subclasses-should-take-kwargs for more details.

../../../../.local/lib/python3.8/site-packages/_pytest/nodes.py:633
../../../../.local/lib/python3.8/site-packages/_pytest/nodes.py:633
  Warning: The (fspath: py.path.local) argument to LaunchROSTestModule is deprecated. Please use the (path: pathlib.Path) argument instead.
  See https://docs.pytest.org/en/latest/deprecations.html#fspath-argument-for-node-constructors-replaced-with-pathlib-path

../../../../../../opt/ros/foxy/lib/python3.8/site-packages/ament_flake8/main.py:26
../../../../.local/lib/python3.8/site-packages/setuptools/_distutils/version.py:346
test/test_flake8.py::test_flake8
test/test_flake8.py::test_flake8
test/test_pep257.py::test_pep257
test/test_pep257.py::test_pep257
  Warning: distutils Version classes are deprecated. Use packaging.version instead.

../../../../../../opt/ros/foxy/lib/python3.8/site-packages/launch_testing/pytest/hooks.py:179
../../../../../../opt/ros/foxy/lib/python3.8/site-packages/launch_testing/pytest/hooks.py:179
../../../../../../opt/ros/foxy/lib/python3.8/site-packages/launch_testing/pytest/hooks.py:179
  Warning: The (fspath: py.path.local) argument to Module is deprecated. Please use the (path: pathlib.Path) argument instead.
  See https://docs.pytest.org/en/latest/deprecations.html#fspath-argument-for-node-constructors-replaced-with-pathlib-path

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
---
Finished <<< isaac_ros_test [15.7s]
Starting >>> isaac_ros_image_proc
Starting >>> isaac_ros_stereo_image_proc
Starting >>> isaac_ros_nvengine                                                                      
Finished <<< isaac_ros_nvengine [11.7s]                                                                                             
--- stderr: isaac_ros_stereo_image_proc                                                                
Errors while running CTest
Output from these tests are in: /home/daddywesker/ros2_ws/build/isaac_ros_stereo_image_proc/Testing/Temporary/LastTest.log
Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.
---

Finished <<< isaac_ros_stereo_image_proc [25.8s]	[ with test failures ]
--- stderr: isaac_ros_image_proc                     
Errors while running CTest
Output from these tests are in: /home/daddywesker/ros2_ws/build/isaac_ros_image_proc/Testing/Temporary/LastTest.log
Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.
---
Finished <<< isaac_ros_image_proc [35.1s]	[ with test failures ]
Starting >>> isaac_ros_image_pipeline
Starting >>> isaac_ros_visual_slam
Finished <<< isaac_ros_image_pipeline [1.45s]                                                      
Finished <<< isaac_ros_visual_slam [27.5s]                 

Summary: 9 packages finished [1min 19s]
  3 packages had stderr output: isaac_ros_image_proc isaac_ros_stereo_image_proc isaac_ros_test
  2 packages had test failures: isaac_ros_image_proc isaac_ros_stereo_image_proc

I was able to move further by installing:

sudo apt-get install ros-foxy-stereo-image-proc (perhaps this should be added to the requirements?)

But I still have this:


Summary: 9 packages finished [1min 20s]
  2 packages had stderr output: isaac_ros_image_proc isaac_ros_test
  1 package had test failures: isaac_ros_image_proc

Here is the log file for isaac_ros_image_proc

Currently, I can't get past it.

colcon build error

Hi, I'm using VSLAM,
but I got this error during colcon build:

admin@ubuntu-Default-string:/workspaces/isaac_ros-dev$ colcon build --symlink-install
Starting >>> nvblox_msgs
Starting >>> realsense2_camera_msgs
Starting >>> nvblox
Starting >>> isaac_ros_test
Starting >>> isaac_ros_visual_slam_interfaces
Starting >>> nvblox_cpu_gpu_tools
Starting >>> nvblox_performance_measurement_msgs
Starting >>> isaac_ros_apriltag_interfaces
Starting >>> isaac_ros_bi3d_interfaces
Starting >>> isaac_ros_common
Starting >>> isaac_ros_tensor_list_interfaces
Starting >>> nvblox_isaac_sim
Finished <<< nvblox_cpu_gpu_tools [3.66s]
Finished <<< nvblox_isaac_sim [3.61s]
Finished <<< isaac_ros_test [3.82s]
Finished <<< isaac_ros_bi3d_interfaces [14.3s]
Finished <<< isaac_ros_apriltag_interfaces [15.2s]
Finished <<< realsense2_camera_msgs [15.3s]
Starting >>> realsense2_camera
Starting >>> realsense_splitter
Starting >>> realsense2_description
Finished <<< isaac_ros_tensor_list_interfaces [15.6s]
Finished <<< isaac_ros_common [15.7s]
Finished <<< nvblox_performance_measurement_msgs [15.8s]
Finished <<< nvblox_msgs [16.2s]
Starting >>> nvblox_nav2
Starting >>> nvblox_rviz_plugin
Finished <<< realsense2_description [2.10s]
Finished <<< isaac_ros_visual_slam_interfaces [21.3s]
Starting >>> isaac_ros_visual_slam
--- stderr: isaac_ros_visual_slam
CMake Error at /opt/ros/humble/install/share/ament_cmake_core/cmake/index/ament_index_get_resource.cmake:62 (message):
ament_index_get_resource() called with not existing resource ('elbrus'
'isaac_ros_nitros')
Call Stack (most recent call first):
CMakeLists.txt:50 (ament_index_get_resource)


Failed <<< isaac_ros_visual_slam [8.06s, exited with code 1]
Aborted <<< nvblox_nav2 [29.0s]
Aborted <<< realsense2_camera [35.9s]
Aborted <<< nvblox_rviz_plugin [35.2s]
Aborted <<< realsense_splitter [43.0s]
Aborted <<< nvblox [2min 47s]

Summary: 12 packages finished [2min 48s]
1 package failed: isaac_ros_visual_slam
5 packages aborted: nvblox nvblox_nav2 nvblox_rviz_plugin realsense2_camera realsense_splitter
1 package had stderr output: isaac_ros_visual_slam
4 packages not processed
admin@ubuntu-Default-string:/workspaces/isaac_ros-dev$

Problem reproduction:
I followed
https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox
and
https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/main/docs/tutorial-nvblox-vslam-realsense.md
which require installing nvblox and vslam.
I think both of my branches are DP2.0, since I reinstalled them today.

I did build successfully before I reinstalled the system (x86_64, Ubuntu 20.04.5 LTS, ROS 2 Foxy outside the container), so I used
git clone --depth=1
to retrieve the last commit, but that didn't work either.

What is going wrong?

What happens after "Tracker is lost" warning?

Hello,

I guess the title is self-explanatory. Just to add to my question: is it possible to regain the tracking pose somehow after this warning is sent? I get enormously big numbers in rectified_pose after the tracker is lost, say 4x10^27 on the translation axes.

testing methodology

Hello,

I am trying to understand how the visual odometry package was validated on a robot, meaning a working ROS 2-based robot with visual SLAM and navigation in which the isaac_ros_visual_odometry package is integrated and tested.
Can you please help me understand:
a) On which ROS 2-based reference robot was it tested internally? (I assume Carter 2 is currently limited to ROS 1 Melodic.)
b) Can you share performance/benchmarking details in terms of GPU utilization and memory consumed (apart from drift accuracy)? If not, can you please let me know how we can check this?

I understand this package is built on top of https://docs.nvidia.com/isaac/isaac/packages/visual_slam/doc/elbrus_visual_slam.html using Carter 2, which seems limited to ROS 1 Melodic (please correct me), but I am more interested to know how it was validated on ROS 2, which reference platform was used to test it, and what GPU performance figures were achieved on AGX Xavier or Xavier NX.

Thanks
gurpreet

Quickstart doesn't work on Ubuntu 22.04

Following the quickstart guide, I get the following (related?) error messages.

When running ./scripts/run_dev.sh I get:

docker: Error response from daemon: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/moby/21f1770a617988ea5c6e1cc787167812ac103ea7c8541681c11b0f8b05c87252/log.json: no such file or directory): exec: "nvidia-container-runtime": executable file not found in $PATH: <nil>: unknown. ~/Workspace/isaac_ros-dev/src/isaac_ros_common

Ignoring this and trying colcon build I get:

CMake Error at CMakeLists.txt:31 (find_package):
  By not providing "Findvpi.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "vpi", but
  CMake did not find one.

  Could not find a package configuration file provided by "vpi" with any of
  the following names:

    vpiConfig.cmake
    vpi-config.cmake

  Add the installation prefix of "vpi" to CMAKE_PREFIX_PATH or set "vpi_DIR"
  to a directory containing one of the above files.  If "vpi" provides a
  separate development package or SDK, be sure it has been installed.


It won't build on Jetson JP4.6.1, except for the isaac_ros_common script

-- stderr: isaac_ros_nvengine                                                                                                
/usr/bin/ld:/workspaces/isaac_ros-dev/isaac_ros_nvengine/gxf/lib/gxf_jetpack46_1/core/libgxf_core.so: file format not recognized; treating as linker script
/usr/bin/ld:/workspaces/isaac_ros-dev/isaac_ros_nvengine/gxf/lib/gxf_jetpack46_1/core/libgxf_core.so:1: syntax error
collect2: error: ld returned 1 exit status
make[2]: *** [libgxe_node.so] Error 1
make[1]: *** [CMakeFiles/gxe_node.dir/all] Error 2
make: *** [all] Error 2
---
Failed   <<< isaac_ros_nvengine [22.3s, exited with code 2]

After running the isaac_ros_common image script, I added src, cloned the repository, and executed colcon build;
it won't work.

Strange Gravity Vector

Hi,

I'm running with an OAK-D S2 stereo camera, which has its IMU mounted such that the rotation from imu -> base_link is:

0, 0, -1
0, -1, 0
-1, 0, 0

My configuration seems to be accurate, because RViz shows the coordinate systems of imu and base_link in the correct orientation. Apologies for the cluttered image; it's hard to separate the text for coincident coordinate systems.

A typical stationary IMU measurement looks like X = -9.8, Y = 0, Z = 0.

However, the gravity vector quickly converges to the wrong axis: it moves to point in the +X direction of base_link, when it should be pointing in -Z. The gravity vector is shown as the grey line in the zoomed-in picture.

It looks like my configuration for the IMU orientation is correct. Any idea why the gravity vector is behaving this way? Also, does the gravity vector play a significant role in the VO & SLAM algorithm?

Failed building in docker container on Jetson

On a stock-OS Jetson, without a host PC or a prior Isaac SDK/ROS install, after starting the Docker container with the provided script and then cloning the GitHub repository:

scripts/run_dev.sh 
mkdir src
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_odometry
cd ..
colcon build --symlink-install && source install/setup.bash

it seems some steps are still missing?

admin@nvidia-desktop:/workspaces/isaac_ros-dev/src$ cd ..
admin@nvidia-desktop:/workspaces/isaac_ros-dev$ colcon build --symlink-install && source install/setup.bash
Starting >>> isaac_ros_test
Starting >>> isaac_ros_nvengine_interfaces
Starting >>> isaac_ros_visual_odometry_interfaces
Starting >>> isaac_ros_common
Finished <<< isaac_ros_common [1.66s]                                                                          
Finished <<< isaac_ros_test [2.50s]                                                                             
Finished <<< isaac_ros_visual_odometry_interfaces [2.88s]                                
Starting >>> isaac_ros_visual_odometry
Finished <<< isaac_ros_nvengine_interfaces [3.08s]                                                                            
Starting >>> isaac_ros_nvengine
--- stderr: isaac_ros_nvengine                                                                                      
/usr/bin/ld:/workspaces/isaac_ros-dev/isaac_ros_nvengine/gxf/lib/gxf_jetpack46/core/libgxf_core.so: file format not recognized; treating as linker script
/usr/bin/ld:/workspaces/isaac_ros-dev/isaac_ros_nvengine/gxf/lib/gxf_jetpack46/core/libgxf_core.so:1: syntax error
collect2: error: ld returned 1 exit status
make[2]: *** [libgxe_node.so] Error 1
make[1]: *** [CMakeFiles/gxe_node.dir/all] Error 2
make: *** [all] Error 2
---
Failed   <<< isaac_ros_nvengine [0.38s, exited with code 2]
Aborted  <<< isaac_ros_visual_odometry [39.1s]                                  

Summary: 4 packages finished [42.6s]
  1 package failed: isaac_ros_nvengine
  1 package aborted: isaac_ros_visual_odometry
  1 package had stderr output: isaac_ros_nvengine

Could you extend the following statement, please?

You can either provide an optional path to mirror in your host ROS workspace with Isaac ROS packages, which will be made available in the container as /workspaces/isaac_ros-dev, or you can setup a new workspace in the container.

As to the "host", does it refer to x86_64, or to the Jetson system-wide OS? Both, either, or neither of the two?
By default the container starts with /workspaces/isaac_ros-dev, which has plenty of files/folders.
What do you mean by "or you can setup a new workspace in the container"?
I tried cloning into a newly created folder, but it won't build either. What are the implicit steps needed to run the dockerized implementation on Jetson?
Is it that it won't work until Elbrus is deployed from the host PC? Isn't it a standalone solution? Once Elbrus is deployed, will the error persist, or will PATH need to be set up, or are there other steps?
Thanks

Is libelbrus.so library closed source?

Hi Team,

Thank you for open-sourcing this package. From some workshops related to NVIDIA visual SLAM, I was under the impression that NVIDIA Elbrus is open source, but we noticed that the main backend library, libelbrus.so, is still closed source.

I was wondering if the library itself could be open-sourced as well. We would like to contribute, make some modifications, add some features, and use it in our production robot autonomy stack; or, if it is already open-sourced, I would appreciate it if you could point me to it.

Thank you,
Ali

Robot Self-shadowing

Hey!

This is more of a question rather than an issue, but I couldn't seem to find a forum for this.

I have not used this package yet, but with ORB_SLAM2 I have an issue where, if the shadow of my robot falls within the image FOV, features from the shadow cause ORB_SLAM2 to think that the robot isn't moving as much as it actually is. This has been a deal breaker, and I am curious to know whether this package performs better in such situations. One key difference is that ORB_SLAM2 does not use IMU inertial inputs, so maybe that is handled better here. Have you encountered this situation in any of your tests?

Could not find the resource 'realsense2_camera'

Hello,

Trying to run the example with D435 but it is not found

Resolved: we have to run git lfs pull in each Isaac ROS repository.
Just spent 6 hours finding this out... :-(

PLEASE ADD THIS TO THE INSTALLATION INSTRUCTIONS
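
A minimal sketch of the fix described above, assuming all Isaac ROS repositories are cloned side by side under the workspace's src directory:

cd /workspaces/isaac_ros-dev/src
for repo in isaac_ros_*/; do (cd "$repo" && git lfs pull); done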

Release update for Jetson JP5.0 / ZED launch file

Will it build system-wide on a freshly flashed Ubuntu 20.04 OS with a fresh ROS Foxy installation, or won't it?

It seems the package build gets stuck on the OpenCV prerequisite.


Will there be a ZED launch file?

Errors

--- stderr: isaac_ros_image_proc             
In file included from /home/nvidia/zed_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:33:
/home/nvidia/zed_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp: In function ‘cv::Mat {anonymous}::GetConvertedMat(VPIImageImpl*&, VPIImageImpl*&, VPIStreamImpl*&, const cv::Mat&, uint32_t, std::string, std::string)’:
/home/nvidia/zed_ws/src/isaac_ros_image_pipeline/isaac_ros_image_proc/src/image_format_converter_node.cpp:68:5: error: ‘vpiImageCreateOpenCVMatWrapper’ was not declared in this scope; did you mean ‘vpiImageCreateWrapper’?
   68 |     vpiImageCreateOpenCVMatWrapper(
      |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/nvidia/zed_ws/install/isaac_ros_common/include/isaac_r

More errors

/usr/include/opencv4/opencv2/core/mat.hpp:826:5: note:   candidate expects 2 arguments, 5 provided
/usr/include/opencv4/opencv2/core/mat.hpp:818:5: note: candidate: ‘cv::Mat::Mat(int, int, int)’
  818 |     Mat(int rows, int cols, int type);
      |     ^~~
/usr/include/opencv4/opencv2/core/mat.hpp:818:5: note:   candidate expects 3 arguments, 5 provided
/usr/include/opencv4/opencv2/core/mat.hpp:810:5: note: candidate: ‘cv::Mat::Mat()’
  810 |     Mat() CV_NOEXCEPT;
      |     ^~~
/usr/include/opencv4/opencv2/core/mat.hpp:810:5: note:   candidate expects 0 arguments, 5 provided
make[2]: *** [CMakeFiles/image_format_converter_node.dir/build.make:63: CMakeFiles/image_format_converter_node.dir/src/image_format_converter_node.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:168: CMakeFiles/image_format_converter_node.dir/all] Error 2
make: *** [Makefile:141: all] Error 2

Thanks
AV
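
The compiler's own suggestion ("did you mean 'vpiImageCreateWrapper'?") hints that the installed VPI is newer than the one this code was written against; the OpenCV Mat wrapper call changed between VPI releases. A quick way to check which VPI version is installed on the Jetson (a sketch):

dpkg -l | grep -i vpi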

ros2 topic cannot echo odometry topic

ros2 topic echo /visual_odometry/tf_stamped
Cannot echo topic '/visual_odometry/tf_stamped', as it contains more than one type: [isaac_ros_visual_odometry_interfaces/msg/VisualOdometryTransformStamped, nav_msgs/msg/Path]
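
ros2 topic echo accepts the message type as a second positional argument, which sidesteps the ambiguity (a sketch, using the first of the two reported types):

ros2 topic echo /visual_odometry/tf_stamped isaac_ros_visual_odometry_interfaces/msg/VisualOdometryTransformStamped

If the second type comes from a stale publisher left over from an earlier run, restarting the ROS 2 daemon (ros2 daemon stop && ros2 daemon start) may also clear it.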

Container build fail

I followed the quickstart instructions, and when I launched the docker container using the run_dev.sh script, it failed with the message below.

Starting >>> interactive_markers
Starting >>> nav2_util
Starting >>> image_rotate
Finished <<< image_proc [1min 17s]
Starting >>> stereo_image_proc
Finished <<< image_view [1min 19s]
Finished <<< tf2_eigen [1min 1s]
Starting >>> depth_image_proc
Starting >>> geometry2
Finished <<< geometry2 [14.1s]
Finished <<< image_rotate [38.9s]
--- stderr: interactive_markers
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
make[2]: *** [CMakeFiles/test_interactive_marker_server.dir/build.make:76: CMakeFiles/test_interactive_marker_server.dir/test/interactive_markers/test_interactive_marker_server.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:242: CMakeFiles/test_interactive_marker_server.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
---
Failed   <<< interactive_markers [1min 58s, exited with code 2]
[Processing: depth_image_proc, interactive_markers, nav2_util, robot_state_publisher, rosbag2_transport, rviz_common, stereo_image_proc]
[Processing: depth_image_proc, interactive_markers, nav2_util, robot_state_publisher, rosbag2_transport, rviz_common, stereo_image_proc]
Aborted  <<< stereo_image_proc [2min 1s]
Aborted  <<< robot_state_publisher [4min 5s]
Aborted  <<< rosbag2_transport [4min 27s]
Aborted  <<< nav2_util [4min 7s]
Aborted  <<< depth_image_proc [3min 57s]
Aborted  <<< rviz_common [4min 42s]

I'm using Ubuntu 18.04 on WSL.
Please help.
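
"c++: fatal error: Killed signal terminated program cc1plus" almost always means the compiler was killed because the WSL VM ran out of memory. A common workaround (a sketch) is to reduce build parallelism and, if needed, raise the WSL memory cap:

MAKEFLAGS="-j1" colcon build --executor sequential
# On the Windows side, in %UserProfile%\.wslconfig (then run `wsl --shutdown`):
# [wsl2]
# memory=8GB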

Goal was rejected when saving map

ros2 action send_goal /visual_slam/save_map isaac_ros_visual_slam_interfaces/action/SaveMap "{map_url: /workspaces/isaac_ros-dev/map}"
Waiting for an action server to become available...
Sending goal:
map_url: /workspaces/isaac_ros-dev/map

Goal was rejected.
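
One way to check that the action server is actually available before sending the goal (a sketch; the names are taken from the command above):

ros2 action info /visual_slam/save_map
ros2 node list | grep visual_slam

If the server is listed but the goal is still rejected, the node may be running with mapping disabled, in which case there is nothing to save; quoting the path ("{map_url: '/workspaces/isaac_ros-dev/map'}") also rules out YAML parsing surprises.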

Using only VIO without mapping & loop closures

Hi, I am building the Dockerfile, but it is taking an extremely long time to compile the Foxy dependencies such as RViz and OMPL. If I only intend to use VIO, do I still need to install nav2, RViz, OMPL, Velodyne, Triton, TAO Converter, PyTorch, and ONNX in the docker container?

Thanks
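
If only the VIO output is needed, one way to shorten the colcon step is to build just the visual SLAM package and its recursive dependencies (a sketch; assumes the sources are already in the workspace):

colcon build --packages-up-to isaac_ros_visual_slam

Whether the Docker image itself can skip nav2, RViz, and OMPL depends on how the Dockerfile layers are structured, so this only helps for the workspace build.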

A few errors when using a RealSense camera

I'm using Docker and a RealSense D435i.
After setting everything up, I launched isaac_ros_visual_slam_realsense.launch.py and there were a few errors.



Error 1: parameter type error

[realsense2_camera_node-2] what(): parameter 'stereo_module.emitter_enabled' has invalid type: expected [integer] got [bool]


Edit for error 1: type change

I changed 'stereo_module.emitter_enabled' from 'False' to '0'; after the change it works fine.
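
For reference, the same integer value can be passed straight from the command line when running the camera node manually (a sketch; only the parameter shown is taken from the log above):

ros2 run realsense2_camera realsense2_camera_node --ros-args -p stereo_module.emitter_enabled:=0

The := syntax makes 0 an integer, which matches the [integer] type the node expects.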



Error 2: there is no namespace declaration in the launch file, so when launching it the tf tree comes out wrong (screenshot omitted).


Edit 1 for error 2: added a namespace declaration (this failed; screenshots omitted).


Edit 2 for error 2: edited the frame names instead; this worked for me (screenshots omitted).


As a result, I think that in this case (error 2) adding a namespace is the right approach, but it didn't work well.
Is there anything I need to check, or something I missed?
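
One thing worth knowing: ROS 2 namespaces remap node, topic, and service names, but they do not touch tf frame IDs, which are plain strings inside the messages; that is consistent with the frame rename working where the namespace did not. For the namespace part itself, a minimal sketch of pushing a node into a namespace from the command line (the node and namespace names here are only illustrative):

ros2 run realsense2_camera realsense2_camera_node --ros-args -r __ns:=/robot1

To namespace the tf frames as well, the frame-name parameters of each node have to be set explicitly, as in edit 2.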

RTPS_TRANSPORT_SHM Error

2022-07-07 04:11:59.303 [RTPS_TRANSPORT_SHM Error] Failed init_port fastrtps_port7412: open_and_lock_file failed -> Function open_port_internal
2022-07-07 04:11:59.303 [RTPS_TRANSPORT_SHM Error] Failed init_port fastrtps_port7414: open_and_lock_file failed -> Function open_port_internal
2022-07-07 04:11:59.303 [RTPS_TRANSPORT_SHM Error] Failed init_port fastrtps_port7416: open_and_lock_file failed -> Function open_port_internal
2022-07-07 04:11:59.304 [RTPS_TRANSPORT_SHM Error] Failed init_port fastrtps_port7413: open_and_lock_file failed -> Function open_port_internal
2022-07-07 04:11:59.304 [RTPS_TRANSPORT_SHM Error] Failed init_port fastrtps_port7415: open_and_lock_file failed -> Function open_port_internal
2022-07-07 04:11:59.304 [RTPS_TRANSPORT_SHM Error] Failed init_port fastrtps_port7417: open_and_lock_file failed -> Function open_port_internal

In Isaac Sim, the error occurs when pressing Play.
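
These open_and_lock_file failures typically come from stale Fast DDS shared-memory segments left in /dev/shm by a process that did not shut down cleanly, which is common when Isaac Sim and ROS 2 nodes run on the same machine. A cleanup sketch, assuming the usual fastrtps_* file naming:

ls /dev/shm | grep fastrtps
sudo rm /dev/shm/fastrtps_*

If the errors keep coming back, disabling the shared-memory transport via a Fast DDS XML profile is the heavier-handed alternative.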
