cra-ros-pkg / robot_localization

robot_localization is a package of nonlinear state estimation nodes. The package was developed by Charles River Analytics, Inc. Please ask questions on answers.ros.org.

Home Page: http://www.cra.com

License: Other

C++ 91.10% Shell 1.81% CMake 2.12% Python 4.96%

robot_localization's Introduction

robot_localization

robot_localization is a package of nonlinear state estimation nodes. The package was developed by Charles River Analytics, Inc.

Please see documentation here: http://wiki.ros.org/robot_localization

robot_localization's People

Contributors

asimay, ayrton04, essinger, firemark, gerkey, iwanders, jacobperron, jbler, jspricke, klintan, kphil, lukx19, mabelzhang, mbharatheesha, mikaelarguedas, mikepurvis, miquelmassot, muhrix, okermorgant, paudrow, paulverhoeckx, pavloblindnology, pmukherj, reinzor, rohita83, stevemacenski, tgreier, timple, tscho1l, zygfrydw


robot_localization's Issues

Handle non-neutral IMU orientation in navsat_transform_node

navsat_transform_node currently exposes only magnetic declination and a generic "yaw offset" to allow users to correct for non-neutral IMU mounting on their robots. While this works, it is not consistent with the behavior of the state estimation nodes, which use the base_link->imu transform to correct the IMU data. The same logic should be applied within navsat_transform_node.
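
A rough sketch of what that correction could look like, assuming a tf2 buffer is available and that only the yaw component of the base_link->imu mounting transform matters for the heading (the frame names and the helper are illustrative, not existing code):

#include <geometry_msgs/TransformStamped.h>
#include <ros/ros.h>
#include <tf2/LinearMath/Matrix3x3.h>
#include <tf2/LinearMath/Quaternion.h>
#include <tf2_ros/buffer.h>

// Hypothetical helper: remove the IMU's mounting yaw from its reported heading,
// analogous to how the state estimation nodes use the base_link->imu transform.
double correctHeading(const tf2_ros::Buffer &buffer, double measuredYaw)
{
  geometry_msgs::TransformStamped mount =
    buffer.lookupTransform("base_link", "imu", ros::Time(0));

  tf2::Quaternion rotation(mount.transform.rotation.x,
                           mount.transform.rotation.y,
                           mount.transform.rotation.z,
                           mount.transform.rotation.w);
  double roll, pitch, mountYaw;
  tf2::Matrix3x3(rotation).getRPY(roll, pitch, mountYaw);

  return measuredYaw - mountYaw;  // heading of base_link rather than of the IMU
}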

Handling of base_link->sensor transforms for world-referenced measurements

This is a general issue that applies to any sensor whose pose data is reported in a world-fixed frame, yet is mounted at some location on the vehicle other than its centroid. Fundamentally, the issue is that we define the sensor's offset from the vehicle (via URDFs or static_transform_publisher) as a transform from the vehicle origin (e.g., base_link) to the sensor frame (e.g., depth_sensor_frame), even if that sensor is measuring data in a world-fixed frame.

At first, my reaction to this problem was that the transform from the sensor to the world frame (e.g., depth_sensor_frame->base_link->odom) would suffice, but this is incorrect. If we applied the transform from the sensor straight to the world frame, we would continually be adding the vehicle's full position to each sensor measurement. For example, if we had a depth sensor that measured 10 meters, and we applied the transform from depth->base_link->odom, and the vehicle was already at 10 meters of depth, then the final measurement would read 20. What we really need in this case is the offset from the vehicle's centroid (as defined by the base_link->depth transform), rotated by the vehicle's current orientation.

In my own work, I have gotten around this by simply defining the odom->depth_sensor_frame transform directly as a static transform, but that fails to handle the effect of the vehicle's current orientation on the offset.

As a more in-depth example, let's say we have a depth sensor on the rear end of a UUV. The depth sensor is measuring data in the world frame (odom or map). Right now, the poseCallback function attempts to transform pose data from the reported message frame_id to the odom (or whatever your world_frame parameter is set to) frame. So for our depth sensor, you would need to define a transform from odom to the depth_sensor_frame that technically has the transform from base_link->depth_sensor_frame in it. This is wrong, as it won't handle the vehicle's orientation. What we really want to do is this:

  1. First, we define the transform of the sensor from base_link->depth_sensor_frame statically, as usual. When we get a new measurement for the depth sensor, we rotate the base_link->depth_sensor_frame transform by the odom->base_link orientation (but we do not apply the world position!). This gives us the rotated offset of the depth sensor w.r.t. the vehicle center.
  2. We then apply that rotated offset to the depth sensor measurement itself, and fuse it.

So, for example, if we have a depth sensor with an X offset of -1.5 meters from the vehicle center and a +Z offset of 0.5 meters, then when the vehicle is level (assuming our depth sensor reads negative values as it gets deeper, so -Z is down), we just subtract the Z offset from the raw measurement to get the vehicle-center measurement. However, if the vehicle is pitched down, then the "true" vertical offset in the odom frame is a function of both the X offset and the Z offset (this is also true when the vehicle is level, but in that case the pitch is 0, so the trigonometry eliminates the X offset's effect). Therefore, we need a combination of the base_link->depth_sensor_frame transform and the vehicle's current orientation, which is specified by part of the odom->base_link transform.
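
As a concrete sketch of steps 1 and 2 above (frame names and the helper are illustrative, this assumes a tf2 buffer holding both the static base_link->depth_sensor_frame transform and the current odom->base_link estimate, and the sign convention is simplified):

#include <geometry_msgs/TransformStamped.h>
#include <ros/ros.h>
#include <tf2/LinearMath/Quaternion.h>
#include <tf2/LinearMath/Vector3.h>
#include <tf2_ros/buffer.h>

// Hypothetical helper: convert a raw depth reading into a reading for the
// vehicle center by rotating the mounting offset with the current
// odom->base_link orientation (the world translation is deliberately ignored).
double depthAtVehicleCenter(const tf2_ros::Buffer &buffer, double rawDepth)
{
  geometry_msgs::TransformStamped mount =
    buffer.lookupTransform("base_link", "depth_sensor_frame", ros::Time(0));
  geometry_msgs::TransformStamped odomToBase =
    buffer.lookupTransform("odom", "base_link", ros::Time(0));

  tf2::Vector3 offset(mount.transform.translation.x,
                      mount.transform.translation.y,
                      mount.transform.translation.z);
  tf2::Quaternion orientation(odomToBase.transform.rotation.x,
                              odomToBase.transform.rotation.y,
                              odomToBase.transform.rotation.z,
                              odomToBase.transform.rotation.w);

  // Step 1: rotate the mounting offset into the odom frame.
  tf2::Vector3 rotatedOffset = tf2::quatRotate(orientation, offset);

  // Step 2: remove the sensor's vertical offset from the measurement.
  return rawDepth - rotatedOffset.z();
}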

Build error with tf2 changes

navsat_transform.cpp:113:50: error: no matching function for call to ‘fromMsg(const _pose_type&, tf2::Transform&)’
     tf2::fromMsg(msg->pose.pose, latestWorldPose_);

Is latestWorldPose_ intended to be a pose or a transform? Is someone halfway through working on this part?
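
At the time of that error there doesn't appear to be a fromMsg() overload that converts a geometry_msgs::Pose directly into a tf2::Transform, so one possible workaround (just a sketch, not necessarily what the in-progress code intends) is to build the transform by hand:

#include <geometry_msgs/Pose.h>
#include <tf2/LinearMath/Transform.h>

tf2::Transform poseToTransform(const geometry_msgs::Pose &pose)
{
  tf2::Quaternion rotation(pose.orientation.x, pose.orientation.y,
                           pose.orientation.z, pose.orientation.w);
  tf2::Vector3 translation(pose.position.x, pose.position.y, pose.position.z);
  return tf2::Transform(rotation, translation);
}

// e.g., latestWorldPose_ = poseToTransform(msg->pose.pose);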

Unexpected behavior when using absolute orientation and angular velocities

I have two instances of ekf_localization_node. The first, used for odometry, reads the x velocity (in the base_link frame) from the wheel encoder odometry topic and publishes the odom-to-base_link transform. The second node, used for absolute orientation, reads the x velocity in the same manner as the first node, but also reads the roll, pitch, and yaw angles from the solar compass topic (a PoseWithCovarianceStamped message). The absolute orientation node publishes the map-to-odom transform. Furthermore, the absolute orientation node does not receive poses from the solar compass when the compass is in shadow.

What I am doing:

  1. Drive the robot forward until it is in shadow.
  2. Make a 90-degree right turn.
  3. Drive the robot forward until it is out of shadow.

I am using RViz to display odometry/filtered (published by the odometry ekf_localization node), with the fixed frame set to "map".

What I expect:

  1. The odometry display draws arrows that correspond to the robot's path as it moves into shadow.

  2. The 90-degree right turn adds no arrows to the odometry path. This is due to the fact that the robot is not translating or, from the absolute orientation node's point of view, rotating. Since the solar compass is in shadow, it does not publish any poses that would be received by the absolute orientation node.

  3. When I resume driving the robot forward, arrows should continue to be added to the odometry path, as if the right turn never took place. When the robot moves out of shadow, arrows added after leaving the shadow should be oriented 90 degrees right, relative to the earlier arrows. The new arrows should form a straight line pointing right.

  4. From above, the result should look like this:

    X>>>>>>>>>>
    ^
    ^
    ^
    R->->->->->->
    ^
    ^
    E
    ^
    ^
    ^

E is the point where the robot enters the shadow, R is where the right turn is made, and X is the point where the robot leaves the shadow. The "->" symbols show the actual path of the robot after the right turn.

What actually happens:

X
^
^
^
R->->->->->->
^
^
E      Y>>>>>>>>>>
^
^
^

When the robot exits the shadow at point X, it is then translated to point Y. It appears that the robot's path between E and X is rotated, instead of just the robot itself. I suspect that the sum of all the position changes caused by x velocity messages since the last pose (orientation) message is being rotated around the position that was current when the last pose message was received. Is this expected behavior? I am using the robot_localization package from the ROS Indigo binary release.

Generate more unit and integration tests

We have a lot of new functionality, and we need a lot of new tests to match. One thing we really need is a set of "default value" tests wherein we intentionally fail to specify certain parameter values in the .test files, and then ensure that the test results don't change.

We could also use more bag file tests, though we're already pushing the limit of what I think we should be storing in the repo itself. We may need to find some public space to store the bag files and use the catkin macros for downloading them before running tests.

Add parameter to make estimation wait for all sensors

For some robots, there may be a delay before certain sensor data is available. These delays may result from sensor startup time or launch order, or may exist by design. For example, if a robot starts receiving IMU data immediately but only begins reporting odometry data when it starts moving for the first time, then the estimate error covariance matrix values for the vehicle's position variables will start to explode, as no measurements are arriving for those variables. In this case, what we want is for the filter to begin estimating only once all sensor data is available. It should not, however, stop estimating if a sensor drops out in the middle of a run.

This feature should be exposed as a per-sensor parameter, and it should default to false for backwards compatibility.
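
A minimal sketch of how the gate might work, assuming each required sensor is collected into a set of topic names via a hypothetical per-sensor parameter (e.g., odom0_required; neither the parameter nor these names exist yet):

#include <algorithm>
#include <set>
#include <string>

std::set<std::string> requiredTopics_;   // filled from the hypothetical per-sensor parameters
std::set<std::string> topicsSeen_;       // updated in each measurement callback
bool estimationStarted_ = false;

bool readyToEstimate()
{
  if (estimationStarted_)
  {
    return true;  // once started, a sensor dropping out does not stop the filter
  }

  // Start only when every required topic has produced at least one measurement.
  estimationStarted_ = std::includes(topicsSeen_.begin(), topicsSeen_.end(),
                                     requiredTopics_.begin(), requiredTopics_.end());
  return estimationStarted_;
}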

Optical Flow measurement support

Hi,
I would like to add support for the PX4Flow and similar sources of angular optical flow to the filter.
We need to vet the right way to get this done, as I want the code to fully match your coding style, etc., so that we can submit a PR later.
Can I please get a contact from you guys? Email/Skype?
Kabir

Determine best way to handle GPS offsets

A GPS device could be mounted anywhere on a vehicle. The standard practice when a sensor is placed somewhere on a vehicle is to define a transform from base_link->sensor. Ordinarily, this works well, as sensor measurements are actually retrieved in the sensor frame (e.g., a LIDAR returns points whose distances are w.r.t. the sensor itself).

GPS devices obviously do not work this way. They are mounted at some location on the vehicle, but report data in a world-fixed frame. This means that if our GPS sensor's offset from the base_link frame is defined by a base_link->gps transform, we cannot simply apply that transform to a GPS reading and expect it to give the GPS location of the vehicle's center point.

As I see it, we have two options:

  1. Let users define a base_link->gps transform, but instead of applying it in the typical fashion (i.e., via tf transformPose function calls), just take the X/Y/Z offset as defined in the transform, rotate it by the vehicle's current map frame orientation, and remove that offset from the latest GPS measurement coming from navsat_transform_node (which will already have been transformed into the map frame).
  2. Add a parameter to navsat_transform_node that defines only the X/Y/Z offset from the vehicle center (orientation of the GPS sensor is irrelevant). Use that data in the same fashion as in (1).

The former lets people use the tf tree as they would expect, but may cause confusion. The latter adds new parameters that have to be filled out, and it may be unclear to users why we're not using existing ROS functionality.
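
For what it's worth, option 1 would look much like the world-referenced sensor handling discussed above; a sketch (frame names are placeholders, and this assumes the GPS position has already been converted into the map frame by navsat_transform_node):

#include <geometry_msgs/TransformStamped.h>
#include <ros/ros.h>
#include <tf2/LinearMath/Quaternion.h>
#include <tf2/LinearMath/Vector3.h>
#include <tf2_ros/buffer.h>

// Hypothetical helper: shift a map-frame GPS position back to the vehicle center.
tf2::Vector3 gpsToVehicleCenter(const tf2_ros::Buffer &buffer,
                                const tf2::Vector3 &gpsPositionInMap,
                                const tf2::Quaternion &vehicleMapOrientation)
{
  geometry_msgs::TransformStamped mount =
    buffer.lookupTransform("base_link", "gps", ros::Time(0));

  tf2::Vector3 offset(mount.transform.translation.x,
                      mount.transform.translation.y,
                      mount.transform.translation.z);

  // Only the rotated offset matters; the GPS antenna's orientation is irrelevant.
  return gpsPositionInMap - tf2::quatRotate(vehicleMapOrientation, offset);
}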

I'd welcome comments/suggestions...

navsat_transform_node to publish world_frame->utm transform

It would be nice if navsat_transform_node published the world_frame->utm transform, as was done by the utm_transform_node.

That would be very useful for transforming positions between the UTM grid and the robot's frame of reference (e.g. transforming Lat/Long waypoints coming from external sources).
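
A sketch of the broadcast, assuming the node keeps a tf2::Transform (here called worldToUtm) whose parent is the world frame and whose child is utm, i.e., the pose of the UTM grid origin expressed in the world frame (names are assumptions):

#include <geometry_msgs/TransformStamped.h>
#include <ros/ros.h>
#include <string>
#include <tf2/LinearMath/Transform.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>
#include <tf2_ros/transform_broadcaster.h>

void broadcastUtmTransform(tf2_ros::TransformBroadcaster &broadcaster,
                           const tf2::Transform &worldToUtm,
                           const std::string &worldFrameId)
{
  geometry_msgs::TransformStamped msg;
  msg.header.stamp = ros::Time::now();
  msg.header.frame_id = worldFrameId;  // e.g., odom or map
  msg.child_frame_id = "utm";
  msg.transform = tf2::toMsg(worldToUtm);
  broadcaster.sendTransform(msg);
}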

Reset lastMessageTimes on RosFilter::setPoseCallback

Tom,

would you please consider adding the following line to setPoseCallback?

/// Clear last message times
lastMessageTimes_.clear();

It happens that I can't really reset the filter by sending a set_pose message and rerunning a bag file unless the lastMessageTimes_ map is cleared. Without it, the measurements from the bag file are identified as old and discarded.

Hernan

Turn set_pose into service

We have some external mapping software that would love to use this as a general filter, but I think it would be worth it to have the set_pose feature as a service so that we don't send multiple requests. Any thoughts?

If this makes sense, we can submit a pull request.

Option to future date map->odom transformation

Would it be possible to add an option to future date the resulting transformation from the ekf_localization_node?

I'm trying to use robot_localization along with move_base, but move_base requires the map->odom transform to be future dated. Global positioning packages like gmapping or AMCL do future date this transformation.

It would be interesting to add the possibility to optionally future date the obtained transformation when using robot_localization as a source for global localization. It would greatly help its integration with the standard ROS navigation stack.
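
For comparison, amcl handles this with its transform_tolerance parameter, which simply stamps the broadcast transform slightly into the future. A sketch of the same idea here (the parameter name transform_time_offset is an assumption):

#include <geometry_msgs/TransformStamped.h>
#include <ros/ros.h>
#include <tf2_ros/transform_broadcaster.h>

// transformTimeOffset would come from a new parameter (e.g., transform_time_offset),
// defaulting to 0.0 to preserve the current behavior.
void broadcastMapToOdom(tf2_ros::TransformBroadcaster &broadcaster,
                        geometry_msgs::TransformStamped mapToOdom,
                        double transformTimeOffset)
{
  mapToOdom.header.stamp += ros::Duration(transformTimeOffset);  // future-date the stamp
  broadcaster.sendTransform(mapToOdom);
}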

Thank you and best regards.

Add IMU reference frame

See this ROS answers question for some background:

http://answers.ros.org/question/50870/what-frame-is-sensor_msgsimuorientation-relative-to/

EDIT: Just updating this to reflect the issue as I see it now. We only have a Microstrain 3DM-GX2 in house (and we're using a slightly modified driver), but the issue is that the absolute orientation is reported in a world-fixed frame, while the angular velocities and linear accelerations are reported in the sensor frame. Since the ROS IMU message type has only one frame_id, we have to be careful when defining the transform from the IMU frame to the vehicle frame. Defining a transform from the vehicle body frame to the IMU frame will work for transforming the IMU's angular velocities and linear accelerations, but may not work for the absolute orientation. But:

  1. What I just said could be wrong, and using the same transform on the absolute orientation may actually "undo" the sensor's mounting rotation. I just have to check the math. Transforming an orientation is not the same as transforming an angular velocity. In the latter, if you lay the IMU on its side and rotate it, you'll still see pitch velocity changing, but what you really want is yaw velocity. The way robot_localization applies the transform would handle this situation.
  2. Even if the absolute orientation is wrong, since the orientation is reported in a world-fixed frame, all the user has to do is turn on differential integration for the IMU, and this becomes a moot point.

Right now, robot_localization will transform the IMU data - absolute orientation, angular velocity, and linear acceleration - into the frame specified by the base_link_frame parameter. This may be sufficient for most users' needs. I'll let this issue stand while I verify a few things, but I may end up just leaving it as-is and documenting how the package behaves when it comes to IMU transformations.

Inflated motion when using differential integration in non-target frames

When using differential integration for a given sensor, the measurements, if first transformed from another frame, produce inflated positions/rotations. An example:

Consider a case where you are fusing IMU and odometry data. You have the EKF or UKF localization node configured so that its world_frame is map, and so the node is producing the map->odom transform. The odometry data that you are fusing, however, has a frame_id of odom. If the robot begins moving forward and/or turning, then at each time step, the robot will begin to gain an increasing amount of false motion, e.g., if the robot's raw odometry says it went 5 meters, the filter may report 10. The issue is more pronounced when a second sensor is fused, regardless of what kind of data it's producing.

Burst of transform extrapolation errors on startup (tf MessageFilter)

This is probably related to #25, but I get a brief spew of transform warnings on startup of the localization node:

[ WARN] [1409771773.231198484]: Could not obtain transform from imu_link to base_link. Error was Lookup would require extrapolation into the past.  Requested time 1409771773.242071479 but the earliest data is at time 1409771773.375394781, when looking up transform from frame [imu_link] to frame [base_link]
[ WARN] [1409771773.250072689]: Could not obtain transform from imu_link to base_link. Error was Lookup would require extrapolation into the past.  Requested time 1409771773.263071479 but the earliest data is at time 1409771773.375394781, when looking up transform from frame [imu_link] to frame [base_link]
[ WARN] [1409771773.250474660]: Could not obtain transform from imu_link to base_link. Error was Lookup would require extrapolation into the past.  Requested time 1409771773.263071479 but the earliest data is at time 1409771773.375394781, when looking up transform from frame [imu_link] to frame [base_link]
[ WARN] [1409771773.270072091]: Could not obtain transform from imu_link to base_link. Error was Lookup would require extrapolation into the past.  Requested time 1409771773.284071479 but the earliest data is at time 1409771773.375394781, when looking up transform from frame [imu_link] to frame [base_link]

After 8-10 of these, it calms down and operates as expected.

Perhaps this is because the transforms from robot_state_publisher haven't come through yet? Anyway, I'm wondering if there should be some kind of wait-for behaviour at node startup. It's mostly an aesthetic concern, so not especially high priority, but just wanted you to be aware.

Initial Estimate Error Covariance

Currently, the initial covariance for the entire state is initialized to the identity matrix times a small epsilon on the order of 1e-6. This indicates that we are very certain of our initial pose (probably (0, 0, 0) and with no rotation). Unfortunately, this can also cause the filter to be sluggish in accepting the first few measurements from a given sensor, especially if those measurements carry errors which are orders of magnitude larger than the initial covariance for the measured variables.

One solution is to go the other way: make the filter start with extremely large variances, and then the first measurement for each measured variable will be more or less completely accepted. This also has drawbacks, however: if the user isn't measuring all variables in the system, those variables' variances will start out large and get larger. Additionally, if some other variables are being measured, then there will be some massive values along the diagonal of the covariance matrix, and some very small values. This can lead to an ill-conditioned matrix, which will eventually become singular and break the filter.

As a middle ground, the initial covariance should be parameterizable. This will allow users to modify values as they see fit for their application. The default value for the parameter should remain the same as the current initial value so as to retain compatibility.
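
A sketch of reading such a parameter, assuming it's exposed as a flat list of diagonal values (the parameter name initial_estimate_covariance and the helper are illustrative):

#include <Eigen/Dense>
#include <ros/ros.h>
#include <vector>

Eigen::MatrixXd loadInitialCovariance(ros::NodeHandle &nh, int stateSize)
{
  // Default: keep today's behavior of 1e-6 along the diagonal.
  std::vector<double> defaults(stateSize, 1e-6);
  std::vector<double> diagonal;
  nh.param("initial_estimate_covariance", diagonal, defaults);

  Eigen::MatrixXd covariance = Eigen::MatrixXd::Zero(stateSize, stateSize);
  for (int i = 0; i < stateSize && i < static_cast<int>(diagonal.size()); ++i)
  {
    covariance(i, i) = diagonal[i];
  }
  return covariance;
}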

Redesign debug output

ROS already has some powerful means of logging data, and it provides some handy functionality that we should use.

ekf_localization_node frame clarification

I'm writing visual odometry and solar compass nodes that provide input to ekf_localization_node, and I'd like to get some clarification regarding frames. My understanding of ekf_localization_node's use of frames is described below. Please let me know if there are any errors.

nav_msgs/Odometry
odom_msg.pose specifies the transform from <odom_msg.header.frame_id> to <odom_msg.child_frame_id>.
odom_msg.twist specifies linear and angular velocities in <odom_msg.child_frame_id>.

Does ekf_localization_node find and apply the transform from <odom_msg.child_frame_id> to the frame specified by the base_link_frame parameter, or does ekf_localization_node act as if odom_msg.child_frame_id is the same as base_link_frame, even when it isn't? My visual odometry node sets odom_msg.header.frame_id to "odom" and odom_msg.child_frame_id to "top_mast_camera_optical", which is translated and rotated relative to "base_link".

sensor_msgs/Imu
After reading #22, this is my understanding.
imu_msg.orientation specifies the orientation transform from the ENU frame to <imu_msg.header.frame_id>.
imu_msg.angular_velocity specifies angular velocities in <imu_msg.header.frame_id>.
imu_msg.linear_acceleration specifies linear accelerations in <imu_msg.header.frame_id>.

geometry_msgs/PoseWithCovarianceStamped
pose_msg.pose specifies the transform from <pose_msg.header.frame_id> to the frame specified by ekf_localization_node's base_link_frame parameter.

I was using this message type to publish the orientation transform from the ENU frame (e.g. "map", assuming that the map frame and the ENU frame are identical) to the solar compass camera's optical frame, but discovered that this won't work with ekf_localization_node unless pose_msg.pose includes the transform from the optical frame to "base_link". If ekf_localization_node handles nav_msgs/Odometry and sensor_msgs/Imu messages in the way that I think it does, I can use one of those instead of geometry_msgs/PoseWithCovarianceStamped.

geometry_msgs/TwistWithCovarianceStamped
twist_msg.twist specifies linear and angular velocities in <twist_msg.header.frame_id>.

Parameterizable Queue Sizes

Right now, robot_localization's state estimation nodes assume a queue size of 1. This ensures that the filter always receives the most recent measurements from each sensor. However, it has two significant drawbacks:

  1. Users must increase the frequency parameter in order to not miss any measurements from high-frequency sensors.
  2. If at time t1 the filter receives a measurement with timestamp tS1 <= t1, and then at time t2 receives a measurement from another sensor with timestamp tS2 < tS1, the filter's default behavior is to skip the prediction step and just call correct. There really isn't anything else we can reasonably do in this instance, but we can make the situation occur less frequently by decreasing the frequency parameter value and increasing the queue size. Since the filters order measurements by timestamp before attempting to integrate them, we'll have fewer cases of old measurements being fused after newer ones.

This should be exposed as a per-sensor parameter, with a default value of 1 so as to maintain compatibility with previous releases.
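
The subscription side of this is small; a sketch assuming a hypothetical per-sensor parameter such as odom0_queue_size:

#include <nav_msgs/Odometry.h>
#include <ros/ros.h>

ros::Subscriber subscribeOdom(ros::NodeHandle &nh,
                              void (*callback)(const nav_msgs::Odometry::ConstPtr &))
{
  int queueSize = 1;  // default of 1 preserves the current behavior
  nh.param("odom0_queue_size", queueSize, 1);
  return nh.subscribe("odom0", queueSize, callback);
}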

Add 2D Mode

Allow users to specify that their robot will be running in 2D. We can then force 3D variables (z, roll, pitch, and their respective velocities) to 0 within the node, rather than requiring users to set up their configurations to "falsely" measure 0 values for 3D variables.

Implement true differential integration

robot_localization does not carry out true differential integration of measurements when the XXXX_differential parameter values are set to true. Rather, it creates a static offset for future measurements. In reality, the filter should calculate the difference between a given measurement and its last value, transform that difference into the robot's world frame (likely odom), and then integrate the difference as a velocity measurement. This will eliminate cases wherein two measurement sources that are measuring absolute pose data (x, y, z, roll, pitch, yaw) drift apart from one another and cause the output to oscillate between them.
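
A sketch of the core computation (consecutive pose measurements from one sensor turned into a velocity that can be fused like any other twist data; covariance handling omitted, and names are illustrative):

#include <tf2/LinearMath/Transform.h>

// Given the previous and current pose measurements from a single sensor
// (expressed in the same frame) and the time between them, produce linear and
// angular velocities in the sensor's previous body frame.
void poseDeltaToVelocity(const tf2::Transform &previousPose,
                         const tf2::Transform &currentPose,
                         double dt,
                         tf2::Vector3 &linearVelocity,
                         tf2::Vector3 &angularVelocity)
{
  tf2::Transform delta = previousPose.inverseTimes(currentPose);

  linearVelocity = delta.getOrigin() / dt;

  double roll, pitch, yaw;
  delta.getBasis().getRPY(roll, pitch, yaw);
  angularVelocity = tf2::Vector3(roll / dt, pitch / dt, yaw / dt);
}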

Enhancement Proposal: Ability to conditionally enable/disable sensor data

TL;DR: I would like the ability to ignore IMU sensor data when the encoders show no motion.

I have a mobile robot which is by nature subject to incorrect encoder odometry data (the navigational base is a Clearpath Husky, and thus skid-steer), and it is unfortunately also subject to an imperfect IMU (slow drift over time). The robot's operations are such that it alternates between mobile navigation and long periods during which the base is not moving but other operations are taking place. It is during these periods of standstill that the IMU drift is most problematic, as the estimated pose will slowly but continuously rotate over time due to the IMU. However, the robot as a whole has high enough inertia and operates in a sufficiently docile environment that there is no reason it should be moving if the wheels themselves are not moving. It would therefore be very helpful to be able to use the IMU data only when the encoder data shows motion, and ignore it otherwise. Note that removing the IMU entirely from the localization is not an option, as it is essential for actual rotations (due to skid-steer).

I considered downloading and custom modifying the source on my own, but this seems like a practical enhancement from which multiple users may benefit.

Add support for asynchronous message times.

If measurements come from more than one machine, and those machines have clocks that are not synchronized, then the time stamps of the messages will likely cause large jumps in the filtered position estimate. A new parameter should be added that lets the filter completely ignore message time stamps when calculating time deltas for state projection, and instead just use the time difference on the host machine. This may lead to slightly less accurate solutions overall (as compared to systems with synchronized clocks), but will allow users who don't have control over all machines in their systems to still use the nodes in robot_localization.

Correct Handling of Differential Covariances

Currently, when a user sets a sensor to differential, the covariances, other than being rotated into the correct frame, are used as-is. This is not technically correct. Differential measurements are formed by taking the delta between the previous measurement and the current measurement, tacking them onto the current state estimate, and calling that a new measurement. The covariances should receive the same treatment, and should therefore be the current estimate error covariance combined with the covariance of the measurement delta.

Doing this is problematic, however, as it will involve multiple matrix inversion operations. A much better means of handling differential data (and one that I've experimented with previously) is to use the delta in measurements, combined with the time delta between the measurements, to generate twist messages and handle them as with any other twist data. Calculation of the covariances would be equally simple, and require less CPU time. Differential calculations should therefore be changed to operate this way.

While this method will be more correct in a numerical sense, it will also require that users take a bit more care in using the differential parameter, especially with orientation data. When raw positional velocities accumulate, their small errors also accumulate, resulting in absolute positional error. Over time, these errors can become significant. However, consider a robot that moves 20 meters and accumulates 1 meter of positional error. This seems reasonable, and won't result in any seemingly strange behavior. The same is not true for a robot that accumulates rotational error. Consider a robot that doesn't use an absolute orientation reference, but just accumulates rotational velocity. If that robot starts and immediately turns in place until it accumulates 1 radian of error, then drives straight for 20 meters, its final positional error will be much larger. In general, integration of rotational velocity is not as good as fusing absolute orientation measurements.

For users, what this means is that if you have only one source of absolute orientation data, you should fuse it with the differential parameter set to false. If you have multiple IMUs or sources of absolute rotational information, one of them should be fused with the differential parameter off, and the rest should have it on.

Fix versioning

My original thought for versioning was to increment the major number with each ROS release, so that if a user reported an issue with x.y.z, I would know that 'x' meant, for example, Hydro. I now believe that was a bad idea. The way I've been trying to work is to keep the software utterly identical for all non-EOL'd ROS versions, and then just ceasing updates when EOL is reached. I'll continue to do this, but as long as the software is identical, its version numbers should match. As I can't move backwards with version numbers, I am going to make the next release 2.2.0, and will afterward follow the standards of semantic versioning.

Adherence to ENU standard for IMU data

This really only applies to navsat_transform_node. Right now, it assumes that the signs of all IMU data are in accordance with REP-103. However, it does not assume that an IMU will read 0 when facing east. This needs to be corrected to fully bring the node in line with REP-103.

Covariance override

Users don't always have control over the nodes that produce sensor data. Many sensors indicate the absence of measurement of a given variable by inflating its covariance to an extremely large number. A Kalman-filter-based approach will apply a very small weight to such a measurement, effectively ignoring it. However, it is sometimes the case that even if a sensor does not measure a variable, that variable should be included as part of its measurements. For example, a robot that operates in a planar environment will produce 0 measurements for Z and Z velocity. Rather than giving these values a large covariance, it makes sense to make the covariance for those variables very small and integrate the 0 values. To give users greater control over input data, a new (set of) parameter(s) should be added to allow users to manually override a sensor's covariance values.

Add failure callbacks for message filter topics

When the tf message filters fail because a message cannot be transformed, they do so silently, and the user receives no indication as to why their data is not being integrated. We should use the registerFailureCallback method on each message-filtered topic to generate a warning.
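
A sketch with tf2_ros::MessageFilter (the tf1 filter exposes the same hook); the message type, topic, and wording are placeholders:

#include <geometry_msgs/PoseWithCovarianceStamped.h>
#include <ros/ros.h>
#include <tf2_ros/message_filter.h>

void poseFilterFailureCallback(const geometry_msgs::PoseWithCovarianceStamped::ConstPtr &msg,
                               tf2_ros::filter_failure_reasons::FilterFailureReason reason)
{
  ROS_WARN_STREAM("Cannot transform pose measurement with frame_id " << msg->header.frame_id
                  << "; it will not be fused (failure reason " << reason << ")");
}

// ...after constructing the filter for this topic:
// poseFilter->registerFailureCallback(boost::bind(poseFilterFailureCallback, _1, _2));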

Eliminate dependency on gps_common

navsat_transform_node uses a header in gps_common. The file is BSD licensed, and was originally written by The University of Texas at Austin. We should eliminate robot_localization's dependency on gps_common by pulling that header file into robot_localization and citing the original license where appropriate.

Future transform issue

I'm getting the same issue as this guy: http://answers.ros.org/question/188046/tf-lookup-would-require-extrapolation-into-the-future/

Node: /ekf_localization
Time: 16:16:48.307097616 (2014-09-02)
Severity: Warn
Published Topics: /odometry/filtered, /rosout, /tf

Could not obtain transform from odom to base_link. Error was Lookup would require extrapolation into the future.  Requested time 1409689008.298073576 but the latest data is at time 1409689008.286969633, when looking up transform from frame [odom] to frame [base_link]


Location:
/tmp/buildd/ros-indigo-robot-localization-2.1.3-0trusty-20140805-0104/include/robot_localization/ros_filter.h:RosFilter<Filter>::preparePose:1049

It's particularly odd to me because the odom->base_link transform is being published by the ekf_localization node itself! Should the lookupTransform call instead be a waitForTransform?
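
If it helps, a small guard before the lookup would quiet this (a sketch using tf2's canTransform with a timeout; the tf1 equivalent would be waitForTransform):

#include <geometry_msgs/TransformStamped.h>
#include <ros/ros.h>
#include <string>
#include <tf2_ros/buffer.h>

bool lookupWithGracePeriod(const tf2_ros::Buffer &buffer,
                           const std::string &targetFrame,
                           const std::string &sourceFrame,
                           const ros::Time &time,
                           geometry_msgs::TransformStamped &transform)
{
  // Wait briefly for the requested transform instead of warning immediately.
  if (!buffer.canTransform(targetFrame, sourceFrame, time, ros::Duration(0.05)))
  {
    return false;
  }

  transform = buffer.lookupTransform(targetFrame, sourceFrame, time);
  return true;
}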

Include linear acceleration in state estimate

The state estimate for all of robot_localization's nodes currently includes only x, y, z, roll, pitch, yaw, and their respective velocities. However, linear acceleration is often available from accelerometers. This information should be included in the state estimate.

Integrate UTM translation into utm_transform_node?

For users running nmea_navsat_driver, it's lame to have to run a separate node and depend on gps_common to get utm_odometry_node.

Would you consider adding a parameter which enables the gpsFixCallback to just compute and publish the appropriate utm-converted PoseWithCovarianceStamped? It would mean bringing in the conversions header from gps_common, and very slightly swelling the utm_odometry_node binary size, but shouldn't otherwise impact indoor users of the package.
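
Rough sketch of what the callback addition might look like, using the LLtoUTM helper from gps_common's conversions header (the publisher, the enabling parameter, and topic names are assumptions; covariance handling omitted):

#include <geometry_msgs/PoseWithCovarianceStamped.h>
#include <gps_common/conversions.h>
#include <ros/ros.h>
#include <sensor_msgs/NavSatFix.h>

ros::Publisher utmPosePub;  // advertised only if a hypothetical publish_utm_pose parameter is true

void gpsFixCallback(const sensor_msgs::NavSatFix::ConstPtr &fix)
{
  double northing = 0.0;
  double easting = 0.0;
  std::string zone;
  gps_common::LLtoUTM(fix->latitude, fix->longitude, northing, easting, zone);

  geometry_msgs::PoseWithCovarianceStamped pose;
  pose.header = fix->header;
  pose.pose.pose.position.x = easting;
  pose.pose.pose.position.y = northing;
  pose.pose.pose.position.z = fix->altitude;
  pose.pose.pose.orientation.w = 1.0;
  utmPosePub.publish(pose);
}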

Tf publishing under multiple robot_localizations (world_frame = map_frame)

Quick question! In the guide it is suggested that we could use two instances of robot_localization to make sure things are aligned with REP-105. Specifically,

"If you are fusing global absolute position data that is subject to discrete jumps (e.g., GPS or position updates from landmark observations) then:
Set your world_frame to your map_frame value
MAKE SURE something else is generating the odom->base_link transform. Note that this can even be another instance of robot_localization! However, that instance should not fuse the global data."

However, the tf is always published between "world_frame" and "base_link". We can't really set base_link to odom (repeated frame names are not allowed). I feel like I'm missing a solution here?

Calling set_pose with world_frame = map is incorrectly transformed

The general problem I am seeing is that when I call "set_pose" to the EKF the state gets set incorrectly. I'm unsure if this is a bug or if I may simply have my ekf configured incorrectly.

My frames are the standard:
map_frame: map
world_frame: map
odom_frame: odom

What I am seeing is if I send a /set_pose with a frame_id of map then I expect the output of the filter to be what I set in the set_pose message. Instead what I see is that it gets transformed by the odom->map transform before being set.

Looking through the code I found in ros_filter.cpp, line 1265:
preparePose(msg, topicName, odomFrameId_, false, false, false, updateVector, measurement, measurementCovariance);

which is using the odomFrameId_ for the transform. By changing this to worldFrameId_, everything works as expected.

It seems odd that the filter is transforming into the odom frame. If this is intended then I am not sure what I am doing wrong.

Thanks!

Incorrect handling of transforms for linear velocity

ros_filter.h takes in linear velocities in whatever frame they are reported and transforms them into the target frame. However, only the transform's rotation should be applied, as linear offsets in transforms do not affect velocity. As it is, this issue causes the vehicle to have false linear velocity based on the transform's linear offset.
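
In other words, something along these lines for the velocity portion of a measurement (a sketch; names are illustrative):

#include <tf2/LinearMath/Transform.h>

// Transform a linear velocity between frames by applying only the rotation of
// the source->target transform. Applying the full transform would add the
// transform's translation to the velocity as a spurious constant offset.
tf2::Vector3 transformLinearVelocity(const tf2::Transform &sourceToTarget,
                                     const tf2::Vector3 &velocity)
{
  return tf2::Transform(sourceToTarget.getRotation()) * velocity;
}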

Add parameter for map frame ID.

REP-105 (http://www.ros.org/reps/rep-0105.html) states the following:

"The transform from map to base_link is computed by a localization component. However, the localization component does not broadcast the transform from map to base_link. Instead, it first receives the transform from odom to base_link, and uses this information to broadcast the transform from map to odom."

robot_localization needs to be updated to more closely follow the REP.
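
The computation itself is just a composition of the two transforms named in the REP; a sketch (assuming mapToBase is the filter's own estimate and odomToBase is looked up from tf):

#include <tf2/LinearMath/Transform.h>

// Per REP-105: broadcast map->odom rather than map->base_link, where
// map->odom = (map->base_link) * (odom->base_link)^-1.
tf2::Transform computeMapToOdom(const tf2::Transform &mapToBase,
                                const tf2::Transform &odomToBase)
{
  return mapToBase * odomToBase.inverse();
}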

Node crashes on startup

The EKF localisation node is crashing right on launch:

process[ekf_localization-3]: started with pid [10649]
ekf_localization_node: malloc.c:3695: _int_malloc: Assertion `(unsigned long) (size) >= (unsigned long) (nb)' failed.

ROS Indigo on Ubuntu 14.04 LTS

Metric scale estimation in monocular system

Hi Guys,

We're using the rpg_svo stack for monocular visual odometry. We also have an IMU for estimating the metric scale. Previously, we've used the ethzasl_msf ekf framework for the sensor fusion and scaling, but now I'd like to try out this package.

Ethzasl_msf automatically scales the measurement from SVO using IMU data (common movement) and the output has correct metric scale. Is this implemented in your package? If so, are there any specific steps or parameters required to be set? If not, how difficult would implementation be?
