Comments (7)
The two tools behave completely differently with their default parameters! There's a reason why the used parameters are explicitly shown in the title of evo_rpe, and why there are conflict checks in evo_res: it's easy to forget what you're actually measuring. Unfortunately, evaluate_rpe.py's behavior is not so obvious.
Please also be aware of the disclaimer in the README of this package 😉
What it's not: a 1-to-1 re-implementation of a particular evaluation protocol tailored to a specific dataset.
From python evaluate_rpe.py --help:
--max_pairs MAX_PAIRS
maximum number of pose comparisons (default: 10000,
set to zero to disable downsampling)
--fixed_delta only consider pose pairs that have a distance of delta
delta_unit (e.g., for evaluating the drift per
second/meter/radian)
--delta DELTA delta for evaluation (default: 1.0)
--delta_unit DELTA_UNIT
unit of delta (options: 's' for seconds, 'm' for
meters, 'rad' for radians, 'f' for frames; default:
's')
The default behavior of this script (non-fixed delta) is weirdly "interesting": changing delta and delta_unit changes absolutely nothing. This might be a bug, but I don't know. Check the code if you really need to know what it does.
The delta parameters only have an effect if you use --fixed_delta. "Seconds" is not supported as a delta unit by evo; I also don't think it makes much sense as a delta unit for an odometry error.
An example that gives more or less similar results:
evo_rpe tum fr2_desk_groundtruth.txt fr2_desk_ORB.txt
evo_rpe tum fr2_desk_groundtruth.txt fr2_desk_ORB.txt --pose_relation angle_deg
python evaluate_rpe.py fr2_desk_groundtruth.txt fr2_desk_ORB.txt --delta 1 --delta_unit f --fixed_delta --verbose
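For reference, here is a minimal sketch of what a fixed frame delta of 1 measures: the translational relative pose error between consecutive frames. This is plain NumPy, not the actual code of either tool, and it assumes the poses are already associated and given as 4x4 SE(3) matrices:

```python
import numpy as np

def rpe_translation_consecutive(gt_poses, est_poses):
    """RMSE of the translational RPE over consecutive pose pairs."""
    errors = []
    for i in range(len(gt_poses) - 1):
        # relative motion from frame i to frame i+1 in each trajectory
        gt_rel = np.linalg.inv(gt_poses[i]) @ gt_poses[i + 1]
        est_rel = np.linalg.inv(est_poses[i]) @ est_poses[i + 1]
        # error motion: how far the estimated relative pose deviates
        err = np.linalg.inv(gt_rel) @ est_rel
        errors.append(np.linalg.norm(err[:3, 3]))
    return float(np.sqrt(np.mean(np.square(errors))))
```

Up to trajectory association and alignment, this is the quantity the three commands above should agree on.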
from evo.
Ok, just saw this, my bad. Thank you!
Edit: just saw your second comment.
Well, what I understand from TUM's website (quoted here):
By default, the script computes the error between all pairs of timestamps in the estimated trajectory file. As the number of timestamp pairs in the estimated trajectory is quadratic in the length of the trajectory, it can make sense to downsample this set to a fixed number (–max_pairs)
My reading is that for each pose in the trajectory, they loop over the whole trajectory again and compute the relative errors (which is why I think they mention the quadratic term). So obviously, changing delta and delta_unit does not change anything. Answering your comment below:
The default behavior of this script is weird "interesting" (non-fixed delta) - changing delta and delta_unit changes absolutely nothing. This might be a bug, but I don't know. Check the code if you really need to know what it does.
I believe your implementation of --all_pairs is not really the same: you are indeed doing all pairs with a fixed delta, but they are doing absolutely all possible pairs.
These are my thoughts, but I haven't checked the code... :( maybe someone can clarify this point.
Otherwise I'll do it later this month :)
👍 Thank you nevertheless for your code! It's great!
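To illustrate the difference in pairing (this is a sketch of my understanding, not code copied from either script): by default, evaluate_rpe.py considers every pair of frames, which is quadratic in the trajectory length (hence the --max_pairs downsampling), while a fixed frame delta only keeps pairs that are exactly delta frames apart:

```python
def all_pairs(n):
    """All (i, j) with i < j -- n*(n-1)/2 pairs, quadratic in n."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)]

def fixed_delta_pairs(n, delta=1):
    """Only pairs exactly `delta` frames apart -- n - delta pairs."""
    return [(i, i + delta) for i in range(n - delta)]

print(len(all_pairs(864)))          # 372816 candidate pairs
print(len(fixed_delta_pairs(864)))  # 863, matching evo's output below
```

For 864 synchronized poses, that is 372,816 candidate pairs in the all-pairs case versus 863 consecutive pairs with delta = 1 frame.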
Thanks for the investigation!
you are indeed doing all pairs with a fixed delta, but they are doing absolutely all possible pairs
Yes, that's correct. Maybe I shouldn't write "RPE w.r.t ... using all possible pairs" then 🤔
evo_rpe tum fr2_desk_groundtruth.txt fr2_desk_ORB.txt
evo_rpe tum fr2_desk_groundtruth.txt fr2_desk_ORB.txt --pose_relation angle_deg
python evaluate_rpe.py fr2_desk_groundtruth.txt fr2_desk_ORB.txt --delta 1 --delta_unit f --fixed_delta --verbose
Hi, these commands generate similar results on the TUM RGB-D dataset, but the results on the Bonn dataset are still different.
I think your evo tool is more credible, but what is the reason for the difference?
Hi, maybe I found the possible reason: the ground truth rate differs between the TUM dataset (100 Hz) and the Bonn dataset (30 Hz). In a sequence of the TUM dataset, there are 3060 poses in the ground truth and 856 poses in the estimated trajectory. The command and output are:
python evaluate_rpe.py groundtruth.txt my_af202.txt --delta 1 --delta_unit f --fixed_delta --verbose
translational_error.rmse 0.019422 m
translational_error.mean 0.015232 m
translational_error.median 0.012157 m
translational_error.std 0.012049 m
translational_error.min 0.001467 m
translational_error.max 0.113745 m
rotational_error.rmse 0.479557 deg
rotational_error.mean 0.390465 deg
rotational_error.median 0.320511 deg
rotational_error.std 0.278410 deg
rotational_error.min 0.020445 deg
rotational_error.max 2.119221 deg
evo_rpe tum groundtruth.txt my_af202.txt -va
Loaded 3062 stamps and poses from: groundtruth.txt
Loaded 866 stamps and poses from: my_af202.txt
Synchronizing trajectories...
Found 864 of max. 866 possible matching timestamps between...
groundtruth.txt
and: my_af202.txt
..with max. time diff.: 0.01 (s) and time offset: 0.0 (s).
--------------------------------------------------------------------------------
Aligning using Umeyama's method...
Rotation of alignment:
[[ 0.99803838 0.02161303 -0.05875595]
[ 0.06091865 -0.11890846 0.99103466]
[ 0.01443268 -0.99266996 -0.11999185]]
Translation of alignment:
[-0.6356544 -2.76933336 1.4763658 ]
Scale correction: 1.0
--------------------------------------------------------------------------------
Found 863 pairs with delta 1 (frames) among 864 poses using consecutive pairs.
Compared 863 relative pose pairs, delta = 1 (frames) with consecutive pairs.
Calculating RPE for translation part pose relation...
--------------------------------------------------------------------------------
RPE w.r.t. translation part (m)
for delta = 1 (frames) using consecutive pairs
(with SE(3) Umeyama alignment)
max 0.113863
mean 0.015228
median 0.012145
min 0.001467
rmse 0.019418
sse 0.325414
std 0.012050
The number of used poses is similar, so the results are similar.
However, in one sequence of the Bonn dataset, there are 937 poses in the ground truth and 931 poses in the estimated trajectory. The command and output are:
python evaluate_rpe.py groundtruth.txt my_af202.txt --delta 1 --delta_unit f --fixed_delta --verbose
compared_pose_pairs 929 pairs
translational_error.rmse 0.019285 m
translational_error.mean 0.015794 m
translational_error.median 0.013768 m
translational_error.std 0.011066 m
translational_error.min 0.001321 m
translational_error.max 0.121428 m
rotational_error.rmse 0.974779 deg
rotational_error.mean 0.831387 deg
rotational_error.median 0.705680 deg
rotational_error.std 0.508911 deg
rotational_error.min 0.032085 deg
rotational_error.max 3.797937 deg
evo_rpe tum groundtruth.txt my_af202.txt -va
--------------------------------------------------------------------------------
Loaded 937 stamps and poses from: groundtruth.txt
Loaded 931 stamps and poses from: my_af202.txt
Synchronizing trajectories...
Found 546 of max. 931 possible matching timestamps between...
groundtruth.txt
and: my_af202.txt
..with max. time diff.: 0.01 (s) and time offset: 0.0 (s).
--------------------------------------------------------------------------------
Aligning using Umeyama's method...
Rotation of alignment:
[[-0.47477806 -0.4005333 0.78368289]
[-0.87920361 0.25615124 -0.4017307 ]
[-0.03983482 -0.87974975 -0.47376531]]
Translation of alignment:
[ 0.12510496 -1.68790529 1.44251111]
Scale correction: 1.0
--------------------------------------------------------------------------------
Found 545 pairs with delta 1 (frames) among 546 poses using consecutive pairs.
Compared 545 relative pose pairs, delta = 1 (frames) with consecutive pairs.
Calculating RPE for translation part pose relation...
--------------------------------------------------------------------------------
RPE w.r.t. translation part (m)
for delta = 1 (frames) using consecutive pairs
(with SE(3) Umeyama alignment)
max 1.091000
mean 0.019737
median 0.013485
min 0.001030
rmse 0.067781
sse 2.503871
std 0.064844
The number of used poses is different (evo compares only 545 pairs after synchronizing 546 matching timestamps, while evaluate_rpe.py compares 929 pairs), which leads to the different results.
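A toy sketch of timestamp association under a tolerance illustrates the mechanism (a greedy nearest-stamp matcher is assumed here; evo's actual synchronization logic may differ):

```python
def associate(gt_stamps, est_stamps, max_diff=0.01):
    """Match each estimated stamp to the closest unused ground-truth stamp."""
    matches, used = [], set()
    for t in est_stamps:
        best = min((g for g in gt_stamps if g not in used),
                   key=lambda g: abs(g - t), default=None)
        if best is not None and abs(best - t) <= max_diff:
            matches.append((best, t))
            used.add(best)
    return matches

# 100 Hz ground truth vs. a 30 Hz estimate: every estimated stamp finds a
# ground-truth stamp within 0.01 s, so almost nothing is dropped.
gt_100hz = [i / 100.0 for i in range(300)]
est_30hz = [i / 30.0 for i in range(90)]
print(len(associate(gt_100hz, est_30hz)))  # 90

# 30 Hz ground truth with a constant 15 ms stamp offset: every candidate
# pair misses the 0.01 s tolerance, so the matcher drops all poses.
gt_30hz = [i / 30.0 for i in range(90)]
est_shifted = [t + 0.015 for t in gt_30hz]
print(len(associate(gt_30hz, est_shifted)))  # 0
```

With a dense 100 Hz ground truth there is always a stamp within the 0.01 s tolerance, but with a sparse 30 Hz ground truth even small stamp offsets push pairs outside the tolerance, which is consistent with evo finding only 546 of 931 possible matches on the Bonn sequence.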