I once posted the following online (on robotics.stackexchange.com) and it got me the “Tumbleweed” badge: “Zero score, no answers, no comments, and low views for a week.” It eventually got hidden/deleted by a bot.
Unfortunately I haven’t been able to resolve the question yet, and I think it is informative in itself, so I’ll put it back out there:
I am currently wondering how to report the uncertainty of pose estimates produced by navigation software.
There seem to be two main choices: expressing the covariance in the external reference frame in which the pose is given, or in the local frame of the robot.
The representation for translation is pretty clear: Cartesian coordinates in meters. Concerning the coordinate frame, I think it is common to express the translation uncertainty in the external reference frame in which the pose is given.
For orientation, it seems to make more sense to place the uncertainty in the local coordinate frame, as that is where the linearization takes place. However, having a covariance matrix with parts in the external frame and parts in the local frame is certainly unexpected, and would result in weird off-diagonal blocks (the upper-right and lower-left blocks).
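To make the frame question concrete, here is a minimal sketch (the helper `rot_z` and the numbers are made up for illustration) of how the translation block of a 6×6 pose covariance changes when it is re-expressed from the external frame in the local frame, via \(\Sigma_{local} = R^\top \Sigma_{ext} R\):

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix for a yaw of `theta` radians (hypothetical helper)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# 6x6 pose covariance: upper-left 3x3 block = translation,
# lower-right 3x3 block = rotation (values invented for the example).
cov = np.diag([0.04, 0.01, 0.01, 0.001, 0.001, 0.01])

# Suppose the robot's heading is 90 degrees relative to the external frame.
R = rot_z(np.pi / 2)

# Re-express only the translation block in the local frame:
# Sigma_local = R^T @ Sigma_ext @ R.
cov_local_t = R.T @ cov[:3, :3] @ R

# The x/y uncertainties swap, since local x points along external y.
print(np.round(cov_local_t.diagonal(), 6))  # -> [0.01 0.04 0.01]
```

The same similarity transform applied to the full 6×6 matrix is what produces the mixed off-diagonal blocks mentioned above when translation and rotation live in different frames.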
For the representation of orientation itself, the open question is which parameterization to use:
I couldn’t find much on this online. The ROS conventions propose roll-pitch-yaw (rotations about the X, Y, Z axes, with fixed axes), but I couldn’t find the reasoning behind that decision. g2o uses quaternions without the \(w\) coefficient in its slam3d types.
Thanks for reading so far. From writing this up, it seems to me that using a local reference frame for both translation and rotation, with the rotational part parameterized by a rotation vector, is the way to go, but I don’t feel particularly confident about it.
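In code, the rotation-vector option amounts to the exponential map of SO(3), i.e. Rodrigues’ formula, with uncertainty attached to a small perturbation applied in the local frame. A sketch (the name `exp_so3` is mine):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x such that [v]_x @ u = v x u."""
    x, y, z = v
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def exp_so3(v):
    """Rodrigues' formula: map a rotation vector to a rotation matrix."""
    v = np.asarray(v, dtype=float)
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(v / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Nominal orientation: a quarter turn about z.
R = exp_so3([0.0, 0.0, np.pi / 2])

# A small perturbation, expressed as a rotation vector in the *local*
# frame, is applied on the right (values invented for the example).
delta = np.array([0.01, -0.02, 0.005])
R_perturbed = R @ exp_so3(delta)

# The nominal rotation maps local x onto external y.
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # -> [0. 1. 0.]
```

With this convention, the 3×3 rotational covariance describes the distribution of `delta`, which is exactly the quantity the linearization acts on.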
I’d appreciate your thoughts on the matter. References on this topic are also very welcome!