Motion Sensor Fusion

Satellite-based geolocation has become commonplace, thanks to the manufacturers that managed to integrate the high-sensitivity RF receivers and the complex mathematical computation required by GPS technology into small and cheap devices that fit into the palm of your hand, hidden in a smartphone or behind a car’s dashboard.

Besides position, another important characteristic to obtain is the orientation in space, known as the “attitude”. But in this case, unlike GPS, no RF communication with an external system is required: this information can be obtained by using local physical sensors.

The difficulty is that no single sensor is able to provide an absolute attitude by eliminating all 6 mechanical Degrees of Freedom at once (3 translational DoF along the axes, 3 rotational DoF around them). Thus, a “mix” of several types of sensors is required, and with the democratization of low-cost motion sensors based on MEMS (for “MicroElectroMechanical Systems”), e.g. $3.02 for a 9-axis sensor on AliExpress, their usage is gaining a lot of momentum today, especially because of the development of drones and Virtual Reality headsets.

Compared to the classic flight instruments (airspeed indicator, attitude indicator, altimeter, turn coordinator, heading indicator and vertical speed indicator), these sensors basically serve the same goal, getting an attitude (an orientation in space), but at a fraction of the size and cost.

When these sensors are considered as a group delivering only raw values, the term IMU (“Inertial Measurement Unit”) is used.

But these MEMS motion sensors have an important issue: the individual sensors do not independently deliver ready-to-process information, so a “sensor fusion” operation is required in order to obtain an absolute orientation in space. The term used in this case is AHRS (“Attitude and Heading Reference System”).

Among all existing motion sensor types, the gyroscope is generally considered the most important one, as it provides an angular velocity, and a simple integration over time retrieves relative angles, a very good start for getting the overall orientation in space. One drawback of this method is that integration introduces an unknown constant, and any bias in the measurements accumulates into an error that results in a drift over time. This is exactly the purpose of the “PUSH” knob near a mechanical heading indicator, used to realign it to magnetic North every ten to fifteen minutes.
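To make the drift concrete, here is a minimal sketch of single-axis gyro integration; readGyroZ() is a hypothetical stand-in for a real sensor driver, and the bias value is invented for illustration:

```cpp
#include <cstdio>

// Hypothetical stand-in for a sensor driver: returns angular velocity
// around Z in degrees per second (a true 5 deg/s rotation plus a small
// constant bias, to make the drift visible).
static float readGyroZ() { return 5.0f + 0.1f; }

int main() {
    const float dt = 0.01f; // 100 Hz sampling period
    float yaw = 0.0f;       // relative angle, in degrees

    // Rectangular integration: angle += rate * dt.
    for (int i = 0; i < 1000; ++i)
        yaw += readGyroZ() * dt;

    // The 0.1 deg/s bias has accumulated into 1 degree of error
    // after 10 seconds, and it only gets worse from here.
    printf("yaw after 10 s: %.2f deg (true value: 50.00 deg)\n", yaw);
    return 0;
}
```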

These relative angles can be constrained further by using an accelerometer, which provides linear acceleration values and thus allows us to observe the Earth’s gravitational pull as a “1 g” down vector when the device is at rest, adding 2 constraints to the angles (roll and pitch), but also significant disturbance when moving.
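Here is a sketch of how the gravity vector constrains two of the three angles, using the textbook formulas (axis conventions vary between sensors, so signs may need adjusting for a given part):

```cpp
#include <cmath>
#include <cstdio>

// Roll and pitch (radians) from a raw accelerometer sample, assuming
// the device is static so the only measured acceleration is gravity.
// ax, ay, az can be in any consistent unit (g, m/s^2, raw counts).
static void rollPitchFromAccel(float ax, float ay, float az,
                               float &roll, float &pitch) {
    roll  = atan2f(ay, az);
    pitch = atan2f(-ax, sqrtf(ay * ay + az * az));
}

int main() {
    float roll, pitch;
    // A device tilted 30 degrees around its X axis reads roughly:
    rollPitchFromAccel(0.0f, 0.5f, 0.866f, roll, pitch);
    printf("roll = %.1f deg, pitch = %.1f deg\n",
           roll * 57.2958f, pitch * 57.2958f);
    return 0;
}
```

Note that yaw is invisible here: rotating around the gravity vector does not change the accelerometer reading at all, hence the remaining degree of freedom discussed next.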

There remains a last degree of freedom: the yaw. It can be given by a magnetometer, which provides the direction of magnetic North. However, it is far less dynamic than the gyroscope during rotations.
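A sketch of the classic tilt-compensated heading computation, assuming roll and pitch are already known from the other sensors; again, axis and sign conventions differ between chips:

```cpp
#include <cmath>
#include <cstdio>

// Heading (yaw, radians) from magnetometer readings mx, my, mz,
// with roll and pitch (radians) obtained from the other sensors.
// The magnetic field vector is first rotated back into the
// horizontal plane, then its direction gives the heading.
static float headingFromMag(float mx, float my, float mz,
                            float roll, float pitch) {
    float xh = mx * cosf(pitch) + mz * sinf(pitch);
    float yh = mx * sinf(roll) * sinf(pitch) + my * cosf(roll)
             - mz * sinf(roll) * cosf(pitch);
    return atan2f(-yh, xh); // 0 when pointing to magnetic North
}

int main() {
    // Level device (roll = pitch = 0) pointing at magnetic North:
    printf("heading = %.1f deg\n",
           headingFromMag(1.0f, 0.0f, 0.0f, 0.0f, 0.0f) * 57.2958f);
    return 0;
}
```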

The problem of fusing the data given by all these sensors can definitely be solved by using the tactical nuke that applies to this kind of disturbance generated by sensor noise: the Kalman predictive filter, or more precisely its extended version (EKF, for “Extended Kalman Filter”) for non-linear systems, at the cost of matrix computations that are relatively complex for an embedded system. Nevertheless, an Arduino library implementing this method is available.
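A full EKF is too large for a snippet here, but the predict/update cycle it is built on can be shown on a one-dimensional toy filter: one angle as the state, the gyro rate driving the prediction, the accelerometer-derived angle as the measurement. The Q and R tuning values are arbitrary illustration numbers:

```cpp
#include <cstdio>

// Toy 1-D Kalman filter tracking a single angle.
struct Kalman1D {
    float angle = 0.0f; // state estimate (degrees)
    float P = 1.0f;     // estimate variance
    float Q = 0.001f;   // process noise: how much we trust the gyro
    float R = 0.03f;    // measurement noise: how much we trust the accel

    float step(float gyroRate, float accelAngle, float dt) {
        // Predict: propagate the state with the gyro rate...
        angle += gyroRate * dt;
        P += Q;                    // ...and grow the uncertainty.

        // Update: blend in the accelerometer measurement.
        float K = P / (P + R);     // Kalman gain
        angle += K * (accelAngle - angle);
        P *= (1.0f - K);           // shrink the uncertainty
        return angle;
    }
};

int main() {
    Kalman1D kf;
    // Gyro reads 0 deg/s while the accelerometer insists on 10 degrees:
    for (int i = 0; i < 50; ++i)
        kf.step(0.0f, 10.0f, 0.01f);
    printf("converged angle: %.2f deg\n", kf.angle);
    return 0;
}
```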

Another approach consists in exploiting complementary filters (low-pass + high-pass) to filter the noise inherent to each sensor, as demonstrated by Robert Mahony in 2008, using a simple PI (for Proportional–Integral) feedback loop.
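The complementary idea itself fits in one line: high-pass the integrated gyro, low-pass the accelerometer angle, and sum the two. A minimal single-axis sketch, without Mahony’s PI correction term; the 0.98 coefficient is a typical but arbitrary choice:

```cpp
#include <cstdio>

// One-axis complementary filter. An alpha close to 1 trusts the
// integrated gyro in the short term (high-pass) and the accelerometer
// angle in the long term (low-pass).
static float complementary(float prevAngle, float gyroRate,
                           float accelAngle, float dt,
                           float alpha = 0.98f) {
    return alpha * (prevAngle + gyroRate * dt)
         + (1.0f - alpha) * accelAngle;
}

int main() {
    float angle = 0.0f;
    // Stationary device: gyro reads 0 deg/s, accel reads 10 degrees.
    for (int i = 0; i < 500; ++i)
        angle = complementary(angle, 0.0f, 10.0f, 0.01f);
    printf("angle converges to %.2f deg\n", angle);
    return 0;
}
```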

The complementary filter idea was pushed further by Sebastian Madgwick in 2010, using a spherical gradient (a kind of “mean” value between 2 points on a sphere); he was the first to provide an Open Source implementation of both the Mahony and Madgwick filters. Something worth noticing in these sources is the use of the Fast Inverse Square Root algorithm, reproduced below.
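For reference, here is that famous routine (popularized by Quake III) in the form it usually takes in these filters, with memcpy replacing the original pointer cast to stay clear of undefined behavior in modern C++:

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>

// Fast inverse square root: approximates 1/sqrt(x) without a division
// or a sqrt call, handy for normalizing vectors and quaternions on
// small MCUs lacking a floating-point unit.
static float invSqrt(float x) {
    float halfx = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof(i));        // reinterpret the float bits
    i = 0x5f3759df - (i >> 1);        // the magic initial guess
    float y;
    memcpy(&y, &i, sizeof(y));
    y = y * (1.5f - halfx * y * y);   // one Newton-Raphson refinement
    return y;
}

int main() {
    printf("invSqrt(4) = %f (exact: 0.5)\n", invSqrt(4.0f));
    return 0;
}
```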

The Madgwick algorithm was improved by Kris Winer in 2014 to remove undefined cases when computing the norms of null vectors coming from the accelerometer and magnetometer, resulting in a library for the InvenSense MPU-9250 integrated 9-axis sensor.
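The fix boils down to a guard clause before normalization; a sketch of the idea, not Kris Winer’s actual code:

```cpp
#include <cmath>
#include <cstdio>

// Normalize a 3-vector in place, refusing null vectors: without this
// guard, an all-zero accelerometer or magnetometer sample would inject
// a division by zero into the filter state.
static bool normalize3(float &x, float &y, float &z) {
    float norm = sqrtf(x * x + y * y + z * z);
    if (norm == 0.0f)
        return false; // caller skips this correction step entirely
    x /= norm; y /= norm; z /= norm;
    return true;
}

int main() {
    float x = 0.0f, y = 0.0f, z = 0.0f;
    if (!normalize3(x, y, z))
        printf("null sample: falling back to a gyro-only update\n");
    return 0;
}
```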

Eventually, Kris Winer’s sources were cleaned up by SparkFun in 2016 to create a clean Arduino library.

Another, more basic implementation of the Madgwick algorithm was done by Philippe Lucidarme in 2015, with a bonus graphical visualization tool written in Qt.

If you have reached this point, you may have noticed that all the above algorithms are based on “quaternions” instead of the more classical Euler angles or matrices. This is because a quaternion represents a rotation in space in the form of a 3D vector (the rotation axis) and a scalar value (encoding the rotation angle around this axis). This enables computing rotation compositions in a simpler way than with Euler angles or even full-fledged matrix transforms, and avoids the dreadful Gimbal Lock.
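To make the “simpler composition” claim concrete, here is a minimal quaternion sketch (not code from any of the libraries above): chaining two rotations is a single 16-multiplication product, with no trigonometry and no singular configuration:

```cpp
#include <cmath>
#include <cstdio>

struct Quat { float w, x, y, z; };

// Quaternion from a unit rotation axis and an angle in radians:
// (cos(a/2), axis * sin(a/2)).
static Quat fromAxisAngle(float ax, float ay, float az, float angle) {
    float s = sinf(angle * 0.5f);
    return { cosf(angle * 0.5f), ax * s, ay * s, az * s };
}

// Hamilton product a * b: the composition of the two rotations.
static Quat mul(const Quat &a, const Quat &b) {
    return {
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
    };
}

int main() {
    // Two successive 45-degree rotations around Z compose into 90 degrees:
    Quat r45 = fromAxisAngle(0.0f, 0.0f, 1.0f, 0.7853982f);
    Quat r90 = mul(r45, r45);
    printf("w = %.4f (expected cos(45 deg) = 0.7071)\n", r90.w);
    return 0;
}
```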
