Tech deep dive | The robot's "position sensor": the IMU

Autonomously moving intelligent devices such as robots and self-driving vehicles are developing rapidly: control algorithms are growing more complex and perception systems more capable. The IMU, a sensor that measures the motion and attitude of a controlled object, is a key part of the perception system, and it is itself evolving toward lower cost and higher precision. For this technical issue we invited Wang Qi, an engineer from Xiaomi, to take the IMU as the main subject: from what an IMU is to how it works, combined with concrete applications, presenting the robot's "position sensor" from multiple angles.

1. What is an IMU?

Our bodies are in motion all the time: standing, sitting, lying down, and so on. These postures feel completely natural to us, but have you ever wondered how we actually know whether we are "standing", "sitting", or "lying down"?

In fact, an inconspicuous but extremely important set of organs in our body, the position receptors, is the key to normal movement and balance. Think about it: after spinning in circles a few times, we feel dizzy, and in severe cases cannot even walk straight. Our vision, hearing, and touch are all working normally at that moment; it is the position receptors that are disturbed, so the body cannot sense its current posture and cannot keep its balance.

Similarly, a robot carries just such a position sensor: the IMU, which measures the robot's motion and attitude. Its role mirrors that of our position receptors: the measurements are reported to the robot's motion control module (its "cerebellum"), which makes decisions based on this information and issues commands to the actuators (arms, legs, feet) to control the robot's movement.

In the narrow sense, IMU stands for inertial measurement unit: a device that measures the three-axis acceleration and three-axis angular velocity of an object moving in three-dimensional space, from which the object's attitude is obtained through data fusion and other algorithms. In practice, "IMU" generally refers to any sensor that measures the motion and attitude of a controlled object, including the narrow-sense IMU, VRU, AHRS, and so on. For a robot, the quality of the IMU data directly determines its motion performance. Nor is the IMU limited to robotics: it appears in many fields, from aircraft, missiles, and spacecraft down to mobile phones and fitness bands.

2. How does an IMU sense attitude?

▍What is attitude?

Attitude is the rotation of an object relative to the axes of a coordinate system. Generally, when we speak of an object's attitude we mean its rotation relative to its own coordinate system, but we can compute its attitude relative to any other coordinate system through a rotation transform. For example, in the figure, two people look at a digit on the ground: A reads it as a 6 and B reads it as a 9. To resolve the disagreement, we attach a coordinate system to A, to B, and to the digit itself. We stipulate that the digit is a 6 with its x-axis initially pointing along the hook of the 6, that A's and B's x-axes are parallel to the digit's x-axis, and that the y-axes point vertically upward. Through coordinate transformation, the digit then reads as a 6 to A, and to B it is also a 6, merely a "6 turned upside down".

Figure 1 Relative attitude of objects

We can describe the above in mathematical language. Taking the two-dimensional coordinate system as an example, we use an angle value (unit: deg) to represent each attitude and relative attitude. First, the attitudes of A, B, and the digit on the ground relative to their own coordinate systems are all:

θ = 0

The attitude of the digit 6 relative to A, that is, the digit as seen through A's eyes, is:

θ_A = 0

The attitude of the digit 6 relative to B, that is, the digit as seen through B's eyes, is:

θ_B = 180

The representation above is in fact the most intuitive way of expressing attitude: Euler angles. In three-dimensional Cartesian space, the attitude of an object is represented by its rotation angles about the three axes. By convention, the rotation about the x-axis is the roll angle, the rotation about the y-axis is the pitch angle, and the rotation about the z-axis is the yaw angle, as shown in Figure 2.


Figure 2 Euler-angle representation of attitude

Image source: https://www.youtube.com/watch?v=UpSMNYTVqQI
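As a minimal sketch of the Euler angles just described (assuming the common Z-Y-X, i.e. yaw-pitch-roll, rotation order; other conventions exist), the three angles can be assembled into a rotation matrix like this:

```python
import numpy as np

def euler_to_matrix(roll, pitch, yaw):
    """Build a rotation matrix from Z-Y-X (yaw-pitch-roll) Euler angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    return Rz @ Ry @ Rx

# Zero angles give the identity; a 90-degree yaw turns the x-axis into the y-axis
print(np.allclose(euler_to_matrix(0, 0, 0), np.eye(3)))  # True
```

Choosing a different multiplication order for Rz, Ry, Rx changes the resulting matrix, which is precisely the order-dependence discussed below.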

Although Euler angles are intuitive, they have limitations as an attitude representation. The result depends on the defined rotation order: rotating the digit 6 in Figure 2 by 90 degrees about x and then 90 degrees about y gives a different result from rotating 90 degrees about y first and then 90 degrees about x. Another example is the gimbal lock problem that can occur during rotation, which is likewise a consequence of the rotation order.
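The order-dependence is easy to check numerically; this small sketch builds the two elementary rotations and shows that applying them in opposite orders yields different results:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

ninety = np.pi / 2
xy = rot_y(ninety) @ rot_x(ninety)   # rotate about x first, then about y
yx = rot_x(ninety) @ rot_y(ninety)   # rotate about y first, then about x
print(np.allclose(xy, yx))  # False: the order changes the result
```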

In reality, most rigid-body rotations are completed in one motion. For example, to make a selfie look good we might tilt the head 45 degrees diagonally upward: the head rotates about a single axis by a single angle and arrives at the tilted pose directly, rather than first turning 45 degrees horizontally and then 45 degrees upward. Attitude can therefore also be expressed with representations that "go straight to the target attitude": the axis-angle and the quaternion.

In the axis-angle representation, the axis is the rotation axis, written as a three-dimensional unit vector u, and the angle θ is the rotation about that axis, so the axis-angle can be written as:

(u, θ), where u ∈ R³ and ‖u‖ = 1

Quaternions are defined in four-dimensional space and have many good properties for representing attitude, though they are less intuitive. A unit quaternion is written as:

q = [w, x, y, z] = w + xi + yj + zk, with w² + x² + y² + z² = 1

The conversion between a quaternion and an axis-angle is:

q = [cos(θ/2), u·sin(θ/2)]

where θ is the rotation angle and u is the rotation axis; this gives some feel for how a quaternion represents attitude.
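A minimal sketch of the axis-angle-to-quaternion conversion above, assuming the scalar-first convention q = [w, x, y, z]:

```python
import numpy as np

def axis_angle_to_quat(axis, angle):
    """q = [cos(theta/2), sin(theta/2)*u] for unit rotation axis u and angle theta (radians)."""
    u = np.asarray(axis, dtype=float)
    u = u / np.linalg.norm(u)          # normalize the rotation axis
    half = angle / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * u))

# A 180-degree turn about z (the "6 turned upside down" from Figure 1)
q = axis_angle_to_quat([0, 0, 1], np.pi)
print(np.round(q, 6))  # -> [0. 0. 0. 1.]
```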

Other attitude representations include the direction cosine matrix and Lie groups / Lie algebras, which we will not cover here.

▍How the IMU measures attitude

Take the MEMS inertial measurement unit (MEMS IMU) as an example. It consists of a three-axis accelerometer and a three-axis gyroscope and can measure the acceleration and angular velocity of an object moving in three-dimensional space. So how do we obtain the object's attitude from acceleration or angular velocity?


Figure 3 Working principle of accelerometer

Image source: https://www.zhihu.com/question/19769131/answer/886359013

A MEMS accelerometer measures acceleration using, in essence, Newton's second law. As shown in Figure 3, when the proof mass moves, its inertia deforms the elastic supports on either side. In a MEMS device this deformation is read out as a changing voltage: two fixed capacitor plates sit on the sides, with a movable plate in the middle acting as the proof mass. The deformation thus appears as a change in the two capacitances, and after analog-to-digital conversion and further processing, the acceleration of the motion is obtained.


Figure 4 Principle of attitude measurement from acceleration

When the IMU is at rest, the acceleration measured by the accelerometer is the acceleration of gravity. Exploiting this, the z-axis of the world coordinate system is defined to be parallel to gravity and pointing upward; by decomposing the gravity vector, the object's attitude relative to the world coordinate system can be obtained. As shown in Figure 4, when the object rotates by an angle, the accelerometer measures the component of gravity along the x-axis, from which the included angle, i.e. the object's rotation about the y-axis, can be computed. The rotation about the x-axis is obtained in the same way. This is the basic principle of attitude measurement from acceleration. In practice, however, MEMS accelerometers suffer from high-frequency noise, so the attitude computed from acceleration over a short time window is inaccurate.
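The gravity-decomposition idea can be sketched as follows; the atan2-based formulas are one common convention, and they hold only when the device is at rest so that the accelerometer reads gravity alone:

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate roll and pitch (radians) from a static accelerometer reading.
    Assumes the only measured acceleration is gravity (device at rest)."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay**2 + az**2))
    return roll, pitch

# Device lying flat: gravity lies entirely on the z-axis, both angles are ~0
print(tilt_from_accel(0.0, 0.0, 9.81))
```

Note that yaw is absent: as discussed later, rotation about the gravity axis cannot be recovered from the accelerometer.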

A traditional mechanical gyroscope exploits conservation of angular momentum: the spin axis of a rapidly rotating rotor keeps its direction when no external torque acts on it. This is the gyroscope's fixed-axis property, and its precession is essentially a consequence of that same property. Clearly this approach is impractical at MEMS scale, so MEMS gyroscopes are implemented differently.


Figure 5 The fixed axis and precession of the gyroscope

Image source: https://www.youtube.com/watch?v=n_6p-1J551Y

A MEMS gyroscope measures angular velocity via the Coriolis effect: an object in a rotating frame that also moves radially experiences a Coriolis force, perpendicular to the plane formed by the radial velocity and the angular velocity of rotation, with magnitude:

F = 2·m·v·ω

where m is the mass of the object, ω is the angular velocity of the rotation, and v is the speed of the radial motion.

In a MEMS gyroscope there are orthogonal capacitor plates along the radial and tangential directions. An oscillating voltage on the radial plates drives the proof mass back and forth; when the device rotates, the Coriolis force displaces the mass tangentially, changing the voltage on the tangential plates. Analog-to-digital conversion and further processing then yield the Coriolis acceleration, and the angular velocity follows from the formula above.


Figure 6 Working principle of MEMS gyroscope

Image source: Basics丨How does a MEMS gyroscope work? - Robotics Committee of Chinese Society of Automation

Calculus tells us that the integral of angular velocity is angle. We can therefore directly integrate the three-axis angular velocity measured by the gyroscope to obtain the object's rotation about the three axes. In practice, however, a MEMS gyroscope has various errors caused by manufacturing process, temperature, vibration, and so on, and these errors accumulate through integration, making the angle increasingly inaccurate. The end result: the object is plainly "standing", yet the IMU insists it is "lying down". What is the remedy?
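The accumulation of error can be illustrated with a quick simulation; the bias and noise values here are made-up illustration numbers, not the spec of any particular gyroscope:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01            # 100 Hz sample rate
bias = 0.02          # constant gyro bias, rad/s (assumed for illustration)
true_rate = 0.0      # the object is actually not rotating at all

angle = 0.0
for _ in range(6000):  # one minute of integration
    measured = true_rate + bias + rng.normal(0.0, 0.005)  # bias + white noise
    angle += measured * dt

print(f"drift after 60 s: {angle:.3f} rad")  # roughly bias * 60 s = 1.2 rad
```

The noise largely averages out, but the constant bias integrates linearly into an ever-growing angle error.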

▍IMU attitude calculation

Notice that the attitude obtained from acceleration, though inaccurate in the short term, carries no integration error and can be trusted over the long term; the attitude obtained from angular velocity drifts over the long term, but can be trusted in the short term. If we believe each of them a little, can we not obtain an attitude that is stable and accurate over both time scales? This leads to the simplest data fusion method:

angle = α · (angle + ω · Δt) + (1 − α) · angle_acc, with 0 < α < 1

Of course, this naive fusion cannot be used directly on a real IMU: given the sensors' noise characteristics, its parameters are hard to tune to convergence, and it serves only as an illustration. Practical data fusion methods are, however, built on this same simple idea of "believing each source a little", realized with rigorous methods.
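For illustration only (as noted, the blending parameter would need careful tuning on real hardware), the simplest fusion above is the classic complementary filter:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One step of the classic complementary filter:
    trust the gyro in the short term, the accelerometer in the long term."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# With a zero gyro rate, the estimate slowly converges toward the accelerometer angle
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=0.5, dt=0.01)
print(round(angle, 3))  # 0.5
```

The gyro term dominates instant-to-instant changes, while the small accelerometer weight slowly pulls the estimate back, canceling the integration drift.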

IMU data fusion is also called attitude estimation. Commonly used algorithms fall into complementary filtering (Complementary Filter) and Bayesian filtering (Bayesian Filter). Among complementary filters, the Mahony algorithm is widely used, and its complementary principle is similar to PID; there is also the Madgwick algorithm, based on gradient-descent optimization. The classic Bayesian filter is the Kalman Filter (KF) and its derivatives, such as the Extended Kalman Filter (EKF), the Error-State Kalman Filter (ESKF), and the Adaptive Extended Kalman Filter (AEKF). Compared with complementary filtering, Kalman filtering fuses data under a Bayesian model and can yield an unbiased estimate of the target state.


Figure 7 IMU attitude calculation based on extended Kalman filter [1]

As shown in Figure 7, taking the extended Kalman filter as an example, the attitude solution iterates in closed loop on the quaternion. Kalman filtering consists of two stages, prediction and update, which we briefly introduce in turn.

First comes the prediction stage, which predicts the attitude and its covariance matrix. Here the prediction is an integration of the angular velocity measured by the gyroscope; the state transition equation shown in the figure can be derived from the quaternion differential equation.

Then comes the update stage, in which the predicted and observed values are compared. The predicted value is the attitude from the prediction stage; the observed value is the three-axis component of gravity measured by the accelerometer. But how can an attitude be differenced against an acceleration? From the earlier analysis, acceleration can be converted into an attitude, so we could difference attitude against attitude; equally, we can convert the attitude into an acceleration and take the difference there. This is the role of the measurement matrix: it maps the predicted state from the prediction space into the observation space. Since this mapping is generally nonlinear and violates the Kalman filter's Gaussian assumptions, the EKF approximates it with a first-order Taylor expansion; the measurement matrix is the Jacobian of the mapping.

The mapping from prediction space to observation space is given in Figure 7. It carries two meanings: converting the predicted attitude into a predicted acceleration, and expressing that predicted acceleration in the coordinate system of the observed acceleration. If the predicted attitude had no error, then using it to transform the gravity vector into the observation frame would reproduce the observed acceleration exactly, because the observed acceleration is precisely gravity expressed in that frame; it is the error in the predicted attitude that produces a deviation between predicted and observed acceleration. The Kalman filter performs a weighted average of the two to obtain an unbiased estimate of the acceleration, and then maps that back through the inverse of the measurement mapping into an attitude, yielding an unbiased estimate of the attitude. This is the basic principle of EKF-based IMU attitude estimation.
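To make the predict/update loop concrete, here is a deliberately simplified scalar (1-D) Kalman filter sketch: the state is a single angle rather than the quaternion of Figure 7, and the noise values are illustrative assumptions:

```python
def kf_step(x, P, gyro_rate, dt, accel_angle, Q=1e-4, R=1e-2):
    """One predict/update cycle of a 1-D Kalman filter for an angle.
    Q and R are illustrative process/measurement noise variances."""
    # Predict: integrate the gyro and grow the uncertainty
    x_pred = x + gyro_rate * dt
    P_pred = P + Q
    # Update: weight the accelerometer observation by the Kalman gain
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (accel_angle - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for _ in range(200):
    x, P = kf_step(x, P, gyro_rate=0.0, dt=0.01, accel_angle=0.3)
print(round(x, 3))  # converges to the observed 0.3 rad
```

In the real EKF the scalar gain K becomes a matrix, the state transition integrates the quaternion differential equation, and the measurement mapping is the gravity projection discussed above.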

▍What does an AHRS do?

In the attitude measurement described above, notice that gravity has projections only onto the xz- and yz-planes; on the xy-plane, no matter how the object rotates, gravity projects to a single point, so the rotation about the z-axis cannot be computed from it. The IMU can measure rotation about the z-axis only by integrating angular velocity, and as we said, that integral drifts over time. How, then, can the rotation about the z-axis be corrected? The answer: with a magnetometer.

We mentioned earlier that the broad-sense IMU includes the AHRS. AHRS stands for Attitude and Heading Reference System: a MEMS-based package of three-axis gyroscope, accelerometer, and magnetometer with a built-in data fusion algorithm that directly outputs its own attitude. A ten-axis AHRS additionally carries a barometer to measure altitude.


Figure 8 A magnetometer measuring the geomagnetic field

Image source: https://www.youtube.com/watch?v=qXVLXrOB8Ag&feature=youtu.be

A magnetometer measures the magnitude and direction of the geomagnetic field. Over most of the Earth, the geomagnetic field makes a distinct angle with the direction of gravity, so the object's rotation about the z-axis can be computed from the field's projection onto the xy-plane, on the same principle as measuring attitude from gravity, and then fused with the angle obtained by integrating the angular velocity. A nine-axis AHRS thus provides long-term, stable attitude measurement, and it is what most robot controllers use today. Near the Earth's magnetic poles, however, the field direction is nearly parallel to gravity and the magnetic method fails, so the AHRS effectively degrades to six axes. Moreover, because the geomagnetic field is weak, the magnetometer approach also fails in environments with strong magnetic interference.
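A sketch of heading from a magnetometer reading; the tilt-compensation formulas follow one common convention and assume the roll/pitch estimates come from the accelerometer and that there is no local magnetic interference:

```python
import math

def yaw_from_mag(mx, my, mz, roll, pitch):
    """Tilt-compensated heading (radians) from a body-frame magnetometer reading.
    roll/pitch are assumed known (e.g. from the accelerometer); one common convention."""
    # Project the measured field onto the horizontal plane using roll and pitch
    mx_h = mx * math.cos(pitch) + mz * math.sin(pitch)
    my_h = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
            - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-my_h, mx_h)

# Level sensor aligned with the field's horizontal component: heading ~ 0
print(abs(yaw_from_mag(1.0, 0.0, 0.4, roll=0.0, pitch=0.0)))  # 0.0
```

Only the horizontal component of the field carries heading information, which is why the method degrades near the magnetic poles, where that component vanishes.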

There is also the inertial navigation system (INS), which integrates GPS: beyond sensing its own attitude, it provides stable position, velocity, and other information, making it more capable still.

3. Some applications of the IMU

▍The role of IMU in legged robots

The most important application of the IMU in a legged robot is closing the control loop. The goal of motion control is to drive the robot to a specified position; when the robot is running, the primary objective is to keep it balanced and upright, for only on that basis can the running itself be realized: a human who falls over certainly cannot run. Concretely, the attitude of the robot body is controlled using the data fed back by the IMU, keeping its roll, pitch, and yaw angles at the values we expect.


Figure 9 Iron Egg robot in motion

Image source: https://www.mi.com/cyberdog

We have introduced data fusion within the IMU, but robots usually carry many more sensors, such as cameras, lidar, ultrasonic sensors, and TOF sensors. Together they form the robot's perception system, like our human visual, auditory, and tactile receptors. Like the IMU, each sensor has its limitations: visual SLAM is easily affected by occlusion, illumination changes, moving objects, and weakly textured scenes, and easily loses tracking during fast motion; laser SLAM lacks semantic information; and so on. Hence the need to fuse data from multiple sensors.

The IMU maintains attitude estimation during fast motion and, when fused with other sensors, supplies them with an accurate prior. For example, fusing the IMU with a camera can provide the camera's initial pose and reduce errors caused by the sensor's own motion; the IMU can also be used to interpolate and smooth between visual frames. Conversely, a camera does not drift and measures translation and rotation directly, so it can calibrate the IMU's drift and improve the IMU's accuracy. Through such complementary strengths, multi-sensor fusion provides a stable, reliable perception system, an indispensable part of an intelligent robot.

▍The role of IMU in other fields

Below we take several IMU companies with world-leading technology as examples to introduce some applications of the IMU in other fields.

>>>> Xsens

Xsens is a Dutch company focused on the development and production of inertial sensors, inertial navigation systems, and motion capture equipment, with products used mainly in motion analysis, attitude measurement, virtual reality, human-computer interaction, and medical rehabilitation. Its products fall into three lines: inertial sensors (AHRS, IMU), inertial navigation systems (INS), and motion capture systems (MVN). The inertial sensors measure quantities such as acceleration, angular velocity, and magnetic field strength, serving attitude measurement, motion analysis, and navigation; the inertial navigation systems build on the inertial sensors and combine geomagnetic, barometric, GPS, and other sensors to achieve high-precision attitude measurement and navigation; the motion capture systems record the motion trajectories of people or objects for fields such as virtual reality and motion capture.


Figure 10 Xsens’ MTi AHRS equipment

Image source: https://www.movella.com/

Traditional optical motion capture requires a dedicated studio: high-speed cameras arranged around the space track reflective markers attached to the body, and post-processing reconstructs the motion. Optical capture is used mostly where high-precision motion data matters, such as game production and film shooting. Although highly accurate, such systems are expensive and cumbersome, so in fields such as VR and medical rehabilitation, inertial motion capture systems are now used instead, reconstructing human motion in real time from the attitude data fed back by IMUs.

Xsens MVN Link is Xsens' motion capture product line. It captures motion without external equipment: only body-worn sensors are needed to acquire the data, with no cameras or other gear to install. One of its key technologies is using IMUs to sense the posture of the human body (or other objects) in real time and transmit it back to the device, where the motion can be replayed or post-processed. The following video shows an inertial motion capture system in use.

Video source: https://mp.weixin.qq.com/s/_rWwnZ2vy2ZfjXcq_b-v3A

>>>> Microstrain

Microstrain is a US company focused on the R&D and production of inertial sensors and inertial navigation systems, used mainly in aerospace, defense, automation control, motion analysis, and other fields. Its products are known for high precision, stability, and reliability and are widely applied; its customers include well-known companies and institutions such as Boeing, Lockheed Martin, the U.S. Department of Defense, and NASA.

Microstrain's RTK products are real-time kinematic positioning solutions based on the Global Positioning System (GPS). Combining high-precision GPS receivers with IMUs, they provide very accurate positioning and navigation, with applications in land surveying and mapping, structural monitoring, transportation, agriculture, and more.

By application, RTK systems fall roughly into several types: UAV RTK systems, which give drones high-precision positioning, support a variety of flight control systems, and are lightweight and easy to install and operate; fixed RTK systems, which provide high-precision positioning for stationary installations in fields such as building monitoring, bridge monitoring, and geological exploration; and mobile RTK systems, which provide high-precision positioning for moving platforms such as vehicle navigation, ship navigation, and robot navigation.


Figure 11 Microstrain’s 3DM RTK equipment

Image source: https://www.microstrain.com/

The video below demonstrates the positioning accuracy of Microstrain's 3DM RTK vividly. Engineers built a self-navigating car, mounted a color-changing LED tube on it, and used long-exposure photography to trace the car's trajectory. Relying on RTK's high-precision real-time sensing, the car reproduced several works of art with centimeter-level positioning accuracy; the IMU's yaw-angle accuracy was key to the car's heading-holding cruise.

Video source: https://youtu.be/iVUTnPN4m-Y

4. Summary

This article has briefly introduced the attitude sensor in the robot perception system and its working principle. Inconspicuous as it looks, it plays a vital role. Autonomously moving intelligent devices such as robots and self-driving vehicles are developing rapidly, control algorithms are growing more complex, and perception systems more capable. As a key part of the perception system, the IMU can be found everywhere from consumer electronics to aerospace, at prices ranging from a few dollars at the consumer grade to hundreds of thousands or even millions at the strategic grade, and it continues to evolve toward lower cost and higher precision.

Of course, a single sensor's capability is limited; only by fusing multiple sensors can each contribute its full strength. Multi-sensor fusion perception has become one of the hot topics in robotics, autonomous driving, and related fields, and with traditional data fusion algorithms alongside deep learning and other methods, perception accuracy keeps improving. We believe that robots will eventually enter ordinary households, relying on their powerful sensing to interact with people, becoming major participants in social production and bringing convenience to human life.

References:

[1] Sabatelli S., Galgani M., Fanucci L., et al. A Double-Stage Kalman Filter for Orientation Tracking With an Integrated Processor in 9-D IMU. IEEE Transactions on Instrumentation and Measurement, 2013, 62(3): 590-598.



Source: blog.csdn.net/pengzhouzhou/article/details/132222222