Anatomy of the Kalman family from scratch - (02) Derivation and understanding of the Bayesian formula → know the why

Series overview link: the most comprehensive SLAM-from-scratch series; for the Kalman family explained in this column, see "Anatomy of the Kalman family from scratch - (00) Catalog, latest explanation without blind spots": https://blog.csdn.net/weixin_43013761/article/details/133846882
 

Solemn declaration: this series of blog posts is exclusively owned by me (Wenhai Zhu). Reprinting and plagiarism are prohibited. Thank you for reporting violations promptly!
 

1. Introduction

If we started directly with the derivation of the Kalman filter formulas, I personally feel that would not be appropriate, because it would cause too much confusion: why do we need this thing, what is it for, and why must we use it? After all, we are no longer in school, where, as in high school, the goal is just the exam. My purpose here is to understand thoroughly, think it through, and be able to use it. In other words, when others use this algorithm, we can easily understand their purpose, why they chose it, and what its advantages are. Likewise, when a problem occurs while using someone else's open-source algorithm in a project, knowing the reasons lets us locate the cause faster, instead of fishing blindly and relying on luck. Without further ado, let's get started.

To explore Bayesian filtering, we have to mention the Bayes formula. Before deriving it in detail, let's look at some common situations from daily life:
(1) Roommate is late: when your roommate is late for a 9:00 class today, does the thought occasionally cross your mind: what is the probability that he got up before 8:30?
(2) Cause of a car accident: when you pass a car accident on the sidewalk, I believe many people wonder: was it caused by running a red light, or did it happen while driving normally?
(3) Children fighting: your child gets into a fight with a neighbor's child and comes to complain that he was bullied. You think to yourself: whose child started it?
(4) The detective solves the case: this example is quite typical. A murder occurs and three suspects are identified. Who is the real murderer, or who has the highest probability of being the murderer?

These situations all have one thing in common: deducing the cause from the result. If I have enough time, I will try to analyze the above cases with mathematical formulas. I will not analyze the relatively simple forward problems, such as: the probability that a card drawn at random from a deck is the Ace of Hearts; the probability of getting exactly 3 heads in 10 coin tosses; or, with 10 one-dollar notes, 5 five-dollar notes, and 1 ten-dollar note in a box, the expected amount obtained when 3 notes are drawn at random. These are all standard probability-theory questions.

There is a university course called Probability Theory and Mathematical Statistics. In fact, probability theory and mathematical statistics are two closely related but distinct fields of mathematics; both are used to study and understand uncertainty and random phenomena. Here are their definitions and main concerns:

Probability Theory:
Probability theory is a branch of mathematics that studies mathematical models of uncertain events and random phenomena, as well as their probabilistic properties and laws.
The main focus includes the probability of events, random variables, probability distribution, conditional probability, independence, expected value, variance, etc.
Probability theory is used to describe and analyze the likelihood of an event occurring. It provides a framework to quantify uncertainty so that reasonable decisions and inferences can be made.

Mathematical Statistics:
Mathematical statistics is a branch of mathematics concerned with collecting and analyzing data in order to make inferences and decisions about the characteristics of a population.
The main focus includes parameter estimation, hypothesis testing, confidence intervals, variance analysis, regression analysis, etc.
Mathematical statistics helps us extract information from collected data, understand population characteristics, and answer questions about the population through statistical inference.

Personal understanding: roughly speaking, forward reasoning (cause → effect) is more closely related to probability theory, while reverse reasoning (effect → cause) is more closely related to mathematical statistics.

Before explaining Bayesian filtering, you need to understand the Bayes formula. For the derivation of this formula, see "Anatomy of the Kalman family from scratch - (01) Preliminary knowledge points". Here are the Bayes formulas for discrete and continuous random variables:

(1) Discrete:

$$P\left(X_{i} \mid Y\right)=\frac{P\left(Y \mid X_{i}\right) P\left(X_{i}\right)}{P(Y)}=\frac{P\left(Y \mid X_{i}\right) P\left(X_{i}\right)}{\sum_{j=1}^{n} P\left(Y \mid X_{j}\right) P\left(X_{j}\right)} \tag{01}$$
(2) Continuous:

$$f_{X \mid Y}(x \mid y)=\frac{f_{X, Y}(x, y)}{f_{Y}(y)}=\frac{f_{Y \mid X}(y \mid x)\, f_{X}(x)}{\int_{-\infty}^{+\infty} f_{Y \mid X}(y \mid x)\, f_{X}(x)\, \mathrm{d} x}=\eta\, f_{Y \mid X}(y \mid x)\, f_{X}(x) \tag{02}$$

To deeply understand the role of the Bayes formula, let's work through a few examples to reinforce it, so that the formula is not merely derived but can actually be used.
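To make formula (01) concrete before the examples, here is a minimal Python sketch of the discrete case; the function name `discrete_posterior` and the list-based layout are my own illustration, not something prescribed by the formula:

```python
def discrete_posterior(prior, likelihood):
    """Bayes formula (01) for a discrete variable.

    prior[i]      -- P(X_i)
    likelihood[i] -- P(Y | X_i) for the observed event Y
    Returns the list of posteriors P(X_i | Y).
    """
    # Total probability of the observation: P(Y) = sum_j P(Y|X_j) P(X_j)
    evidence = sum(l * p for l, p in zip(likelihood, prior))
    return [l * p / evidence for l, p in zip(likelihood, prior)]
```

The division by `evidence` plays the same role as η in formula (02): it normalizes the posteriors so they sum to 1.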

2. Examples of the Bayesian formula

Example 1 (discrete):

Based on your understanding of your own child (subjective experience), assume that if he gets into a fight with another child, the probability that he struck first is P(active) = 0.2 and the probability that he only struck back is P(passive) = 0.8. In addition, the probability that he tells me after starting a fight is P(tell | active) = 0.3, and the probability that he does not tell me is P(not tell | active) = 0.7; likewise, P(tell | passive) = 0.9 and P(not tell | passive) = 0.1. For convenience, the conditions are listed together:

$$P(\text{active})=0.2 \qquad P(\text{passive})=0.8$$

$$P(\text{tell} \mid \text{active})=0.3 \qquad P(\text{not tell} \mid \text{active})=0.7$$

$$P(\text{tell} \mid \text{passive})=0.9 \qquad P(\text{not tell} \mid \text{passive})=0.1$$

Given these conditions: if the child tells you about today's fight, what is the probability that he started it? Expanding with the Bayes formula:

$$P(\text{active} \mid \text{tell})=\frac{P(\text{tell} \mid \text{active})\, P(\text{active})}{P(\text{tell} \mid \text{active})\, P(\text{active})+P(\text{tell} \mid \text{passive})\, P(\text{passive})}=\frac{0.06}{0.06+0.72} \approx 0.077$$

This is the result we wanted. If you doubt whether it is correct, you can also compute P(passive | tell); if the two probabilities add up to 1, the calculation is consistent:

$$P(\text{passive} \mid \text{tell})=\frac{P(\text{tell} \mid \text{passive})\, P(\text{passive})}{P(\text{tell} \mid \text{passive})\, P(\text{passive})+P(\text{tell} \mid \text{active})\, P(\text{active})}=\frac{0.72}{0.72+0.06} \approx 0.923$$

Since 0.077 + 0.923 = 1, the calculation above is consistent. Generally speaking, the probability that the child started today's fight is quite low. A point worth pondering: the prior probability that the child starts a fight was 0.2, yet after the calculation it drops to 0.077. This is because, given the prior knowledge of the child's personality, if he did start the fight he would most likely not tell you; the likelihood P(tell | active) = 0.3 corrects the prior downward, and 0.077 is this corrected (posterior) estimate.
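As a quick check, the illustrative `discrete_posterior` sketch from above reproduces these numbers:

```python
prior = [0.2, 0.8]          # P(active), P(passive)
likelihood = [0.3, 0.9]     # P(tell | active), P(tell | passive)

print(discrete_posterior(prior, likelihood))
# [0.0769..., 0.9230...] -- i.e. ~0.077 and ~0.923, summing to 1
```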

Example 2 (discrete):

Assume that 10% of the people in the world do not like yogurt and 90% like yogurt. The probability that someone who likes yogurt admits liking it is 99%, and the probability that someone who does not like yogurt claims to like it is 2%. If you interview a person and he says he likes yogurt, what is the probability that he really likes yogurt?

Answer: the probability of liking yogurt and saying so is 90% × 99%, and the probability of not liking yogurt but saying so is 10% × 2%. So the probability that this person really likes yogurt is:

$$\frac{90 \% \times 99 \%}{90 \% \times 99 \%+10 \% \times 2 \%} \approx 99.78 \%$$
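The same illustrative sketch covers this example as well:

```python
prior = [0.9, 0.1]          # P(likes yogurt), P(does not like yogurt)
likelihood = [0.99, 0.02]   # P(says "likes" | likes), P(says "likes" | does not like)

print(discrete_posterior(prior, likelihood)[0])   # ~0.9978
```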

Example 3 (continuous):

The previous two examples were both discrete and did not use a probability density function (PDF). Now let's give a continuous example using distance measurement. Suppose a renovated house needs to be inspected: the length of the living room must be measured to see whether it meets the standard. According to the drawings from the sales office, the living room is 900 cm long, and a deviation within ±1 cm meets the acceptance standard. We then buy a tape measure online; according to the official specification, its accuracy is ±2 cm (regardless of the distance measured). Beyond this accuracy figure, the manufacturer provides no additional parameters.

Now let's discuss the figure given by the sales office: 900 cm with a deviation within ±1 cm. Is this trustworthy? If it were fully credible there would be nothing to calculate, so we remain skeptical here, at least not 100% trusting. We therefore model this figure with a normal probability density function, as follows:
$$f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{(x-\mu)^{2}}{2 \sigma^{2}}}, \qquad X \sim N\left(\mu, \sigma^{2}\right) \tag{03}$$

where μ is the mean and σ is the standard deviation. The living-room length given by the sales office is a prior value: μ = 900 cm, with standard deviation σ = 1 cm. We then measure with the tape measure and record 897 cm; this is an observed value. So what do we actually want to obtain? The probability that the house meets the acceptance standard: if that probability is high, we consider the standard met. From the Bayes formula for continuous random variables, we get:
$$P(X \in [a, b] \mid Y=897)=\int_a^b f_{X \mid Y}(x \mid y)\, \mathrm{d}x=\int_a^b \eta\, f_{Y \mid X}(y \mid x)\, f_X(x)\, \mathrm{d}x \tag{04}$$

From the formula above, an integral is required. Here f_X(x) is the prior density, already assumed to be normal. f_{Y ∣ X}(y ∣ x) is the likelihood probability density function: in this example it describes, for a given true length X = x, the density of observing Y = 897. Based on the sensor accuracy information, it is also assumed to be normal, centered on the measurement 897 cm with standard deviation σ = 2 cm. So the following two normal densities are currently known:

$$\text{Prior probability density function:}\quad f_X(x) \sim N\left(900, 1^{2}\right) \qquad \text{Likelihood probability density function:}\quad f_{Y \mid X}(y \mid x) \sim N\left(897, 2^{2}\right) \tag{05}$$

Multiplying the two densities (with all constant factors absorbed into η):

$$\eta\, f_{Y \mid X}(y \mid x)\, f_X(x)=\eta\, \frac{1}{1 \cdot \sqrt{2\pi}} e^{-\frac{(x-900)^{2}}{2 \cdot 1^{2}}} \cdot \frac{1}{2\sqrt{2\pi}} e^{-\frac{(x-897)^{2}}{2 \cdot 2^{2}}}=\eta\, \frac{1}{\sqrt{0.8}\, \sqrt{2\pi}} e^{-\frac{(x-899.4)^{2}}{2 \cdot 0.8}} \tag{06}$$

Let's skip the detailed derivation of the product of two normal distributions for now (a later blog post will derive it in detail) and just look at the result. The posterior probability density function is η · N(899.4, 0.8). First, note the intuitive result: relative to the prior density N(900, 1²), the variance has dropped from 1 to 0.8 (standard deviation ≈ 0.89 cm), which means the accuracy has improved. Roughly speaking, the actual living-room length is most likely about 899.4 cm, with a deviation of about 0.9 cm; here η is a normalizing constant.
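As a numerical sanity check (my own sketch, not part of the original derivation), we can multiply the two densities of formula (05) on a grid and normalize; the moments match the closed form N(899.4, 0.8):

```python
import numpy as np

x = np.linspace(890.0, 910.0, 200001)    # fine grid around the prior mean
dx = x[1] - x[0]

def normal_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

prior = normal_pdf(x, 900.0, 1.0)        # f_X(x)
likelihood = normal_pdf(x, 897.0, 2.0)   # f_{Y|X}(897 | x)

product = likelihood * prior
posterior = product / (product.sum() * dx)   # dividing by the evidence = the role of eta

mean = (x * posterior).sum() * dx
var = ((x - mean) ** 2 * posterior).sum() * dx
print(mean, var)                         # ~899.4 and ~0.8
```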

Although we have reached this step, our question still seems unanswered: does the actual living room meet the acceptance standard? Also, the integral in formula (04) was left without explicit limits, although we know

$$\eta=1 \Big/ \int_{-\infty}^{+\infty} f_{Y \mid X}(y \mid x)\, f_{X}(x)\, \mathrm{d} x$$

According to the sales office, 899.0–901.0 cm meets the standard, so let us compute the probability that the living-room length is at least 899.0 cm, i.e. P(899.0 ≤ X | Y = 897). Essentially, this means integrating the posterior density η · N(899.4, 0.8) over x ∈ [899.0, +∞). If the resulting probability P is high enough, say greater than 80%, we can consider the developer's figure credible, which means the acceptance standard is met. I omit the hand calculation here; a numeric sketch follows below. As for dividing by η, its role is normalization: it makes P(−∞ ≤ X ≤ +∞ | Y = 897) equal to 1. To put it bluntly, the integral of the raw product f_{Y ∣ X}(y ∣ x) f_X(x) over the whole line is not 1, so it must be normalized.
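Here is that omitted calculation as a hedged sketch, assuming the posterior N(899.4, 0.8) derived above and using the standard `scipy.stats.norm` API:

```python
import numpy as np
from scipy.stats import norm

mu, var = 899.4, 0.8
sigma = np.sqrt(var)

# One-sided probability P(899.0 <= X | Y = 897), via the survival function
p_one_sided = norm.sf(899.0, loc=mu, scale=sigma)
print(p_one_sided)    # ~0.673

# Two-sided acceptance band P(899.0 <= X <= 901.0 | Y = 897), my own extension
p_band = norm.cdf(901.0, loc=mu, scale=sigma) - norm.cdf(899.0, loc=mu, scale=sigma)
print(p_band)         # ~0.636
```

Under these assumptions the one-sided probability comes out around 67%, below the 80% bar suggested above, so by that criterion the acceptance would still be in doubt.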

3. Bayesian formula for the normal distribution

The continuous, normally distributed case above (Example 3) can be obtained directly by substituting into the following formulas:

$$\text{Prior probability density function:}\quad f_X(x) \sim N\left(\mu_1, \sigma_1^{2}\right)=\frac{1}{\sigma_1 \sqrt{2 \pi}} e^{-\frac{(x-\mu_1)^{2}}{2 \sigma_1^{2}}} \tag{07}$$

$$\text{Likelihood probability density function:}\quad f_{Y \mid X}(y \mid x) \sim N\left(\mu_2, \sigma_2^{2}\right)=\frac{1}{\sigma_2 \sqrt{2\pi}} e^{-\frac{(x-\mu_2)^{2}}{2 \sigma_2^{2}}} \tag{08}$$

$$f_{X \mid Y}(x \mid y)=\frac{f_{X, Y}(x, y)}{f_{Y}(y)}=\frac{f_{Y \mid X}(y \mid x)\, f_{X}(x)}{\int_{-\infty}^{+\infty} f_{Y \mid X}(y \mid x)\, f_{X}(x)\, \mathrm{d} x} \tag{09}$$

$$\text{Posterior probability density function:}\quad f_{X \mid Y}(x \mid y)=N\left(\frac{\sigma_{1}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}} \mu_{2}+\frac{\sigma_{2}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}} \mu_{1},\ \frac{\sigma_{1}^{2} \sigma_{2}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}\right) \tag{10}$$

From formula (10) it is clear that, after correction by the likelihood density, the variance (error) of the prior density is reduced, because σ₁²σ₂²/(σ₁²+σ₂²) must be smaller than both σ₁² and σ₂². Expanding the product directly is tedious to calculate by hand; the mathematical tool Mathematica can output the result directly. If there is enough time later, the theoretical derivation may be expanded at length.
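Formula (10) amounts to a precision-weighted fusion of two Gaussians. A minimal sketch (the function name `fuse_gaussians` is my own), checked against the numbers of Example 3:

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    """Posterior mean and variance from formula (10)."""
    mean = (var1 * mu2 + var2 * mu1) / (var1 + var2)   # weighted toward the lower-variance source
    var = (var1 * var2) / (var1 + var2)                # always smaller than min(var1, var2)
    return mean, var

print(fuse_gaussians(900.0, 1.0**2, 897.0, 2.0**2))    # (899.4, 0.8), as in Example 3
```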

4. Bayesian filtering idea

To make this series of articles easier to read, some mathematical symbols need to be fixed here, to avoid confusion when the symbols appear later. The conventions, together with the derivations of the important formulas, are recorded in "Anatomy of the Kalman family from scratch - (01) Preliminary knowledge points"; I personally recommend going back to that post after reading this one.

Core (personal understanding): first of all, Bayesian filtering is not the concrete implementation of a specific algorithm but a guiding idea, similar to an abstract class in C++ programming that only declares some virtual functions. A more down-to-earth example: in primary-school Chinese class, children learn strokes, a horizontal, a vertical, a dot, and so on. These are like Bayesian filtering: basic and abstract. By combining strokes, actual Chinese characters can be written, such as "you", "me", "him", "good", "bad"; these concrete characters are like the Kalman filter, the particle filter, and so on. Every child writes characters, and as long as the strokes are combined correctly, the character is correct; yet every child's handwriting differs: some neat, some not, some large, some small, and all of these carry the child's own characteristics. Likewise, all walks of life and different algorithms may all use Kalman filtering, but not identically; there will be some differences. So when we use these instantiated algorithms, we also need to make some changes or adjustments based on the actual situation.

For example, in the three examples above, the assumed known conditions and Gaussian distributions were in fact already instantiations: Bayes gives P(Y | X_i) and P(X_i) in the abstract, but not their concrete values; the values were our assumptions. In actual applications, obtaining these probability values usually requires modeling. For example, if Gaussian distributions are used for the modeling, it can be called Gaussian filtering. So what is filtering? One sentence makes it easy to understand: the role of filtering is to reduce noise. For example, if you buy a thermometer whose specification states a sensor accuracy of ±1°, then this ±1° is noise, determined by the manufacturer through various tests. If a filtering algorithm can then improve the accuracy, say to a deviation of ±0.5°, the noise has clearly been reduced, as the sketch below illustrates.
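To put numbers on this noise-reduction claim (my own example, reusing the illustrative `fuse_gaussians` sketch from Section 3): fusing two independent readings of the same temperature, each with σ = 1°, already shrinks the standard deviation to about 0.71°:

```python
import numpy as np

# Two readings of the same temperature, each with variance 1.0 (sigma = 1 degree)
mean, var = fuse_gaussians(20.3, 1.0, 19.9, 1.0)
print(mean, np.sqrt(var))    # 20.1 and ~0.707: the fused estimate is less noisy
```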

5. Summary

Through this blog post, we mainly learned the following concepts: prior probability, likelihood probability, and posterior probability. To stay consistent with various textbooks and blogs later on, it should be mentioned here that the prior is also called the prediction, and the likelihood is also called the observation; these terms may be mixed in what follows, so please take note in advance to avoid confusion. As for:
$$f_{X \mid Y}(x \mid y)=\frac{f_{X, Y}(x, y)}{f_{Y}(y)}=\frac{f_{Y \mid X}(y \mid x)\, f_{X}(x)}{\int_{-\infty}^{+\infty} f_{Y \mid X}(y \mid x)\, f_{X}(x)\, \mathrm{d} x}=\eta\, f_{Y \mid X}(y \mid x)\, f_{X}(x) \tag{11}$$

this is a very important formula. Although it is abstract, many later derivations are based on it, or are concretizations and instantiations of it. There are thousands of ways to instantiate it, but that does not matter; we will analyze and explain them one by one later.

This blog post explained an example of Bayesian filtering for a continuous random variable using normal-distribution modeling. Strictly speaking, however, the example is not comprehensive: Bayesian filtering usually evolves along the time axis, with multiple prior and predicted values linked by a recursive relationship. The specific details will be analyzed in the next blog post.
