Artificial Intelligence: Uncertainty Reasoning Methods (3. Subjective Bayes Method; 4. Credibility Method)

4.3 Subjective Bayesian approach (empirical Bayesian approach)

4.3.1 Representation of knowledge uncertainty

In the subjective Bayesian approach, knowledge is represented as a rule of the form

IF E THEN (LS, LN) H (P(H))

where LS = P(E|H) / P(E|¬H) is the sufficiency measure of the rule (how strongly the presence of E supports H), and LN = P(¬E|H) / P(¬E|¬H) is the necessity measure (how strongly the absence of E opposes H). LS and LN together form the static strength of the knowledge and are given by domain experts.

4.3.2 Representation of evidentiary uncertainty

In the subjective Bayesian approach, the uncertainty of evidence is also expressed as a probability. For evidence E, the user gives P(E|S) based on an observation S; this is the dynamic strength. Since P(E|S) is difficult to give subjectively, in practice the credibility C(E|S) can be used instead. For example, in PROSPECTOR, C(E|S) is an integer in [−5, 5] related to P(E|S) by:

P(E|S) = P(E) × (C(E|S) + 5) / 5,　　　　　　　−5 ≤ C(E|S) < 0
P(E|S) = [C(E|S) + P(E) × (5 − C(E|S))] / 5,　　0 ≤ C(E|S) ≤ 5

so that C(E|S) = −5 means E is definitely false (P(E|S) = 0), C(E|S) = 0 means the observation is unrelated to E (P(E|S) = P(E)), and C(E|S) = 5 means E is definitely true (P(E|S) = 1).
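The PROSPECTOR-style conversion from a user's credibility value to P(E|S) can be sketched in Python (the function name is illustrative; the formulas are the standard piecewise linear relationship with endpoints C = −5 → 0, C = 0 → P(E), C = 5 → 1):

```python
def credibility_to_prob(c, p_e):
    """Convert a user-supplied credibility C(E|S) in [-5, 5] into P(E|S),
    using the piecewise linear relationship:
    C = -5 -> 0, C = 0 -> P(E), C = 5 -> 1."""
    if c < 0:
        return p_e * (c + 5) / 5.0
    return (c + p_e * (5 - c)) / 5.0

# Example with prior P(E) = 0.4
print(credibility_to_prob(-5, 0.4))  # 0.0  (E definitely false)
print(credibility_to_prob(0, 0.4))   # 0.4  (observation unrelated to E)
print(credibility_to_prob(5, 0.4))   # 1.0  (E definitely true)
```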

4.3.3 Algorithm for combining evidence uncertainty

1. Conjunction of multiple single pieces of evidence: E = E1 AND E2 AND … AND En
then the probability of the combined evidence: P(E|S) = min{P(E1|S), P(E2|S), …, P(En|S)}
2. Disjunction of multiple single pieces of evidence: E = E1 OR E2 OR … OR En
then the probability of the combined evidence: P(E|S) = max{P(E1|S), P(E2|S), …, P(En|S)}
3. Negation: P(¬E|S) = 1 − P(E|S)
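The three combination rules above can be sketched directly in Python (the helper names are illustrative):

```python
def conj_prob(probs):
    """P(E|S) for E = E1 AND ... AND En: take the minimum."""
    return min(probs)

def disj_prob(probs):
    """P(E|S) for E = E1 OR ... OR En: take the maximum."""
    return max(probs)

def neg_prob(p):
    """P(not-E | S) = 1 - P(E|S)."""
    return 1.0 - p

# Example: three pieces of evidence observed with different certainty
ps = [0.7, 0.9, 0.5]
print(conj_prob(ps))          # 0.5
print(disj_prob(ps))          # 0.9
print(round(neg_prob(0.7), 2))  # 0.3
```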

4.3.4 Uncertainty transfer algorithm

The task of subjective Bayes reasoning: based on the probability P(E) of the evidence E and the values of LS and LN, update the prior probability P(H) of H to the posterior probability P(H|E) or P(H|¬E).

The method for determining the posterior probability differs according to whether the evidence is definitely present, definitely absent, or uncertain.

  1. When the evidence definitely exists

     The odds function Θ(x) is introduced. Its relationship with probability is:
     Θ(x) = P(x) / (1 − P(x)),　P(x) = Θ(x) / (1 + Θ(x))
     When the evidence definitely exists, P(E) = P(E|S) = 1.
     From the Bayes formula:
     P(H|E) = P(E|H) × P(H) / P(E)　　　　(1)
     P(¬H|E) = P(E|¬H) × P(¬H) / P(E)　　(2)
     Dividing equation (1) by equation (2):
     P(H|E) / P(¬H|E) = [P(E|H) / P(E|¬H)] × [P(H) / P(¬H)]
     From the definition of LS and the odds function: Θ(H|E) = LS × Θ(H), that is,
     P(H|E) = LS × P(H) / [(LS − 1) × P(H) + 1]

  2. When the evidence definitely does not exist

     P(E) = P(E|S) = 0, P(¬E) = 1.
     From the Bayes formula:
     P(H|¬E) = P(¬E|H) × P(H) / P(¬E)　　　　(1)
     P(¬H|¬E) = P(¬E|¬H) × P(¬H) / P(¬E)　　(2)
     Dividing equation (1) by equation (2):
     P(H|¬E) / P(¬H|¬E) = [P(¬E|H) / P(¬E|¬H)] × [P(H) / P(¬H)]
     From the definition of LN and the odds function: Θ(H|¬E) = LN × Θ(H), that is,
     P(H|¬E) = LN × P(H) / [(LN − 1) × P(H) + 1]


  3. When the evidence is uncertain

     When 0 < P(E|S) < 1, the posterior probability P(H|S) is calculated with the formula proved by Duda et al. in 1976:
     P(H|S) = P(H|E) × P(E|S) + P(H|¬E) × P(¬E|S)
     When P(E|S) = 1, the evidence definitely exists, and P(H|S) = P(H|E).
     When P(E|S) = 0, the evidence definitely does not exist, and P(H|S) = P(H|¬E).
     When P(E|S) = P(E), the evidence E is unrelated to the observation S; the total probability formula gives
     P(H|S) = P(H|E) × P(E) + P(H|¬E) × P(¬E) = P(H)
     For other values of P(E|S), P(H|S) is obtained by piecewise linear interpolation through these three points (the EH formula):
     P(H|S) = P(H|¬E) + [P(H) − P(H|¬E)] / P(E) × P(E|S),　　　　　　　　0 ≤ P(E|S) < P(E)
     P(H|S) = P(H) + [P(H|E) − P(H)] / [1 − P(E)] × [P(E|S) − P(E)],　　P(E) ≤ P(E|S) ≤ 1
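The whole uncertainty transfer step, from LS, LN, and the prior P(H) to the interpolated P(H|S), can be sketched in Python (function names are illustrative; the interpolation assumes the standard EH formula through the points P(E|S) = 0, P(E), and 1):

```python
def odds(p):
    """Theta(x) = P(x) / (1 - P(x))."""
    return p / (1.0 - p)

def prob(theta):
    """P(x) = Theta(x) / (1 + Theta(x))."""
    return theta / (1.0 + theta)

def posterior_e(ls, p_h):
    """P(H|E) when the evidence definitely exists: Theta(H|E) = LS * Theta(H)."""
    return prob(ls * odds(p_h))

def posterior_not_e(ln, p_h):
    """P(H|~E) when the evidence definitely does not exist: Theta(H|~E) = LN * Theta(H)."""
    return prob(ln * odds(p_h))

def posterior_s(p_h, p_e, ls, ln, p_e_given_s):
    """P(H|S) by piecewise linear interpolation (the EH formula)."""
    if p_e_given_s <= p_e:
        # segment from (0, P(H|~E)) to (P(E), P(H))
        p_not_e = posterior_not_e(ln, p_h)
        return p_not_e + (p_h - p_not_e) / p_e * p_e_given_s
    # segment from (P(E), P(H)) to (1, P(H|E))
    return p_h + (posterior_e(ls, p_h) - p_h) / (1.0 - p_e) * (p_e_given_s - p_e)

# Example: LS = 20, LN = 0.1, P(H) = 0.3, P(E) = 0.5
print(round(posterior_e(20, 0.3), 4))                 # 0.8955
print(round(posterior_not_e(0.1, 0.3), 4))            # 0.0411
print(round(posterior_s(0.3, 0.5, 20, 0.1, 0.5), 4))  # 0.3  (P(E|S) = P(E) leaves P(H) unchanged)
```

Note how LS = 20 sharply raises the posterior when E is present, while LN = 0.1 sharply lowers it when E is absent, matching the sufficiency/necessity interpretation of the two static strengths.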

4.3.5 Synthesis algorithm for conclusion uncertainty

All knowledge rules are connected into a directed graph. The nodes of the graph represent hypotheses and conclusions, and the arcs represent rules. Each arc carries the two numerical values (LS, LN), which measure the sufficiency and necessity of the rule. Such a directed graph is called an inference network.

  • The inference network connects some evidence with some important hypotheses and conclusions

  • The leaf nodes are the evidence obtained by asking the user questions, and the other nodes are the conclusion hypotheses.

  • Each conclusion is first assigned a prior probability P(H). The arcs of the inference network measure how a change in the probability of one conclusion affects the other conclusions.

Characteristics and advantages of the subjective Bayes method:

  • It has a solid theoretical foundation.

  • The static strengths LS and LN of knowledge are given by domain experts, so the derived conclusions are relatively accurate and certain.

  • It provides methods for updating the posterior probability not only when the evidence definitely exists or definitely does not exist, but also when the evidence is uncertain, realizing the step-by-step transfer of uncertainty.

Disadvantages: it requires domain experts to give the prior probability P(H) of H along with the knowledge, which is difficult. Moreover, the Bayes formula requires events to be independent, which limits its application.

Methods for obtaining prior probabilities mainly include:

(1) Empirical probabilities derived from the distribution of a large number of experimental samples.

(2) Classical probabilities derived from the principle that equally likely events have equal probability.

(3) Subjective probabilities based on human subjective judgment. When there is no prior information, the prior probabilities of the possible events can be set equal (the principle of indifference).

4.4 Credibility method

In 1975, E. H. Shortliffe and others at Stanford University proposed an uncertainty reasoning method based on the theory of confirmation combined with probability theory. It was successfully applied in the blood-disease diagnosis expert system MYCIN.
Advantages: intuitive, simple, and effective

4.4.1 The concept of credibility

Credibility: the degree to which, based on experience, an object or phenomenon is believed to be true.
Credibility is highly subjective and empirical, and its accuracy is difficult to guarantee.
C-F (Certainty Factor) model: a basic method of uncertainty reasoning based on credibility representation.

4.4.2 CF concept

CF is obtained by subtracting MD (the measure of increased disbelief) from MB (the measure of increased belief):

CF(H,E) = MB(H,E) − MD(H,E)

MB and MD are defined from the prior P(H) and the posterior P(H|E):

MB(H,E) = 1, if P(H) = 1; otherwise [max{P(H|E), P(H)} − P(H)] / [1 − P(H)]
MD(H,E) = 1, if P(H) = 0; otherwise [min{P(H|E), P(H)} − P(H)] / [0 − P(H)]

When MB(H,E) > 0, the appearance of evidence E has increased the degree of belief in H.
When MD(H,E) > 0, the appearance of evidence E has increased the degree of disbelief in H.
The same evidence E cannot both increase the belief in H and increase the disbelief in H, so MB(H,E) and MD(H,E) are mutually exclusive:
When MB(H,E) > 0, MD(H,E) = 0
When MD(H,E) > 0, MB(H,E) = 0


  1. Representation of knowledge uncertainty

    Knowledge uncertainty is represented by attaching a certainty factor to the rule:

    IF E THEN H (CF(H,E))

    where CF(H,E) ∈ [−1, 1] is given by a domain expert: CF(H,E) > 0 means E supports H, CF(H,E) < 0 means E supports ¬H, and CF(H,E) = 0 means E is irrelevant to H.

  2. Expression of evidential uncertainty

    Static strength CF (H, E): the strength of knowledge, that is, the degree of influence on H when the evidence corresponding to E is true.

    Dynamic strength CF(E): the current degree of uncertainty of evidence E

  3. Combined evidence uncertainty.

    For a conjunction E = E1 AND E2 AND … AND En: CF(E) = min{CF(E1), CF(E2), …, CF(En)}
    For a disjunction E = E1 OR E2 OR … OR En: CF(E) = max{CF(E1), CF(E2), …, CF(En)}
    For negation: CF(¬E) = −CF(E)

  4. Uncertainty transfer algorithm

Uncertainty reasoning in the C-F model: Starting from uncertain initial evidence and using relevant uncertainty knowledge, the conclusion is finally derived and the credibility value of the conclusion is obtained. The credibility of conclusion H is calculated by the following formula:
CF(H) = CF(H,E) × max{0,CF(E)}

When CF(E)<0, then CF(H)=0

When CF(E)=1, then CF(H)= CF(H,E)
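The transfer step, together with the min/max rules for combined evidence, can be sketched in Python (the helper names are illustrative):

```python
def cf_conj(cfs):
    """Combined evidence E1 AND ... AND En: CF(E) = min{CF(Ei)}."""
    return min(cfs)

def cf_disj(cfs):
    """Combined evidence E1 OR ... OR En: CF(E) = max{CF(Ei)}."""
    return max(cfs)

def cf_transfer(cf_h_e, cf_e):
    """CF(H) = CF(H,E) * max{0, CF(E)}: evidence with CF(E) < 0 contributes nothing."""
    return cf_h_e * max(0.0, cf_e)

# Rule: IF E1 AND E2 THEN H with CF(H,E) = 0.8, given CF(E1) = 0.9, CF(E2) = 0.6
cf_e = cf_conj([0.9, 0.6])
print(round(cf_transfer(0.8, cf_e), 2))  # 0.48
print(cf_transfer(0.8, -0.3))            # 0.0  (negative evidence gives CF(H) = 0)
```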

  5. Synthesis algorithm for conclusion uncertainty

If two rules support the same conclusion H and yield certainty factors CF1(H) and CF2(H) independently, they are combined as:

CF1,2(H) = CF1(H) + CF2(H) − CF1(H) × CF2(H),　　if CF1(H) ≥ 0 and CF2(H) ≥ 0
CF1,2(H) = CF1(H) + CF2(H) + CF1(H) × CF2(H),　　if CF1(H) < 0 and CF2(H) < 0
CF1,2(H) = [CF1(H) + CF2(H)] / [1 − min{|CF1(H)|, |CF2(H)|}],　　otherwise
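Assuming the standard C-F model parallel-combination rules (same-sign CFs reinforce, mixed-sign CFs are normalized by the smaller magnitude), the synthesis can be sketched as:

```python
def cf_combine(cf1, cf2):
    """Combine two certainty factors for the same conclusion H
    obtained from independent pieces of evidence."""
    if cf1 >= 0 and cf2 >= 0:
        # both supportive: reinforce, stays below 1
        return cf1 + cf2 - cf1 * cf2
    if cf1 < 0 and cf2 < 0:
        # both opposing: reinforce negatively, stays above -1
        return cf1 + cf2 + cf1 * cf2
    # conflicting evidence: normalize by the weaker magnitude
    return (cf1 + cf2) / (1.0 - min(abs(cf1), abs(cf2)))

# Two rules both support H
print(round(cf_combine(0.6, 0.5), 2))   # 0.8
# Conflicting evidence partially cancels
print(round(cf_combine(0.6, -0.4), 4))  # 0.3333
```

The combination is commutative, so when more than two rules conclude H, their CF values can be folded in pairwise in any order.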


Origin blog.csdn.net/weixin_64625466/article/details/134551948