The Markov property / no after-effect: definitions and case studies

The Markov property (no after-effect)

The foundation of Markov Decision Processes (MDPs) and dynamic programming is the Markov property, also called the no-after-effect property. In short: *the future is independent of the past, given the present.*
That is:

$$P(X_{n+1} \mid X_0, \ldots, X_n) = P(X_{n+1} \mid X_n)$$
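This equality can be checked empirically. The sketch below (a minimal illustration of my own, not from the original post; all names are made up) simulates a two-state Markov chain and estimates the conditional probability of the next state given the current state, split by what happened one step earlier. If the chain is Markov, the two estimates should agree.

```python
import random

# Transition probabilities of a two-state chain (rows sum to 1).
P = {0: [0.7, 0.3],   # from state 0: P(next=0)=0.7, P(next=1)=0.3
     1: [0.4, 0.6]}   # from state 1: P(next=0)=0.4, P(next=1)=0.6

def step(state, rng):
    """Sample the next state from the current one."""
    return 0 if rng.random() < P[state][0] else 1

rng = random.Random(0)
chain = [0]
for _ in range(200_000):
    chain.append(step(chain[-1], rng))

def cond_prob(prev):
    """Estimate P(X_{n+1}=1 | X_n=1, X_{n-1}=prev) from the sample path."""
    hits = [chain[i + 1] for i in range(1, len(chain) - 1)
            if chain[i] == 1 and chain[i - 1] == prev]
    return sum(hits) / len(hits)

# Both estimates should be close to P[1][1] = 0.6, regardless of X_{n-1}.
print(cond_prob(0), cond_prob(1))
```

Conditioning on the extra past state `X_{n-1}` barely moves the estimate, which is exactly what the equation above asserts.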

What is an "after-effect"?

Consider a few non-Markov cases (processes that do not have the Markov property):
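A classic non-Markov example is drawing balls from an urn without replacement. The sketch below (my own hypothetical illustration; the urn and names are assumptions, not from the original post) takes an urn with 2 red and 2 black balls and defines the "state" as the colour of the current draw. The distribution of the next colour then depends on the full history, not just the current state, so the process has an after-effect.

```python
from itertools import permutations

# Urn with 2 red ('R') and 2 black ('B') balls, drawn without replacement.
# All orderings of the four balls are equally likely.
balls = ['R', 'R', 'B', 'B']
seqs = list(permutations(balls))

def p_third_red(first, second):
    """Exact P(draw3 = 'R' | draw1 = first, draw2 = second) by enumeration."""
    matching = [s for s in seqs if s[0] == first and s[1] == second]
    return sum(s[2] == 'R' for s in matching) / len(matching)

# Same current state (draw2 = 'B'), but different histories:
p_after_RB = p_third_red('R', 'B')  # remaining balls {R, B} -> 0.5
p_after_BB = p_third_red('B', 'B')  # remaining balls {R, R} -> 1.0
print(p_after_RB, p_after_BB)
```

Because the two probabilities differ while the current state is the same, "colour of the current draw" is not a Markov state. Note that the process can be made Markov by enlarging the state to include the counts of balls remaining; whether a process is Markov depends on how the state is defined.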

Two textbook definitions

*Reinforcement Learning: An Introduction*, 2nd edition:
The state must include information about all aspects of the past agent–environment interaction that make a difference for the future. If it does, then the state is said to have the Markov property.

*Probability Theory and Mathematical Statistics*, Zhejiang University, 4th edition:
The Markov property (no after-effect): given the state of the process (or system) at time $t_0$, the conditional distribution of its state at any time $t > t_0$ does not depend on the states before $t_0$. In layman's terms: the future does not depend on the past.

An interesting interpretation:
http://blog.sciencenet.cn/blog-350729-665509.html


Origin: blog.csdn.net/houhuipeng/article/details/90691916