Mathematics study notes (to be continued)
- Hidden Markov model (HMM) algorithm
First, a probabilistic and statistical mathematical model | Application: Chinese word segmentation
The task is to estimate the process's hidden parameters (the underlying Markov chain, i.e. the state-transition sequence) from the observable parameters, as well as the implied relationships between the parameters (the transition probabilities).
Markov assumption:
A state transition depends only on the previous n states; when n = 1, this is the (first-order) Markov assumption.
Independence assumption: any observation depends only on the state of the Markov chain at that moment, and is independent of the other states and observations.
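The two assumptions above can be made concrete with a minimal sketch of the quantities they talk about. All state names, symbols, and numbers below are illustrative, not from the notes:

```python
import numpy as np

# Illustrative ingredients of a hidden Markov model.
states = ["state_0", "state_1"]   # hidden states of the Markov chain
symbols = ["a", "b", "c"]         # the observable alphabet

pi = np.array([0.5, 0.5])         # initial state distribution P(s_1)

# Markov assumption (n = 1): the next state depends only on the current one,
# so a single matrix A[i, j] = P(s_{t+1} = j | s_t = i) suffices.
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])

# Independence assumption: an observation depends only on the state at that
# moment, so a single matrix B[i, k] = P(o_t = k | s_t = i) suffices.
B = np.array([[0.6, 0.3, 0.1],
              [0.1, 0.2, 0.7]])

# Every row of a probability table must sum to 1.
assert np.allclose(A.sum(axis=1), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
```

The point of the two assumptions is exactly this compression: instead of a distribution over whole histories, one transition matrix and one emission matrix describe the entire process.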
Example 1:
A series of numbers is rolled with several different dice; the task is to infer which die was used for each roll.
Premise (output probability): for each die, the probability of each face is known.
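The dice example can be sketched with the Viterbi algorithm, which recovers the most likely hidden sequence of dice behind the observed rolls. This is my own illustrative setup, not from the notes: two dice (one fair, one loaded toward six) and made-up transition probabilities:

```python
import numpy as np

# State 0 = fair die, state 1 = loaded die (all probabilities illustrative).
pi = np.array([0.5, 0.5])                       # start with either die
A = np.array([[0.95, 0.05],                     # P(next die | current die)
              [0.10, 0.90]])
B = np.vstack([np.full(6, 1 / 6),               # fair die: uniform faces
               [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]])  # loaded die favours six

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for an observation sequence (0-indexed faces)."""
    n_states, T = len(pi), len(obs)
    delta = np.zeros((T, n_states))             # best path probability so far
    psi = np.zeros((T, n_states), dtype=int)    # back-pointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]          # trace back the best path
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

rolls = [5, 5, 5, 0, 1, 2]   # faces as 0-based indices (5 means "six")
print(viterbi(rolls, pi, A, B))   # prints [1, 1, 1, 1, 1, 1]
```

With these sticky transitions, the run of sixes makes the loaded die the most likely explanation for the whole short sequence; longer fair-looking stretches would eventually switch the path back to state 0.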
Example 2:
From the day's activities, infer the day's weather.
Premise (output probability): the probability of each activity occurring under each kind of weather is known.
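The weather example also shows a second basic question: how likely is an observed activity sequence under the model at all? The forward algorithm answers this by summing over every possible hidden weather sequence. The numbers below are a common textbook-style illustration, not from the notes:

```python
import numpy as np

states = ["Rainy", "Sunny"]
activities = ["walk", "shop", "clean"]

pi = np.array([0.6, 0.4])            # initial weather distribution
A = np.array([[0.7, 0.3],            # weather-to-weather transitions
              [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5],       # P(activity | Rainy)
              [0.6, 0.3, 0.1]])      # P(activity | Sunny)

def forward(obs, pi, A, B):
    """P(o_1 .. o_T), summed over all possible hidden weather sequences."""
    alpha = pi * B[:, obs[0]]        # joint prob. of prefix and current state
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

seq = [0, 1, 2]                      # walk, shop, clean
print(forward(seq, pi, A, B))        # prints 0.033612
```

Viterbi and the forward algorithm share the same recursion shape; the only difference is that Viterbi takes a max over predecessor states while forward takes a sum.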
Speech Recognition:
From the voice signal o1, o2, o3, infer the text s1, s2, s3 that the speaker wants to say (not just simple transcription, but recognizing the instruction the user is issuing to the computer).
The role of algorithms in software: the above is only a brief summary of a short stretch of recent study; the road ahead is long, and I still need to understand the use of specific matrices, functions, and so on. On learning models, it can be summed up as: the depth of one's mathematical foundation determines the height of one's algorithmic thinking, and to turn that thinking into reality, any excellent software implementation of an algorithm requires solid fundamentals. My steps for studying obscure, hard-to-understand knowledge: first read the excellent summary blogs online, which use simple examples (dice, weather events) to explain the ideas and ease the entry; then read the outstanding papers, since difficult algorithmic ideas must be digested slowly.
Open question: why is speech recognition converted into P(o1 ... | s1 ...)? And in speech recognition, how are P(o1 ... | s1 ...) and P(s1, s2, s3, ... | o1, o2, o3, ...) actually computed?
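A standard hint toward this open question (my own note, not from the references above): Bayes' rule turns the posterior we want into quantities the HMM actually models:

```latex
P(s_1,\dots,s_n \mid o_1,\dots,o_n)
  = \frac{P(o_1,\dots,o_n \mid s_1,\dots,s_n)\,P(s_1,\dots,s_n)}{P(o_1,\dots,o_n)}
  \propto P(o_1,\dots,o_n \mid s_1,\dots,s_n)\,P(s_1,\dots,s_n)
```

The denominator is the same for every candidate text, so maximizing the posterior reduces to maximizing P(o | s) P(s); the independence assumption factors P(o | s) into per-step emission probabilities, and the Markov assumption factors P(s) into per-step transition probabilities.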
Study references:
http://www.52nlp.cn/hmm-learn-best-practices-four-hidden-markov-models
https://www.cnblogs.com/bigmonkey/p/7230668.html (highly recommended as an entry point!!!)
https://wiki.mbalib.com/wiki/%E9%9A%90%E9%A9%AC%E5%B0%94%E6%9F%AF%E5%A4%AB
https://www.cnblogs.com/skyme/p/4651331.html
On coding standards:
《高质量C++/C编程指南》 (High-Quality Programming Guide for C++/C), author: 林锐 (Lin Rui)
https://wenku.baidu.com/view/07631b604a73f242336c1eb91a37f111f1850d37.html
Outlook: this semester I hope to use the knowledge I have learned to build a small "prediction" program that applies a mathematical model.