HongYiLee Classification Notes

Tags: Notes MachineLearning DeepLearning Classification


Classification

Generative Model 生成模型

$P(x) = P(x \mid C_1)P(C_1) + P(x \mid C_2)P(C_2)$: generate $P(x)$ from the prior probabilities $P(C_1)$, $P(C_2)$ and the class-conditional densities.

Prior probabilities $P(C_1)$, $P(C_2)$

Assume the points are sampled from a Gaussian distribution.
Given the points, how do we find that Gaussian distribution?

But a Gaussian distribution with any mean $\mu$ and covariance matrix $\Sigma$ could have generated these points. So we use the likelihood function to compare them.

We want to find the maximum of this likelihood function:
$\mu^*, \Sigma^* = \arg\max_{\mu, \Sigma} L(\mu, \Sigma)$
which gives: $\mu^* = \frac{1}{n} \sum_{i=1}^{n} x^i$
and: $\Sigma^* = \frac{1}{n} \sum_{i=1}^{n} (x^i - \mu^*)(x^i - \mu^*)^T$

So Class 1 is one Gaussian distribution and Class 2 is another; once we estimate $\mu$ and $\Sigma$ for each distribution, we can compute the class-conditional probabilities and solve the classification problem.
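The maximum-likelihood estimates above are just the sample mean and the (biased, divide-by-$n$) sample covariance. A minimal NumPy sketch, with `fit_gaussian` as a hypothetical helper name:

```python
import numpy as np

def fit_gaussian(X):
    """Return the MLE mean and covariance of points X (shape [n, d])."""
    mu = X.mean(axis=0)
    centered = X - mu
    sigma = centered.T @ centered / len(X)  # MLE divides by n, not n - 1
    return mu, sigma
```

Fitting one Gaussian per class with this helper gives the $\mu$, $\Sigma$ pair each class needs.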

How to Modify the Model

Different distributions can share the same $\Sigma$. What shared $\Sigma$ is appropriate? Take the weighted average of the two classes' covariance matrices; that becomes the single covariance matrix shared by both classes.
When we share the same covariance matrix, the boundary becomes linear and the performance improves.

Posterior Probability (an elegant derivation, starting now)

$P(C_1 \mid x) = \frac{P(x \mid C_1)P(C_1)}{P(x \mid C_1)P(C_1) + P(x \mid C_2)P(C_2)} = \frac{1}{1 + \frac{P(x \mid C_2)P(C_2)}{P(x \mid C_1)P(C_1)}} = \frac{1}{1 + e^{-z}}$

Here we let $z = \ln \frac{P(x \mid C_1)P(C_1)}{P(x \mid C_2)P(C_2)}$. Look familiar? That's exactly the sigmoid function!
When the two classes share the same covariance matrix, $z$ turns out to be linear in $x$:
$z = w^T x + b$

$P(C_1 \mid x) = \frac{1}{1 + e^{-z}} = \sigma(z) = \sigma(w^T x + b)$

So we arrive at Logistic Regression.
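With a shared $\Sigma$, expanding $z$ gives the standard closed form $w = \Sigma^{-1}(\mu^1 - \mu^2)$ and $b = -\frac{1}{2}(\mu^1)^T \Sigma^{-1} \mu^1 + \frac{1}{2}(\mu^2)^T \Sigma^{-1} \mu^2 + \ln \frac{N_1}{N_2}$. A sketch that computes these (`posterior_params` is a hypothetical name), so the posterior is just a sigmoid of a linear function:

```python
import numpy as np

def posterior_params(mu1, mu2, sigma, n1, n2):
    """Closed-form w, b such that P(C1|x) = sigmoid(w @ x + b),
    assuming both classes share the covariance matrix `sigma`."""
    inv = np.linalg.inv(sigma)
    w = inv @ (mu1 - mu2)
    b = (-0.5 * mu1 @ inv @ mu1
         + 0.5 * mu2 @ inv @ mu2
         + np.log(n1 / n2))
    return w, b
```

The sigmoid of $w^T x + b$ matches the direct Bayes computation, because the Gaussian normalizing constants cancel in the likelihood ratio.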

Reprinted from blog.csdn.net/weixin_39457086/article/details/80979582