Machine Learning Foundations Notes, Lecture 2: Learning to Answer Yes/No




Perceptron

A Simple Hypothesis Set: the ‘Perceptron’


The perceptron is analogous to a neuron in a neural network; the threshold is like the 60-point passing mark on an exam.

Vector Form of Perceptron Hypothesis

each ‘tall’ w represents a hypothesis h and is multiplied with a ‘tall’ x; we will use the tall (vector) versions to simplify the notation: $h(x) = \text{sign}(w^T x)$, with $x_0 = 1$ and $w_0 = -\text{threshold}$.
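A minimal sketch of the vector-form hypothesis in Python (assuming NumPy; the 60-point exam threshold is just the running example from these notes):

```python
import numpy as np

def perceptron_h(w, x):
    """Perceptron hypothesis h(x) = sign(w^T x).
    Convention: x[0] = 1 and w[0] = -threshold, so the threshold
    is folded into the weight vector."""
    return 1 if np.dot(w, x) > 0 else -1

# One feature (exam score), threshold 60: w[0] = -60
w = np.array([-60.0, 1.0])
print(perceptron_h(w, np.array([1.0, 75.0])))   # 75 > 60 -> +1
print(perceptron_h(w, np.array([1.0, 40.0])))   # 40 < 60 -> -1
```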

Perceptrons in R2


Fun time


Select g from H

Enumerating every hypothesis in H is infeasible, so we search for g iteratively instead.

Perceptron Learning Algorithm

A fault confessed is half redressed.

Since $w_t^T x_{n(t)} = \|w_t\|\,\|x_{n(t)}\|\cos(w_t, x_{n(t)})$, the inner product is negative when the angle between the two vectors exceeds 90°, and positive otherwise.
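The correction $w_{t+1} = w_t + y_{n(t)} x_{n(t)}$ always increases the score $y_n w^T x_n$ on the misclassified point by exactly $\|x_n\|^2$, since $y_n^2 = 1$. A small numeric check with a hypothetical weight vector and point:

```python
import numpy as np

# Hypothetical misclassified point: y * w^T x <= 0
w = np.array([1.0, -1.0])
x = np.array([0.5, 1.0])
y = 1                                # true label
before = y * np.dot(w, x)            # <= 0, i.e. a mistake
w_new = w + y * x                    # PLA correction step
after = y * np.dot(w_new, x)
# after - before == ||x||^2 > 0: the score on this point always improves
print(before, after)
```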

Fun time


What does this mean, and why is the statement not correct?

Implementation

start from some w0 (say, 0; the initialization is not random) and ‘correct’ its mistakes on D; the next mistake can be found by the naïve cycle (1, · · · , N) or a precomputed random cycle

(note: each xₙ includes x₀ = 1, omitted from the figures for visual purposes) Why does the correction work?
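Putting the pieces together, a minimal PLA sketch under the conventions above (w₀ = 0, x₀ = 1 prepended, naïve cycle); the toy dataset is hypothetical:

```python
import numpy as np

def pla(X, y, max_iter=1000):
    """Perceptron Learning Algorithm with the naive cycle.
    X: (N, d) inputs without the constant coordinate; y: labels in {-1, +1}.
    Starts from w0 = 0 (not random) and corrects one mistake at a time."""
    X = np.hstack([np.ones((len(X), 1)), X])   # prepend x0 = 1
    w = np.zeros(X.shape[1])                   # w0 = 0, not random
    for _ in range(max_iter):
        mistake = False
        for xn, yn in zip(X, y):               # naive cycle over 1..N
            if np.sign(np.dot(w, xn)) != yn:   # sign(0) also counts as a mistake
                w += yn * xn                   # correct the mistake
                mistake = True
        if not mistake:                        # one full clean pass: halt
            break
    return w

# Hypothetical linearly separable toy data: label = sign(x1 - x2)
X = np.array([[2.0, 0.0], [0.0, 2.0], [3.0, 1.0], [1.0, 3.0]])
y = np.array([1, -1, 1, -1])
w = pla(X, y)
X_aug = np.hstack([np.ones((len(X), 1)), X])
preds = np.sign(X_aug @ w)                     # matches y on separable data
```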

Issues of PLA


Linear Separability

Assume D is linearly separable: does PLA always halt?

halts!

Since $\frac{w_f^T w_T}{\|w_f\|\,\|w_T\|} \le 1$ (a cosine is at most 1), $T$ must have an upper bound.

PLA Fact: wt Gets More Aligned with wf


$w_t$ appears more aligned with $w_f$ after the update. Really?

PLA Fact: wt Does Not Grow Too Fast

$w_f^T w_T \ge w_f^T w_{T-1} + \min_n y_n w_f^T x_n \ge \cdots \ge w_f^T w_0 + T \min_n y_n w_f^T x_n = T \min_n y_n w_f^T x_n \ge T\rho\,\|w_f\| \quad (A)$

where $\rho = \min_n y_n \frac{w_f^T}{\|w_f\|} x_n > 0$.

$\|w_T\|^2 \le \|w_{T-1}\|^2 + \max_n \|y_n x_n\|^2 \le \cdots \le \|w_0\|^2 + T \max_n \|x_n\|^2 = T R^2 \quad (B)$

where $R^2 = \max_n \|x_n\|^2$.

Note that the derivation uses $w_0 = 0$. Substituting (A) and (B) gives

$\frac{w_f^T}{\|w_f\|} \frac{w_T}{\|w_T\|} \ge \frac{T\rho\,\|w_f\|}{\|w_f\|\,\sqrt{T}R} = \sqrt{T}\cdot\frac{\rho}{R}$,

and since this cosine is at most 1, $T \le \frac{R^2}{\rho^2}$.

This is only an upper bound, and it cannot be computed in practice because $w_f$ is unknown.
Even with $w_0 \ne 0$, an upper bound can still be proven.
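As a numeric sanity check of the bound $T \le R^2/\rho^2$, a minimal sketch on a hypothetical two-point dataset where $w_f$ is simply a separator we know in advance:

```python
import numpy as np

# Hypothetical two-point separable dataset (x0 = 1 already included)
X = np.array([[1.0, 2.0, 0.0],
              [1.0, 0.0, 2.0]])
y = np.array([1, -1])
w_f = np.array([0.0, 1.0, -1.0])   # a known separator: y_n * w_f^T x_n > 0

R2 = max(float(np.dot(x, x)) for x in X)                  # R^2 = max_n ||x_n||^2
rho = min(yn * np.dot(w_f, xn) for xn, yn in zip(X, y)) / np.linalg.norm(w_f)
bound = R2 / rho**2                                       # T <= R^2 / rho^2
print(bound)                                              # approximately 2.5
```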

Properties of PLA


Learning with Noisy Data

Finding the weights with the fewest mistakes on noisy data is an NP-hard problem.

Pocket Algorithm

modify the PLA algorithm (the black lines on the slide) by keeping the best weights seen so far in a ‘pocket’
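A sketch of the pocket variant, assuming NumPy and a hypothetical noisy toy dataset; picking a random mistake each round is one common choice, and `max_updates` is an assumed budget:

```python
import numpy as np

def pocket(X, y, max_updates=200, seed=0):
    """Pocket algorithm: run PLA-style corrections on possibly noisy data,
    but keep the best weights seen so far 'in the pocket'."""
    rng = np.random.default_rng(seed)
    X = np.hstack([np.ones((len(X), 1)), X])   # prepend x0 = 1
    w = np.zeros(X.shape[1])
    errs = lambda v: int(np.sum(np.sign(X @ v) != y))
    best_w, best_err = w.copy(), errs(w)
    for _ in range(max_updates):
        wrong = np.flatnonzero(np.sign(X @ w) != y)
        if wrong.size == 0:                    # no mistakes left: w is perfect
            return w
        n = rng.choice(wrong)                  # pick a random mistake
        w = w + y[n] * X[n]                    # PLA correction step
        if errs(w) < best_err:                 # better than the pocket? swap in
            best_w, best_err = w.copy(), errs(w)
    return best_w

# Noisy toy data: one flipped label keeps plain PLA from ever halting
X = np.array([[2.0, 0.0], [0.0, 2.0], [3.0, 1.0], [1.0, 3.0], [4.0, 0.0]])
y = np.array([1, -1, 1, -1, -1])               # last label is "noise"
w = pocket(X, y)
X_aug = np.hstack([np.ones((len(X), 1)), X])
mistakes = int(np.sum(np.sign(X_aug @ w) != y))
```

Unlike plain PLA, the returned weights are the best ones encountered, so the error never gets worse as the budget grows.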


Reposted from blog.csdn.net/soidnhp/article/details/50359570