Judging by its name, Gradient Boosted Decision Tree consists of three parts, where "Boosted" refers to the use of AdaBoost. Let us first look at the combination of the last two parts, the AdaBoost decision tree (AdaBoost-DTree).
AdaBoost Decision Tree (AdaBoost-DTree)
As the name implies, an AdaBoost decision tree takes decision trees as the base model of AdaBoost. The training procedure is:
$$
\begin{aligned} &\text{function AdaBoost-DTree}(\mathcal{D}) \\ &\text{For } t = 1, 2, \ldots, T \\ &\qquad \text{reweight data by } \mathbf{u}^{(t)} \\ &\qquad \text{obtain tree } g_t \text{ by DTree}\left(\mathcal{D}, \mathbf{u}^{(t)}\right) \\ &\qquad \text{calculate 'vote' } \alpha_t \text{ of } g_t \\ &\text{return } G = \text{LinearHypo}\left(\left\{\left(g_t, \alpha_t\right)\right\}\right) \end{aligned}
$$
Clearly we now need a decision tree that can serve as a weighted base algorithm, i.e. an implementation $\text{DTree}\left(\mathcal{D}, \mathbf{u}^{(t)}\right)$ that accepts example weights. But decision trees embody many clever heuristics and are quite complex, so it is hard to track down every term related to $E_\text{in}$ and inject the weights into it. We therefore fall back from re-weighting to re-sampling: copy the training examples instead of handing the weights to the tree directly.
How can bootstrap carry the weight information? Through sampling probabilities. In the original AdaBoost the weights replace example duplication; now, to let the decision tree receive the weight information, we should duplicate each example in proportion to its weight. Expressed as re-sampling, this means using non-uniform sampling probabilities: each example is drawn with probability monotonically related to its own weight. This probabilistic bootstrap produces the required training set $\tilde{\mathcal{D}}_t$.
So AdaBoost-DTree is usually realized as AdaBoost + sampling $\propto \mathbf{u}^{(t)}$ + $\operatorname{DTree}\left(\tilde{\mathcal{D}}_t\right)$.
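The probabilistic bootstrap described above can be sketched as follows (a minimal illustration; the function name `weighted_bootstrap` and the toy data are assumptions, not from the course):

```python
import numpy as np

def weighted_bootstrap(X, y, u, rng=None):
    """Draw a bootstrap sample where example n is picked with
    probability proportional to its AdaBoost weight u[n]."""
    rng = np.random.default_rng(rng)
    N = len(y)
    p = np.asarray(u, dtype=float)
    p = p / p.sum()                       # normalize weights to probabilities
    idx = rng.choice(N, size=N, replace=True, p=p)
    return X[idx], y[idx]

X = np.arange(10).reshape(5, 2)
y = np.array([1, -1, 1, -1, 1])
u = np.array([8.0, 0.5, 0.5, 0.5, 0.5])   # the first example dominates
Xs, ys = weighted_bootstrap(X, y, u, rng=0)
```

The resulting $(\tilde{\mathcal{D}}_t)$ sample can then be fed to any off-the-shelf decision tree, which never needs to see the weights themselves.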
Weak Decision Tree (Weak Decision Tree)
From the earlier study of decision trees, we know that on noise-free data a fully grown tree reaches $E_\text{in} = 0$, and in AdaBoost this is more than an overfitting problem. If the drawn sample behaves exactly like the weight vector, we can train a fully grown tree with $E_\text{in}^{\mathbf{u}} = 0$, i.e. error rate $\epsilon_t = 0$, which gives

$$
\alpha_t = \ln(\mathbf{\star}_t) = \ln \sqrt{\frac{1 - \epsilon_t}{\epsilon_t}} \rightarrow \infty
$$

an autocracy in which this single tree dictates the whole ensemble. The algorithm should therefore be made weaker.
Concrete ways to weaken it:
Pruning (pruned): apply the usual pruning strategies, or simply limit the tree height.
Partial sampling (some): give the tree less data, so it cannot grow fully.
So another realization of AdaBoost-DTree is AdaBoost + sampling $\propto \mathbf{u}^{(t)}$ + pruned $\operatorname{DTree}\left(\tilde{\mathcal{D}}_t\right)$.
In the extreme case the tree height is at most 1, so only the branching criterion $b(\mathbf{x})$ of C&RT remains:

$$
b(\mathbf{x}) = \underset{\text{decision stumps } h(\mathbf{x})}{\operatorname{argmin}} \sum_{c=1}^{2} \left| \mathcal{D}_c \text{ with } h \right| \cdot \text{impurity}\left(\mathcal{D}_c \text{ with } h\right)
$$
This is exactly AdaBoost-Stump; in other words, AdaBoost-Stump is a special case of AdaBoost-DTree. Of course, a model this simple does not need the sampling step.
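A weighted decision stump, the extreme case above, can be sketched by brute force (a minimal illustration; `learn_stump` and the toy data are assumptions, not the course's exact algorithm):

```python
import numpy as np

def learn_stump(X, y, u):
    """Minimal weighted decision stump: pick the (sign, feature, threshold)
    minimizing the u-weighted 0/1 error over all candidate splits."""
    best = (np.inf, None)
    for d in range(X.shape[1]):
        for theta in np.unique(X[:, d]):
            for sgn in (+1, -1):
                pred = sgn * np.sign(X[:, d] - theta + 1e-12)
                err = u[pred != y].sum() / u.sum()
                if err < best[0]:
                    best = (err, (sgn, d, theta))
    return best

X = np.array([[0.], [1.], [2.], [3.]])
y = np.array([-1, -1, 1, 1])
u = np.ones(4)                      # uniform weights for the demo
err, (sgn, d, theta) = learn_stump(X, y, u)
```

Because a stump accepts weights directly through the error computation, no re-sampling is needed, matching the remark above.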
An Optimization View of AdaBoost
Recall the weight-update step in AdaBoost:

$$
\begin{aligned} u_n^{(t+1)} & = \left\{ \begin{array}{ll} u_n^{(t)} \cdot \mathbf{\star}_t & \text{if incorrect} \\ u_n^{(t)} / \mathbf{\star}_t & \text{if correct} \end{array} \right. \\ & = u_n^{(t)} \cdot \mathbf{\star}_t^{-y_n g_t\left(\mathbf{x}_n\right)} = u_n^{(t)} \cdot \exp\left(-y_n \alpha_t g_t\left(\mathbf{x}_n\right)\right) \end{aligned}
$$
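The update rule is easy to verify numerically (a sketch; the toy arrays are assumptions). It also exhibits the well-known property that after the update, $g_t$ has weighted error exactly $1/2$ on the new weights:

```python
import numpy as np

def update_weights(u, y, pred, eps):
    """One AdaBoost re-weighting step:
    u_n <- u_n * star_t^(-y_n * g_t(x_n)), with star_t = sqrt((1 - eps)/eps).
    Assumes 0 < eps < 1/2."""
    star = np.sqrt((1 - eps) / eps)
    return u * star ** (-y * pred)

u = np.ones(4)
y = np.array([1, 1, -1, -1])
pred = np.array([1, 1, -1, 1])   # last example is misclassified
eps = 0.25                       # its weighted error under u
u_new = update_weights(u, y, pred, eps)

# after the update, g_t has weighted error exactly 1/2 on the new weights
assert abs(u_new[pred != y].sum() / u_new.sum() - 0.5) < 1e-9
```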
In other words,

$$
u_n^{(T+1)} = u_n^{(1)} \cdot \prod_{t=1}^{T} \exp\left(-y_n \alpha_t g_t\left(\mathbf{x}_n\right)\right) = \frac{1}{N} \cdot \exp\left(-y_n \sum_{t=1}^{T} \alpha_t g_t\left(\mathbf{x}_n\right)\right)
$$
Since $G(\mathbf{x}) = \operatorname{sign}\left(\sum_{t=1}^{T} \alpha_t g_t(\mathbf{x})\right)$, we call $\sum_{t=1}^{T} \alpha_t g_t(\mathbf{x})$ the voting score of $\{g_t\}$ on $\mathbf{x}$. Thus $u_n^{(T+1)} \propto \exp\left(-y_n\left(\text{voting score on } \mathbf{x}_n\right)\right)$, i.e. the two are monotonically related.
From another angle,

$$
G\left(\mathbf{x}_n\right) = \operatorname{sign}\left( \overbrace{ \sum_{t=1}^{T} \underbrace{\alpha_t}_{\mathbf{w}_i} \underbrace{g_t\left(\mathbf{x}_n\right)}_{\phi_i(\mathbf{x}_n)} }^{\text{voting score}} \right)
$$
Recall the margin formula of hard-margin SVM:

$$
\text{margin} = \frac{y_n \cdot \left(\mathbf{w}^T \boldsymbol{\phi}\left(\mathbf{x}_n\right) + b\right)}{\|\mathbf{w}\|}
$$
Quite similar, isn't it, only without the $b$. So from this angle:

$$
y_n(\text{voting score}) = \text{signed \& unnormalized margin}
$$
That is to say:

$$
\begin{aligned} & \quad u_n^{(T+1)} \text{ small} \\ &\Rightarrow \frac{1}{N} \cdot \exp\left(-y_n \sum_{t=1}^{T} \alpha_t g_t\left(\mathbf{x}_n\right)\right) \text{ small} \\ &\Rightarrow y_n(\text{voting score}) \text{ positive \& large} \end{aligned}
$$
So if the AdaBoost learning process keeps decreasing $u_n^{(T+1)}$, it is also a kind of large-margin algorithm. It can indeed be shown that AdaBoost decreases $\sum_{n=1}^{N} u_n^{(t)}$, which means AdaBoost is minimizing:
$$
\sum_{n=1}^{N} u_n^{(T+1)} = \frac{1}{N} \sum_{n=1}^{N} \exp\left(-y_n \sum_{t=1}^{T} \alpha_t g_t\left(\mathbf{x}_n\right)\right)
$$
If the voting score $s = \sum_{t=1}^{T} \alpha_t g_t\left(\mathbf{x}_n\right)$ is used to measure the prediction error, the 0/1 error measure reads:

$$
\operatorname{err}_{0/1}(s, y) = \left[\kern-0.15em\left[ ys \leq 0 \right]\kern-0.15em\right]
$$
Now introduce another error measure, the exponential error measure:

$$
\widehat{\operatorname{err}}_{\mathrm{ADA}}(s, y) = \exp(-ys)
$$
Plotting the two error curves (figure omitted) shows that $\widehat{\operatorname{err}}_{\mathrm{ADA}}(s, y)$ is a convex upper bound of $\operatorname{err}_{0/1}(s, y)$. So continually decreasing $u_n^{(T+1)}$ is also a way of decreasing the 0/1 error; in other words, $\sum_{n=1}^{N} u_n^{(t)} \Leftrightarrow \hat{E}_{\mathrm{ADA}}$.
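The upper-bound claim is easy to check numerically (a small self-contained sketch over a grid of scores):

```python
import numpy as np

# Numeric check that the exponential error upper-bounds the 0/1 error:
# err_0/1(s, y) = [[ys <= 0]]  <=  exp(-ys) = err_ADA(s, y), for all s and y.
s = np.linspace(-3, 3, 601)
for y in (-1, +1):
    err01 = (y * s <= 0).astype(float)
    err_ada = np.exp(-y * s)
    assert np.all(err01 <= err_ada + 1e-12)
```

The bound is tight at $ys = 0$, where both measures equal 1.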
Now recall the iteration formula of gradient descent:

$$
\min_{\|\mathbf{v}\|=1} E_{\mathrm{in}}\left(\mathbf{w}_t + \eta \mathbf{v}\right) \approx \underbrace{E_{\mathrm{in}}\left(\mathbf{w}_t\right)}_{\text{known}} + \underbrace{\eta}_{\text{given positive}} \mathbf{v}^T \underbrace{\nabla E_{\mathrm{in}}\left(\mathbf{w}_t\right)}_{\text{known}}
$$
In essence, from the current position we look for a good next position, which requires the current position together with a direction and a distance to move. The AdaBoost analogue is the $u_n^{(t)} \Rightarrow u_n^{(t+1)}$ step, that is,

$$
\begin{aligned} \min_h \hat{E}_{\mathrm{ADA}} & = \frac{1}{N} \sum_{n=1}^{N} \exp\left(-y_n\left(\sum_{\tau=1}^{t-1} \alpha_\tau g_\tau\left(\mathbf{x}_n\right) + \eta h\left(\mathbf{x}_n\right)\right)\right) \\ & = \sum_{n=1}^{N} u_n^{(t)} \exp\left(-y_n \eta h\left(\mathbf{x}_n\right)\right) \\ & \mathop{\approx}\limits^{\text{Taylor}} \sum_{n=1}^{N} u_n^{(t)}\left(1 - y_n \eta h\left(\mathbf{x}_n\right)\right) = \sum_{n=1}^{N} u_n^{(t)} - \eta \sum_{n=1}^{N} u_n^{(t)} y_n h\left(\mathbf{x}_n\right) \end{aligned}
$$
So we now need a good $h$ to minimize $-\sum_{n=1}^{N} u_n^{(t)} y_n h\left(\mathbf{x}_n\right) = \sum_{n=1}^{N} u_n^{(t)}\left(-y_n h\left(\mathbf{x}_n\right)\right)$.
In binary classification both $y_n$ and $h\left(\mathbf{x}_n\right)$ are $\in \{-1, +1\}$, so:

$$
\begin{aligned} \sum_{n=1}^{N} u_n^{(t)}\left(-y_n h\left(\mathbf{x}_n\right)\right) & = \sum_{n=1}^{N} u_n^{(t)} \cdot \left\{ \begin{array}{cc} -1 & \text{if } y_n = h\left(\mathbf{x}_n\right) \\ +1 & \text{if } y_n \neq h\left(\mathbf{x}_n\right) \end{array} \right. \\ & = -\sum_{n=1}^{N} u_n^{(t)} + \sum_{n=1}^{N} u_n^{(t)} \cdot \left\{ \begin{array}{cc} 0 & \text{if } y_n = h\left(\mathbf{x}_n\right) \\ 2 & \text{if } y_n \neq h\left(\mathbf{x}_n\right) \end{array} \right. \rightarrow \text{this is } N \cdot E_\text{in} \text{ on weighted examples} \\ & = -\sum_{n=1}^{N} u_n^{(t)} + 2 E_\text{in}^{\mathbf{u}^{(t)}}(h) \cdot N \end{aligned}
$$
Meanwhile, in AdaBoost the base algorithm $\mathcal{A}$ minimizes $E_\text{in}^{\mathbf{u}^{(t)}}(h)$; that is, it finds a good descent direction for $\min_h \hat{E}_{\mathrm{ADA}}$, where the direction is the function $h$ itself.
Once the best function (direction) $g_t$ has been found by algorithm $\mathcal{A}$, the optimization objective becomes:

$$
\min_\eta \widehat{E}_{\mathrm{ADA}} = \sum_{n=1}^{N} u_n^{(t)} \exp\left(-y_n \eta g_t\left(\mathbf{x}_n\right)\right)
$$
We could now reach the optimum by repeatedly taking small steps in the right direction. But is there a way to find the optimal step size $\eta_t$ for the current move (steepest descent for optimization; greedy, since it only optimizes the current step) instead of inching along timidly?
Consider the two cases, depending on whether the classification is correct:

$$
\begin{array}{l} y_n = g_t\left(\mathbf{x}_n\right): u_n^{(t)} \exp(-\eta) \\ y_n \neq g_t\left(\mathbf{x}_n\right): u_n^{(t)} \exp(+\eta) \end{array}
$$
Then, using the definition from AdaBoost:

$$
\epsilon_t = \frac{\sum_{n=1}^{N} u_n^{(t)} \left[\kern-0.15em\left[ y_n \neq g_t\left(\mathbf{x}_n\right) \right]\kern-0.15em\right]}{\sum_{n=1}^{N} u_n^{(t)}}
$$
$\widehat{E}_{\mathrm{ADA}}$ can be rewritten as:

$$
\begin{aligned} \widehat{E}_{\mathrm{ADA}} &= \sum_{n=1}^{N} u_n^{(t)} \left[\kern-0.15em\left[ y_n \neq g_t\left(\mathbf{x}_n\right) \right]\kern-0.15em\right] \exp(+\eta) + \sum_{n=1}^{N} u_n^{(t)} \left[\kern-0.15em\left[ y_n = g_t\left(\mathbf{x}_n\right) \right]\kern-0.15em\right] \exp(-\eta) \\ & = \left(\sum_{n=1}^{N} u_n^{(t)}\right) \cdot \left(\left(1 - \epsilon_t\right) \exp(-\eta) + \epsilon_t \exp(+\eta)\right) \end{aligned}
$$
For this convex minimization problem, a necessary condition is a zero derivative, $\frac{\partial \widehat{E}_{\mathrm{ADA}}}{\partial \eta} = 0$, which solves to:

$$
\eta_t = \ln \sqrt{\frac{1 - \epsilon_t}{\epsilon_t}} = \alpha_t
$$
So AdaBoost obtains the optimal step size from an approximate functional gradient; it is a form of steepest descent (steepest descent with approximate functional gradient).
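The closed-form step can be checked against a grid search (a sketch with an arbitrarily chosen $\epsilon_t = 0.1$):

```python
import numpy as np

# Numeric check that eta_t = ln sqrt((1 - eps)/eps) minimizes
# f(eta) = (1 - eps) * exp(-eta) + eps * exp(+eta).
eps = 0.1
f = lambda eta: (1 - eps) * np.exp(-eta) + eps * np.exp(eta)

etas = np.linspace(0.0, 3.0, 30001)        # fine grid over step sizes
eta_grid = etas[np.argmin(f(etas))]
eta_closed = np.log(np.sqrt((1 - eps) / eps))
assert abs(eta_grid - eta_closed) < 1e-3
```

At the optimum, $f(\eta_t) = 2\sqrt{\epsilon_t(1-\epsilon_t)}$, the familiar per-round shrinkage factor of AdaBoost's exponential bound.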
Gradient Boosting
We have shown that, for binary classification (binary-output hypothesis $h$), AdaBoost keeps optimizing over $\eta$ and $h$:

$$
\min_\eta \min_h \frac{1}{N} \sum_{n=1}^{N} \exp\left(-y_n\left(\sum_{\tau=1}^{t-1} \alpha_\tau g_\tau\left(\mathbf{x}_n\right) + \eta h\left(\mathbf{x}_n\right)\right)\right)
$$
This view lets us extend the exponential error to any error measure (allowing different err for regression, soft classification, etc.), no longer restricted to binary classification; for an arbitrary hypothesis $h$ we can write:

$$
\min_\eta \min_h \frac{1}{N} \sum_{n=1}^{N} \operatorname{err}\left(\sum_{\tau=1}^{t-1} \alpha_\tau g_\tau\left(\mathbf{x}_n\right) + \eta h\left(\mathbf{x}_n\right), y_n\right)
$$
GradientBoost for Regression
For regression the error measure is the squared error:

$$
\min_\eta \min_h \frac{1}{N} \sum_{n=1}^{N} \operatorname{err}\left( \underbrace{\sum_{\tau=1}^{t-1} \alpha_\tau g_\tau\left(\mathbf{x}_n\right)}_{s_n} + \eta h\left(\mathbf{x}_n\right), y_n \right) \text{ with } \operatorname{err}(s, y) = (s - y)^2
$$
Refresher: the Taylor expansion

$$
\begin{aligned} f(x) &= \sum_{i=0}^{n} \frac{f^{(i)}\left(x_0\right)}{i!}\left(x - x_0\right)^i \\ & = f\left(x_0\right) + f'\left(x_0\right)\left(x - x_0\right) + \frac{f^{(2)}\left(x_0\right)}{2!}\left(x - x_0\right)^2 + \cdots + \frac{f^{(n)}\left(x_0\right)}{n!}\left(x - x_0\right)^n \end{aligned}
$$
Here $s_n$ and $y_n$ are fixed, so a first-order Taylor expansion at $s = s_n$ rewrites the objective as:

$$
\begin{aligned} & \min_h \frac{1}{N} \sum_{n=1}^{N} \operatorname{err}\left( \underbrace{\sum_{\tau=1}^{t-1} \alpha_\tau g_\tau\left(\mathbf{x}_n\right)}_{s_n} + \eta h\left(\mathbf{x}_n\right), y_n \right) \\ \mathop{\approx}\ & \min_h \frac{1}{N} \sum_{n=1}^{N} \underbrace{\operatorname{err}\left(s_n, y_n\right)}_{\text{constant}} + \left. \frac{1}{N} \sum_{n=1}^{N} \eta h\left(\mathbf{x}_n\right) \frac{\partial \operatorname{err}\left(s, y_n\right)}{\partial s} \right|_{s = s_n} \\ =\ & \min_h \text{ constants} + \frac{\eta}{N} \sum_{n=1}^{N} h\left(\mathbf{x}_n\right) \cdot 2\left(s_n - y_n\right) \end{aligned}
$$
To make the second term as small as possible, each summand should satisfy (for some $k > 0$):

$$
h\left(\mathbf{x}_n\right) = -k\left(s_n - y_n\right)
$$
That makes every term negative, and the larger $k$ is, the smaller the second term. But since the step size is governed by $\eta$, there is no need for $k$ to be large; in gradient descent $h$ only represents a direction, so the plain method would require $\|h(\mathbf{x})\| = 1$. Rather than adding that constraint, we add $h\left(\mathbf{x}_n\right)^2$ to the objective to limit the magnitude of $h\left(\mathbf{x}_n\right)$ (similar to regularization).
$$
\begin{aligned} \min_h \quad & \text{constants} + \frac{\eta}{N} \sum_{n=1}^{N}\left(2 h\left(\mathbf{x}_n\right)\left(s_n - y_n\right) + \left(h\left(\mathbf{x}_n\right)\right)^2\right) \\ & = \text{constants} + \frac{\eta}{N} \sum_{n=1}^{N}\left(-\left(s_n - y_n\right)^2 + \left(h\left(\mathbf{x}_n\right) - \left(y_n - s_n\right)\right)^2\right) \\ & = \text{constants} + \frac{\eta}{N} \sum_{n=1}^{N}\left(\text{constant} + \left(h\left(\mathbf{x}_n\right) - \left(y_n - s_n\right)\right)^2\right) \end{aligned}
$$
Now we only need

$$
h\left(\mathbf{x}_n\right) - \left(y_n - s_n\right) = 0
$$

so it suffices to solve a squared-error regression problem on $\left\{\left(\mathbf{x}_n, \underbrace{y_n - s_n}_{\text{residual}}\right)\right\}$. In GradientBoost for Regression, a regression method is therefore used to find $g_t = h$.
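Fitting $h$ to the residuals can be sketched with a minimal depth-1 regression tree (`fit_regression_stump` and the toy data are illustrative assumptions, not the course's exact regressor):

```python
import numpy as np

def fit_regression_stump(X, r):
    """Fit a depth-1 regression tree (stump) to residuals r by minimizing
    squared error over split thresholds on the first feature."""
    x = X[:, 0]
    best = (np.inf, None)
    for theta in np.unique(x)[:-1]:
        left, right = r[x <= theta], r[x > theta]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, (theta, left.mean(), right.mean()))
    theta, cl, cr = best[1]
    return lambda Xq: np.where(Xq[:, 0] <= theta, cl, cr)

X = np.arange(6, dtype=float).reshape(-1, 1)
y = np.array([1., 1., 1., 5., 5., 5.])
s = np.zeros(6)                       # current voting scores
h = fit_regression_stump(X, y - s)    # g_t fits the residuals y_n - s_n
```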
After obtaining $g_t$, the original optimization problem becomes:

$$
\min_\eta \frac{1}{N} \sum_{n=1}^{N}\left(s_n + \eta g_t\left(\mathbf{x}_n\right) - y_n\right)^2 = \frac{1}{N} \sum_{n=1}^{N}\left(\left(y_n - s_n\right) - \eta g_t\left(\mathbf{x}_n\right)\right)^2
$$
So a single-variable linear regression on $\left\{\left(g_t\text{-transformed input, residual}\right)\right\}$ yields the optimal $\eta$, which is exactly $\alpha_t$: $\alpha_t = $ optimal $\eta$ by $g_t$-transformed linear regression.
With the derivation above, the concrete steps of Gradient Boosted Decision Tree (GBDT) can be written as:

$$
\begin{array}{l} s_1 = s_2 = \ldots = s_N = 0 \\ \text{for } t = 1, 2, \ldots, T \\ \qquad \text{1. obtain } g_t \text{ by } \mathcal{A}\left(\left\{\left(\mathbf{x}_n, y_n - s_n\right)\right\}\right) \text{ where } \mathcal{A} \text{ is a (squared-error) regression algorithm} \rightarrow \text{Decision Tree} \\ \qquad \text{2. compute } \alpha_t = \text{OneVarLinearRegression}\left(\left\{\left(g_t\left(\mathbf{x}_n\right), y_n - s_n\right)\right\}\right) \\ \qquad \text{3. update } s_n \leftarrow s_n + \alpha_t g_t\left(\mathbf{x}_n\right) \\ \text{return } G(\mathbf{x}) = \sum_{t=1}^{T} \alpha_t g_t(\mathbf{x}) \end{array}
$$
Here $s_1 = s_2 = \ldots = s_N = 0$ means we start from the original regression problem and then keep fitting the residuals. This section has shown the regression form of Gradient Boosting; using decision trees for the regression step yields GBDT (Gradient Boosted Decision Tree). The classification/soft-classification versions are similar, so in the end everything can be put together.
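The whole loop above can be sketched in a few lines (a minimal illustration, not a production implementation; `gbdt_fit`, the stump base learner, and the toy data are all assumptions):

```python
import numpy as np

def gbdt_fit(X, y, T, base_fit):
    """Minimal GBDT loop following the pseudocode above: fit a base learner
    to the residuals, set alpha_t by one-variable least squares, update scores.
    `base_fit(X, r) -> h` is any squared-error regression algorithm."""
    s = np.zeros(len(y))
    model = []
    for _ in range(T):
        g = base_fit(X, y - s)                    # step 1: g_t on residuals
        gx = g(X)
        alpha = gx @ (y - s) / (gx @ gx + 1e-12)  # step 2: optimal eta
        model.append((alpha, g))
        s = s + alpha * gx                        # step 3: update scores
    return lambda Xq: sum(a * g(Xq) for a, g in model)

def stump(X, r):
    """Depth-1 regression tree on the first feature (illustrative base learner)."""
    x = X[:, 0]
    best = (np.inf, None)
    for theta in np.unique(x)[:-1]:
        l, rr = r[x <= theta], r[x > theta]
        sse = ((l - l.mean()) ** 2).sum() + ((rr - rr.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, (theta, l.mean(), rr.mean()))
    th, cl, cr = best[1]
    return lambda Xq: np.where(Xq[:, 0] <= th, cl, cr)

X = np.arange(8, dtype=float).reshape(-1, 1)
y = np.array([1., 1., 2., 2., 5., 5., 7., 7.])
G = gbdt_fit(X, y, T=20, base_fit=stump)
```

Swapping `stump` for a deeper regression tree gives the usual GBDT; the rest of the loop is unchanged.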