Machine learning approximation algorithms for high-dimensional fully nonlinear PDEs

1. Overall framework

The paper proposes a new algorithm for solving fully nonlinear partial differential equations (PDEs) and nonlinear second-order backward stochastic differential equations (2BSDEs).

  1. Section 2
    Derivations (2.1)–(2.6) lead to (2.7), a special case of the algorithm proposed in the paper.
    The core idea lies in the simplified framework of (2.7).
  2. Section 3
    Derivations (3.1)–(3.5) lead to (3.7), the proposed algorithm in the general case.
  3. Section 4
    Numerical results of the algorithm on several high-dimensional PDEs.
    4.1: uses the algorithm from the simplified framework of (2.7) to approximately compute the solution of a 20-dimensional Allen-Cahn equation.

2. Section 2 (main ideas of the deep 2BSDE method)

This part mainly conveys the ideas, so the derivations are kept rough;
the more precise and general definitions of the deep 2BSDE method appear in (2.7) and (3.7);
the derivation of the deep 2BSDE method is mainly based on the ideas of E, Han, and Jentzen [33] and of Cheridito et al. [22].

2.1 Fully nonlinear second-order PDEs

  • $d \in \mathbb{N} = \{1, 2, 3, \dots\}$ ($d$ denotes the dimension)

  • $T \in (0, \infty)$ ($T$ denotes the terminal time)

  • $u = (u(t,x))_{t \in [0,T],\, x \in \mathbb{R}^d} \in C^{1,2}([0,T] \times \mathbb{R}^d, \mathbb{R})$

    $t \in [0,T]$ denotes a time point; $x \in \mathbb{R}^d$ is a $d$-dimensional real vector;
    $C^{1,2}$ means continuously differentiable once in $t$ and twice in $x$;

  • $f \in C([0,T] \times \mathbb{R}^d \times \mathbb{R} \times \mathbb{R}^d \times \mathbb{R}^{d \times d}, \mathbb{R})$ ($f$ takes five arguments)

  • $g \in C(\mathbb{R}^d, \mathbb{R})$

For all $t \in [0,T)$ and $x \in \mathbb{R}^d$, $u$ satisfies:

$$u(T,x) = g(x)$$

$$\frac{\partial u}{\partial t}(t,x) = f\big(t, x, u(t,x), (\nabla_x u)(t,x), (\operatorname{Hess}_x u)(t,x)\big) \tag{1}$$

The deep 2BSDE method allows one to approximately compute the function $u(0,x)$, $x \in \mathbb{R}^d$; in this section, the value $u(0,\xi) \in \mathbb{R}$ (a real number) is approximated for a fixed $\xi \in \mathbb{R}^d$; for the general algorithm, see (3.7).
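As a quick illustration of the sign conventions in (1) (our own example, not taken from the paper): choosing the nonlinearity $f(t,x,y,z,\gamma) = -\frac{1}{2}\operatorname{Trace}(\gamma)$ reduces (1) to the linear backward heat equation

$$\frac{\partial u}{\partial t}(t,x) + \frac{1}{2}(\Delta_x u)(t,x) = 0, \qquad u(T,x) = g(x),$$

whose solution is $u(t,x) = \mathbb{E}[g(x + W_{T-t})]$. The genuinely fully nonlinear case is when $f$ depends nonlinearly on the Hessian argument $\gamma$.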

2.2 Connection between fully nonlinear second-order PDEs and 2BSDEs

  • $(\Omega, \mathcal{F}, \mathbb{P})$ is a probability space
  • $W : [0,T] \times \Omega \to \mathbb{R}^d$ is a standard Brownian motion on $(\Omega, \mathcal{F}, \mathbb{P})$ with continuous sample paths
  • $\mathbb{F} = (\mathbb{F}_t)_{t \in [0,T]}$ is the normal filtration generated by $W$ on $(\Omega, \mathcal{F}, \mathbb{P})$ (i.e., it satisfies the usual conditions of completeness and right-continuity)
  • $Y : [0,T] \times \Omega \to \mathbb{R}$
  • $Z : [0,T] \times \Omega \to \mathbb{R}^d$
  • $\Gamma : [0,T] \times \Omega \to \mathbb{R}^{d \times d}$
  • $A : [0,T] \times \Omega \to \mathbb{R}^d$
    are $\mathbb{F}$-adapted stochastic processes with continuous sample paths

For all $t \in [0,T]$, the following hold almost surely:

$$Y_t = g(\xi + W_T) - \int_t^T \Big( f(s, \xi + W_s, Y_s, Z_s, \Gamma_s) + \frac{1}{2}\operatorname{Trace}(\Gamma_s) \Big)\, ds - \int_t^T \big\langle Z_s,\, dW_s \big\rangle_{\mathbb{R}^d} \tag{2}$$

$$Z_t = Z_0 + \int_0^t A_s\, ds + \int_0^t \Gamma_s\, dW_s \tag{3}$$

Under suitable smoothness and regularity assumptions, the fully nonlinear PDE (1) is related to the 2BSDE (2)–(3), in the sense that for all $t \in [0,T]$, almost surely:

$$Y_t = u(t, \xi + W_t) \in \mathbb{R}, \quad Z_t = (\nabla_x u)(t, \xi + W_t) \in \mathbb{R}^d \tag{4}$$

$$\Gamma_t = (\operatorname{Hess}_x u)(t, \xi + W_t) \in \mathbb{R}^{d \times d} \tag{5}$$

$$A_t = \Big(\frac{\partial}{\partial t} \nabla_x u\Big)(t, \xi + W_t) + \frac{1}{2}(\nabla_x \Delta_x u)(t, \xi + W_t) \in \mathbb{R}^d \tag{6}$$
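A short consistency check (our addition; the rigorous statement is Lemma 3.1 cited below): applying Itô's formula to $Y_t = u(t, \xi + W_t)$ gives

$$dY_t = \Big(\frac{\partial u}{\partial t} + \frac{1}{2}\operatorname{Trace}(\operatorname{Hess}_x u)\Big)(t, \xi + W_t)\, dt + \big\langle (\nabla_x u)(t, \xi + W_t),\, dW_t \big\rangle_{\mathbb{R}^d},$$

and substituting the PDE (1) together with (4)–(5) turns this into exactly the dynamics of (2); applying Itô's formula componentwise to $Z_t = (\nabla_x u)(t, \xi + W_t)$ yields (3) with $A$ and $\Gamma$ as in (5)–(6).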

See Cheridito et al. [22] and Lemma 3.1.

2.3 Merged formulation of the PDE and the 2BSDE

In this subsection, the merged formulation ((9) and (10)) of the PDE (1) and the 2BSDE (2)–(3) is derived.
More specifically, it follows from (2) and (3) that for any $\tau_1, \tau_2 \in [0,T]$ with $\tau_1 \leq \tau_2$:

$$Y_{\tau_2} = Y_{\tau_1} + \int_{\tau_1}^{\tau_2} \Big( f(s, \xi + W_s, Y_s, Z_s, \Gamma_s) + \frac{1}{2}\operatorname{Trace}(\Gamma_s) \Big)\, ds + \int_{\tau_1}^{\tau_2} \big\langle Z_s,\, dW_s \big\rangle_{\mathbb{R}^d} \tag{7}$$

$$Z_{\tau_2} = Z_{\tau_1} + \int_{\tau_1}^{\tau_2} A_s\, ds + \int_{\tau_1}^{\tau_2} \Gamma_s\, dW_s \tag{8}$$

       Equation (7) is obtained as follows:
       substitute $\tau_1$ and $\tau_2$ into (2) to get expressions for $Y_{\tau_1}$ and $Y_{\tau_2}$, form the difference $Y_{\tau_2} - Y_{\tau_1}$,
       and simplify; (8) follows in the same way from (3).

Inserting (5) and (6) into (7) and (8) shows that for any $\tau_1, \tau_2 \in [0,T]$ with $\tau_1 \leq \tau_2$:

$$\begin{aligned} Y_{\tau_2} = {}& Y_{\tau_1} + \int_{\tau_1}^{\tau_2} \big\langle Z_s,\, dW_s \big\rangle_{\mathbb{R}^d} \\ & + \int_{\tau_1}^{\tau_2} \Big( f\big(s, \xi + W_s, Y_s, Z_s, (\operatorname{Hess}_x u)(s, \xi + W_s)\big) + \frac{1}{2}\operatorname{Trace}\big((\operatorname{Hess}_x u)(s, \xi + W_s)\big) \Big)\, ds \end{aligned} \tag{9}$$

$$\begin{aligned} Z_{\tau_2} = {}& Z_{\tau_1} + \int_{\tau_1}^{\tau_2} \Big( \Big(\frac{\partial}{\partial t} \nabla_x u\Big)(s, \xi + W_s) + \frac{1}{2}(\nabla_x \Delta_x u)(s, \xi + W_s) \Big)\, ds \\ & + \int_{\tau_1}^{\tau_2} (\operatorname{Hess}_x u)(s, \xi + W_s)\, dW_s \end{aligned} \tag{10}$$

2.4 Forward-discretization of the merged PDE-2BSDE system

In this subsection, a forward discretization of the merged PDE-2BSDE system (9)–(10) is derived.
Let $t_0, t_1, \dots, t_N \in [0,T]$ be real numbers with

$$0 = t_0 < t_1 < \dots < t_N = T \tag{11}$$

and sufficiently small mesh sizes $(t_{k+1} - t_k)$, $0 \leq k \leq N-1$.
Note that (9) and (10) suggest that, for sufficiently large $N \in \mathbb{N}$ and every $n \in \{0, 1, \dots, N-1\}$:

$$\begin{aligned} Y_{t_{n+1}} \approx{} & Y_{t_n} + \Big( f\big(t_n, \xi + W_{t_n}, Y_{t_n}, Z_{t_n}, (\operatorname{Hess}_x u)(t_n, \xi + W_{t_n})\big) \\ & + \frac{1}{2}\operatorname{Trace}\big((\operatorname{Hess}_x u)(t_n, \xi + W_{t_n})\big) \Big)(t_{n+1} - t_n) + \big\langle Z_{t_n},\, W_{t_{n+1}} - W_{t_n} \big\rangle_{\mathbb{R}^d} \end{aligned} \tag{12}$$

$$\begin{aligned} Z_{t_{n+1}} \approx{} & Z_{t_n} + \Big( \Big(\frac{\partial}{\partial t} \nabla_x u\Big)(t_n, \xi + W_{t_n}) + \frac{1}{2}(\nabla_x \Delta_x u)(t_n, \xi + W_{t_n}) \Big)(t_{n+1} - t_n) \\ & + (\operatorname{Hess}_x u)(t_n, \xi + W_{t_n})(W_{t_{n+1}} - W_{t_n}) \end{aligned} \tag{13}$$
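To make the discretization concrete, here is a minimal NumPy sketch of one simulated path of (12)–(13). The callables `f`, `grad_u`, `hess_u`, and `drift_z` are hypothetical placeholders for the exact derivatives of $u$, which are of course unknown in practice; Section 2.5 replaces them with neural networks.

```python
import numpy as np

def simulate_path(f, grad_u, hess_u, drift_z, u0, xi, T, N, rng):
    """One forward pass of (12)-(13) along a simulated Brownian path.

    drift_z(t, x) stands for the right-hand side of (15), i.e. the drift of Z.
    """
    d = xi.shape[0]
    t = np.linspace(0.0, T, N + 1)                      # time grid (11)
    h = np.diff(t)
    dW = rng.normal(size=(N, d)) * np.sqrt(h)[:, None]  # Brownian increments
    W = np.zeros(d)
    Y, Z = u0, grad_u(0.0, xi)                          # Y_{t_0}, Z_{t_0} from (4)
    for n in range(N):
        x = xi + W
        G = hess_u(t[n], x)                             # (Hess_x u)(t_n, x)
        Y = Y + (f(t[n], x, Y, Z, G) + 0.5 * np.trace(G)) * h[n] + Z @ dW[n]  # (12)
        Z = Z + drift_z(t[n], x) * h[n] + G @ dW[n]                           # (13)
        W = W + dW[n]
    return Y  # should approximate g(xi + W_T) for small mesh sizes
```

For example, with `rng = np.random.default_rng(0)` and the exact derivatives of a known smooth $u$, one can check that the returned `Y` is close to $g(\xi + W_T)$.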

2.5 Deep learning approximations

Next, for every $n \in \{0, 1, \dots, N-1\}$, suitable approximations are employed for the functions

$$\mathbb{R}^d \ni x \mapsto (\operatorname{Hess}_x u)(t_n, x) \in \mathbb{R}^{d \times d} \tag{14}$$

$$\mathbb{R}^d \ni x \mapsto \Big(\frac{\partial}{\partial t} \nabla_x u\Big)(t_n, x) + \frac{1}{2}(\nabla_x \Delta_x u)(t_n, x) \in \mathbb{R}^d \tag{15}$$

Let $\nu \in \mathbb{N} \cap [d+1, \infty)$.
For every $\theta \in \mathbb{R}^\nu$ and every $n \in \{0, 1, \dots, N\}$, let $\mathbb{G}_n^\theta : \mathbb{R}^d \to \mathbb{R}^{d \times d}$ and $\mathbb{A}_n^\theta : \mathbb{R}^d \to \mathbb{R}^d$ be continuous functions;
for every $\theta = (\theta_1, \theta_2, \dots, \theta_\nu) \in \mathbb{R}^\nu$, let $\mathcal{Y}^\theta : \{0, 1, \dots, N\} \times \Omega \to \mathbb{R}$ and $\mathcal{Z}^\theta : \{0, 1, \dots, N\} \times \Omega \to \mathbb{R}^d$ be stochastic processes satisfying $\mathcal{Y}_0^\theta = \theta_1$, $\mathcal{Z}_0^\theta = (\theta_2, \theta_3, \dots, \theta_{d+1})$, and, for every $n \in \{0, 1, \dots, N-1\}$:

$$\begin{aligned} \mathcal{Y}^{\theta}_{n+1} = {}& \mathcal{Y}^{\theta}_{n} + \big\langle \mathcal{Z}^{\theta}_{n},\, W_{t_{n+1}} - W_{t_n} \big\rangle_{\mathbb{R}^d} \\ & + \Big( f\big(t_n, \xi + W_{t_n}, \mathcal{Y}^{\theta}_{n}, \mathcal{Z}^{\theta}_{n}, \mathbb{G}^{\theta}_{n}(\xi + W_{t_n})\big) + \frac{1}{2}\operatorname{Trace}\big(\mathbb{G}^{\theta}_{n}(\xi + W_{t_n})\big) \Big)(t_{n+1} - t_n) \end{aligned} \tag{16}$$

$$\mathcal{Z}^{\theta}_{n+1} = \mathcal{Z}^{\theta}_{n} + \mathbb{A}^{\theta}_{n}(\xi + W_{t_n})(t_{n+1} - t_n) + \mathbb{G}^{\theta}_{n}(\xi + W_{t_n})(W_{t_{n+1}} - W_{t_n}) \tag{17}$$
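For concreteness, a minimal PyTorch sketch of the discrete processes (16)–(17) (our paraphrase, not the paper's exact parametrization: the trainable initial values $\theta_1$ and $(\theta_2, \dots, \theta_{d+1})$ are kept as separate parameters `y0`, `z0` instead of being packed into one vector $\theta$, and $\mathbb{G}_n^\theta$, $\mathbb{A}_n^\theta$ are standard `Linear`/`ReLU` stacks):

```python
import torch

class Deep2BSDE(torch.nn.Module):
    """Sketch of (16)-(17): y0, z0 play the roles of theta_1 and
    (theta_2, ..., theta_{d+1}); G[n], A[n] approximate (20) and (21)."""

    def __init__(self, d, N):
        super().__init__()
        self.d, self.N = d, N
        self.y0 = torch.nn.Parameter(torch.zeros(1))     # theta_1 ~ u(0, xi)
        self.z0 = torch.nn.Parameter(torch.zeros(d))     # ~ (grad_x u)(0, xi)

        def mlp(out_dim):  # d -> d -> d -> out_dim with rectifier activations
            return torch.nn.Sequential(
                torch.nn.Linear(d, d), torch.nn.ReLU(),
                torch.nn.Linear(d, d), torch.nn.ReLU(),
                torch.nn.Linear(d, out_dim))

        self.G = torch.nn.ModuleList(mlp(d * d) for _ in range(N))  # cf. (27)
        self.A = torch.nn.ModuleList(mlp(d) for _ in range(N))      # cf. (26)

    def forward(self, f, xi, t, dW):
        """dW: (batch, N, d) Brownian increments on the grid t from (11)."""
        B = dW.shape[0]
        W = torch.zeros(B, self.d)
        Y = self.y0.expand(B)
        Z = self.z0.expand(B, self.d)
        for n in range(self.N):
            x, h = xi + W, t[n + 1] - t[n]
            G = self.G[n](x).view(B, self.d, self.d)
            trace = G.diagonal(dim1=-2, dim2=-1).sum(-1)
            Y = Y + (f(t[n], x, Y, Z, G) + 0.5 * trace) * h \
                  + (Z * dW[:, n]).sum(-1)                           # (16)
            Z = Z + self.A[n](x) * h \
                  + torch.einsum('bij,bj->bi', G, dW[:, n])          # (17)
            W = W + dW[:, n]
        return Y, xi + W        # (Y_N^theta, xi + W_{t_N})
```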

For all suitable $\theta \in \mathbb{R}^\nu$ and all $n \in \{0, 1, \dots, N\}$, we regard $\mathcal{Y}_n^\theta : \Omega \to \mathbb{R}$ as a suitable approximation of $Y_{t_n} : \Omega \to \mathbb{R}$:

$$\mathcal{Y}_n^\theta \approx Y_{t_n} \tag{18}$$

and we regard $\mathcal{Z}_n^\theta : \Omega \to \mathbb{R}^d$ as a suitable approximation of $Z_{t_n} : \Omega \to \mathbb{R}^d$:

$$\mathcal{Z}_n^\theta \approx Z_{t_n} \tag{19}$$

For all suitable $\theta \in \mathbb{R}^\nu$, $x \in \mathbb{R}^d$, and all $n \in \{0, 1, \dots, N-1\}$, we regard $\mathbb{G}_n^\theta(x) \in \mathbb{R}^{d \times d}$ as a suitable approximation of $(\operatorname{Hess}_x u)(t_n, x) \in \mathbb{R}^{d \times d}$:

$$\mathbb{G}_n^\theta(x) \approx (\operatorname{Hess}_x u)(t_n, x) \tag{20}$$

and we regard $\mathbb{A}_n^\theta(x) \in \mathbb{R}^d$ as a suitable approximation of $\big(\frac{\partial}{\partial t} \nabla_x u\big)(t_n, x) + \frac{1}{2}(\nabla_x \Delta_x u)(t_n, x) \in \mathbb{R}^d$:

$$\mathbb{A}_n^\theta(x) \approx \Big(\frac{\partial}{\partial t} \nabla_x u\Big)(t_n, x) + \frac{1}{2}(\nabla_x \Delta_x u)(t_n, x) \tag{21}$$

In particular, $\theta_1$ is regarded as an approximation of $u(0,\xi) \in \mathbb{R}$:

$$\theta_1 \approx u(0, \xi) \tag{22}$$

and $(\theta_2, \theta_3, \dots, \theta_{d+1})$ as an approximation of $(\nabla_x u)(0,\xi) \in \mathbb{R}^d$:

$$(\theta_2, \theta_3, \dots, \theta_{d+1}) \approx (\nabla_x u)(0, \xi) \tag{23}$$

Now choose, for each $n \in \{0, 1, \dots, N-1\}$, the functions $\mathbb{G}_n^\theta$ and $\mathbb{A}_n^\theta$ to be deep neural networks (cf. [8, 67]).

For example, for every $k \in \mathbb{N}$, let $\mathcal{R}_k : \mathbb{R}^k \to \mathbb{R}^k$ be the function satisfying, for all $x = (x_1, \dots, x_k) \in \mathbb{R}^k$:

$$\mathcal{R}_k(x) = (\max\{x_1, 0\}, \dots, \max\{x_k, 0\}) \tag{24}$$

For every $\theta = (\theta_1, \dots, \theta_\nu) \in \mathbb{R}^\nu$, $v \in \mathbb{N}_0 = \{0\} \cup \mathbb{N}$, and $k, l \in \mathbb{N}$ with $v + k(l+1) \leq \nu$, let $M_{k,l}^{\theta,v} : \mathbb{R}^l \to \mathbb{R}^k$ be the affine-linear function satisfying, for all $x = (x_1, \dots, x_l)$:

$$M_{k,l}^{\theta,v}(x) = \begin{pmatrix} \theta_{v+1} & \theta_{v+2} & \dots & \theta_{v+l} \\ \theta_{v+l+1} & \theta_{v+l+2} & \dots & \theta_{v+2l} \\ \theta_{v+2l+1} & \theta_{v+2l+2} & \dots & \theta_{v+3l} \\ \vdots & \vdots & & \vdots \\ \theta_{v+(k-1)l+1} & \theta_{v+(k-1)l+2} & \dots & \theta_{v+kl} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_l \end{pmatrix} + \begin{pmatrix} \theta_{v+kl+1} \\ \theta_{v+kl+2} \\ \theta_{v+kl+3} \\ \vdots \\ \theta_{v+kl+k} \end{pmatrix} \tag{25}$$
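In code, (25) is just a slice of the parameter vector reshaped into a weight matrix plus a bias. A minimal NumPy sketch (our illustration; note Python's 0-based indexing, so `theta[v]` corresponds to $\theta_{v+1}$):

```python
import numpy as np

def R(x):
    """Componentwise rectifier R_k from (24)."""
    return np.maximum(x, 0.0)

def M(theta, v, k, l, x):
    """Affine map M^{theta,v}_{k,l} from (25): the k*l entries of theta after
    position v form the weight matrix (row by row), the next k the bias."""
    Wmat = theta[v : v + k * l].reshape(k, l)
    b = theta[v + k * l : v + k * l + k]
    return Wmat @ x + b
```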

Assume $\nu \geq (5Nd + Nd^2 + 1)(d+1)$, and assume that for all $\theta \in \mathbb{R}^\nu$, $n \in \{m \in \mathbb{N}_0 : m < N\}$, and $x \in \mathbb{R}^d$:

$$\mathbb{A}_n^\theta = M_{d,d}^{\theta,[(2N+n)d+1](d+1)} \circ \mathcal{R}_d \circ M_{d,d}^{\theta,[(N+n)d+1](d+1)} \circ \mathcal{R}_d \circ M_{d,d}^{\theta,(nd+1)(d+1)} \tag{26}$$

$$\mathbb{G}_n^\theta = M_{d^2,d}^{\theta,(5Nd+nd^2+1)(d+1)} \circ \mathcal{R}_d \circ M_{d,d}^{\theta,[(4N+n)d+1](d+1)} \circ \mathcal{R}_d \circ M_{d,d}^{\theta,[(3N+n)d+1](d+1)} \tag{27}$$

The function in (26) is a neural network with four layers (an input layer with $d$ neurons, two hidden layers with $d$ neurons each, and an output layer with $d$ neurons), with the rectifier functions of (24) as activations; the function in (27) is likewise a four-layer network (input layer: $d$ neurons, two hidden layers: $d$ neurons each, output layer: $d^2$ neurons), again with the rectifier activations of (24).
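Combining `M` and `R` from the sketch after (25), the subnetworks (26)–(27) read off the stated parameter offsets verbatim (still 0-based; assumes `theta` has length at least $(5Nd + Nd^2 + 1)(d+1)$):

```python
def A_net(theta, n, N, d, x):
    """A_n^theta from (26): three width-d affine layers with two rectifiers."""
    h = R(M(theta, (n * d + 1) * (d + 1), d, d, x))
    h = R(M(theta, ((N + n) * d + 1) * (d + 1), d, d, h))
    return M(theta, ((2 * N + n) * d + 1) * (d + 1), d, d, h)

def G_net(theta, n, N, d, x):
    """G_n^theta from (27): same shape, but the last layer outputs d*d values,
    reshaped into the d x d Hessian approximation."""
    h = R(M(theta, ((3 * N + n) * d + 1) * (d + 1), d, d, x))
    h = R(M(theta, ((4 * N + n) * d + 1) * (d + 1), d, d, h))
    return M(theta, (5 * N * d + n * d * d + 1) * (d + 1), d * d, d, h).reshape(d, d)
```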

The ReLU (Rectified Linear Unit) activation function:

① Expression: $\mathrm{ReLU}(x) = \begin{cases} 0, & x \leq 0 \\ x, & x > 0 \end{cases}$
② Graph: [figure: plot of the ReLU function]

2.6 Stochastic gradient descent-type optimization

A suitable $\theta$ is obtained by applying a stochastic gradient descent-type minimization algorithm to the function

$$\mathbb{R}^\nu \ni \theta \mapsto \mathbb{E}\Big[\big|\mathcal{Y}_N^\theta - g(\xi + W_{t_N})\big|^2\Big] \in \mathbb{R} \tag{28}$$

This loss is motivated by the terminal condition of the 2BSDE (2), which ensures

$$\mathbb{E}\Big[\big|Y_T - g(\xi + W_T)\big|^2\Big] = 0 \tag{29}$$
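An end-to-end training sketch (our illustration, reusing the `Deep2BSDE` module sketched in 2.5; the batch size, step count, learning rate, and the choice of Adam rather than plain SGD are arbitrary): each step draws fresh Brownian increments, runs the forward pass (16)–(17), and descends the Monte Carlo estimate of (28).

```python
# Hypothetical problem data: dimension d, number of steps N, horizon T,
# a torch vector xi, the nonlinearity f, and the terminal condition g
# (mapping a (batch, d) tensor to a (batch,) tensor).
model = Deep2BSDE(d, N)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
t = torch.linspace(0.0, T, N + 1)
sqrt_h = (t[1:] - t[:-1]).sqrt().view(1, N, 1)

for step in range(10000):
    dW = torch.randn(256, N, d) * sqrt_h        # fresh Brownian increments
    Y_N, X_T = model(f, xi, t, dW)
    loss = ((Y_N - g(X_T)) ** 2).mean()         # Monte Carlo estimate of (28)
    opt.zero_grad(); loss.backward(); opt.step()

print(float(model.y0))                           # approximates u(0, xi), cf. (22)
```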

2.7 Framework for the algorithm in a specific case

Reposted from blog.csdn.net/weixin_41857483/article/details/109988696