Deep Learning to Solve Differential Equations, Series 1: The PINN Solution Framework

Below I introduce the physics-informed neural network (PINN) approach to solving differential equations. First, the basic PINN method is described; then a one-dimensional Poisson equation is solved with the PyTorch framework.
Introduction to Physics-Informed Neural Networks (PINN) and Related Papers
Deep Learning to Solve Differential Equations, Series 1: The PINN Solution Framework (1-D Poisson)
Deep Learning to Solve Differential Equations, Series 2: Solving the Forward Problem of the Burgers Equation with PINN
Deep Learning to Solve Differential Equations, Series 3: Solving the Inverse Problem of the Burgers Equation with PINN
Deep Learning to Solve Differential Equations, Series 4: Solving the Inverse Problem of the Burgers Equation with an Adaptive-Activation-Function PINN

1. Introduction to PINN

As a powerful information-processing tool, neural networks have been widely applied in computer vision, biomedicine, and oil and gas engineering, driving technological change across many fields. Deep networks have a strong learning capacity: they can not only discover physical laws but also solve partial differential equations. In recent years, solving partial differential equations with deep learning has become a new research hotspot. The physics-informed neural network (PINN) is an application of scientific machine learning to traditional numerical computation, and it can be used for a variety of problems involving partial differential equations (PDEs), including equation solving, parameter inversion, model discovery, and control and optimization.

2. PINN method

The main idea of PINN is shown in Figure 1: first construct a neural network whose output $\hat{u}$ serves as a surrogate model for the PDE solution, then encode the PDE information as a constraint in the loss function used to train the network.
[Figure 1: schematic of the PINN framework]
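As a concrete illustration, below is a minimal PyTorch sketch of such a surrogate network. The layer sizes and the tanh activation are illustrative assumptions, not prescribed by the method:

```python
import torch
import torch.nn as nn

class PINN(nn.Module):
    """Fully connected network used as a surrogate model for the PDE solution u(x)."""
    def __init__(self, layers=(1, 32, 32, 32, 1)):
        super().__init__()
        blocks = []
        for i in range(len(layers) - 2):
            # Hidden layers use a smooth activation so second derivatives exist.
            blocks += [nn.Linear(layers[i], layers[i + 1]), nn.Tanh()]
        blocks.append(nn.Linear(layers[-2], layers[-1]))
        self.net = nn.Sequential(*blocks)

    def forward(self, x):
        return self.net(x)
```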

The loss function consists of four parts: the PDE residual loss (PDE loss), the boundary-condition loss (BC loss), the initial-condition loss (IC loss), and the observed-data loss (Data loss). Specifically, consider the following PDE problem, whose solution $u(\mathbf{x})$ is defined on $\Omega \subset \mathbb{R}^{d}$, where $\mathbf{x}=\left(x_{1}, \ldots, x_{d}\right)$:
$$f\left(\mathbf{x} ; \frac{\partial u}{\partial x_{1}}, \ldots, \frac{\partial u}{\partial x_{d}} ; \frac{\partial^{2} u}{\partial x_{1} \partial x_{1}}, \ldots, \frac{\partial^{2} u}{\partial x_{1} \partial x_{d}}\right)=0, \quad \mathbf{x} \in \Omega$$

subject to the boundary condition

$$\mathcal{B}(u, \mathbf{x})=0 \quad \text{on} \quad \partial \Omega$$
To measure the discrepancy between the neural network $\hat{u}$ and these constraints, the loss function is defined as:
$$\mathcal{L}\left(\boldsymbol{\theta}\right)=w_{f} \mathcal{L}_{PDE}\left(\boldsymbol{\theta} ; \mathcal{T}_{f}\right)+w_{i} \mathcal{L}_{IC}\left(\boldsymbol{\theta} ; \mathcal{T}_{i}\right)+w_{b} \mathcal{L}_{BC}\left(\boldsymbol{\theta} ; \mathcal{T}_{b}\right)+w_{d} \mathcal{L}_{Data}\left(\boldsymbol{\theta} ; \mathcal{T}_{data}\right)$$
where:
$$\begin{aligned} \mathcal{L}_{PDE}\left(\boldsymbol{\theta} ; \mathcal{T}_{f}\right) &=\frac{1}{\left|\mathcal{T}_{f}\right|} \sum_{\mathbf{x} \in \mathcal{T}_{f}}\left\|f\left(\mathbf{x} ; \frac{\partial \hat{u}}{\partial x_{1}}, \ldots, \frac{\partial \hat{u}}{\partial x_{d}} ; \frac{\partial^{2} \hat{u}}{\partial x_{1} \partial x_{1}}, \ldots, \frac{\partial^{2} \hat{u}}{\partial x_{1} \partial x_{d}}\right)\right\|_{2}^{2} \\ \mathcal{L}_{IC}\left(\boldsymbol{\theta} ; \mathcal{T}_{i}\right) &=\frac{1}{\left|\mathcal{T}_{i}\right|} \sum_{\mathbf{x} \in \mathcal{T}_{i}}\|\hat{u}(\mathbf{x})-u(\mathbf{x})\|_{2}^{2} \\ \mathcal{L}_{BC}\left(\boldsymbol{\theta} ; \mathcal{T}_{b}\right) &=\frac{1}{\left|\mathcal{T}_{b}\right|} \sum_{\mathbf{x} \in \mathcal{T}_{b}}\|\mathcal{B}(\hat{u}, \mathbf{x})\|_{2}^{2} \\ \mathcal{L}_{Data}\left(\boldsymbol{\theta} ; \mathcal{T}_{data}\right) &=\frac{1}{\left|\mathcal{T}_{data}\right|} \sum_{\mathbf{x} \in \mathcal{T}_{data}}\|\hat{u}(\mathbf{x})-u(\mathbf{x})\|_{2}^{2} \end{aligned}$$
Here $w_{f}$, $w_{i}$, $w_{b}$, and $w_{d}$ are the loss weights, and $\mathcal{T}_{f}$, $\mathcal{T}_{i}$, $\mathcal{T}_{b}$, and $\mathcal{T}_{data}$ denote the point sets sampled from the PDE interior, the initial condition, the boundary, and the observed data, respectively. In particular, $\mathcal{T}_{f} \subset \Omega$ is a predefined set of collocation points at which the PDE residual of the network output $\hat{u}$ is measured.
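For the stationary 1-D Poisson problem solved below there is no initial condition and no observed data, so only the PDE and BC terms are active. Here is one way to express those two terms with `torch.autograd`; the function names and the default unit weights are my own illustrative choices:

```python
import torch

def pde_residual(model, x, source):
    """Residual u_xx(x) - source(x) of the 1-D Poisson equation at collocation points x."""
    x = x.clone().requires_grad_(True)
    u = model(x)
    # First and second derivatives of the network output via automatic differentiation.
    u_x = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x), create_graph=True)[0]
    return u_xx - source(x)

def pinn_loss(model, x_f, source, x_b, u_b, w_f=1.0, w_b=1.0):
    """Weighted sum of the PDE loss (mean squared residual) and the BC loss."""
    loss_pde = pde_residual(model, x_f, source).pow(2).mean()
    loss_bc = (model(x_b) - u_b).pow(2).mean()
    return w_f * loss_pde + w_b * loss_bc
```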

3. Problem definition

$$\begin{aligned} \frac{\mathrm{d}^{2} u}{\mathrm{d} x^{2}} &=-0.49 \sin (0.7 x)-2.25 \cos (1.5 x) \\ u(-10) &=-\sin (7)+\cos (15)+1 \\ u(10) &=\sin (7)+\cos (15)-1 \end{aligned}$$
The exact solution is
$$u(x)=\sin (0.7 x)+\cos (1.5 x)-0.1 x$$
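A minimal training sketch for this problem, reusing the `PINN` class and `pinn_loss` from the sketches above. The collocation grid, learning rate, and iteration count are illustrative assumptions; the exact solution is used only to supply the boundary values and to monitor the error:

```python
def source(x):
    # Right-hand side of the equation above.
    return -0.49 * torch.sin(0.7 * x) - 2.25 * torch.cos(1.5 * x)

def exact(x):
    # Analytical solution, used for boundary values and error monitoring.
    return torch.sin(0.7 * x) + torch.cos(1.5 * x) - 0.1 * x

torch.manual_seed(0)
model = PINN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Interior collocation points and the two boundary points with their prescribed values.
x_f = torch.linspace(-10.0, 10.0, 200).reshape(-1, 1)
x_b = torch.tensor([[-10.0], [10.0]])
u_b = exact(x_b)

for step in range(10001):
    optimizer.zero_grad()
    loss = pinn_loss(model, x_f, source, x_b, u_b)
    loss.backward()
    optimizer.step()
    if step % 1000 == 0:
        with torch.no_grad():
            err = (model(x_f) - exact(x_f)).abs().max().item()
        print(f"step {step:5d}  loss {loss.item():.3e}  max error {err:.3e}")
```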

4. Results

[Result figures]

Source: blog.csdn.net/weixin_45521594/article/details/127659979