Tensor Methods for Model Order Reduction

Introduction

An introduction to the POD method can be found here:
https://blog.csdn.net/lusongno1/article/details/125944587

The classical POD method cannot handle nonlinear terms directly, so many extensions have been developed, such as DEIM:
https://blog.csdn.net/lusongno1/article/details/125955245

Suppose the equation we want to solve contains a set of parameters, and each parameter value yields its own snapshot matrix. How should we put together the data obtained under many parameter values as a prior, so that we can build a reduced model at a new parameter value? That is the problem considered in this article: higher-order tensor decomposition together with interpolation in the parameter dimensions. Simply put, if snapshots are already available on a grid of parameter values, then when a new parameter value arrives we do not need to compute new snapshots and a new SVD; we only need to interpolate from the existing parameter values.

This material is somewhat hard to follow; you may want to skim this post first and then read the articles. We omit the meaning of conventional symbols; for any symbol you do not recognize, please consult the references. We only present the method here; for the analysis, proofs, and experiments, please read the paper.

Basic knowledge

Let's start with some basic definitions. A tensor in an algebraic tensor space can be written as a linear combination of elementary tensors:
$$v=\sum_{i=1}^{m} v_{i}^{(1)} \otimes \cdots \otimes v_{i}^{(d)}.$$

Note that there is actually no combination coefficient here.

The minimal subspaces of a given tensor form a family of spaces: for $v \in \bigotimes_{\nu=1}^{d} U_{\nu}$, the minimal subspace $U_{\nu}^{\min}(v)$ is the smallest subspace in the $\nu$-th direction such that the corresponding tensor product space still contains $v$. In other words, these are the smallest component spaces that can wrap $v$.

The rank of a second-order tensor is the smallest $r$ such that
$$u=\sum_{i=1}^{r} s_{i} \otimes v_{i}.$$

For second-order tensors, the singular value decomposition is generally written as follows:
$$u=\sum_{i=1}^{n} \sigma_{i}\, s_{i} \otimes v_{i}.$$

For higher-order tensors, the canonical rank extends the rank of a second-order tensor: it is the smallest $r$ such that
$$v=\sum_{i=1}^{r} v_{i}^{(1)} \otimes \cdots \otimes v_{i}^{(d)}.$$
The so-called $\alpha$-rank is the dimension of the minimal subspace associated with the index set $\alpha$,
$$\operatorname{rank}_{\alpha}(v)=\dim\left(U_{\alpha}^{\min}(v)\right).$$

The Tucker rank is a vector, defined through the dimensions of the minimal subspaces: a tensor has Tucker rank $r=(r_{\nu})_{\nu \in D}$ if, for every single index $\nu$, the dimension of the corresponding minimal subspace is bounded by $r_{\nu}$. The associated tensor set is
$$\mathscr{T}_{r}=\left\{v \in X: \operatorname{rank}_{\nu}(v)=\dim\left(U_{\nu}^{\min}(v)\right) \leq r_{\nu},\ \nu \in D\right\},$$
and an element of this Tucker set can be written as
$$v=\sum_{i_{1}=1}^{r_{1}} \cdots \sum_{i_{d}=1}^{r_{d}} C_{i_{1}, \ldots, i_{d}}\, v_{i_{1}}^{(1)} \otimes \cdots \otimes v_{i_{d}}^{(d)}.$$
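
As a small illustration (not taken from the paper), a Tucker-format tensor can be assembled in numpy from a core tensor and factor matrices whose columns play the role of the $v_{i_\nu}^{(\nu)}$ above; all sizes here are arbitrary placeholders.

```python
import numpy as np

# core tensor C of size r1 x r2 x r3 and factor matrices whose columns
# are the vectors v_{i_nu}^{(nu)} of the Tucker representation
r1, r2, r3 = 2, 3, 2
n1, n2, n3 = 5, 6, 7
C = np.random.rand(r1, r2, r3)
V1 = np.random.rand(n1, r1)
V2 = np.random.rand(n2, r2)
V3 = np.random.rand(n3, r3)

# v = sum_{i1,i2,i3} C_{i1 i2 i3} v_{i1}^{(1)} x v_{i2}^{(2)} x v_{i3}^{(3)}
v = np.einsum('abc,ia,jb,kc->ijk', C, V1, V2, V3)
print(v.shape)   # (5, 6, 7)
```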

The tree-based rank does not only bound the dimension of the minimal subspace of each single index; it bounds the dimension of the minimal subspace of every index set $\alpha$ in a dimension tree $T_D$, that is,
$$\mathscr{B}\mathscr{T}_{r}=\left\{v \in X: \operatorname{rank}_{\alpha}(v)=\dim\left(U_{\alpha}^{\min}(v)\right) \leq r_{\alpha},\ \alpha \in T_{D}\right\}.$$

Note that each index set in the tree gets its own bound: there are as many bounds as there are index sets. The Tucker rank is the special case in which every index set is a singleton.

The TT (tensor-train) rank is also a special case of the tree-based rank: the index sets are ordered and nested, growing from the tail, that is,
$$\mathscr{T}\mathscr{T}_{r}=\left\{v \in X: \operatorname{rank}_{\{k+1, \ldots, d\}}(v) \leq r_{k}\right\}.$$

The following is a brief introduction to the higher-order singular value decomposition (HOSVD). Given an index set $\alpha$, the tensor space can be split into two parts, and the corresponding truncated SVD can be written as
$$u_{\alpha, r_{\alpha}}=\sum_{i=1}^{r_{\alpha}} \sigma_{i}^{(\alpha)}\, u_{i}^{(\alpha)} \otimes u_{i}^{\left(\alpha^{c}\right)}.$$

Both the Tucker format and the tree-based Tucker format of HOSVD are implemented based on projection, so I won’t go into details.

Parametric problems and traditional POD methods

Suppose the problem we are solving is as follows,
$$\mathbf{u}_{t}=F(t, \mathbf{u}, \boldsymbol{\alpha}), \quad t \in(0, T), \quad \text{and} \quad \mathbf{u}|_{t=0}=\mathbf{u}_{0}.$$
We arrange the solutions of this problem into a matrix, called the snapshot matrix,
$$\Phi_{\text{pod}}(\boldsymbol{\alpha})=\left[\boldsymbol{\phi}_{1}(\boldsymbol{\alpha}), \ldots, \boldsymbol{\phi}_{N}(\boldsymbol{\alpha})\right] \in \mathbb{R}^{M \times N},$$
perform a singular value decomposition of the snapshot matrix,
$$\Phi_{\mathrm{pod}}(\boldsymbol{\alpha})=\mathrm{U} \Sigma \mathrm{V}^{T},$$
and take the first few left singular vectors as a basis of the reduced subspace; these are the POD basis vectors.
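
As a minimal numpy sketch (the variable names are mine, not the paper's), the classic POD basis is simply the truncated left singular factor of the snapshot matrix:

```python
import numpy as np

def pod_basis(Phi, n_pod):
    """Classic POD: the first n_pod left singular vectors of the snapshot
    matrix Phi, whose columns are the solution snapshots phi_1, ..., phi_N."""
    U, S, Vt = np.linalg.svd(Phi, full_matrices=False)  # thin SVD
    return U[:, :n_pod], S

# usage with random data standing in for actual snapshots
M, N, n_pod = 200, 40, 5                     # M spatial dofs, N time snapshots
Phi = np.random.rand(M, N)
Z, sigma = pod_basis(Phi, n_pod)
assert np.allclose(Z.T @ Z, np.eye(n_pod))   # the POD basis is orthonormal
```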

Higher-order tensor processing for parametric dynamical systems

Now suppose the parameter above is not a single one but a vector of several parameters. How should we then perform the SVD, and how should we obtain the POD basis?

Symbol definitions

We assume that the parameters lie in a $D$-dimensional box,
$$\mathcal{A}=\bigotimes_{i=1}^{D}\left[\alpha_{i}^{\min}, \alpha_{i}^{\max}\right].$$
We can cut this high-dimensional box several times in all directions to get some node sets,
$$\widehat{\mathcal{A}}=\left\{\widehat{\boldsymbol{\alpha}}=\left(\widehat{\alpha}_{1}, \ldots, \widehat{\alpha}_{D}\right)^{T}: \widehat{\alpha}_{i} \in\left\{\widehat{\alpha}_{i}^{j}\right\}_{j=1, \ldots, n_{i}},\ i=1, \ldots, D\right\}.$$
The total number of nodes is,
$$K=\prod_{i=1}^{D} n_{i}.$$
Each node is a parameter value, and at each parameter value we can compute the snapshots, obtaining a matrix. All these matrices are stacked together into a higher-order tensor,
$$(\boldsymbol{\Phi})_{:, i_{1}, \ldots, i_{D}, k}=\boldsymbol{\phi}_{k}\left(\widehat{\alpha}_{1}^{i_{1}}, \ldots, \widehat{\alpha}_{D}^{i_{D}}\right).$$
The definition of the Frobenius norm of the tensor is similar to that of the matrix,
$$\|\boldsymbol{\Phi}\|_{F}:=\left(\sum_{j=1}^{M} \sum_{i_{1}=1}^{n_{1}} \cdots \sum_{i_{D}=1}^{n_{D}} \sum_{k=1}^{N} \boldsymbol{\Phi}_{j, i_{1}, \ldots, i_{D}, k}^{2}\right)^{1/2}.$$
Under this definition, we assume a low-rank approximation $\widetilde{\boldsymbol{\Phi}}$ of the tensor satisfying
$$\|\widetilde{\boldsymbol{\Phi}}-\boldsymbol{\Phi}\|_{F} \leq \widetilde{\varepsilon}\, \|\boldsymbol{\Phi}\|_{F}.$$
For a tensor, we can define multiplication by a vector along a given mode (direction) as follows:
$$\left(\boldsymbol{\Psi} \times_{k} \mathbf{a}\right)_{j_{1}, \ldots, j_{k-1}, j_{k+1}, \ldots, j_{m}}=\sum_{j_{k}=1}^{N_{k}} \boldsymbol{\Psi}_{j_{1}, \ldots, j_{m}}\, a_{j_{k}}.$$

Here $\times_{k}$ denotes a contraction (linear combination) along the $k$-th direction.
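
Here is a small numpy sketch of this mode-$k$ tensor-vector product (0-based axes, so mode $k$ in the text corresponds to axis $k-1$); the usage example selects a slice with unit vectors, anticipating the $\mathbf{e}^{i}$ vectors defined below. All sizes are arbitrary.

```python
import numpy as np

def mode_k_vec(Psi, a, k):
    """Contract tensor Psi with vector a along axis k (0-based); the result
    loses that axis, matching (Psi x_k a) in the text."""
    return np.tensordot(Psi, a, axes=([k], [0]))

# usage: pick out a slice of a 4-way tensor with unit vectors
Phi = np.random.rand(6, 3, 4, 5)   # shape (M, n1, n2, N)
e1 = np.zeros(3); e1[1] = 1.0      # selects the 2nd node in parameter direction 1
e2 = np.zeros(4); e2[2] = 1.0      # selects the 3rd node in parameter direction 2
# after the first contraction the old axis 2 becomes axis 1
Phi_e = mode_k_vec(mode_k_vec(Phi, e1, 1), e2, 1)
print(Phi_e.shape)                 # (6, 5): an M x N snapshot matrix
assert np.allclose(Phi_e, Phi[:, 1, 2, :])
```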

We define the vectors $\mathbf{e}^{i}(\widehat{\boldsymbol{\alpha}})=\left(e_{1}^{i}(\widehat{\boldsymbol{\alpha}}), \ldots, e_{n_{i}}^{i}(\widehat{\boldsymbol{\alpha}})\right)^{T} \in \mathbb{R}^{n_{i}}$, $i=1, \ldots, D$, by
$$e_{j}^{i}(\widehat{\boldsymbol{\alpha}})=\begin{cases} 1 & \text{if } \widehat{\alpha}_{i}=\widehat{\alpha}_{i}^{j}, \\ 0 & \text{otherwise}, \end{cases} \qquad j=1, \ldots, n_{i}.$$
In other words, $\mathbf{e}^{i}$ records which grid value the $i$-th component of $\widehat{\boldsymbol{\alpha}}=\left(\widehat{\alpha}_{1}, \ldots, \widehat{\alpha}_{D}\right)^{T}$ takes: the corresponding entry of $\mathbf{e}^{i}$ is 1 and all others are zero.
With the above symbol definition, we can extract the snapshot on each parameter node as follows:
$$\Phi_{e}(\widehat{\boldsymbol{\alpha}})=\boldsymbol{\Phi} \times_{2} \mathbf{e}^{1}(\widehat{\boldsymbol{\alpha}}) \times_{3} \mathbf{e}^{2}(\widehat{\boldsymbol{\alpha}}) \cdots \times_{D+1} \mathbf{e}^{D}(\widehat{\boldsymbol{\alpha}}) \in \mathbb{R}^{M \times N}.$$
Similarly, for the approximation tensor, we can also extract in the same way,
$$\widetilde{\Phi}_{e}(\widehat{\boldsymbol{\alpha}})=\widetilde{\boldsymbol{\Phi}} \times_{2} \mathbf{e}^{1}(\widehat{\boldsymbol{\alpha}}) \times_{3} \mathbf{e}^{2}(\widehat{\boldsymbol{\alpha}}) \cdots \times_{D+1} \mathbf{e}^{D}(\widehat{\boldsymbol{\alpha}}) \in \mathbb{R}^{M \times N}.$$
Let $\tilde{N}=\operatorname{rank}\left(\widetilde{\Phi}_{e}(\widehat{\boldsymbol{\alpha}})\right)$ and let $\mathbf{z}_{1}, \ldots, \mathbf{z}_{\tilde{N}}$ be an orthonormal basis of its range. Then, in other words,
$$\sum_{i=1}^{N}\left\|\boldsymbol{\phi}_{i}-\sum_{j=1}^{\tilde{N}}\left\langle\boldsymbol{\phi}_{i}, \mathbf{z}_{j}\right\rangle \mathbf{z}_{j}\right\|_{\ell^{2}}^{2} \leq \widetilde{\varepsilon}^{2}\|\boldsymbol{\Phi}\|_{F}^{2}.$$

Similarly to how $\mathbf{e}^{i}$ was defined above, we can also define, for the $i$-th dimension of the parameter box, the Lagrange interpolation nodal basis functions
$$\mathbf{e}^{i}: \boldsymbol{\alpha} \rightarrow \mathbb{R}^{n_{i}}, \quad i=1, \ldots, D,$$
$$e_{j}^{i}(\boldsymbol{\alpha})= \begin{cases}\displaystyle\prod_{m=1, m \neq k}^{p}\left(\widehat{\alpha}_{i}^{i_{m}}-\alpha_{i}\right) \Big/ \prod_{m=1, m \neq k}^{p}\left(\widehat{\alpha}_{i}^{i_{m}}-\widehat{\alpha}_{i}^{j}\right), & \text{if } j=i_{k} \in\left\{i_{1}, \ldots, i_{p}\right\}, \\ 0, & \text{otherwise}.\end{cases}$$
By the definition of polynomial interpolation, a function is then interpolated at these nodes as
$$g\left(\alpha_{i}\right) \approx \sum_{j=1}^{n_{i}} e_{j}^{i}(\boldsymbol{\alpha})\, g\left(\widehat{\alpha}_{i}^{j}\right), \quad i=1, \ldots, D.$$
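
A small sketch of evaluating the Lagrange nodal basis vector for one parameter dimension, using all grid nodes of that dimension as interpolation nodes (the formulas above allow using only a subset $\{i_1,\ldots,i_p\}$, which this sketch does not reproduce); the names are illustrative.

```python
import numpy as np

def lagrange_basis(nodes, alpha_i):
    """Evaluate the Lagrange nodal basis functions e_j(alpha_i) on the 1-D grid
    `nodes`; entry j is the j-th cardinal polynomial evaluated at alpha_i."""
    nodes = np.asarray(nodes, dtype=float)
    e = np.ones_like(nodes)
    for j in range(len(nodes)):
        others = np.delete(nodes, j)
        e[j] = np.prod((alpha_i - others) / (nodes[j] - others))
    return e

# usage: the basis reproduces a smooth function at a new point
nodes = np.array([0.0, 0.5, 1.0, 1.5])
alpha_i = 0.8
e = lagrange_basis(nodes, alpha_i)
g = np.sin(nodes)                  # g evaluated at the grid nodes
print(e @ g, np.sin(alpha_i))      # interpolated value vs. exact value
```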
Under these definitions, for the high-dimensional tensor we can interpolate along the dimensions corresponding to the parameters and obtain a matrix-valued function of the parameter,
$$\widetilde{\Phi}_{e}(\boldsymbol{\alpha})=\widetilde{\boldsymbol{\Phi}} \times_{2} \mathbf{e}^{1}(\boldsymbol{\alpha}) \times_{3} \mathbf{e}^{2}(\boldsymbol{\alpha}) \cdots \times_{D+1} \mathbf{e}^{D}(\boldsymbol{\alpha}) \in \mathbb{R}^{M \times N}.$$
Of course, we may also use only a subset of the grid nodes in each dimension of the parameter box, for example
$$\widehat{\mathcal{A}}_{p}:=\left\{\widehat{\boldsymbol{\alpha}}=\left(\widehat{\alpha}_{1}, \ldots, \widehat{\alpha}_{D}\right)^{T}: \widehat{\alpha}_{i} \in\left\{\widehat{\alpha}_{i}^{j}\right\}_{j \in\left\{i_{1}, \ldots, i_{p}\right\}},\ i=1, \ldots, D\right\} \subset \widehat{\mathcal{A}},$$
and similar results still hold. After these preparations, let me introduce how the POD basis is chosen in the three formats. I will not go into fine detail; the goal is simply to make the algorithms clear enough to program. In fact, the core idea of the three methods is the same: first decompose the snapshot tensor in some format, then interpolate the parameter part of the factors to obtain a core matrix, and finally compute a singular value decomposition of the core matrix and take the first $n$ left singular vectors.

CP-TROM

Canonical polyadic (CP) decomposition is the most basic tensor decomposition method. The tensor approximation under this decomposition formulation is simply the following process.

Offline stage:
First, we solve the problem with some traditional numerical method at all offline parameter nodes and collect the solutions into the snapshot tensor (for brevity simply called the snapshot below). For this snapshot tensor, given a target canonical rank $R$, we use the alternating least squares (ALS) algorithm to compute its low-rank CP decomposition,
$$\boldsymbol{\Phi} \approx \widetilde{\boldsymbol{\Phi}}=\sum_{r=1}^{R} \mathbf{u}^{r} \circ \boldsymbol{\sigma}_{1}^{r} \circ \cdots \circ \boldsymbol{\sigma}_{D}^{r} \circ \mathbf{v}^{r}.$$

For the ALS algorithm, see: Harshman R A. Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multimodal factor analysis[J]. 1970.

Second, we collect the factor vectors $\mathbf{u}^{r}$ and $\mathbf{v}^{r}$ from the decomposition into matrices,
$$\widehat{\mathrm{U}}=\left[\mathbf{u}^{1}, \ldots, \mathbf{u}^{R}\right]\in \mathbb{R}^{M\times R}, \quad \widehat{\mathrm{V}}=\left[\mathbf{v}^{1}, \ldots, \mathbf{v}^{R}\right]\in \mathbb{R}^{N\times R}.$$
Finally, we compute QR factorizations of these two matrices,
$$\widehat{\mathrm{U}}=\mathrm{U}\mathrm{R}_{U}, \quad \widehat{\mathrm{V}}=\mathrm{V}\mathrm{R}_{V}.$$
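
A minimal sketch of these last two offline steps, assuming the CP factor matrices $\widehat{\mathrm{U}}$ and $\widehat{\mathrm{V}}$ have already been produced by an ALS routine (the ALS step itself is not shown); the variable names and sizes are placeholders.

```python
import numpy as np

# assumed already computed by an ALS-type CP decomposition of the snapshot tensor:
# U_hat[:, r] = u^r (spatial factors), V_hat[:, r] = v^r (temporal factors)
M, N, R = 200, 40, 10
U_hat = np.random.rand(M, R)
V_hat = np.random.rand(N, R)

# thin QR factorizations, kept for the online stage
U, R_U = np.linalg.qr(U_hat)   # U: M x R with orthonormal columns
V, R_V = np.linalg.qr(V_hat)   # V: N x R with orthonormal columns
```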

Online stage:
We are given a reduced space dimension $n \leq R$ and a parameter vector $\boldsymbol{\alpha} \in \mathcal{A}$.
First, we interpolate the tensor in each parameter dimension at the point $\boldsymbol{\alpha}$. For the CP decomposition this simply means evaluating the nodal basis functions of each dimension at $\boldsymbol{\alpha}$, taking their inner products with the parameter factor vectors, and multiplying the resulting scalars, that is,
$$\widetilde{\Phi}_{e}(\boldsymbol{\alpha})=\sum_{r=1}^{R} s_{r}\, \mathbf{u}^{r} \circ \mathbf{v}^{r} \in \mathbb{R}^{M \times N}, \quad \text{with } s_{r}=\prod_{i=1}^{D}\left\langle\boldsymbol{\sigma}_{i}^{r}, \mathbf{e}^{i}(\boldsymbol{\alpha})\right\rangle \in \mathbb{R}.$$
Next, we arrange these $s_r$ into a diagonal matrix $\mathrm{S}(\boldsymbol{\alpha})=\operatorname{diag}\left(s_{1}, \ldots, s_{R}\right)$ and construct the core matrix
$$\mathrm{C}(\boldsymbol{\alpha})=\mathrm{R}_{U}\, \mathrm{S}(\boldsymbol{\alpha})\, \mathrm{R}_{V}^{T}.$$
Then we perform a singular value decomposition of the core matrix,
$$\mathrm{C}(\boldsymbol{\alpha})=\mathrm{U}_{c}\, \Sigma_{c}\, \mathrm{V}_{c}^{T}.$$
Finally, we obtain the singular value decomposition at the online parameter,
$$\widetilde{\Phi}_{e}(\boldsymbol{\alpha})=\mathrm{U}\,\mathrm{C}(\boldsymbol{\alpha})\, \mathrm{V}^{T}=\left(\mathrm{U}\mathrm{U}_{c}\right) \Sigma_{c}\left(\mathrm{V}\mathrm{V}_{c}\right)^{T}.$$
The coordinate vectors $\left\{\boldsymbol{\beta}_{i}(\boldsymbol{\alpha})\right\}_{i=1}^{n} \subset \mathbb{R}^{R}$ of the POD basis can then be taken directly as the first $n$ columns of $\mathrm{U}_{c}$.
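
A sketch of the CP-TROM online stage, assuming the offline quantities `U, V, R_U, R_V`, the parameter factor matrices (one $n_i \times R$ matrix per parameter direction, with columns $\boldsymbol{\sigma}_i^r$) and the evaluated interpolation vectors $\mathbf{e}^{i}(\boldsymbol{\alpha})$ are available; the function and variable names are illustrative.

```python
import numpy as np

def cp_online(U, V, R_U, R_V, sigma_list, e_list, n):
    """CP-TROM online stage: interpolate the parameter factors, build the core
    matrix C(alpha), take its SVD, and return the first n local POD vectors,
    the leading singular values, and the coordinates beta_i(alpha)."""
    R = U.shape[1]
    s = np.ones(R)
    for Sig_i, e_i in zip(sigma_list, e_list):   # Sig_i: n_i x R, e_i: e^i(alpha)
        s *= Sig_i.T @ e_i                       # s_r = prod_i <sigma_i^r, e^i(alpha)>
    C = R_U @ np.diag(s) @ R_V.T                 # core matrix, R x R
    Uc, Sc, Vct = np.linalg.svd(C)               # SVD of the core matrix
    beta = Uc[:, :n]                             # coordinates of the POD basis in U
    return U @ beta, Sc[:n], beta                # POD vectors in R^M, singular values, beta
```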

HOSVD-TROM

Compared with the CP decomposition, the higher-order singular value decomposition (HOSVD) comes with guaranteed quasi-optimality bounds, namely
$$\|\boldsymbol{\Phi}-\widetilde{\boldsymbol{\Phi}}\| \leq \sqrt{D+2}\left\|\boldsymbol{\Phi}-\boldsymbol{\Phi}^{\mathrm{opt}}\right\| \quad \text{and} \quad \|\boldsymbol{\Phi}-\widetilde{\boldsymbol{\Phi}}\| \leq\left(\sum_{i=1}^{D+1} \Delta_{i}^{2}\right)^{\frac{1}{2}}.$$
Offline stage:
We are given the snapshot tensor and a target accuracy $\widetilde{\varepsilon}$.
The offline stage computes the HOSVD to obtain the factor matrices and the coefficient (core) tensor. Many algorithms can be used, for example:
De Lathauwer L, De Moor B, Vandewalle J. A multilinear singular value decomposition[J]. SIAM journal on Matrix Analysis and Applications, 2000, 21(4): 1253-1278.

We collect the computed factors into matrices,
$$\mathrm{U}=\left[\mathbf{u}^{1}, \ldots, \mathbf{u}^{\widetilde{M}}\right] \in \mathbb{R}^{M \times \widetilde{M}}, \quad \mathrm{V}=\left[\mathbf{v}^{1}, \ldots, \mathbf{v}^{\widetilde{N}}\right] \in \mathbb{R}^{N \times \widetilde{N}},$$
$$\mathrm{S}_{i}=\left[\boldsymbol{\sigma}_{i}^{1}, \ldots, \boldsymbol{\sigma}_{i}^{\widetilde{n}_{i}}\right]^{T} \in \mathbb{R}^{\widetilde{n}_{i} \times n_{i}}, \quad i=1, \ldots, D.$$

Note that the expression for $\mathrm{S}_{i}$ contains a transpose.
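
For concreteness, here is a basic truncated HOSVD sketch in numpy (an SVD of each mode unfolding followed by projection of the tensor onto the retained singular vectors). This is the textbook construction, not necessarily the exact algorithm of the reference above, and the ranks are placeholders. In the notation of this section, the first factor plays the role of $\mathrm{U}$, the last one the role of $\mathrm{V}$, and the middle factors are the transposes $\mathrm{S}_i^T$ (hence the transpose noted above).

```python
import numpy as np

def hosvd(Phi, ranks):
    """Truncated HOSVD: Phi ~ C x_1 U_1 x_2 U_2 ... with orthonormal factor
    matrices U_k (first ranks[k] left singular vectors of the mode-k unfolding)
    and core tensor C obtained by projecting Phi onto them."""
    factors = []
    for k, r in enumerate(ranks):
        # mode-k unfolding: move axis k to the front and flatten the rest
        unfold = np.moveaxis(Phi, k, 0).reshape(Phi.shape[k], -1)
        Uk, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(Uk[:, :r])
    C = Phi.copy()
    for k, Uk in enumerate(factors):
        # project mode k of the core onto the factor: C <- C x_k Uk^T
        C = np.moveaxis(np.tensordot(Uk.T, C, axes=([1], [k])), 0, k)
    return C, factors

# usage on a small snapshot tensor of shape (M, n1, n2, N)
Phi = np.random.rand(8, 5, 6, 7)
C, factors = hosvd(Phi, ranks=[4, 3, 3, 4])
print(C.shape)   # (4, 3, 3, 4)
```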

Online stage:
We are given the reduced space dimension $n \leq \min \{\widetilde{M}, \widetilde{N}\}$ and the online parameter $\boldsymbol{\alpha} \in \mathcal{A}$.
First, as in the CP case, the parameter modes are interpolated to obtain the core matrix,
$$\mathrm{C}_{e}(\boldsymbol{\alpha})=\mathbf{C} \times_{2}\left(\mathrm{S}_{1} \mathbf{e}^{1}(\boldsymbol{\alpha})\right) \times_{3}\left(\mathrm{S}_{2} \mathbf{e}^{2}(\boldsymbol{\alpha})\right) \cdots \times_{D+1}\left(\mathrm{S}_{D} \mathbf{e}^{D}(\boldsymbol{\alpha})\right) \in \mathbb{R}^{\widetilde{M} \times \widetilde{N}},$$
so that, in fact,
$$\widetilde{\Phi}_{e}(\boldsymbol{\alpha}) = \mathrm{U}\,\mathrm{C}_{e}(\boldsymbol{\alpha})\, \mathrm{V}^{T}.$$
Second, we compute the SVD of the core matrix,
$$\mathrm{C}_{e}(\boldsymbol{\alpha})=\mathrm{U}_{c}\, \Sigma_{c}\, \mathrm{V}_{c}^{T}.$$
Finally, we obtain the singular value decomposition of the interpolated matrix (recall that $\mathrm{U}$ and $\mathrm{V}$ already have orthonormal columns),
$$\widetilde{\Phi}_{e}(\boldsymbol{\alpha})=\left(\mathrm{U}\mathrm{U}_{c}\right) \Sigma_{c}\left(\mathrm{V}\mathrm{V}_{c}\right)^{T}.$$
At this point, $\left\{\boldsymbol{\beta}_{i}(\boldsymbol{\alpha})\right\}_{i=1}^{n} \subset\mathbb{R}^{\widetilde{M}}$ can be taken as the first $n$ columns of $\mathrm{U}_{c}$.
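
A sketch of the HOSVD-TROM online stage, assuming the offline core tensor `C` (of shape $\widetilde{M} \times \widetilde{n}_1 \times \cdots \times \widetilde{n}_D \times \widetilde{N}$), the matrices `U`, `V`, the list of $\mathrm{S}_i$ matrices, and the evaluated interpolation vectors are available; names are illustrative.

```python
import numpy as np

def hosvd_online(C, U, V, S_list, e_list, n):
    """HOSVD-TROM online stage: contract the parameter modes of the core tensor
    with S_i e^i(alpha), SVD the resulting core matrix, and return the first n
    POD vectors (columns of U @ Uc), singular values, and coordinates."""
    Ce = C
    for S_i, e_i in zip(S_list, e_list):
        # C_e <- C_e x_2 (S_i e^i(alpha)); after each contraction the next
        # parameter mode again sits at axis 1
        Ce = np.tensordot(Ce, S_i @ e_i, axes=([1], [0]))
    # Ce is now the M~ x N~ core matrix C_e(alpha)
    Uc, Sc, Vct = np.linalg.svd(Ce)
    return U @ Uc[:, :n], Sc[:n], Uc[:, :n]
```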

TT-TROM

Tensor Train decomposition is a special tree-based decomposition.

Offline stage:
Given a snapshot tensor and target precision.
Using the algorithm described in the paper
Oseledets I V. Tensor-train decomposition[J]. SIAM Journal on Scientific Computing, 2011, 33(5): 2295-2317.
we compute, for the given target accuracy, the TT decomposition of the snapshot tensor,
$$\boldsymbol{\Phi} \approx \widetilde{\boldsymbol{\Phi}}=\sum_{j_{1}=1}^{\widetilde{r}_{1}} \cdots \sum_{j_{D+1}=1}^{\widetilde{r}_{D+1}} \mathbf{u}^{j_{1}} \circ \boldsymbol{\sigma}_{1}^{j_{1}, j_{2}} \circ \cdots \circ \boldsymbol{\sigma}_{D}^{j_{D}, j_{D+1}} \circ \mathbf{v}^{j_{D+1}}.$$
As before, we assemble the computed quantities,
$$\mathrm{U}=\left[\mathbf{u}^{1}, \ldots, \mathbf{u}^{\widetilde{r}_{1}}\right] \in \mathbb{R}^{M \times \widetilde{r}_{1}}, \quad \mathrm{V}=\left[\mathbf{v}^{1}, \ldots, \mathbf{v}^{\widetilde{r}_{D+1}}\right] \in \mathbb{R}^{N \times \widetilde{r}_{D+1}},$$
$$\left(\mathbf{S}_{i}\right)_{jkq}=\left(\boldsymbol{\sigma}_{i}^{jq}\right)_{k}, \quad j=1, \ldots, \widetilde{r}_{i}, \quad k=1, \ldots, n_{i}, \quad q=1, \ldots, \widetilde{r}_{i+1}.$$
Note that the parameter factors of each dimension are assembled into a three-dimensional tensor $\mathbf{S}_{i}$.

Also, we place the norms of the columns of $\mathrm{V}$ on a diagonal to construct the scaling matrix
$$\mathrm{W}_{c}=\operatorname{diag}\left(\left\|\mathbf{v}^{1}\right\|, \ldots,\left\|\mathbf{v}^{\widetilde{r}_{D+1}}\right\|\right) \in \mathbb{R}^{\widetilde{r}_{D+1} \times \widetilde{r}_{D+1}}.$$

Online stage:
We are given the reduced space dimension and an online parameter vector.
The first step is to interpolate the parameter cores and multiply the resulting matrices together to obtain the core matrix:
$$\mathrm{C}_{e}(\boldsymbol{\alpha})=\prod_{i=1}^{D}\left(\mathbf{S}_{i} \times_{2} \mathbf{e}^{i}(\boldsymbol{\alpha})\right),$$

so that
$$\widetilde{\Phi}_{e}(\boldsymbol{\alpha}) = \mathrm{U}\,\mathrm{C}_{e}(\boldsymbol{\alpha})\, \mathrm{V}^{T}.$$
The second step is to perform a singular value decomposition of the scaled core matrix:
$$\mathrm{C}_{e}(\boldsymbol{\alpha})\, \mathrm{W}_{c}=\mathrm{U}_{c}\, \Sigma_{c}\, \mathrm{V}_{c}^{T}.$$
The third step is to obtain the singular value decomposition of the interpolated matrix:
$$\widetilde{\Phi}_{e}(\boldsymbol{\alpha})=\mathrm{U}\,\mathrm{C}_{e}(\boldsymbol{\alpha})\, \mathrm{W}_{c}\, \mathrm{W}_{c}^{-1}\, \mathrm{V}^{T}=\left(\mathrm{U}\mathrm{U}_{c}\right) \Sigma_{c}\left(\mathrm{V}\mathrm{W}_{c}^{-1} \mathrm{V}_{c}\right)^{T}.$$
The fourth step is to take the POD coordinates $\left\{\boldsymbol{\beta}_{i}(\boldsymbol{\alpha})\right\}_{i=1}^{n} \subset \mathbb{R}^{\widetilde{r}_{1}}$ as the first $n$ columns of $\mathrm{U}_{c}$.
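
A sketch of the TT-TROM online stage, assuming the offline quantities `U` ($M \times \widetilde{r}_1$), `V` ($N \times \widetilde{r}_{D+1}$), the 3-D parameter cores $\mathbf{S}_i$ (each of shape $\widetilde{r}_i \times n_i \times \widetilde{r}_{i+1}$), the scaling matrix $\mathrm{W}_c$, and the evaluated interpolation vectors are available; names are illustrative.

```python
import numpy as np

def tt_online(U, V, S_list, Wc, e_list, n):
    """TT-TROM online stage: contract each 3-D parameter core with e^i(alpha),
    multiply the resulting matrices into the core matrix C_e(alpha), SVD the
    scaled core, and return the first n POD vectors (columns of U @ Uc)."""
    Ce = np.eye(S_list[0].shape[0])          # r~_1 x r~_1 identity starts the product
    for S_i, e_i in zip(S_list, e_list):
        # S_i x_2 e^i(alpha): contract the middle (parameter) index, which gives
        # an r~_i x r~_{i+1} matrix; accumulate the matrix product
        Ce = Ce @ np.tensordot(S_i, e_i, axes=([1], [0]))
    Uc, Sc, Vct = np.linalg.svd(Ce @ Wc)     # SVD of the scaled core matrix
    return U @ Uc[:, :n], Sc[:n], Uc[:, :n]
```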

General parameter sampling

The methods above assume snapshots on a grid in parameter space, which is not very economical and becomes awkward when the parameter region is not a box. In that case, we can express the online parameter as a linear combination of the sampled parameters,
$$\boldsymbol{\alpha}=\sum_{j=1}^{K} a_{j}\, \widehat{\boldsymbol{\alpha}}_{j}, \quad \mathbf{e}(\boldsymbol{\alpha})=\left(a_{1}, \ldots, a_{K}\right)^{T},$$
where $\mathbf{e}$ is the map that extracts the coefficients,
$$\mathbf{e}: \boldsymbol{\alpha} \rightarrow \mathbb{R}^{K}.$$
We assume that smooth functions approximately satisfy the corresponding linearity,
$$g(\boldsymbol{\alpha}) \approx \sum_{j=1}^{K} a_{j}\, g\left(\widehat{\boldsymbol{\alpha}}_{j}\right).$$
Then, from the offline parameters, we can construct a third-order snapshot tensor,
$$(\boldsymbol{\Phi})_{ijk}=u_{i}\left(t_{k}, \widehat{\boldsymbol{\alpha}}_{j}\right), \quad i=1, \ldots, M, \quad j=1, \ldots, K, \quad k=1, \ldots, N.$$
When a new parameter is introduced, it is expressed as a linear combination of the existing parameters, and the corresponding snapshot is just the same linear combination of the existing snapshots:
$$\widetilde{\Phi}_{e}(\boldsymbol{\alpha})=\widetilde{\boldsymbol{\Phi}} \times_{2} \mathbf{e}(\boldsymbol{\alpha}).$$

Is this reasonable?

Simply put, $\mathbf{e}(\boldsymbol{\alpha})$ changes from an interpolation map into a general linear-combination map, and the CP, HOSVD, and TT versions proceed in the same way as before.
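
As an illustration only: the choice of the coefficients $a_j$ is not specified here, so the sketch below uses one hypothetical rule (a least-squares fit with a softly enforced affine constraint) and then extracts the approximate snapshot by the mode-2 contraction above. The coefficient rule, names, and sizes are all assumptions, not the paper's prescription.

```python
import numpy as np

def combination_coeffs(A_hat, alpha, weight=1e3):
    """One hypothetical choice of e(alpha): least-squares coefficients a with a
    softly enforced constraint sum(a) = 1, so that sum_j a_j alpha_hat_j is
    close to alpha.  A_hat holds the K sampled parameter vectors as columns."""
    D, K = A_hat.shape
    A = np.vstack([A_hat, weight * np.ones((1, K))])   # append the constraint row
    b = np.concatenate([alpha, [weight]])
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

# usage: extract the (approximate) snapshot at a new parameter value
K, M, N = 12, 50, 20
A_hat = np.random.rand(3, K)                  # offline parameter samples, one per column
Phi = np.random.rand(M, K, N)                 # third-order snapshot tensor (Phi)_{ijk}
alpha = A_hat @ np.full(K, 1.0 / K)           # a new parameter inside the sampled set
a = combination_coeffs(A_hat, alpha)
Phi_e = np.tensordot(Phi, a, axes=([1], [0])) # Phi x_2 e(alpha), an M x N matrix
```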

References

1. Model reduction and approximation: theory and algorithms[M]. Society for Industrial and Applied Mathematics, 2017.
2. Mamonov A V, Olshanskii M A. Interpolatory tensorial reduced order models for parametric dynamical systems[J]. Computer Methods in Applied Mechanics and Engineering, 2022, 397: 115122.
