[GCN] GCN study notes 1

Spectral domain graph convolution

Convolution

Convolution definition

Convolution is an important operation in mathematical analysis. Suppose $f(x)$ and $g(x)$ are integrable functions on $\mathbb{R}$; the continuous form of convolution is defined as follows:

$$(f \star g)(x) = \int_{-\infty}^{\infty} f(\tau)\, g(x - \tau)\, d\tau \tag{1}$$

Different functions and different convolution kernels can produce different convolution results.

Discrete space convolution

$$y_n = (x \star w)_n = \sum_{k=1}^{K} w_k\, x_{n-k} \tag{2}$$
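As a quick check of equation (2), here is a minimal NumPy sketch; the signal and kernel values are made up for illustration, and the result is compared against `np.convolve`.

```python
import numpy as np

# Toy signal x and convolution kernel w (values chosen arbitrarily for illustration).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([0.5, 0.3, 0.2])

# Direct implementation of y_n = sum_k w_k * x_{n-k} for the valid positions.
K = len(w)
y = np.array([sum(w[k] * x[n - k] for k in range(K)) for n in range(K - 1, len(x))])

# np.convolve computes the same sliding sum, so the two results should match.
print(y)
print(np.convolve(x, w, mode="valid"))
```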

Introduction to graph convolution

Classical convolutional networks cannot directly handle graph-structured data. There are two main approaches to implementing graph convolution:

  • Spectral domain graph convolution
  • Spatial domain graph convolution

Convolution theorem

The Fourier transform of the convolution of two signals in the spatial domain equals the element-wise (dot) product of the Fourier transforms of the two signals in the frequency domain, where the dot product means multiplying corresponding elements of two matrices, vectors, or sequences. That is:
$$\mathcal{F}[f_1(t) \star f_2(t)] = F_1(\omega) \cdot F_2(\omega) \tag{3}$$

  • $f_1(t)$ and $f_2(t)$ are the two signals in the spatial domain
  • $F_1(\omega)$ and $F_2(\omega)$ are the two signals in the frequency domain
  • $\star$ represents the convolution operation
  • $\mathcal{F}$ represents the Fourier transform
  • $\cdot$ represents the element-wise product

It can also be written as:
$$f_1(t) \star f_2(t) = \mathcal{F}^{-1}[F_1(\omega) \cdot F_2(\omega)] \tag{4}$$

  • $\mathcal{F}^{-1}$ represents the inverse Fourier transform
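The theorem can be checked numerically. Here is a minimal sketch with NumPy, assuming circular (periodic) convolution so that the DFT version of the theorem holds exactly; the signals are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
f1 = rng.standard_normal(8)
f2 = rng.standard_normal(8)

# Circular convolution in the "spatial" domain.
conv = np.array([sum(f1[m] * f2[(n - m) % 8] for m in range(8)) for n in range(8)])

# Convolution theorem: the FFT of the convolution equals the element-wise product
# of the FFTs, so filtering in the frequency domain and transforming back
# reproduces the spatial-domain convolution.
conv_via_fft = np.fft.ifft(np.fft.fft(f1) * np.fft.fft(f2)).real

print(np.allclose(conv, conv_via_fft))  # True
```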

Implementation ideas of spectral domain graph convolution

  • Apply the Fourier transform to the signal $x$ and the convolution kernel $w$ to obtain the frequency-domain signals $X$ and $W$
  • Multiply the frequency-domain signals $X$ and $W$ element-wise to obtain the frequency-domain signal $Y$
  • Apply the inverse Fourier transform to $Y$ to obtain the spatial-domain signal $y$
  • $y$ is the result of convolving the signal $x$ with the kernel $w$

How to define the Fourier transform on a graph

Based on graph theory, the Fourier transform on the graph is defined using the Graph Fourier Transform (GFT). The definition of GFT is as follows:
$$\mathcal{F}_G(x) = U^T x \tag{5}$$

  • $x$ is the signal on the graph
  • $U$ is the eigenvector matrix of the graph Laplacian

Written as a summation over components (the inverse transform):
$$x(i) = \sum_{l=1}^{N} u_l(i)\, \hat{x}(\lambda_l) \tag{6}$$

  • $x(i)$ is the value of the signal $x$ at node $i$
  • $u_l(i)$ is the $i$-th element of the $l$-th column of the eigenvector matrix $U$
  • $\hat{x}(\lambda_l)$ is the value of the signal $x$ at eigenvalue $\lambda_l$ in the spectral domain

Laplacian Matrix

The Laplacian matrix is an important concept in graph signal processing: its eigendecomposition provides the frequency-domain representation of graph signals. The Laplacian matrix is defined as follows:
$$L = D - A \tag{7}$$

  • $D$ is the degree matrix, with $D_{ii} = \sum_j A_{ij}$
  • $A$ is the adjacency matrix: $A_{ij} = 1$ if there is an edge between node $i$ and node $j$, otherwise $A_{ij} = 0$
  • $L$ is the Laplacian matrix

Laplacian matrix calculation example

  • $L_{ij} = -1$ indicates there is an edge between node $i$ and node $j$
  • $L_{ii} = D_{ii}$: the diagonal entries of the Laplacian equal the diagonal entries of the degree matrix (the node degrees)
  • $L_{ij} = 0$ indicates there is no edge between node $i$ and node $j$
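Since the original figure is not reproduced here, the following sketch works through equation (7) on a small hypothetical 4-node graph; the edge set is chosen arbitrarily for illustration.

```python
import numpy as np

# Adjacency matrix of a toy undirected graph with 4 nodes and edges (0,1), (0,2), (1,2), (2,3).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))  # degree matrix, D_ii = sum_j A_ij
L = D - A                   # Laplacian matrix, equation (7)

print(L)
# Diagonal entries equal the node degrees; off-diagonal entries are -1 where an edge exists.
```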

Properties of Laplacian Matrix

The Laplacian matrix is positive semidefinite, that is, $x^T L x \geq 0$ for any vector $x$. Proof:
$$x^T L x = x^T D x - x^T A x = \sum_{i=1}^N d_i x_i^2 - \sum_{i,j=1}^N a_{ij} x_i x_j = \frac{1}{2} \sum_{i,j=1}^N a_{ij} (x_i - x_j)^2 \geq 0 \tag{8}$$

  • A symmetric matrix of order $n$ has $n$ real eigenvalues and $n$ linearly independent eigenvectors
  • Eigenvectors corresponding to different eigenvalues of a symmetric matrix are mutually orthogonal; the matrix formed by these (normalized) orthogonal eigenvectors is an orthogonal matrix
  • The eigenvalues of the Laplacian matrix are all non-negative, and at least one eigenvalue is 0
  • The multiplicity of the eigenvalue 0 of the Laplacian matrix equals the number of connected components of the graph; that is, the number of zero eigenvalues of $L$ is the number of connected components

Spectral Decomposition of Laplacian Matrix

Eigendecomposition, also known as spectral decomposition, is a factorization in linear algebra that decomposes a matrix in terms of its eigenvectors and eigenvalues. The spectral decomposition of the Laplacian matrix is as follows:
$$L = U \Lambda U^T \tag{9}$$

  • $U$ is the eigenvector matrix of the Laplacian matrix
  • $\Lambda$ is the diagonal matrix of eigenvalues of the Laplacian matrix
  • $U^T$ is the transpose of $U$

Since an $n$-th order symmetric matrix has $n$ linearly independent, mutually orthogonal eigenvectors, the matrix formed by these orthogonal eigenvectors is an orthogonal matrix. Therefore the eigenvector matrix $U$ of the Laplacian is orthogonal, i.e. $U^T U = I$, where $I$ is the identity matrix.
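A minimal sketch of equation (9) and these properties, reusing the toy 4-node Laplacian from the example above:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# eigh is used because L is real symmetric; it returns eigenvalues in ascending order.
eigvals, U = np.linalg.eigh(L)

print(eigvals)                                     # all >= 0, smallest is (numerically) 0
print(np.allclose(U @ np.diag(eigvals) @ U.T, L))  # L = U Λ U^T
print(np.allclose(U.T @ U, np.eye(4)))             # U^T U = I
```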

Laplacian matrix and Laplacian operator

The Laplacian matrix and the Laplacian operator are two different concepts, but there is a certain connection between them. The definition of the Laplacian operator is as follows:
$$\Delta f = \nabla \cdot \nabla f = \sum_{i=1}^n \frac{\partial^2 f}{\partial x_i^2} \tag{10}$$

  • $\nabla$ is the gradient operator
  • $\nabla \cdot$ is the divergence operator
  • $\Delta$ is the Laplacian operator
  • $f$ is a function
  • $x_i$ is the $i$-th independent variable
  • $n$ is the number of independent variables
  • $\frac{\partial^2 f}{\partial x_i^2}$ is the second-order partial derivative of $f$ with respect to $x_i$
  • $\sum_{i=1}^n \frac{\partial^2 f}{\partial x_i^2}$ is the sum of the second-order partial derivatives of $f$ with respect to all independent variables
  • $\Delta f$ is the Laplacian of the function $f$

The Laplacian operator of a graph signal is defined through the graph Laplacian matrix (this is derived below):
$$\Delta_G f = L f \tag{11}$$

  • $\Delta_G f$ is the graph Laplacian of the signal $f$
  • $L$ is the graph Laplacian matrix
  • $\Delta f = \nabla \cdot \nabla f$ is the continuous Laplacian of a function $f$, of which the graph Laplacian is the discrete analogue

How do we arrive at this formula? For a two-dimensional image, the Laplacian operator can be written in the following form:
$$\Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} \tag{12}$$
The discrete form of the Laplacian operator can be written in the following form:
$$\Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = \frac{f(x+1,y) + f(x-1,y) - 2f(x,y)}{h^2} + \frac{f(x,y+1) + f(x,y-1) - 2f(x,y)}{h^2} \tag{13}$$
where $h$ is the step size. Writing the above formula in matrix form:
$$\Delta f = \frac{1}{h^2} \begin{bmatrix} 1 & -2 & 1 \end{bmatrix} \begin{bmatrix} f(x-1,y) \\ f(x,y) \\ f(x+1,y) \end{bmatrix} + \frac{1}{h^2} \begin{bmatrix} 1 & -2 & 1 \end{bmatrix} \begin{bmatrix} f(x,y-1) \\ f(x,y) \\ f(x,y+1) \end{bmatrix} \tag{14}$$

We set h = 1 here and write the above equation in matrix convolution form:

$$\Delta f(x,y) = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix} \star f(x,y) \tag{16}$$
It can be seen that the Laplacian of a two-dimensional image at a pixel equals the sum of the differences between the surrounding pixels and the pixel itself. Analogously, for a graph signal, the Laplacian at a node can be defined as the sum of the differences between its neighboring nodes and the node itself. We can write it in the following form:
$$\Delta_G f_i = \sum_{j=1}^N A_{ij} (f_i - f_j) = \sum_{j=1}^N A_{ij} f_i - \sum_{j=1}^N A_{ij} f_j = d_i f_i - \sum_{j=1}^N A_{ij} f_j \tag{17}$$

  • $A_{ij}$ is the weight of the edge between node $i$ and node $j$
  • $(f_i - f_j)$ is the difference between the signal values at node $i$ and node $j$

In matrix form:
$$\Delta_G f = \begin{bmatrix} \Delta f_1 \\ \Delta f_2 \\ \vdots \\ \Delta f_N \end{bmatrix} = \begin{bmatrix} d_1 f_1 - \sum_{j=1}^N A_{1j} f_j \\ \vdots \\ d_N f_N - \sum_{j=1}^N A_{Nj} f_j \end{bmatrix} = Df - Af = Lf \tag{18}$$

  • $L$ is the Laplacian matrix
  • Left-multiplying the signal $f$ by the Laplacian matrix gives the Laplacian of the signal $f$, as verified in the sketch below.
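Equation (18) can be checked directly: for the toy 4-node graph used above, applying $L$ to a signal $f$ gives the same values as summing the differences to each neighbor. The signal values here are arbitrary.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

f = np.array([1.0, 3.0, 2.0, 5.0])  # an arbitrary signal on the 4 nodes

# Node-wise definition (17): sum over neighbors of (f_i - f_j).
delta_by_sum = np.array([sum(A[i, j] * (f[i] - f[j]) for j in range(4)) for i in range(4)])

print(np.allclose(L @ f, delta_by_sum))  # True: Δ_G f = Lf
```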

Graph Fourier Transform

Signal representation on the graph

The signal on a graph is generally expressed as a vector. Assuming there are $n$ nodes, the signal on the graph is written as:
$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \tag{19}$$

  • $x_i \in \mathbb{R}$ is the signal value at node $i$.

Classical Fourier Transform

The classical (continuous) Fourier transform is defined as follows:
$$X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j \omega t}\, dt \tag{20}$$
The discrete form is as follows:
$$X(\omega) = \sum_{n=0}^{N-1} x(n)\, e^{-j \omega n} \tag{21}$$

The classical Fourier transform represents a function as a linear combination of orthogonal basis functions. To define a Fourier transform for signals on a graph, it is natural to look for a set of orthogonal bases and express the graph signal as a linear combination of them.

In matrix form:
$$x = U \hat{x} \tag{23}$$

  • $x$ represents the signal on the graph
  • $U$ represents the eigenvector matrix of the graph Laplacian
  • $\hat{x}$ represents the Fourier coefficients of the signal $x$

How do we obtain $\hat{x}$? Left-multiplying both sides of the above equation by $U^T$ gives:
$$U^T x = U^T U \hat{x} = I \hat{x} = \hat{x} \tag{24}$$

  • $I$ is the identity matrix
  • $\hat{x}$ represents the Fourier coefficients of the signal $x$
  • $U$ represents the eigenvector matrix of the graph Laplacian
  • $x$ represents the signal on the graph
  • $U^T$ is the transpose of $U$

Therefore, we can get:
$$\hat{x} = U^T x \tag{25}$$

  • $\hat{x}$ represents the Fourier coefficients of the signal $x$
  • $U$ represents the eigenvector matrix of the graph Laplacian
  • $x$ represents the signal on the graph
  • $U^T$ is the transpose of $U$
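A minimal sketch of the graph Fourier transform and its inverse (equations 23 and 25), again using the toy Laplacian from the earlier example and an arbitrary signal:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
_, U = np.linalg.eigh(L)

x = np.array([0.5, -1.0, 2.0, 0.0])  # arbitrary graph signal

x_hat = U.T @ x      # forward GFT: spectral coefficients of x
x_back = U @ x_hat   # inverse GFT: reconstruct the spatial-domain signal

print(np.allclose(x, x_back))  # True, since U is orthogonal
```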

Properties of Eigenvector Basis

  • The eigenvalues of the Laplacian matrix play the role of frequencies.
    • The larger the eigenvalue, the higher the frequency of the corresponding eigenvector.
    • The smaller the eigenvalue, the lower the frequency of the corresponding eigenvector.
    • The multiplicity of the eigenvalue 0 equals the number of connected components of the graph; that is, the number of zero eigenvalues of $L$ equals the number of connected components.
  • The eigenvectors of the Laplacian matrix play the role of basis functions.
    • The eigenvalue 0 corresponds to a constant eigenvector, analogous to the constant (DC) term in the Fourier transform.
    • Eigenvectors corresponding to low eigenvalues are relatively smooth, while eigenvectors corresponding to high eigenvalues vary more rapidly; the two correspond to low-frequency and high-frequency basis functions.

How to understand these two conclusions?
We use the graph Laplacian quadratic form to measure the smoothness of a signal. It is the weighted sum of squared differences of the signal values at every pair of nodes connected by an edge; the smaller the value, the smoother the signal.

$$x^{\top} L x = \frac{1}{2}\sum_{i,j=1}^N A_{ij} (x_i - x_j)^2 \tag{28}$$

We also have:
$$x^{\top} L x = x^{\top} U \Lambda U^{\top} x = \hat{x}^{\top} \Lambda \hat{x} = \sum_{k=1}^N \lambda_k\, \hat{x}^2(\lambda_k) \tag{29}$$
Therefore, the smaller the corresponding eigenvalue, the smoother the corresponding eigenvector.
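The two expressions for the quadratic form can be checked numerically, and they also illustrate why low-eigenvalue eigenvectors are smooth. A sketch using the same toy graph and an arbitrary signal:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
eigvals, U = np.linalg.eigh(L)

x = np.array([0.5, -1.0, 2.0, 0.0])
quad = x @ L @ x
edge_form = 0.5 * sum(A[i, j] * (x[i] - x[j]) ** 2 for i in range(4) for j in range(4))
print(np.isclose(quad, edge_form))  # equation (28)

# For a unit-norm eigenvector u_k, the quadratic form equals exactly λ_k, so eigenvectors
# with small eigenvalues are the smoothest signals on the graph.
for k in range(4):
    u = U[:, k]
    print(eigvals[k], u @ L @ u)
```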

Summary

Using the graph Fourier transform, a signal $x \in \mathbb{R}^n$ defined on the graph nodes can be converted from the spatial domain to the spectral domain.

  • Spatial domain to spectral domain: $x \rightarrow \hat{x} = U^T x$
  • Spectral domain to spatial domain: $\hat{x} \rightarrow x = U \hat{x}$
  • where $L = U \Lambda U^{-1} = U \Lambda U^T$, since $U$ is orthogonal

Convolution theorem

  • Convolution theorem in the spatial domain: $x \star h \rightarrow X(\omega) H(\omega)$

$$x \star g = \mathcal{F}^{-1}[X(\omega)\, G(\omega)] = U(U^{\top}x \odot U^{\top}g) \tag{30}$$
To express this formula in matrix-multiplication form, we replace the Hadamard product with a diagonal matrix. At the same time, we usually do not care what the filter $g$ looks like in the spatial domain; we only care about its representation in the frequency domain.
Let
$$g_{\theta} = \text{diag}(U^{\top} g) = \text{diag}(\hat{g})$$

The formula is equivalent to the following formula:
$$x \star g = U\, \text{diag}(\hat{g})\, U^{\top} x \tag{31}$$

  • $\text{diag}(\hat{g}(\lambda_i))$ is the spectral-domain convolution kernel written as a diagonal matrix.
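Equation (31) can be implemented directly once the eigendecomposition is available. A minimal sketch where the spectral filter $\hat{g}(\lambda) = e^{-\lambda}$ is just an arbitrary low-pass example chosen for illustration:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
eigvals, U = np.linalg.eigh(L)

x = np.array([0.5, -1.0, 2.0, 0.0])  # graph signal
g_hat = np.exp(-eigvals)             # example spectral filter ĝ(λ) = e^{-λ} (low-pass)

# x ⋆ g = U diag(ĝ) U^T x : transform, filter in the spectral domain, transform back.
x_filtered = U @ np.diag(g_hat) @ U.T @ x
print(x_filtered)
```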

Three classic graph convolution models

(Figure: feature maps of different layers in a convolutional network.)

SCNN

Main idea:

  • Use a learnable diagonal matrix to replace the convolution kernel in the spectral domain to implement graph convolution operations.
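A minimal single-channel sketch of this idea: the learnable spectral filter is a vector `theta` with one entry per eigenvalue. The single-channel setup, the omitted nonlinearity, and the toy data are simplifications for illustration, not the full SCNN formulation.

```python
import numpy as np

def scnn_layer(x, U, theta):
    """Forward pass of a single-channel SCNN-style layer.

    x:     graph signal, shape (n,)
    U:     eigenvector matrix of the graph Laplacian, shape (n, n)
    theta: learnable spectral filter, one free parameter per eigenvalue, shape (n,)
    """
    # Filter in the spectral domain with a learnable diagonal matrix, then transform back.
    return U @ np.diag(theta) @ U.T @ x

# Toy usage with the 4-node example graph from above.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
_, U = np.linalg.eigh(L)

theta = np.random.default_rng(0).standard_normal(4)  # would be learned in practice
x = np.array([0.5, -1.0, 2.0, 0.0])
print(scnn_layer(x, U, theta))
```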

ChebNet

Chebyshev polynomials
Chebyshev polynomials are a class of orthogonal polynomials with important applications. They are defined as follows:
$$T_0(x) = 1, \quad T_1(x) = x, \quad T_{n+1}(x) = 2x\, T_n(x) - T_{n-1}(x) \tag{32}$$

In matrix form:
$$T_0(L) = I, \quad T_1(L) = L, \quad T_{n+1}(L) = 2L\, T_n(L) - T_{n-1}(L) \tag{32}$$

Polynomial interpolation:
$$f(x) = \sum_{k=0}^{K-1} \alpha_k\, x^{k} \tag{33}$$
Chebyshev interpolation replaces the power terms with Chebyshev polynomial terms:
$$f(x) = \sum_{k=0}^{K-1} \alpha_k\, T_k(x) \tag{34}$$

The core idea of ChebNet:

  • Chebyshev polynomials are used to approximate the convolution kernel in the spectral domain to implement graph convolution operations.
  • Using the recursive properties of Chebyshev polynomials, the approximation degree of the convolution kernel in the spectral domain can be controlled within an arbitrary accuracy range.

$$\begin{aligned} x \star g_{\theta} &= U g_{\theta} U^{\top} x \\ &= U \sum_{k=0}^{K}\beta_k\, T_k(\Lambda)\, U^{\top} x \\ &= \sum_{k=0}^{K}\beta_k\, T_k(U \Lambda U^{\top})\, x \\ &= \sum_{k=0}^{K}\beta_k\, T_k(L)\, x \end{aligned} \tag{35}$$

  • The convolution kernel has only $K+1$ learnable parameters. Generally $K$ is much smaller than $n$, so the number of parameters is greatly reduced.
  • After using Chebyshev polynomials to approximate the spectral convolution kernel, eigenvalue decomposition is no longer needed.
  • The convolution kernel has strict spatial locality, and $K$ is the receptive-field radius of the convolution kernel; that is, the $K$-hop neighbors of the central vertex are used as the neighborhood nodes.
    In this way, we do not need to compute the eigendecomposition of the Laplacian and can apply the Laplacian matrix directly to perform the convolution operation, as in the sketch below.
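A minimal sketch of a Chebyshev spectral filter in the spirit of equation (35), assuming the Laplacian has been rescaled so its eigenvalues lie in $[-1, 1]$ (ChebNet uses $\hat{L} = 2L/\lambda_{\max} - I$); the coefficients $\beta_k$ are random placeholders for what would be learned parameters.

```python
import numpy as np

def cheb_filter(x, L_hat, betas):
    """Apply sum_k beta_k T_k(L_hat) x using the Chebyshev recurrence,
    without any eigendecomposition of L_hat."""
    T_prev = x                      # T_0(L̂) x = x
    out = betas[0] * T_prev
    if len(betas) > 1:
        T_curr = L_hat @ x          # T_1(L̂) x = L̂ x
        out = out + betas[1] * T_curr
        for k in range(2, len(betas)):
            # T_k(L̂) x = 2 L̂ T_{k-1}(L̂) x - T_{k-2}(L̂) x
            T_next = 2 * L_hat @ T_curr - T_prev
            out = out + betas[k] * T_next
            T_prev, T_curr = T_curr, T_next
    return out

# Toy usage with the 4-node example graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
lam_max = np.linalg.eigvalsh(L).max()   # in practice an upper bound on λ_max suffices
L_hat = 2 * L / lam_max - np.eye(4)     # rescaled Laplacian

betas = np.random.default_rng(0).standard_normal(3)  # K = 2, i.e. 3 coefficients
x = np.array([0.5, -1.0, 2.0, 0.0])
print(cheb_filter(x, L_hat, betas))
```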

GCN

The core idea of GCN:

  • First-order Chebyshev polynomials are used to approximate the convolution kernel in the spectral domain to implement graph convolution operations.
  • Although each layer only aggregates information from first-order (1-hop) neighbors, stacking multiple layers enlarges the receptive field to multi-hop neighborhoods.

$$\begin{aligned} x \star g_{\theta} &= U g_{\theta} U^{\top} x = \sum_{k=0}^{K}\beta_k\, T_k(\hat{L})\, x \\ &\approx \sum_{k=0}^{1}\beta_k\, T_k(\hat{L})\, x = \beta_0\, T_0(\hat{L})\, x + \beta_1\, T_1(\hat{L})\, x \\ &= \beta_0 x + \beta_1 \hat{L} x = \beta_0 x + \beta_1 (L - I_n) x \\ &= \beta_0 x - \beta_1 D^{-1/2} W D^{-1/2} x \\ &= \theta \left( I_n + D^{-1/2} W D^{-1/2} \right) x \end{aligned} \tag{36}$$

where $\hat{L} = \frac{2}{\lambda_{\max}} L - I_n \approx L - I_n$ (taking $\lambda_{\max} \approx 2$), $L = I_n - D^{-1/2} W D^{-1/2}$ is the normalized Laplacian with $W$ the adjacency (weight) matrix, and the last step sets $\theta = \beta_0 = -\beta_1$.
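A minimal sketch of the resulting first-order filter from equation (36), with the 4-node toy graph and an arbitrary $\theta$ standing in for a learned parameter:

```python
import numpy as np

def gcn_filter(x, W, theta):
    """First-order GCN filter θ (I + D^{-1/2} W D^{-1/2}) x from equation (36).

    x:     graph signal, shape (n,)
    W:     adjacency (weight) matrix, shape (n, n)
    theta: single learnable scalar parameter
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = np.eye(len(x)) + D_inv_sqrt @ W @ D_inv_sqrt
    return theta * (S @ x)

# Toy usage with the 4-node example graph.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([0.5, -1.0, 2.0, 0.0])
print(gcn_filter(x, W, theta=0.5))

# In the full GCN model this propagation is applied to a feature matrix with a learnable
# weight matrix, and I + D^{-1/2} W D^{-1/2} is replaced by the renormalized
# D̃^{-1/2} (W + I) D̃^{-1/2} for numerical stability.
```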


Source: blog.csdn.net/qq_30340349/article/details/134491356