2.4 Coordinates

First give $V$ an ordered basis $\mathscr B=\{\alpha_1,\dots,\alpha_n\}$, where the order matters (i.e., the basis is a sequence). Then for each $\alpha\in V$ there is a unique $n$-tuple $(x_1,\dots,x_n)$ such that $\alpha=\sum_{i=1}^nx_i\alpha_i$; we call $x_i$ the $i$-th coordinate of $\alpha$ relative to the ordered basis $\mathscr B=\{\alpha_1,\dots,\alpha_n\}$. In summary: every ordered basis of $V$ determines a one-to-one correspondence $\alpha\to(x_1,\dots,x_n)$ between $V$ and $F^n$, and this correspondence respects addition and scalar multiplication.
We denote the $n$-tuple $(x_1,\dots,x_n)$ obtained from $\alpha\to(x_1,\dots,x_n)$ by $[\alpha]_{\mathscr B}$; this notation is convenient when discussing changes of basis, which is the core topic of this section. Suppose $\mathscr B=\{\alpha_1,\dots,\alpha_n\}$ and $\mathscr B'=\{\alpha_1',\dots,\alpha_n'\}$ are two ordered bases of $V$. Each $\alpha_j'$ is a vector in $V$, so there are unique scalars $P_{ij}$ such that $\alpha_j'=\sum_{i=1}^nP_{ij}\alpha_i$, $1\leq j\leq n$. If we write $[\alpha]_{\mathscr B'}=(x_1',\dots,x_n')$, then
$$\alpha=\sum_{j=1}^nx_j'\alpha_j'=\sum_{j=1}^nx_j'\sum_{i=1}^nP_{ij}\alpha_i=\sum_{j=1}^n\sum_{i=1}^n(P_{ij}x_j')\alpha_i=\sum_{i=1}^n\left(\sum_{j=1}^nP_{ij}x_j'\right)\alpha_i$$
By the uniqueness of $[\alpha]_{\mathscr B}$ we get $\sum_{j=1}^nP_{ij}x_j'=x_i$. Writing $P=(P_{ij})$, this says $[\alpha]_{\mathscr B}=P[\alpha]_{\mathscr B'}$, or equivalently $[\alpha]_{\mathscr B'}=P^{-1}[\alpha]_{\mathscr B}$. The invertibility of $P$ follows from $[\alpha]_{\mathscr B'}=0\Leftrightarrow[\alpha]_{\mathscr B}=0$ together with Theorem 7 of Chapter 1. This is the conclusion of Theorem 7 of the present chapter; note also that the columns of $P$ are $P_j=[\alpha_j']_{\mathscr B}$, $j=1,\dots,n$.
Read in the other direction, the discussion above gives Theorem 8: if we instead assume $P$ is an invertible $n\times n$ matrix and $\mathscr B$ is an ordered basis of $V$, then there is a unique ordered basis $\mathscr B'$ of $V$ such that $[\alpha]_{\mathscr B}=P[\alpha]_{\mathscr B'}$, or equivalently $[\alpha]_{\mathscr B'}=P^{-1}[\alpha]_{\mathscr B}$, for every $\alpha$ in $V$.
Example 18 is an example with the standard basis. Example 19 is in fact the rotation transformation of the two-dimensional plane. From Example 20 we can extract a useful pattern: if the matrix $P$ is invertible, then $P_j=[\alpha_j']_{\mathscr B}$, $j=1,\dots,n$; in particular, when $\mathscr B$ is the standard basis, the column $P_j$ is simply the vector $\alpha_j'$ of the new ordered basis $\mathscr B'$. To find the coordinates of any vector in the new basis $\mathscr B'$, just apply $[\alpha]_{\mathscr B'}=P^{-1}[\alpha]_{\mathscr B}$; in particular, $X=(x_1,\cdots,x_n)^T$ has coordinates $P^{-1}X$ relative to $\mathscr B'$.
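The change-of-basis relation above can be sketched numerically. The following is a minimal illustration, assuming NumPy; the basis $\alpha_1'=(1,0)$, $\alpha_2'=(1,1)$ and the vector are arbitrary choices for demonstration:

```python
import numpy as np

# Columns of P are the new basis vectors alpha'_j expressed in the old
# (here: standard) basis, since P_j = [alpha'_j]_B.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # alpha'_1 = (1, 0), alpha'_2 = (1, 1)

alpha = np.array([3.0, 2.0])          # [alpha]_B in the standard basis
coords = np.linalg.solve(P, alpha)    # [alpha]_{B'} = P^{-1} [alpha]_B

# Check: 1*(1,0) + 2*(1,1) = (3,2)
assert np.allclose(coords, [1.0, 2.0])
assert np.allclose(P @ coords, alpha)  # [alpha]_B = P [alpha]_{B'}
```

Solving the linear system with `np.linalg.solve` avoids forming $P^{-1}$ explicitly, which is the usual numerical practice.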

Exercises

1. Show that the vectors

$$\alpha_1=(1,1,0,0),\quad \alpha_2=(0,0,1,1),\quad \alpha_3=(1,0,0,4),\quad \alpha_4=(0,0,0,2)$$

form a basis for $R^4$. Find the coordinates of each of the standard basis vectors in the ordered basis $\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$.

Solution: We have
$$\begin{aligned}&\begin{bmatrix}1&0&1&0&a\\1&0&0&0&b\\0&1&0&0&c\\0&1&4&2&d\end{bmatrix}\rightarrow\begin{bmatrix}1&0&1&0&a\\0&0&-1&0&b-a\\0&1&0&0&c\\0&0&4&2&d-c\end{bmatrix}\\\rightarrow&\begin{bmatrix}1&0&1&0&a\\0&0&1&0&a-b\\0&1&0&0&c\\0&0&2&1&\frac{d-c}{2}\end{bmatrix}\rightarrow\begin{bmatrix}1&0&0&0&b\\0&1&0&0&c\\0&0&1&0&a-b\\0&0&0&1&\frac{d-c}{2}+2b-2a\end{bmatrix}\end{aligned}$$
Thus the four vectors are linearly independent; since $\dim R^4=4$, they form a basis for $R^4$.
From the augmented matrix above, we have
$$\begin{aligned}\epsilon_1&=(1,0,0,0)=\alpha_3-2\alpha_4\\\epsilon_2&=(0,1,0,0)=\alpha_1-\alpha_3+2\alpha_4\\\epsilon_3&=(0,0,1,0)=\alpha_2-\tfrac{1}{2}\alpha_4\\\epsilon_4&=(0,0,0,1)=\tfrac{1}{2}\alpha_4\end{aligned}$$
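These expansions can be spot-checked numerically (a sketch assuming NumPy): with the $\alpha_j$ as columns of a matrix $A$, solving $Ax=\epsilon_i$ gives $[\epsilon_i]_{\mathscr B}$ directly.

```python
import numpy as np

# Columns are alpha_1, ..., alpha_4.
A = np.array([[1, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 1, 4, 2]], dtype=float)

coords = np.linalg.solve(A, np.eye(4))  # column i is [epsilon_{i+1}]_B

# epsilon_1 = alpha_3 - 2*alpha_4, epsilon_4 = (1/2)*alpha_4
assert np.allclose(coords[:, 0], [0, 0, 1, -2])
assert np.allclose(coords[:, 1], [1, 0, -1, 2])
assert np.allclose(coords[:, 2], [0, 1, 0, -0.5])
assert np.allclose(coords[:, 3], [0, 0, 0, 0.5])
```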

2. Find the coordinate matrix of the vector $(1,0,1)$ in the basis of $C^3$ consisting of the vectors $(2i,1,0),(2,-1,1),(0,1+i,1-i)$, in that order.

Solution:
$$\begin{aligned}\begin{bmatrix}2i&2&0&1\\1&-1&1+i&0\\0&1&1-i&1\end{bmatrix}&\rightarrow\begin{bmatrix}1&-1&1+i&0\\0&1&1-i&1\\0&2+2i&2-2i&1\end{bmatrix}\rightarrow\begin{bmatrix}1&-1&1+i&0\\0&1&1-i&1\\0&0&-2-2i&-1-2i\end{bmatrix}\\&\rightarrow\begin{bmatrix}1&0&2&1\\0&1&1-i&1\\0&0&1&(3+i)/4\end{bmatrix}\rightarrow\begin{bmatrix}1&0&0&-(1+i)/2\\0&1&0&i/2\\0&0&1&(3+i)/4\end{bmatrix}\end{aligned}$$
thus the coordinate matrix is $\begin{bmatrix}-(1+i)/2\\i/2\\(3+i)/4\end{bmatrix}$.
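The complex arithmetic here is easy to get wrong by hand, so a numerical check is worthwhile (a sketch assuming NumPy):

```python
import numpy as np

# Basis vectors as the columns of a complex matrix.
B = np.array([[2j, 2, 0],
              [1, -1, 1 + 1j],
              [0, 1, 1 - 1j]])

x = np.linalg.solve(B, np.array([1, 0, 1], dtype=complex))
assert np.allclose(x, [-(1 + 1j) / 2, 1j / 2, (3 + 1j) / 4])
```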

3. Let $\mathscr B=\{\alpha_1,\alpha_2,\alpha_3\}$ be the ordered basis for $R^3$ consisting of

$$\alpha_1=(1,0,-1),\quad \alpha_2=(1,1,1),\quad \alpha_3=(1,0,0)$$

What are the coordinates of the vector $(a,b,c)$ in the ordered basis $\mathscr B$?

Solution:
$$\begin{aligned}\begin{bmatrix}1&1&1&a\\0&1&0&b\\-1&1&0&c\end{bmatrix}&\rightarrow\begin{bmatrix}1&1&1&a\\0&1&0&b\\0&2&1&a+c\end{bmatrix}\\&\rightarrow\begin{bmatrix}1&1&1&a\\0&1&0&b\\0&0&1&a+c-2b\end{bmatrix}\\&\rightarrow\begin{bmatrix}1&0&0&b-c\\0&1&0&b\\0&0&1&a+c-2b\end{bmatrix}\end{aligned}$$
thus the coordinate matrix is $\begin{bmatrix}b-c\\b\\a+c-2b\end{bmatrix}$.
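Since the answer is symbolic in $a,b,c$, it can be checked symbolically (a sketch assuming SymPy):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
A = sp.Matrix([[1, 1, 1],
               [0, 1, 0],
               [-1, 1, 0]])  # columns are alpha_1, alpha_2, alpha_3

coords = A.LUsolve(sp.Matrix([a, b, c]))
expected = sp.Matrix([b - c, b, a + c - 2*b])
assert sp.simplify(coords - expected) == sp.zeros(3, 1)
```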

4. Let $W$ be the subspace of $C^3$ spanned by $\alpha_1=(1,0,i)$ and $\alpha_2=(1+i,1,-1)$.
( a ) Show that $\alpha_1$ and $\alpha_2$ form a basis for $W$.
( b ) Show that the vectors $\beta_1=(1,1,0)$ and $\beta_2=(1,i,1+i)$ are in $W$ and form another basis for $W$.
( c ) What are the coordinates of $\alpha_1$ and $\alpha_2$ in the ordered basis $\{\beta_1,\beta_2\}$ for $W$?

Solution:
( a ) It's obvious that $\alpha_1$ and $\alpha_2$ span $W$, and they are linearly independent since neither is a scalar multiple of the other; thus they form a basis for $W$.
( b ) We have $\beta_1=-i\alpha_1+\alpha_2$ and $\beta_2=(2-i)\alpha_1+i\alpha_2$, thus $\beta_1,\beta_2\in W$. Since they are linearly independent and we already know $\dim W=2$, $\beta_1,\beta_2$ form a basis for $W$.
( c ) We have $P=\begin{bmatrix}-i&2-i\\1&i\end{bmatrix}$, and thus $P^{-1}=\frac{1}{2}\begin{bmatrix}1-i&3+i\\1+i&i-1\end{bmatrix}$. Let $\mathscr B=\{\alpha_1,\alpha_2\}$, $\mathscr B'=\{\beta_1,\beta_2\}$; then
$$[\alpha_1]_{\mathscr B}=\begin{bmatrix}1\\0\end{bmatrix},\quad[\alpha_2]_{\mathscr B}=\begin{bmatrix}0\\1\end{bmatrix}$$
thus
$$[\alpha_1]_{\mathscr B'}=\frac{1}{2}\begin{bmatrix}1-i&3+i\\1+i&i-1\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}(1-i)/2\\(1+i)/2\end{bmatrix},\quad[\alpha_2]_{\mathscr B'}=\frac{1}{2}\begin{bmatrix}1-i&3+i\\1+i&i-1\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix}=\begin{bmatrix}(3+i)/2\\(i-1)/2\end{bmatrix}$$
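The coordinates found in (c) can be spot-checked by direct expansion (a sketch assuming NumPy): each $\alpha_k$ should equal the claimed combination of $\beta_1,\beta_2$.

```python
import numpy as np

alpha1 = np.array([1, 0, 1j])
alpha2 = np.array([1 + 1j, 1, -1])
beta1 = np.array([1, 1, 0])
beta2 = np.array([1, 1j, 1 + 1j])

# alpha_1 = (1-i)/2 * beta_1 + (1+i)/2 * beta_2
assert np.allclose((1 - 1j) / 2 * beta1 + (1 + 1j) / 2 * beta2, alpha1)
# alpha_2 = (3+i)/2 * beta_1 + (i-1)/2 * beta_2
assert np.allclose((3 + 1j) / 2 * beta1 + (1j - 1) / 2 * beta2, alpha2)
```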

5. Let $\alpha=(x_1,x_2)$ and $\beta=(y_1,y_2)$ be vectors in $R^2$ such that

$$x_1y_1+x_2y_2=0,\quad x_1^2+x_2^2=y_1^2+y_2^2=1$$

Prove that $\mathscr B=\{\alpha,\beta\}$ is a basis for $R^2$. Find the coordinates of the vector $(a,b)$ in the ordered basis $\mathscr B=\{\alpha,\beta\}$. (The conditions on $\alpha$ and $\beta$ say, geometrically, that $\alpha$ and $\beta$ are perpendicular and each has length 1.)

Solution: To show $\mathscr B=\{\alpha,\beta\}$ is a basis for $R^2$, it's enough to show they are linearly independent. Assume they are linearly dependent; then $\alpha=k\beta$ or $\beta=k\alpha$. First let $\alpha=k\beta$, so $x_1=ky_1$, $x_2=ky_2$, and thus $x_1y_1+x_2y_2=ky_1^2+ky_2^2=0$. Since $y_1^2+y_2^2=1$, we have $k=0$, so $x_1=x_2=0$, contradicting $x_1^2+x_2^2=1$. If $\beta=k\alpha$, we similarly reach a contradiction.
Let $\gamma=(a,b)$, and let $\mathscr B'=\{\epsilon_1,\epsilon_2\}$ be the standard basis of $R^2$; then $[\gamma]_{\mathscr B'}=\begin{bmatrix}a\\b\end{bmatrix}$. Since $\alpha=x_1\epsilon_1+x_2\epsilon_2$ and $\beta=y_1\epsilon_1+y_2\epsilon_2$, we have $P=\begin{bmatrix}x_1&y_1\\x_2&y_2\end{bmatrix}$ and $[\gamma]_{\mathscr B}=P^{-1}[\gamma]_{\mathscr B'}$. When $y_2\neq0$ we shall have (the first step below replaces row 1 by $x_1R_1+x_2R_2$, which works because $x_1^2+x_2^2=1$ and $x_1y_1+x_2y_2=0$)
$$\begin{bmatrix}x_1&y_1&1&0\\x_2&y_2&0&1\end{bmatrix}\rightarrow\begin{bmatrix}1&0&x_1&x_2\\x_2&y_2&0&1\end{bmatrix}\rightarrow\begin{bmatrix}1&0&x_1&x_2\\0&y_2&-x_1x_2&x_1^2\end{bmatrix}\rightarrow\begin{bmatrix}1&0&x_1&x_2\\0&1&\frac{-x_1x_2}{y_2}&\frac{x_1^2}{y_2}\end{bmatrix}$$
If $y_2=0$, then $y_1\neq0$, so $x_1y_1=-x_2y_2=0$ forces $x_1=0$, and it follows that $x_2\neq0$. In this case we have
$$\begin{bmatrix}x_1&y_1&1&0\\x_2&y_2&0&1\end{bmatrix}\rightarrow\begin{bmatrix}x_2&0&0&1\\0&y_1&1&0\end{bmatrix}\rightarrow\begin{bmatrix}1&0&0&1/x_2\\0&1&1/y_1&0\end{bmatrix}$$
thus $P^{-1}=\begin{bmatrix}x_1&x_2\\-(x_1x_2)/y_2&(x_1^2)/y_2\end{bmatrix}$ when $y_2\neq0$ and $P^{-1}=\begin{bmatrix}0&1/x_2\\1/y_1&0\end{bmatrix}$ when $y_2=0=x_1$, and
$$[\gamma]_{\mathscr B}=\begin{bmatrix}x_1&x_2\\-(x_1x_2)/y_2&(x_1^2)/y_2\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix}=\begin{bmatrix}x_1a+x_2b\\\dfrac{x_1}{y_2}(x_1b-x_2a)\end{bmatrix},\quad y_2\neq0$$
$$[\gamma]_{\mathscr B}=\begin{bmatrix}0&1/x_2\\1/y_1&0\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix}=\begin{bmatrix}b/x_2\\a/y_1\end{bmatrix},\quad x_1=y_2=0$$
In fact the two conditions say exactly that the columns of $P$ are orthonormal, so $P^{-1}=P^T$, and both cases reduce to $[\gamma]_{\mathscr B}=(x_1a+x_2b,\;y_1a+y_2b)$.
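A concrete spot-check using an orthonormal pair built from an angle (a sketch assuming NumPy; the angle and the vector $(a,b)$ are arbitrary choices):

```python
import numpy as np

theta = 0.7  # any angle gives a pair satisfying the two conditions
alpha = np.array([np.cos(theta), np.sin(theta)])   # (x1, x2)
beta = np.array([-np.sin(theta), np.cos(theta)])   # (y1, y2)

P = np.column_stack([alpha, beta])
gamma = np.array([2.0, -3.0])                      # (a, b)
coords = np.linalg.solve(P, gamma)                 # [gamma]_B

# P is orthogonal, so P^{-1} = P^T and the coordinates are dot products.
assert np.allclose(coords, [alpha @ gamma, beta @ gamma])
```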

6. Let $V$ be the vector space over the complex numbers of all functions from $R$ into $C$, i.e., the space of all complex-valued functions on the real line. Let $f_1(x)=1$, $f_2(x)=e^{ix}$, $f_3(x)=e^{-ix}$.
( a ) Prove that $f_1,f_2,f_3$ are linearly independent.
( b ) Let $g_1(x)=1$, $g_2(x)=\cos x$, $g_3(x)=\sin x$. Find an invertible $3\times3$ matrix $P$ such that $g_j=\sum_{i=1}^3P_{ij}f_i$.

Solution:
( a ) Suppose $c_1f_1+c_2f_2+c_3f_3=0$. From $e^{ix}=\cos x+i\sin x$ and $e^{-ix}=\cos x-i\sin x$ we get
$$c_1+(c_2+c_3)\cos x+i(c_2-c_3)\sin x=0,\quad\forall x\in R$$
Replacing $x$ by $-x$ and adding/subtracting the two identities gives
$$c_1+(c_2+c_3)\cos x=0,\qquad(c_2-c_3)\sin x=0,\quad\forall x\in R$$
From the second equation we know $c_2=c_3$, thus $c_1=-2c_2\cos x$ for all $x\in R$. If $c_2\neq0$, the right-hand side is non-constant while $c_1$ is a constant, a contradiction. Thus $c_2=0$, and it follows that $c_1=0$, $c_3=0$.
( b ) We have
$$g_1=f_1,\quad g_2=\frac{1}{2}(f_2+f_3),\quad g_3=\frac{1}{2i}(f_2-f_3)$$
thus the invertible matrix $P$ is $P=\begin{bmatrix}1&0&0\\0&\dfrac{1}{2}&\dfrac{1}{2i}\\0&\dfrac{1}{2}&-\dfrac{1}{2i}\end{bmatrix}$
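The relation $g_j=\sum_{i=1}^3P_{ij}f_i$ can be spot-checked by sampling the functions at a grid of points (a sketch assuming NumPy):

```python
import numpy as np

x = np.linspace(-2, 2, 9)
F = np.array([np.ones_like(x), np.exp(1j * x), np.exp(-1j * x)])  # rows f_1, f_2, f_3
G = np.array([np.ones_like(x), np.cos(x), np.sin(x)])             # rows g_1, g_2, g_3

P = np.array([[1, 0, 0],
              [0, 0.5, 1 / 2j],
              [0, 0.5, -1 / 2j]])

# g_j = sum_i P_ij f_i means G = P^T F when functions are sampled row-wise.
assert np.allclose(G, P.T @ F)
```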

7. Let $V$ be the (real) vector space of all polynomial functions from $R$ into $R$ of degree 2 or less, i.e., the space of all functions $f$ of the form $f(x)=c_0+c_1x+c_2x^2$. Let $t$ be a fixed real number and define

$$g_1(x)=1,\quad g_2(x)=x+t,\quad g_3(x)=(x+t)^2$$

Prove that $\mathscr B=\{g_1,g_2,g_3\}$ is a basis for $V$. If

$$f(x)=c_0+c_1x+c_2x^2$$

what are the coordinates of $f$ in this ordered basis $\mathscr B$?

Solution: Suppose $c_1g_1+c_2g_2+c_3g_3=0$, then
$$c_1+c_2(x+t)+c_3(x+t)^2=0,\quad\forall x\in R\\c_3x^2+(2tc_3+c_2)x+(t^2c_3+tc_2+c_1)=0,\quad\forall x\in R$$
If $c_1,c_2,c_3$ are not all zero, the left-hand side is a nonzero real polynomial of degree at most 2, which has at most two roots in $R$ and so cannot vanish for all $x\in R$, a contradiction. Thus we must have $c_1=c_2=c_3=0$, so $\{g_1,g_2,g_3\}$ is linearly independent.
Let $f=c_0+c_1x+c_2x^2$; then we can write
$$\begin{aligned}f&=c_2(x+t)^2+(c_1-2tc_2)(x+t)+c_0-tc_1+t^2c_2\\&=c_2g_3+(c_1-2tc_2)g_2+(c_0-tc_1+t^2c_2)g_1\end{aligned}$$
thus $\{g_1,g_2,g_3\}$ spans $V$ and hence is a basis of $V$.
From the computation above we see
$$[f]_{\mathscr B}=\begin{bmatrix}c_0-tc_1+t^2c_2\\c_1-2tc_2\\c_2\end{bmatrix}$$
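The coordinate formula can be verified symbolically (a sketch assuming SymPy): expanding the claimed combination of $g_1,g_2,g_3$ should recover $f$ identically in $x$ and $t$.

```python
import sympy as sp

x, t, c0, c1, c2 = sp.symbols('x t c0 c1 c2')
f = c0 + c1 * x + c2 * x**2

# f written in the basis g_1 = 1, g_2 = x + t, g_3 = (x + t)^2
g = (c2 * (x + t)**2
     + (c1 - 2 * t * c2) * (x + t)
     + (c0 - t * c1 + t**2 * c2))

assert sp.expand(f - g) == 0
```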
