foreword
In "Linear Algebra (3): Linear Equations & Vector Spaces", we understood linear spaces by solving linear equations. This chapter looks at them from another angle.
what is a space
Everyone is familiar with the plane Cartesian coordinate system: it is the most common two-dimensional space.
The space is composed of infinitely many coordinate points.
Each coordinate point is a vector.
- Conversely, it can also be said that 2-dimensional space is composed of infinitely many 2-dimensional vectors.
- Similarly, in 3-dimensional space, each 3D coordinate point is a 3D vector.
- By the same reasoning, there are infinitely many 3D vectors in 3D space; in other words, 3D space is composed of infinitely many 3D vectors.
All vectors in the space can be expressed as linear combinations of $\vec{e_1},\vec{e_2},\dots,\vec{e_n}$. That is, for any vector $\vec{a}$, the equation

$$\vec{a} = k_1\vec{e_1} + k_2\vec{e_2} + \dots + k_n\vec{e_n}$$

has a solution $k_1,k_2,\dots,k_n$. These vectors $\vec{e_1},\vec{e_2},\dots,\vec{e_n}$ are then called a basis of the space.
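A minimal numpy sketch of this idea (the basis vectors and $\vec{a}$ here are made up for illustration): we check that two vectors are linearly independent, then solve for the coefficients $k_1, k_2$.

```python
import numpy as np

# Hypothetical basis of R^2 and a target vector a; we solve
# a = k1*e1 + k2*e2 for the coefficients k1, k2.
e1 = np.array([2.0, 1.0])
e2 = np.array([1.0, 3.0])
a = np.array([7.0, 1.0])

E = np.column_stack([e1, e2])          # basis vectors as columns
assert np.linalg.matrix_rank(E) == 2   # linearly independent -> a basis

k = np.linalg.solve(E, a)              # coefficients (k1, k2)
print(k)                               # -> [ 4. -1.]
assert np.allclose(k[0] * e1 + k[1] * e2, a)
```

Because the basis vectors are linearly independent, the coefficients are unique.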
Definition and properties of linear space
vector addition
$$\begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = \begin{bmatrix} x_1 + x_2 \\ y_1 + y_2 \end{bmatrix}, \quad \text{e.g. } \begin{bmatrix} 2 \\ 4 \end{bmatrix} + \begin{bmatrix} 3 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 + 3 \\ 4 + 1 \end{bmatrix}$$
Scalar-vector multiplication
$$\begin{bmatrix} x \\ y \end{bmatrix} \cdot 2 = \begin{bmatrix} 2x \\ 2y \end{bmatrix}$$
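The two operations above are the ones every linear space must support. A tiny numpy sketch (the numbers are made up):

```python
import numpy as np

v1 = np.array([2, 4])
v2 = np.array([3, 1])

print(v1 + v2)   # vector addition -> [5 5]
print(2 * v1)    # scalar multiplication -> [4 8]
```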
Dimension, Coordinates and Basis
The concept of linear independence appears here. It is the same notion as linear independence in the vector spaces seen earlier, but the scope of what counts as a vector becomes wider.
- The basis of an n-dimensional linear space V is not unique: any n linearly independent vectors in V form a basis of V.
- The coordinates $(a_1, a_2, \dots, a_n)$ of a vector $\vec{a}$ under a given basis $(\varepsilon_1, \varepsilon_2, \dots, \varepsilon_n)$ are unique and determined.
How to determine the dimension and basis of linear space
Euclidean space
Euclidean space is a type of space, a special set. The elements of a Euclidean space are ordered tuples of real numbers.
Example: (2,3), (2,4), (3,4), (3,5) are ordered 2-tuples of real numbers.
- "Ordered" means: (2,3) and (3,2) are two different elements;
- that is, the real numbers in each element have a definite order.
- "Real" means: the numbers in each element are ∈ ℝ.
- "Tuple" means: each element is composed of several ordered numbers.
- For example: 2 numbers form a 2-tuple, n numbers form an n-tuple.
Euclidean set = set of ordered n-tuples of real numbers = set of n-dimensional coordinate points.
So Euclidean space is the space we have been using all along.
Euclidean space satisfies the eight axioms of a linear space.
subspace
A subspace is a part of a whole space, but it is itself a space and must satisfy the definition of a vector space.
intersection of subspaces
sum of subspaces
The union of subspaces $V_1, V_2$ is not closed under addition of elements, so in general "the union of subspaces is not a subspace". For example, the union of the x-axis and the y-axis in $\mathbb{R}^2$ is not a subspace: $(1,0) + (0,1) = (1,1)$ lies on neither axis.
So we define the sum of the subspaces instead: $V_1 + V_2 = \{v_1 + v_2 : v_1 \in V_1, v_2 \in V_2\}$.
direct sum of subspaces
Direct sums of subspaces are special sums: they require that the subspaces be mutually independent.
The entire linear space can be regarded as a big cake.
- A direct-sum decomposition cuts the cake into small pieces: each piece is a subspace, no two pieces overlap, and together they reassemble into the whole cake.
- An ordinary sum is a cake that was not cut cleanly: the pieces overlap (the pairwise intersections of the subspaces are not just $\{0\}$), so the pieces do not partition the whole cake.
inner product space
In the previous content, we introduced vectors, matrices, and linear transformations in linear spaces abstractly. But in geometry, a vector also has a modulus (length), an inner product with other vectors, and so on. To introduce these operations, we add a "definition of inner product". That is: inner product space = linear space + inner product definition.
angle between vectors
$$\cos\theta = \cos(\alpha-\beta) = \cos\alpha\cos\beta + \sin\alpha\sin\beta = \frac{x_1}{\sqrt{x_1^2+y_1^2}} \cdot \frac{x_2}{\sqrt{x_2^2+y_2^2}} + \frac{y_1}{\sqrt{x_1^2+y_1^2}} \cdot \frac{y_2}{\sqrt{x_2^2+y_2^2}}$$

$$\cos\theta = \frac{x_1x_2 + y_1y_2}{\sqrt{x_1^2+y_1^2}\,\sqrt{x_2^2+y_2^2}} = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}||\vec{b}|}$$
The vectors $\vec{a}, \vec{b}$ above live in a 2-dimensional coordinate system. If the space is n-dimensional, i.e. $\vec{a} = (x_1, x_2, \dots, x_n)$ and $\vec{b} = (y_1, y_2, \dots, y_n)$, then:
$$\cos\theta = \frac{\sum_{i=1}^n x_i y_i}{\sqrt{\sum_{i=1}^n x_i^2}\,\sqrt{\sum_{i=1}^n y_i^2}} = \frac{[a,b]}{\sqrt{[a,a]}\,\sqrt{[b,b]}}$$
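The formula above translates directly into numpy; the two vectors here are made up for illustration.

```python
import numpy as np

# cos(theta) = (a . b) / (|a| |b|), then recover theta in degrees.
a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])

cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.degrees(np.arccos(cos_theta))
print(round(theta, 1))   # -> 45.0
```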
If the angle $\theta$ between two vectors is 90°, the two vectors are orthogonal.
A set of mutually orthogonal vectors is called an orthogonal vector set.
Orthogonal basis
If, in addition, every basis vector has unit length, $|e_i| = 1$, the basis is called an orthonormal basis.
Gram-Schmidt: finding an orthogonal basis
Using a simple projection procedure, we can turn any basis into an orthogonal basis.
Given a basis $\{\alpha_1, \alpha_2\}$, find an orthogonal basis:
- Let $\beta_1 = \alpha_1$.
- The unit vector along $\beta_1$ is $\cfrac{\beta_1}{\sqrt{[\beta_1,\beta_1]}}$.
- Compute the projection of $\alpha_2$ onto $\beta_1$:
- The projection length is $\cfrac{[\alpha_2,\beta_1]}{\sqrt{[\alpha_2,\alpha_2]}\sqrt{[\beta_1,\beta_1]}} \cdot \sqrt{[\alpha_2,\alpha_2]} = \cfrac{[\alpha_2,\beta_1]}{\sqrt{[\beta_1,\beta_1]}}$, i.e. $|\alpha_2|\cos\theta$.
- The projection vector is this length times the unit vector along $\beta_1$: $\cfrac{[\alpha_2,\beta_1]}{[\beta_1,\beta_1]} \cdot \beta_1$.
- The second orthogonal vector is $\beta_2 = \alpha_2 - \cfrac{[\alpha_2,\beta_1]}{[\beta_1,\beta_1]} \cdot \beta_1$.
- The orthogonal basis is $\{\beta_1,\ \beta_2\} = \left\{\alpha_1,\ \alpha_2 - \cfrac{[\alpha_2,\beta_1]}{[\beta_1,\beta_1]}\beta_1\right\}$.
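The steps above can be sketched as classical Gram-Schmidt in numpy; the input basis is made up for illustration.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors:
    each new vector subtracts its projections onto all earlier ones."""
    basis = []
    for alpha in vectors:
        beta = alpha.astype(float)
        for b in basis:
            # projection of alpha onto the earlier orthogonal vector b
            beta = beta - (np.dot(alpha, b) / np.dot(b, b)) * b
        basis.append(beta)
    return basis

# Hypothetical input basis {a1, a2}
a1 = np.array([1.0, 1.0])
a2 = np.array([0.0, 1.0])
b1, b2 = gram_schmidt([a1, a2])
print(b1, b2)                       # b2 is orthogonal to b1
assert abs(np.dot(b1, b2)) < 1e-12
```

Dividing each $\beta_i$ by its length would turn this orthogonal basis into an orthonormal one.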
In three dimensions the same procedure applies; the third vector subtracts its projections onto both $\beta_1$ and $\beta_2$: $\beta_3 = \alpha_3 - \cfrac{[\alpha_3,\beta_1]}{[\beta_1,\beta_1]}\beta_1 - \cfrac{[\alpha_3,\beta_2]}{[\beta_2,\beta_2]}\beta_2$.
Orthogonal complement
Definition: Let $U$ be a subspace of $V$. Then $U^\perp = \{v \in V : \forall u \in U,\ \langle v, u\rangle = 0\}$ is called the orthogonal complement of $U$. Here $\forall u$ means "for all $u$" in the set.
- $U^\perp$ is a subspace of $V$;
- $V^\perp = \{0\}$ and $\{0\}^\perp = V$;
- $U^\perp \cap U = \{0\}$;
- if $U, W$ are subsets of $V$ and $U \subseteq W$, then $W^\perp \subseteq U^\perp$.
Theorem (orthogonal decomposition of a finite-dimensional subspace): $V = U \oplus U^\perp$. Consequently:
- $(U^\perp)^\perp = U$;
- $\dim V = \dim U + \dim U^\perp$.
How to find a basis of the orthogonal complement?
- Suppose $\dim V = 3$, $\dim U = 2$, and the basis of $U$ is $\{(1,0,0), (0,1,0)\}$.
- Form the matrix $A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ whose rows are the basis vectors of $U$, padded with a zero row.
- Let $\vec{x} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$ be a candidate vector of $U^\perp$.
- Solve the homogeneous system $A\vec{x} = 0$; here the solution space is spanned by $(0, 0, 1)$.
The basis of the orthogonal complement consists of the fundamental solutions of this system; the number of basis vectors = $\dim V - R(A)$.
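A minimal numpy sketch of this procedure, using the example matrix above: the null space of $A$ (the solutions of $A\vec{x} = 0$) is read off from the SVD.

```python
import numpy as np

# Rows of A are the basis vectors of U, padded with a zero row (example above).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

# Null space via SVD: the rows of Vt whose singular value is (numerically)
# zero span the solution set of Ax = 0, i.e. a basis of U-perp.
_, s, vt = np.linalg.svd(A)
null_basis = vt[s < 1e-10]
print(null_basis)                 # spans {(0, 0, 1)}, up to sign

# Number of basis vectors of U-perp = dim V - rank(A)
assert len(null_basis) == A.shape[1] - np.linalg.matrix_rank(A)
```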