Definitions and Theorems of Chapter 2

Definition. A vector space (or linear space) consists of the following:

  1. a field $F$ of scalars;
  2. a set $V$ of objects, called vectors;
  3. a rule (or operation), called vector addition, which associates with each pair of vectors $\alpha,\beta$ in $V$ a vector $\alpha+\beta$ in $V$, called the sum of $\alpha$ and $\beta$, in such a way that
    (a) addition is commutative, $\alpha+\beta=\beta+\alpha$;
    (b) addition is associative, $\alpha+(\beta+\gamma)=(\alpha+\beta)+\gamma$;
    (c) there is a unique vector $0$ in $V$, called the zero vector, such that $\alpha+0=\alpha$ for all $\alpha$ in $V$;
    (d) for each vector $\alpha$ in $V$ there is a unique vector $-\alpha$ in $V$ such that $\alpha+(-\alpha)=0$;
  4. a rule (or operation), called scalar multiplication, which associates with each scalar $c$ in $F$ and vector $\alpha$ in $V$ a vector $c\alpha$ in $V$, called the product of $c$ and $\alpha$, in such a way that
    (a) $1\alpha=\alpha$ for every $\alpha$ in $V$;
    (b) $(c_1c_2)\alpha=c_1(c_2\alpha)$;
    (c) $c(\alpha+\beta)=c\alpha+c\beta$;
    (d) $(c_1+c_2)\alpha=c_1\alpha+c_2\alpha$.
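
Example. The standard model is the $n$-tuple space: for a field $F$, the set $F^n$ of all $n$-tuples $\alpha=(x_1,\dots,x_n)$ of scalars in $F$, with componentwise operations $\alpha+\beta=(x_1+y_1,\dots,x_n+y_n)$ and $c\alpha=(cx_1,\dots,cx_n)$, satisfies all of the conditions above and is therefore a vector space over $F$.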

Definition. A vector $\beta$ in $V$ is said to be a linear combination of the vectors $\alpha_1,\dots,\alpha_n$ in $V$ provided there exist scalars $c_1,\dots,c_n$ in $F$ such that
$$\beta=c_1\alpha_1+\cdots+c_n\alpha_n=\sum_{i=1}^n c_i\alpha_i.$$
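
Example. In $\mathbb{R}^2$, the vector $\beta=(3,5)$ is the linear combination $3\alpha_1+5\alpha_2$ of $\alpha_1=(1,0)$ and $\alpha_2=(0,1)$; indeed, every vector $(x,y)$ in $\mathbb{R}^2$ is the linear combination $x\alpha_1+y\alpha_2$.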

Definition. Let $V$ be a vector space over the field $F$. A subspace of $V$ is a subset $W$ of $V$ which is itself a vector space over $F$ with the operations of vector addition and scalar multiplication on $V$.

Theorem 1. A non-empty subset $W$ of $V$ is a subspace of $V$ if and only if for each pair of vectors $\alpha,\beta$ in $W$ and each scalar $c$ in $F$ the vector $c\alpha+\beta$ is in $W$.
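
Example. The criterion of Theorem 1 shows quickly that $W=\{(x_1,x_2)\in\mathbb{R}^2 : x_2=2x_1\}$ is a subspace of $\mathbb{R}^2$: $W$ is non-empty, since it contains $(0,0)$, and if $\alpha=(a,2a)$, $\beta=(b,2b)$ and $c\in\mathbb{R}$, then $c\alpha+\beta=(ca+b,\,2(ca+b))$ lies in $W$.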

Lemma. If $A$ is an $m\times n$ matrix over $F$ and $B,C$ are $n\times p$ matrices over $F$, then
$$A(dB+C)=d(AB)+AC\qquad\text{for every }d\in F.$$
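
In particular, the lemma shows that the solution set of a homogeneous system is a subspace: if $A\alpha=0$ and $A\beta=0$ for column vectors $\alpha,\beta$ in $F^{n\times 1}$, then $A(c\alpha+\beta)=c(A\alpha)+A\beta=0$, so by Theorem 1 the solution space of $AX=0$ is a subspace of $F^{n\times 1}$.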

Theorem 2. Let $V$ be a vector space over the field $F$. The intersection of any collection of subspaces of $V$ is a subspace of $V$.

Definition. Let $S$ be a set of vectors in a vector space $V$. The subspace spanned by $S$ is defined to be the intersection $W$ of all subspaces of $V$ which contain $S$. When $S$ is a finite set of vectors, $S=\{\alpha_1,\alpha_2,\dots,\alpha_n\}$, we shall simply call $W$ the subspace spanned by the vectors $\alpha_1,\alpha_2,\dots,\alpha_n$.

Theorem 3. The subspace spanned by a non-empty subset $S$ of a vector space $V$ is the set of all linear combinations of vectors in $S$.
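
Example. In $\mathbb{R}^3$, the subspace spanned by $S=\{(1,0,0),(0,1,0)\}$ is, by Theorem 3, the set of all linear combinations $c_1(1,0,0)+c_2(0,1,0)=(c_1,c_2,0)$, i.e. the $xy$-plane.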

Definition. If $S_1,S_2,\dots,S_k$ are subsets of a vector space $V$, the set of all sums $\alpha_1+\alpha_2+\cdots+\alpha_k$ of vectors $\alpha_i$ in $S_i$ is called the sum of the subsets $S_1,S_2,\dots,S_k$ and is denoted by $S_1+S_2+\cdots+S_k$ or by $\sum_{i=1}^k S_i$.

Definition. Let $V$ be a vector space over $F$. A subset $S$ of $V$ is said to be linearly dependent (or simply, dependent) if there exist distinct vectors $\alpha_1,\alpha_2,\dots,\alpha_n$ in $S$ and scalars $c_1,c_2,\dots,c_n$ in $F$, not all of which are $0$, such that
$$c_1\alpha_1+\cdots+c_n\alpha_n=0.$$
A set which is not linearly dependent is called linearly independent. If the set $S$ contains only finitely many vectors $\alpha_1,\alpha_2,\dots,\alpha_n$, we sometimes say that $\alpha_1,\alpha_2,\dots,\alpha_n$ are dependent (or independent) instead of saying $S$ is dependent (or independent).
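
Example. In $\mathbb{R}^2$, the vectors $\alpha_1=(1,2)$ and $\alpha_2=(2,4)$ are linearly dependent, since $2\alpha_1+(-1)\alpha_2=0$ with the scalars $2,-1$ not both zero. The set $\{(1,0),(0,1)\}$ is linearly independent, since $c_1(1,0)+c_2(0,1)=(c_1,c_2)=0$ forces $c_1=c_2=0$.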

Definition. Let $V$ be a vector space. A basis for $V$ is a linearly independent set of vectors in $V$ which spans the space $V$. The space $V$ is finite-dimensional if it has a finite basis.
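
Example. The standard basis of $F^n$ consists of the vectors $\epsilon_1=(1,0,\dots,0),\ \epsilon_2=(0,1,\dots,0),\ \dots,\ \epsilon_n=(0,0,\dots,1)$; this set is linearly independent and spans $F^n$, so $F^n$ is finite-dimensional.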

Theorem 4. Let $V$ be a vector space which is spanned by a finite set of vectors $\beta_1,\beta_2,\dots,\beta_m$. Then any independent set of vectors in $V$ is finite and contains no more than $m$ elements.
Corollary 1. If $V$ is a finite-dimensional vector space, then any two bases of $V$ have the same (finite) number of elements.
Corollary 2. Let $V$ be a finite-dimensional vector space and let $n=\dim V$. Then
(a) any subset of $V$ which contains more than $n$ vectors is linearly dependent;
(b) no subset of $V$ which contains fewer than $n$ vectors can span $V$.
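
Example. In $\mathbb{R}^2$ (where $n=2$), any three vectors, such as $(1,0)$, $(0,1)$, $(1,1)$, are linearly dependent, here because $(1,0)+(0,1)-(1,1)=0$; and no single vector can span the plane, since the span of one vector is either $\{0\}$ or a line through the origin.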

Theorem 5. If $W$ is a subspace of a finite-dimensional vector space $V$, every linearly independent subset of $W$ is finite and is part of a (finite) basis for $W$.
Corollary 1. If $W$ is a proper subspace of a finite-dimensional vector space $V$, then $W$ is finite-dimensional and $\dim W<\dim V$.
Corollary 2. In a finite-dimensional vector space $V$ every non-empty linearly independent set of vectors is part of a basis.
Corollary 3. Let $A$ be an $n\times n$ matrix over a field $F$, and suppose the row vectors of $A$ form a linearly independent set of vectors in $F^n$. Then $A$ is invertible.
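
Example. The rows $(1,1)$ and $(1,-1)$ of $A=\begin{pmatrix}1&1\\1&-1\end{pmatrix}$ are linearly independent in $\mathbb{R}^2$, since neither is a scalar multiple of the other, so by Corollary 3 the matrix $A$ is invertible.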

Theorem 6. If $W_1$ and $W_2$ are finite-dimensional subspaces of a vector space $V$, then $W_1+W_2$ is finite-dimensional and
$$\dim W_1+\dim W_2=\dim(W_1\cap W_2)+\dim(W_1+W_2).$$
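
Example. If $W_1$ and $W_2$ are two distinct planes through the origin in $\mathbb{R}^3$ (so $\dim W_1=\dim W_2=2$), then $W_1+W_2=\mathbb{R}^3$, and the formula gives $\dim(W_1\cap W_2)=2+2-3=1$: two distinct planes through the origin meet in a line.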

Definition. If $V$ is a finite-dimensional vector space, an ordered basis for $V$ is a finite sequence of vectors which is linearly independent and spans $V$.

Theorem 7. Let $V$ be an $n$-dimensional vector space over the field $F$, and let $\mathfrak B$ and $\mathfrak B'$ be two ordered bases of $V$. Then there is a unique, necessarily invertible, $n\times n$ matrix $P$ with entries in $F$ such that
$$[\alpha]_{\mathfrak B}=P[\alpha]_{\mathfrak B'},\qquad [\alpha]_{\mathfrak B'}=P^{-1}[\alpha]_{\mathfrak B}$$
for every vector $\alpha$ in $V$. The columns of $P$ are given by
$$P_j=[\alpha_j']_{\mathfrak B},\qquad j=1,\dots,n,$$
where $\alpha_1',\dots,\alpha_n'$ are the vectors of the ordered basis $\mathfrak B'$.
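
Example. In $\mathbb{R}^2$, let $\mathfrak B=\{\epsilon_1,\epsilon_2\}$ be the standard ordered basis and let $\mathfrak B'=\{\alpha_1',\alpha_2'\}$ with $\alpha_1'=(1,1)$, $\alpha_2'=(1,-1)$. Then $P=\begin{pmatrix}1&1\\1&-1\end{pmatrix}$, whose columns are $[\alpha_1']_{\mathfrak B}$ and $[\alpha_2']_{\mathfrak B}$, and $P^{-1}=\frac12\begin{pmatrix}1&1\\1&-1\end{pmatrix}$. For $\alpha=(3,1)$ we get $[\alpha]_{\mathfrak B'}=P^{-1}[\alpha]_{\mathfrak B}=\begin{pmatrix}2\\1\end{pmatrix}$, and indeed $2\alpha_1'+\alpha_2'=(3,1)$.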

Theorem 8. Suppose $P$ is an $n\times n$ invertible matrix over $F$. Let $V$ be an $n$-dimensional vector space over $F$, and let $\mathfrak B$ be an ordered basis of $V$. Then there is a unique ordered basis $\mathfrak B'$ of $V$ such that
$$[\alpha]_{\mathfrak B}=P[\alpha]_{\mathfrak B'},\qquad [\alpha]_{\mathfrak B'}=P^{-1}[\alpha]_{\mathfrak B}$$
for every vector $\alpha$ in $V$.

Theorem 9. Row-equivalent matrices have the same row space.

Theorem 10. Let $R$ be a non-zero row-reduced echelon matrix. Then the non-zero row vectors of $R$ form a basis for the row space of $R$.
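
Example. For $R=\begin{pmatrix}1&0&2\\0&1&3\\0&0&0\end{pmatrix}$, the non-zero rows $(1,0,2)$ and $(0,1,3)$ form a basis for the row space of $R$: they span it by definition, and they are independent because a combination $c_1(1,0,2)+c_2(0,1,3)=(c_1,c_2,2c_1+3c_2)$ vanishes only when $c_1=c_2=0$.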

Theorem 11. Let $m$ and $n$ be positive integers and let $F$ be a field. Suppose $W$ is a subspace of $F^n$ and $\dim W\leq m$. Then there is precisely one $m\times n$ row-reduced echelon matrix over $F$ which has $W$ as its row space.
Corollary. Each $m\times n$ matrix $A$ is row-equivalent to one and only one row-reduced echelon matrix.
Corollary. Let $A$ and $B$ be $m\times n$ matrices over the field $F$. Then $A$ and $B$ are row-equivalent if and only if they have the same row space.

Reposted from blog.csdn.net/christangdt/article/details/104430648