Understanding the Matrix (1)

Reprinted from Meng Yan's "Understanding the Matrix"

Not long ago, chensh, for motives of his own, decided to play the teacher and teach other people linear algebra. So I got cornered several times to discuss with him some of the more conceptual questions in the subject. Apparently chensh felt that teaching linear algebra without being taken for a lunatic by the sharper students would be quite a challenge.

Poor chensh, who told you to wander into this minefield?! Desire really does cloud the mind!

Linear algebra courses, whether they start from determinants or go straight to matrices, are riddled with mystification from the very beginning. Take, for example, the Tongji linear algebra textbook (now in its fourth edition), the most widely used in general engineering departments across the country. It opens by introducing the inversion count, a strange concept that seems to come from nowhere and lead nowhere, then uses it to give an utterly unintuitive definition of the determinant, followed by a series of seemingly pointless determinant properties and exercises: multiply this row by a coefficient and add it to that row, then subtract this column over there. It is all very lively, but you simply cannot see what any of it is for. Most students of mediocre aptitude like me get a little dizzy at this point: we are still hazy about what this thing even is, and already we are made to jump through flaming hoops with it. This is just too absurd! So some people start skipping classes, and more start copying homework. And that is exactly where the trap springs, because what follows can only be described as a sudden plot twist: hard on the heels of this absurd determinant comes an equally absurd but immeasurably great fellow: the matrix arrives! Only years later did I realize that when the teacher rather foolishly enclosed a pile of silly numbers in square brackets and remarked offhandedly, "this thing is called a matrix," my mathematical career was opening onto a scene of epic tragedy. Ever since, in almost anything that has the slightest connection with the word "learning," this matrix fellow has never been absent. For an idiot like me who failed to get linear algebra in one pass, the uninvited arrival of the matrix boss would regularly leave me battered and humiliated. For a long time, whenever I saw a matrix in my reading, I was like Ah Q spotting the Fake Foreign Devil: I would rub my forehead and take a detour around it.

Actually, I am no exception. The average engineering student finds linear algebra hard to learn, both at home and abroad. The Swedish mathematician Lars Garding said in his famous book Encounter with Mathematics: "If you are not familiar with the concepts of linear algebra, then to go and study the natural sciences today would seem to be almost illiteracy." However, "by current international standards, linear algebra is expressed through axioms; it is a second-generation mathematical model, ..., and this brings difficulties to teaching." In other words, the moment we start learning linear algebra we step into the territory of "second-generation mathematical models," which means that the expression and abstraction of mathematics have undergone a wholesale evolution. For those of us doing the learning, the strange thing is that such a drastic paradigm shift is carried out without any explicit notice.

Most engineering students come to understand and use linear algebra proficiently only after taking follow-up courses such as numerical analysis, mathematical programming, and matrix theory. Even then, many people who wield linear algebra skillfully as a tool in research and applied work remain unclear about the seemingly elementary questions raised by beginners in this course. For example:

* What exactly is a matrix? A vector can be regarded as the representation of an object with n mutually independent properties (dimensions); then what is a matrix? If we regard a matrix as an expansion of a new composite vector made up of a group of column (or row) vectors, why does this kind of expansion have such wide application? In particular, why is it precisely the two-dimensional expansion that is so useful? If each element of the matrix were itself a vector, and we expanded once more into a three-dimensional cubic array, wouldn't that be even more useful?

* Why are the multiplication rules for matrices specified the way they are? Why does such a bizarre rule of multiplication work so well in practice? Isn't it amazing that so many seemingly unrelated problems all boil down to matrix multiplication? Could it be that beneath the seemingly inexplicable rule of matrix multiplication lie some essential laws of the world? If so, what are they?

* What exactly is a determinant? Why does it have such weird rules of calculation? What is the relationship between a determinant and its corresponding square matrix? Why do only square matrices have determinants, while general matrices do not? (Don't think this question is silly. If necessary, it would not be impossible to define a determinant for an m×n matrix; the reason it is not done is that there is no need for it. But why is there no need?) Moreover, the rules for computing a determinant seem to have no intuitive connection with any rule of matrix computation; why, then, does the determinant decide so many of a matrix's properties? Is all this mere coincidence?

* Why can matrices be computed in blocks? Block computation seems so arbitrary; why is it valid?

* For the matrix transpose operation A^T we have (AB)^T = B^T A^T, and for the matrix inverse operation A^-1 we have (AB)^-1 = B^-1 A^-1. Why do two seemingly unrelated operations have such similar properties? Is this mere coincidence? (A quick numerical check follows this list.)

* Why is the matrix P^-1 AP said to be "similar" to the matrix A? What does "similar" mean here?

* What is the essence of eigenvalues and eigenvectors? Their definition is surprising: in Ax = λx, the effect of a whole matrix is equivalent to that of a single small number λ, which is indeed a bit strange. But why define them with words like "characteristic" and even "intrinsic"? What exactly do they characterize?
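
As a small aside on the transpose/inverse question above, the shared reversal pattern is at least easy to verify numerically. Here is a minimal sketch in Python with NumPy (my own illustration, not part of the original article); random matrices are invertible with probability 1, so the check is safe in practice:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # two arbitrary 3x3 matrices
B = rng.standard_normal((3, 3))

# (AB)^T = B^T A^T: transposition reverses the order of the factors.
assert np.allclose((A @ B).T, B.T @ A.T)

# (AB)^-1 = B^-1 A^-1: inversion reverses the order as well.
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))

print("Both reversal identities hold numerically.")
```

One intuition, offered as a hint rather than an answer: both operations turn a composition "inside out," so the factor that acted last must be dealt with first.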

Questions of this kind often stump even people who have been using linear algebra for many years. Like an adult facing a child's relentless "why," who in the end can only say "that's just how it is, let's leave it there," many old hands can ultimately only fend off such questions with: "That's just how it's defined; accept it and memorize it." But if these questions go unanswered, linear algebra remains for us a crude, unreasonable, baffling collection of rules. We feel we are not studying a discipline but have been thrown, without explanation, into a coercive world, driven along only under the whip of examinations, entirely unable to appreciate its beauty, harmony, and unity. Even years later, after we have discovered how useful the subject is, we still wonder in confusion: how can it all be such a coincidence?

In my view, this is the consequence of the loss of intuition in our linear algebra teaching. Questions like those above, questions of "how can it be" and "why should it be," cannot be answered to the asker's satisfaction by pure mathematical proof alone. For instance, if you demonstrate by the usual method of proof that block computation of matrices is indeed valid, that does not resolve the asker's puzzlement. What really puzzles them is: why on earth is block computation of matrices valid? Is it mere coincidence, or is it necessarily determined by some essential nature of the object we call a matrix? If the latter, what is that essential nature? A little reflection on the questions above shows that none of them can be settled by mathematical proof alone. If, as our textbooks do, we prove everything and explain nothing, the students we produce can only use the tools proficiently while lacking understanding in any real sense.

Since the rise of the French Bourbaki school in the 1930s, the axiomatic, systematic description of mathematics has achieved enormous success, and the rigor of the mathematical education we receive has greatly increased. But one much-disputed side effect of the axiomatization of mathematics is the loss of intuition in ordinary mathematical education. Mathematicians seem to believe that intuition and abstraction are in conflict, and so they sacrifice the former without hesitation. Yet many people, myself included, doubt this. We do not believe that intuition and abstraction necessarily contradict each other. Especially in mathematics education and in textbooks, helping students build intuition helps them understand the abstract concepts and, through them, the essence of mathematics. Conversely, if formal rigor is pursued single-mindedly, students end up like lab mice forced to perform in flaming hoops: slaves to dry rules.

On intuitive questions about linear algebra like those mentioned above, I have returned to think, on and off, four or five times over the past two years or so. For this I read quite a few Chinese and foreign books on linear algebra, numerical analysis, algebra, and general mathematics, among which the Soviet classic Mathematics: Its Content, Methods and Meaning, Professor Gong Sheng's Five Lectures on Linear Algebra, the previously mentioned Encounter with Mathematics, and Thomas A. Garrity's All the Mathematics You Missed all gave me great inspiration. Even so, my understanding of this subject has gone through several rounds of self-rejection. For example, some conclusions I reached earlier were written up in my blog, but they now look basically wrong to me. So I intend to record my current understanding fairly completely, partly because I feel it is now relatively mature and can be brought out for discussion and advice from others, and partly because, if further insight later overturns what I think now, the snapshot written today will still be meaningful.

Since I plan to write quite a lot, I will write it slowly in several installments. Whether I will have the time to finish it all, or whether it will break off halfway, I don't know; let's write and see.

--------------------------------------------------------------------------

Today, let me first talk about my understanding of several core concepts of linear spaces and matrices. Most of this is written out of my own understanding, essentially without copying from books, so there may be mistakes, and I hope they will be pointed out. What I am after, though, is intuition; that is, to spell out the essential issues behind the mathematics.

First, let's talk about space. This concept is one of the lifelines of modern mathematics. Starting from topological spaces and adding definitions step by step, one obtains many kinds of spaces. The linear space is in fact still rather elementary; if a norm is defined on it, it becomes a normed linear space. A normed linear space that satisfies completeness becomes a Banach space; a normed linear space in which angle is defined gives an inner product space, and an inner product space that in turn satisfies completeness gives a Hilbert space.
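
As a schematic only, following the paragraph above, the two chains of definitions can be laid out like this:

```latex
\text{linear space} \xrightarrow{+\ \text{norm}} \text{normed linear space}
  \xrightarrow{+\ \text{completeness}} \text{Banach space}, \\
\text{normed linear space} \xrightarrow{+\ \text{inner product (angle)}} \text{inner product space}
  \xrightarrow{+\ \text{completeness}} \text{Hilbert space}.
```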

In short, there are many kinds of spaces. If you look up the mathematical definition of some kind of space, it runs roughly: "there exists a set; on this set such-and-such a concept is defined; and certain properties are satisfied," and then it may be called a space. This may seem a bit odd: why call such sets "spaces"? As you will see, there are in fact very good reasons.

The space most familiar to ordinary people like us is, without question, the three-dimensional space we live in (under Newton's view of absolute space and time). Mathematically speaking, this is a three-dimensional Euclidean space. Never mind all that for the moment; let's look at the most basic features of this familiar space. A little careful thought shows that this three-dimensional space: 1. consists of many points (in fact infinitely many); 2. these points stand in relative relationships to one another; 3. length and angle can be defined in it; 4. it can accommodate motion. By motion here we mean movement (transformation) from one point to another point, not "continuous" motion in the calculus sense.

Of the properties above, the most crucial is the fourth. The first and second are only the foundation of a space, not properties peculiar to spaces: any discussion of a mathematical problem requires a set, and in most cases some structures (relations) defined on that set; having these alone does not yet make something a space. The third is too special: other spaces need not have it, so it is not a key property either. Only the fourth is the essence of a space. That is, accommodating motion is the essential characteristic of a space.

Once we see this, we can extend our understanding of three-dimensional space to other spaces. In fact, whatever the space, it must accommodate and support rule-conforming motions (transformations) occurring within it. You will find that a given kind of space usually comes with a corresponding kind of transformation: topological spaces have topological transformations, linear spaces have linear transformations, affine spaces have affine transformations. These transformations are nothing more than the forms of motion permitted in the corresponding space.

So it suffices to know this: a "space" is a collection of objects that accommodates motion, and the transformation specifies the motion of the corresponding space.

Now let's look at linear spaces. The definition of a linear space can be found in any book. But since we accept that a linear space is a space, two most basic questions must be answered first, namely:

1. A space is a collection of objects, and a linear space is a space too, hence also a collection of objects. So: a linear space is a collection of what kind of objects? Put differently, what do the objects in a linear space have in common?

2. How is motion in a linear space expressed? That is, how are linear transformations represented?

Let's answer the first question first, and here there is no need to beat around the bush; the answer can be given straight away: any object in a linear space can, by choosing a basis and coordinates, be expressed in the form of a vector. I will skip the usual vector spaces and give two less trivial examples:

L1. The set of all polynomials of degree at most n forms a linear space; that is, each object in this linear space is a polynomial. If we take x^0, x^1, ..., x^n as a basis, then any such polynomial can be expressed as an (n+1)-dimensional vector, in which each component a_i is just the coefficient of the x^(i-1) term of the polynomial. It is worth noting that there are many ways to choose the basis; any chosen set of basis elements will do as long as it is linearly independent. This uses concepts mentioned later, so I won't go into it here; it's just a mention. (A small code sketch follows example L2 below.)

L2. The set of all n-times continuously differentiable functions on the closed interval [a, b] forms a linear space; that is, each object in this linear space is a continuous function. For any such continuous function, by the Weierstrass approximation theorem, one can always find polynomial functions that come arbitrarily close to it, that is, make the difference as small as we please. In this way the problem is reduced to L1, and there is no need to repeat the rest.
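
To make these examples concrete, here is a minimal sketch in Python with NumPy (my own illustration; the particular polynomial, function, and degree are arbitrary choices). It treats a polynomial as its coordinate vector in the basis x^0, x^1, ..., x^n, and then, in the spirit of L2, approximates a continuous function on [a, b] by such a polynomial:

```python
import numpy as np

# L1: in the basis (x^0, x^1, x^2), the polynomial p(x) = 2 + 3x + 5x^2
# is just its coordinate vector of coefficients.
p = np.array([2.0, 3.0, 5.0])          # component a_i = coefficient of x^i

# Evaluating p at a point is a dot product with the basis values there.
x = 1.5
basis_at_x = np.array([1.0, x, x**2])
assert np.isclose(p @ basis_at_x, 2 + 3*x + 5*x**2)

# L2: approximate a continuous function on [a, b] = [0, pi] by a
# polynomial; raising the degree drives the error down (Weierstrass).
xs = np.linspace(0.0, np.pi, 200)
coeffs = np.polynomial.polynomial.polyfit(xs, np.sin(xs), deg=7)
approx = np.polynomial.polynomial.polyval(xs, coeffs)
print("max |sin - p|:", np.max(np.abs(np.sin(xs) - approx)))
```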

So vectors are formidable: as long as you find a suitable basis, a vector can represent any object in a linear space. There is a great deal hidden in this. On the surface a vector is just a column of numbers, but because it is ordered, information can be carried not only by the numbers themselves but also by the position each number occupies. Why is it that in programming the array is the simplest structure and yet boundlessly powerful? The root cause lies exactly here. But that is another question, so I will leave it at that.

Now for the second question, whose answer touches one of the most fundamental issues of linear algebra.

Motion in a linear space is called a linear transformation. That is, you can move from one point of a linear space to any other point by a linear transformation. Now here is the interesting part: in a linear space, once you have selected a basis, you can not only use a vector to describe any object in the space, but also use a matrix to describe any motion (transformation) in that space. And the way to make an object undergo the corresponding motion is to multiply the vector representing the object by the matrix representing the motion.

In short, once a basis has been selected in a linear space, vectors describe objects, matrices describe the motions of objects, and motion is applied by multiplying the matrix with the vector.
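
A minimal sketch of this "matrix times vector = motion applied" picture, in Python with NumPy (my own illustration; the rotation matrix is the standard one for the plane):

```python
import numpy as np

# In the standard basis of the plane, this matrix describes one motion:
# counterclockwise rotation by 90 degrees about the origin.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])   # the object: the point (1, 0)
moved = R @ v              # the motion applied: matrix times vector
print(moved)               # approximately [0, 1]

# Composing motions is matrix multiplication: rotating twice by 90
# degrees is the single motion R @ R, a rotation by 180 degrees.
assert np.allclose(R @ (R @ v), [-1.0, 0.0])
```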

Yes, the essence of a matrix is a description of motion. If someone asks you from now on what a matrix is, you can tell them loudly and clearly: the essence of a matrix is a description of motion. (chensh, that means you!)

But how interesting: can't a vector itself be regarded as an n × 1 matrix? It is truly marvelous that the objects of a space and the motions in it can be represented in the same kind of way. Can we call this a coincidence? If it is a coincidence, it is a lucky one indeed! One may say that most of the marvelous properties in linear algebra are directly related to this coincidence.
