毕业设计外文文献翻译
院系：数学与计算机科学学院
年级专业：12级数学与应用数学
姓名：施钰桢
学号：121301025

有限维向量空间

本文译自：Paul R. Halmos, Finite-Dimensional Vector Spaces.

§2. 向量空间

现在我们来到本书的基本概念。在下面的定义中，我们假定已经给定了一个数域 F；所要使用的标量就是 F 中的元素。

定义. 向量空间是由称为向量的元素构成的集合 V，满足以下公理。

(A) 对 V 中任意一对向量 x 和 y，都对应着 V 中唯一确定的一个向量 x + y，称为 x 与 y 的和，使得
(1) 加法可交换：$x + y = y + x$；
(2) 加法可结合：$x + (y + z) = (x + y) + z$；
(3) V 中存在唯一的向量 0（称为原点），使得对每个向量 x 都有 $x + 0 = x$；
(4) 对 V 中每个向量 x，都对应着唯一的向量 $-x$，使得 $x + (-x) = 0$。

(B) 对任意一对 α 和 x，其中 α 是标量而 x 是 V 中的向量，都对应着 V 中唯一确定的一个向量 αx，称为 α 与 x 的积，使得
(1) 标量乘法可结合：$\alpha(\beta x) = (\alpha\beta)x$；
(2) 对每个向量 x 有 $1x = x$。

(C) (1) 标量乘法对向量加法满足分配律：$\alpha(x + y) = \alpha x + \alpha y$；
(2) 向量乘法对标量加法满足分配律：$(\alpha + \beta)x = \alpha x + \beta x$。

这些公理并不要求在逻辑上彼此独立；它们只是对我们所要研究的对象的一种方便的刻画。向量空间 V 与其基础数域 F 之间的关系，通常说成 V 是 F 上的向量空间。如果 F 是实数域 R，则称 V 为实向量空间；同样地，如果 F 是 Q 或者 F 是 E，我们就说 V 是有理向量空间或复向量空间。

§3. 实例

在讨论这些公理的推论之前，我们先举一些例子。在以后的全部工作中我们将一再引用这些例子，并沿用这里建立的记号。

(1) 设 $E^1$（即 E）为全体复数的集合；如果把 x + y 和 αx 解释为通常的复数加法和乘法，$E^1$ 就成为一个复向量空间。

(2) 设 P 为变量 t 的全体复系数多项式的集合。把向量加法解释为通常的两个多项式相加，把标量乘法解释为多项式与复数相乘，P 就成为一个复向量空间；P 中的原点是恒等于零的多项式。

就本书的主要内容而言，例 (1) 太简单而例 (2) 太复杂，都不够典型。下面我们再给出一个复向量空间的例子，它（我们以后会看到）对我们的全部目的来说已经足够一般。

(3) 设 $E^n$（n = 1, 2, …）为全体复数 n 元组的集合。若 $x = (\xi_1, \dots, \xi_n)$ 和 $y = (\eta_1, \dots, \eta_n)$ 是 $E^n$ 中的元素，我们按定义写
$x + y = (\xi_1 + \eta_1, \dots, \xi_n + \eta_n)$，
$\alpha x = (\alpha\xi_1, \dots, \alpha\xi_n)$，
$0 = (0, \dots, 0)$，
$-x = (-\xi_1, \dots, -\xi_n)$。
容易验证 §2 中公理 (A)、(B)、(C) 的各条都成立，所以 $E^n$ 是一个复向量空间；它将被称为 n 维复坐标空间。

(4) 对每个正整数 n，设 $P_n$ 为次数 ≤ n − 1 的全体多项式（复系数，如例 (2)）连同恒等于零的多项式所成的集合。（在通常关于次数的讨论中，零多项式的次数没有定义，所以我们不能说它的次数 ≤ n − 1。）线性运算（加法和标量乘法）的解释与 (2) 中相同，$P_n$ 是一个复向量空间。

(5) 与 $E^n$ 关系密切的是全体实数 n 元组的集合 $R^n$。加法和标量乘法的形式定义与 $E^n$ 相同，只是现在只考虑实数标量 α；空间 $R^n$ 是一个实向量空间，它将被称为 n 维实坐标空间。

(6) 前面所有的例子都可以推广。例如，(1) 的一个明显推广可以这样叙述：每个数域都可以看成它自身上的向量空间。(3) 和 (5) 的一个共同推广是：从任意数域 F 出发，作出 F 中元素的 n 元组所成的集合 $F^n$；线性运算的形式定义与 F = E 的情形相同。

(7) 根据定义，数域至少有两个元素；而向量空间却可以只有一个。由于每个向量空间都含有原点，所以本质上（即除记号外）只有一个仅含一个向量的向量空间。这个最平凡的向量空间将用 θ 表示。

(8) 如果在全体实数的集合 R 中，加法按通常方式定义，实数与有理数的乘法也按通常方式定义，那么 R 就成为一个有理向量空间。

(9) 如果在全体复数的集合 E 中，加法按通常方式定义，复数与实数的乘法也按通常方式定义，那么 E 就成为一个实向量空间。（把这个例子与 (1) 对比：它们是很不相同的。）

§4. 评论

关于我们的公理和记号，有几点评论。数域的公理与数域上向量空间的公理之间有着惊人的相似之处（以及同样惊人的差异）。在两种情形下，公理 (A) 都描述该系统的加法结构，公理 (B) 描述其乘法结构，公理 (C) 描述两种结构之间的联系。熟悉代数术语的读者会认出，（§1 和 §2 中的）公理 (A) 正是阿贝尔（交换）群的定义条件；（§2 中的）公理 (B) 和 (C) 表明这个群允许标量作为算子。顺便提一下，如果标量是某个环（而不是数域）中的元素，那么与向量空间相应的推广概念称为模。

特殊的实向量空间（如 $R^2$ 和 $R^3$）在几何中是熟悉的。在现阶段，我们坚持考虑 R 以外的数域，特别是复数域 E，似乎没有什么明显的理由。我们希望读者暂且相信：以后我们将用到复数的一些深刻性质（共轭、代数封闭性），而且无论是向量空间在现代（量子力学）物理中的应用，还是我们的结果向希尔伯特空间的数学推广，复数都在其中起着重要作用。复数的一大缺点是作图困难：$E^1$ 的通常图像（阿尔冈图）与 $R^2$ 的图像无法区分，而 $E^2$ 的图形表示似乎超出了人力所及的范围。因此，当不得不使用图像语言时，我们将借用 $R^n$ 的术语来谈 $E^n$，例如把 $E^2$ 称为平面。

最后谈谈记号。我们注意到符号 0 已经有了两种含义：一次作为标量，一次作为向量。更糟的是，以后在引入线性泛函和线性变换时，我们还要赋予它别的含义。幸运的是，0 的各种解释之间的关系使得在这一句提醒之后，这种用法不会引起混淆。

练习

1. 证明：如果 x 和 y 是向量而 α 是标量，则下列关系成立。
(a) $0 + x = x$。
(b) $-0 = 0$。
(c) $\alpha \cdot 0 = 0$。
(d) $0 \cdot x = 0$。（注意这个等式两边用的是同一个符号 0：左边表示标量，右边表示向量。）
(e) 如果 $\alpha x = 0$，那么 α = 0 或 x = 0（或两者同时成立）。
(f) $-x = (-1)x$。
(g) $y + (x - y) = x$（这里 $x - y = x + (-y)$）。

2. 如果 p 是素数，则 $Z_p^n$ 是 $Z_p$ 上的向量空间（参见 §1 练习 3）；这个向量空间中有多少个向量？

3. 设 V 为全体实数有序对的集合。如果 $x = (\xi_1, \xi_2)$ 和 $y = (\eta_1, \eta_2)$ 是 V 中的元素，规定
$x + y = (\xi_1 + \eta_1, \xi_2 + \eta_2)$，
$\alpha x = (\alpha\xi_1, 0)$，
$0 = (0, 0)$，
$-x = (-\xi_1, -\xi_2)$。
在线性运算的这些定义下，V 是向量空间吗？为什么？

4. 有时向量空间的一个子集本身（在已给定的线性运算下）也是向量空间。例如，考虑向量空间 $E^3$ 以及由满足下列条件的向量 $(\xi_1, \xi_2, \xi_3)$ 组成的 $E^3$ 的子集 V：
(a) $\xi_1$ 是实数；
(b) $\xi_1 = 0$；
(c) $\xi_1 = 0$ 或 $\xi_2 = 0$；
(d) $\xi_1 + \xi_2 = 0$；
(e) $\xi_1 + \xi_2 = 1$。
在哪些情形下 V 是向量空间？

5. 考虑向量空间 P 以及由满足下列条件的向量（多项式）x 组成的 P 的子集 V：
(a) x 的次数为 3；
(b) $2x(0) = x(1)$；
(c) 当 $0 \le t \le 1$ 时 $x(t) \ge 0$；
(d) 对一切 t 有 $x(t) = x(1 - t)$。
在哪些情形下 V 是向量空间？
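下面是编者补充的一个说明性小例（原文中没有）：用 Python 把例 (3) 中 $E^3$ 的向量表示为复数三元组，按逐分量方式定义加法与数乘，并抽查 §2 的几条公理以及练习 1(f)。其中的函数名 add、scale 只是为演示而取的假设性记号，并非出自原文。

```python
# 编者的示意性草稿：按分量定义 E^3 中的加法与数乘，抽查若干公理。
def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def scale(alpha, x):
    return tuple(alpha * a for a in x)

x = (1 + 2j, 0, -1j)
y = (3, -1 + 1j, 2)
alpha, beta = 2 - 1j, 0.5j

assert add(x, y) == add(y, x)                                            # (A1) 交换律
assert scale(alpha, add(x, y)) == add(scale(alpha, x), scale(alpha, y))  # (C1) 分配律
assert scale(alpha, scale(beta, x)) == scale(alpha * beta, x)            # (B1) 结合律
assert add(x, scale(-1, x)) == (0, 0, 0)                                 # 练习 1(f): -x = (-1)x
```

当然，这种数值抽查只针对具体的向量和标量，并不能代替公理的一般证明。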
§5. 线性相关

既然已经描述了我们将要研究的空间，就必须说明这些空间的元素之间那些使我们感兴趣的关系。

先就求和记号说几句。如果对一个指标集中的每个指标 i 都给定了一个向量 $x_i$，而又没有必要或不便把指标集写得很明确，我们就简单地说一组向量 $\{x_i\}$。（我们允许同一个向量对应于两个不同的指标。因此严格说来，重要的不是哪些向量出现在 $\{x_i\}$ 中，而是它们以怎样的方式出现。）如果所考虑的指标集是有限的，我们就把相应向量的和记作 $\sum_i x_i$（或者在需要时用更明确的记号，如 $\sum_{i=1}^{n} x_i$）。为了避免频繁而琐碎的分情形讨论，即使没有可供求和的指标 i，或者更确切地说，即使所考虑的指标集是空的，也把 $\sum_i x_i$ 这样的和式纳入一般理论，这是一个好主意。（在这种情形下当然没有向量可加，或者更确切地说，集合 $\{x_i\}$ 也是空的。）这种“空和”的值自然地规定为向量 0。

定义. 如果存在相应的一组标量 $\{\alpha_i\}$，不全为零，使得 $\sum_i \alpha_i x_i = 0$，则称有限向量组 $\{x_i\}$ 线性相关。另一方面，如果 $\sum_i \alpha_i x_i = 0$ 蕴涵对每个 i 都有 $\alpha_i = 0$，则称 $\{x_i\}$ 线性无关。

这个定义的措辞有意把空集的情形也包括在内；空集情形下的结论虽然乍看起来有些怪异，却与理论的其余部分非常协调。结论是：空的向量集是线性无关的。事实上，如果根本没有指标 i，那就不可能从中挑出一些指标并给选出的指标指定非零标量，使某个和式等于零。困难不在于避免指定零，而在于找到一个可以指定标量的指标。注意，这一论证表明空集不是线性相关的；对于不熟悉“空虚蕴涵”式论证的读者来说，线性无关的定义与线性相关定义的直接否定相等价这一点，还需要一点直观上的说明。要对断言“$\sum_i \alpha_i x_i = 0$ 蕴涵每个 $\alpha_i = 0$”在没有指标 i 时仍然成立感到安心，最容易的办法是把它改述为：“如果 $\sum_i \alpha_i x_i = 0$，那么不存在使 $\alpha_i \ne 0$ 的指标 i。”当根本没有指标 i 时，这种说法显然成立。

线性相关与线性无关是向量集合的性质；然而习惯上也把这两个形容词用于向量本身，因此我们有时说“一组线性无关的向量”而不说“线性无关的一个向量组”。对于不一定有限的向量集合 x，谈它的线性相关与线性无关也是方便的：如果 x 的每个有限子集都线性无关，就说 x 线性无关；否则就说 x 线性相关。

为了深入了解线性相关的含义，我们来考察已有的那些向量空间的例子。

(1) 如果 x 和 y 是 $E^1$ 中任意两个向量，那么 x 和 y 构成一个线性相关组。当 x = y = 0 时这是平凡的；如果不然，那么例如有关系式 $y\,x + (-x)\,y = 0$。由于每个包含线性相关子集的集合本身显然也线性相关，这就表明 $E^1$ 中任何含多于一个元素的集合都是线性相关的。

(2) 空间 P 中的情形更有意思。由
$x(t) = 1 - t$，$y(t) = t(1 - t)$，$z(t) = 1 - t^2$
定义的向量 x、y、z 是线性相关的，因为 $x + y - z = 0$。然而，由
$x_0(t) = 1$，$x_1(t) = t$，$x_2(t) = t^2$，…
定义的无限向量组 $x_0, x_1, x_2, \dots$ 是线性无关的；因为如果有形如
$\alpha_0 x_0 + \alpha_1 x_1 + \cdots + \alpha_n x_n = 0$
的关系式，我们就得到多项式恒等式
$\alpha_0 + \alpha_1 t + \cdots + \alpha_n t^n = 0$，
由此可得 $\alpha_0 = \alpha_1 = \cdots = \alpha_n = 0$。

(3) 正如前面提到的，空间 $E^n$ 是我们所要研究对象的原型；我们来考察 n = 3 的情形。对熟悉高维几何的人来说，这个空间中（或者更确切地说，它的实类比 $R^3$ 中）线性相关的概念有具体的几何意义，这里我们只提一下：用几何语言说，两个向量线性相关当且仅当它们与原点共线，三个向量线性相关当且仅当它们与原点共面。（如果把向量不看成空间中的一个点，而看成从原点指向某给定点的箭头，那么前面这句话应当修改：把其中两次出现的“与原点”都删去。）我们不久将引入向量空间中线性流形（即向量子空间）的概念，到那时我们偶尔会使用这类几何考虑所启示的语言。

§6. 线性组合

只要 $x = \sum_i \alpha_i x_i$，我们就说 x 是 $\{x_i\}$ 的一个线性组合；这一术语在语言上的种种简单引申，我们将不再另作解释而直接使用。因此，当 x 是 $\{x_i\}$ 的线性组合时，我们就说 x 线性相关于 $\{x_i\}$；我们把下述事实的证明留给读者：如果 $\{x_i\}$ 线性无关，那么 x 是 $\{x_i\}$ 的线性组合的充分必要条件是，把 x 添加到 $\{x_i\}$ 中所得到的扩大集合线性相关。注意，按照空和的定义，原点是空向量集的线性组合，而且它是具有这一性质的唯一向量。

下面的定理是关于线性相关的基本结果。

定理. 非零向量组 $x_1, \dots, x_n$ 线性相关的充分必要条件是，其中某个 $x_k$（$2 \le k \le n$）是它前面那些向量的线性组合。

证明. 设向量 $x_1, \dots, x_n$ 线性相关，并设 k 是 2 到 n 之间使 $x_1, \dots, x_k$ 线性相关的第一个整数。（在最坏的情形下，我们的假设保证取 k = n 总是可以的。）于是
$\alpha_1 x_1 + \cdots + \alpha_k x_k = 0$
对某组适当的（不全为零的）α 成立；而且无论这组 α 如何，都不可能有 $\alpha_k = 0$，否则 $x_1, \dots, x_{k-1}$ 之间就会有线性相关关系，与 k 的定义矛盾。因此
$x_k = -\dfrac{\alpha_1}{\alpha_k} x_1 - \cdots - \dfrac{\alpha_{k-1}}{\alpha_k} x_{k-1}$，
这正是所要证明的。这就证明了条件的必要性；充分性是显然的，因为正如前面说过的，每个包含线性相关子集的集合本身就是线性相关的。

§7. 基

定义. 向量空间 V 中的一个（线性）基（或坐标系）是 V 中一个线性无关的向量集合 x，使得 V 中的每个向量都是 x 中元素的线性组合。如果向量空间 V 有一个有限基，就称 V 是有限维的。

除了偶尔考虑一些例子之外，本书将把注意力集中于有限维向量空间。

作为基的例子，我们再次转向空间 P 和 $E^n$。在 P 中，集合 $\{x_n\}$（其中 $x_n(t) = t^n$，n = 0, 1, 2, …）是一个基：由定义，每个多项式都是有限多个 $x_n$ 的线性组合。此外，P 没有有限基，因为对任意给定的有限多个多项式，我们都能找到次数比它们都高的多项式，而后者显然不是前者的线性组合。

$E^n$ 中基的一个例子是由条件“$x_i$ 的第 j 个坐标为 $\delta_{ij}$”定义的向量 $x_i$（i = 1, …, n）。（这里我们第一次使用常见的克罗内克记号 δ，它由 $j = i$ 时 $\delta_{ij} = 1$、$j \ne i$ 时 $\delta_{ij} = 0$ 来定义。）于是我们断言，在 $E^3$ 中，向量 $x_1 = (1, 0, 0)$，$x_2 = (0, 1, 0)$，$x_3 = (0, 0, 1)$ 构成一个基。不难看出它们线性无关；而公式
$x = (\xi_1, \xi_2, \xi_3) = \xi_1 x_1 + \xi_2 x_2 + \xi_3 x_3$
表明 $E^3$ 中每个 x 都是它们的线性组合。

一般地，在具有基 $\{x_1, \dots, x_n\}$ 的有限维向量空间 V 中，每个 x 都可以写成
$x = \sum_i \xi_i x_i$
的形式；我们断言诸 $\xi_i$ 由 x 唯一确定。这一断言的证明是线性相关理论中常用的一种论证。如果还有 $x = \sum_i \eta_i x_i$，那么两式相减便得
$\sum_i (\xi_i - \eta_i) x_i = 0$；
由于诸 $x_i$ 线性无关，这意味着对 $i = 1, \dots, n$ 都有 $\xi_i - \eta_i = 0$；换言之，诸 $\eta_i$ 与诸 $\xi_i$ 相同。

参考文献
[1] N. Bourbaki, Algèbre; Chap. II (Algèbre linéaire), Paris, 1947, and Chap. III (Algèbre multilinéaire), Paris, 1948.
[2] B. L. van der Waerden, Modern Algebra, New York, 1953.
[3] S. Banach, Théorie des opérations linéaires, Warszawa, 1932.
[4] F. Riesz and B. Sz.-Nagy, Functional Analysis, New York, 1955.
[5] P. R. Halmos, Introduction to Hilbert Space, New York, 1951.
[6] M. H. Stone, Linear Transformations in Hilbert Space, New York, 1932.
[7] R. Courant and D. Hilbert, Methods of Mathematical Physics, New York, 1953.
[8] J. von Neumann, Mathematical Foundations of Quantum Mechanics, Princeton, 1955.
Finite-Dimensional Vector Spaces

§2. Vector spaces

We come now to the basic concept of this book. For the definition that follows we assume that we are given a particular field F; the scalars to be used are to be elements of F.

Definition. A vector space is a set V of elements called vectors satisfying the following axioms.

(A) To every pair, x and y, of vectors in V there corresponds a vector x + y, called the sum of x and y, in such a way that
(1) addition is commutative, $x + y = y + x$,
(2) addition is associative, $x + (y + z) = (x + y) + z$,
(3) there exists in V a unique vector 0 (called the origin) such that $x + 0 = x$ for every vector x, and
(4) to every vector x in V there corresponds a unique vector $-x$ such that $x + (-x) = 0$.

(B) To every pair, α and x, where α is a scalar and x is a vector in V, there corresponds a vector αx in V, called the product of α and x, in such a way that
(1) multiplication by scalars is associative, $\alpha(\beta x) = (\alpha\beta)x$, and
(2) $1x = x$ for every vector x.

(C) (1) Multiplication by scalars is distributive with respect to vector addition, $\alpha(x + y) = \alpha x + \alpha y$, and
(2) multiplication by vectors is distributive with respect to scalar addition, $(\alpha + \beta)x = \alpha x + \beta x$.

These axioms are not claimed to be logically independent; they are merely a convenient characterization of the objects we wish to study. The relation between a vector space V and the underlying field F is usually described by saying that V is a vector space over F. If F is the field R of real numbers, V is called a real vector space; similarly, if F is Q or if F is E, we speak of rational vector spaces or complex vector spaces.

§3. Examples

Before discussing the implications of the axioms, we give some examples. We shall refer to these examples over and over again, and we shall use the notation established here throughout the rest of our work.

(1) Let $E^1$ (= E) be the set of all complex numbers; if we interpret x + y and αx as ordinary complex numerical addition and multiplication, $E^1$ becomes a complex vector space.

(2) Let P be the set of all polynomials, with complex coefficients, in a variable t. To make P into a complex vector space, we interpret vector addition and scalar multiplication as the ordinary addition of two polynomials and the multiplication of a polynomial by a complex number; the origin in P is the polynomial identically zero.

Example (1) is too simple and example (2) is too complicated to be typical of the main contents of this book. We give now another example of complex vector spaces which (as we shall see later) is general enough for all our purposes.

(3) Let $E^n$, n = 1, 2, …, be the set of all n-tuples of complex numbers. If $x = (\xi_1, \dots, \xi_n)$ and $y = (\eta_1, \dots, \eta_n)$ are elements of $E^n$, we write, by definition,
$x + y = (\xi_1 + \eta_1, \dots, \xi_n + \eta_n)$,
$\alpha x = (\alpha\xi_1, \dots, \alpha\xi_n)$,
$0 = (0, \dots, 0)$,
$-x = (-\xi_1, \dots, -\xi_n)$.
It is easy to verify that all parts of our axioms (A), (B), and (C), §2, are satisfied, so that $E^n$ is a complex vector space; it will be called n-dimensional complex coordinate space.

(4) For each positive integer n, let $P_n$ be the set of all polynomials (with complex coefficients, as in example (2)) of degree ≤ n − 1, together with the polynomial identically zero. (In the usual discussion of degree, the degree of this polynomial is not defined, so that we cannot say that it has degree ≤ n − 1.) With the same interpretation of the linear operations (addition and scalar multiplication) as in (2), $P_n$ is a complex vector space.

(5) A close relative of $E^n$ is the set $R^n$ of all n-tuples of real numbers. With the same formal definitions of addition and scalar multiplication as for $E^n$, except that now we consider only real scalars α, the space $R^n$ is a real vector space; it will be called n-dimensional real coordinate space.
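As an editorial aside (not part of Halmos's text), the kinship between examples (3) and (4) can be made concrete by storing a polynomial of degree at most n − 1 as the n-tuple of its complex coefficients; the linear operations of $P_n$ then reduce to the componentwise operations of $E^n$. The sketch below is only an illustration of this identification, and the names poly_add and poly_scale are ad hoc.

```python
# Illustrative sketch: an element of P_n is stored as the length-n tuple of its
# coefficients (constant term first), so the linear operations of P_n become the
# componentwise operations of E^n.
def poly_add(p, q):
    return tuple(a + b for a, b in zip(p, q))

def poly_scale(alpha, p):
    return tuple(alpha * a for a in p)

# p(t) = 1 - t and q(t) = t(1 - t) = t - t^2, viewed as elements of P_3:
p = (1, -1, 0)
q = (0, 1, -1)

assert poly_add(p, q) == (1, 0, -1)          # represents 1 - t^2
assert poly_scale(2j, p) == (2j, -2j, 0)     # represents 2i(1 - t)
```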
(6) All the preceding examples can be generalized. Thus, for instance, an obvious generalization of (1) can be described by saying that every field may be regarded as a vector space over itself. A common generalization of (3) and (5) starts with an arbitrary field F and forms the set $F^n$ of n-tuples of elements of F; the formal definitions of the linear operations are the same as for the case F = E.

(7) A field, by definition, has at least two elements; a vector space, however, may have only one. Since every vector space contains an origin, there is essentially (i.e., except for notation) only one vector space having only one vector. This most trivial vector space will be denoted by θ.

(8) If, in the set R of all real numbers, addition is defined as usual and multiplication of a real number by a rational number is defined as usual, then R becomes a rational vector space.

(9) If, in the set E of all complex numbers, addition is defined as usual and multiplication of a complex number by a real number is defined as usual, then E becomes a real vector space. (Compare this example with (1); they are quite different.)

§4. Comments

A few comments are in order on our axioms and notation. There are striking similarities (and equally striking differences) between the axioms for a field and the axioms for a vector space over a field. In both cases, the axioms (A) describe the additive structure of the system, the axioms (B) describe its multiplicative structure, and the axioms (C) describe the connection between the two structures. Those familiar with algebraic terminology will have recognized the axioms (A) (in both §1 and §2) as the defining conditions of an abelian (commutative) group; the axioms (B) and (C) (in §2) express the fact that the group admits scalars as operators. We mention in passing that if the scalars are elements of a ring (instead of a field), the generalized concept corresponding to a vector space is called a module.

Special real vector spaces (such as $R^2$ and $R^3$) are familiar in geometry. There seems at this stage to be no excuse for our apparently uninteresting insistence on fields other than R, and, in particular, on the field E of complex numbers. We hope that the reader is willing to take it on faith that we shall have to make use of deep properties of complex numbers later (conjugation, algebraic closure), and that in both the applications of vector spaces to modern (quantum mechanical) physics and the mathematical generalization of our results to Hilbert space, complex numbers play an important role. Their one great disadvantage is the difficulty of drawing pictures; the ordinary picture (Argand diagram) of $E^1$ is indistinguishable from that of $R^2$, and a graphic representation of $E^2$ seems to be out of human reach. On the occasions when we have to use pictorial language we shall therefore use the terminology of $R^n$ in $E^n$, and speak of $E^2$, for example, as a plane.
Finally we comment on notation. We observe that the symbol 0 has been used in two meanings: once as a scalar and once as a vector. To make the situation worse, we shall later, when we introduce linear functionals and linear transformations, give it still other meanings. Fortunately the relations among the various interpretations of 0 are such that, after this word of warning, no confusion should arise from this practice.

Exercises

1. Prove that if x and y are vectors and if α is a scalar, then the following relations hold.
(a) $0 + x = x$.
(b) $-0 = 0$.
(c) $\alpha \cdot 0 = 0$.
(d) $0 \cdot x = 0$. (Observe that the same symbol is used on both sides of this equation; on the left it denotes a scalar, on the right it denotes a vector.)
(e) If $\alpha x = 0$, then either α = 0 or x = 0 (or both).
(f) $-x = (-1)x$.
(g) $y + (x - y) = x$. (Here $x - y = x + (-y)$.)

2. If p is a prime, then $Z_p^n$ is a vector space over $Z_p$ (cf. §1, Ex. 3); how many vectors are there in this vector space?

3. Let V be the set of all (ordered) pairs of real numbers. If $x = (\xi_1, \xi_2)$ and $y = (\eta_1, \eta_2)$ are elements of V, write
$x + y = (\xi_1 + \eta_1, \xi_2 + \eta_2)$,
$\alpha x = (\alpha\xi_1, 0)$,
$0 = (0, 0)$,
$-x = (-\xi_1, -\xi_2)$.
Is V a vector space with respect to these definitions of the linear operations? Why?

4. Sometimes a subset of a vector space is itself a vector space (with respect to the linear operations already given). Consider, for example, the vector space $E^3$ and the subsets V of $E^3$ consisting of those vectors $(\xi_1, \xi_2, \xi_3)$ for which
(a) $\xi_1$ is real,
(b) $\xi_1 = 0$,
(c) either $\xi_1 = 0$ or $\xi_2 = 0$,
(d) $\xi_1 + \xi_2 = 0$,
(e) $\xi_1 + \xi_2 = 1$.
In which of these cases is V a vector space?

5. Consider the vector space P and the subsets V of P consisting of those vectors (polynomials) x for which
(a) x has degree 3,
(b) $2x(0) = x(1)$,
(c) $x(t) \ge 0$ whenever $0 \le t \le 1$,
(d) $x(t) = x(1 - t)$ for all t.
In which of these cases is V a vector space?

§5. Linear dependence

Now that we have described the spaces we shall work with, we must specify the relations among the elements of those spaces that will be of interest to us.

We begin with a few words about the summation notation. If corresponding to each of a set of indices i there is given a vector $x_i$, and if it is not necessary or not convenient to specify the set of indices exactly, we shall simply speak of a set $\{x_i\}$ of vectors. (We admit the possibility that the same vector corresponds to two distinct indices. In all honesty, therefore, it should be stated that what is important is not which vectors appear in $\{x_i\}$, but how they appear.) If the index set under consideration is finite, we shall denote the sum of the corresponding vectors by $\sum_i x_i$ (or, when desirable, by a more explicit symbol such as $\sum_{i=1}^{n} x_i$). In order to avoid frequent and fussy case distinctions, it is a good idea to admit into the general theory sums such as $\sum_i x_i$ even when there are no indices i to be summed over, or, more precisely, even when the index set under consideration is empty. (In that case, of course, there are no vectors to sum, or, more precisely, the set $\{x_i\}$ is also empty.) The value of such an "empty sum" is defined, naturally enough, to be the vector 0.

Definition. A finite set $\{x_i\}$ of vectors is linearly dependent if there exists a corresponding set $\{\alpha_i\}$ of scalars, not all zero, such that
$\sum_i \alpha_i x_i = 0.$
If, on the other hand, $\sum_i \alpha_i x_i = 0$ implies that $\alpha_i = 0$ for each i, the set $\{x_i\}$ is linearly independent.
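To see the definition at work on a concrete example (an editorial addition, not part of the original text), one can check a finite set of vectors in $E^n$ for linear dependence numerically: arranging the vectors as the rows of a matrix, the set is linearly dependent exactly when the rank of that matrix is smaller than the number of vectors. The sketch below uses numpy only as a convenient rank calculator.

```python
import numpy as np

# Editorial sketch: vectors of E^3 as rows of a complex matrix; the set is
# linearly dependent iff the matrix rank is smaller than the number of rows.
x = np.array([1 + 1j, 0, 2])
y = np.array([2 + 2j, 0, 4])   # y = 2x, so {x, y} ought to be dependent
z = np.array([0, 1j, 0])

print(np.linalg.matrix_rank(np.vstack([x, y])))  # 1 < 2: {x, y} is dependent
print(np.linalg.matrix_rank(np.vstack([x, z])))  # 2 = 2: {x, z} is independent
```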
The wording of this definition is intended to cover the case of the empty set; the result in that case, though possibly paradoxical, dovetails very satisfactorily with the rest of the theory. The result is that the empty set of vectors is linearly independent. Indeed, if there are no indices i, then it is not possible to pick out some of them and to assign to the selected ones a non-zero scalar so as to make a certain sum vanish. The trouble is not in avoiding the assignment of zero; it is in finding an index to which something can be assigned. Note that this argument shows that the empty set is not linearly dependent; for the reader not acquainted with arguing by "vacuous implication," the equivalence of the definition of linear independence with the straightforward negation of the definition of linear dependence needs a little additional intuitive justification. The easiest way to feel comfortable about the assertion "$\sum_i \alpha_i x_i = 0$ implies that $\alpha_i = 0$ for each i," in case there are no indices i, is to rephrase it this way: "if $\sum_i \alpha_i x_i = 0$, then there is no index i for which $\alpha_i \ne 0$." This version is obviously true if there is no index i at all.

Linear dependence and independence are properties of sets of vectors; it is customary, however, to apply the adjectives to vectors themselves, and thus we shall sometimes say "a set of linearly independent vectors" instead of "a linearly independent set of vectors." It will be convenient also to speak of the linear dependence and independence of a not necessarily finite set, x, of vectors. We shall say that x is linearly independent if every finite subset of x is such; otherwise x is linearly dependent.

To gain insight into the meaning of linear dependence, let us study the examples of vector spaces that we already have.

(1) If x and y are any two vectors in $E^1$, then x and y form a linearly dependent set. If x = y = 0, this is trivial; if not, then we have, for example, the relation $y\,x + (-x)\,y = 0$. Since it is clear that every set containing a linearly dependent subset is itself linearly dependent, this shows that in $E^1$ every set containing more than one element is a linearly dependent set.

(2) More interesting is the situation in the space P. The vectors x, y, and z defined by
$x(t) = 1 - t$, $y(t) = t(1 - t)$, $z(t) = 1 - t^2$
are, for example, linearly dependent, since $x + y - z = 0$. However, the infinite set of vectors $x_0, x_1, x_2, \dots$, defined by
$x_0(t) = 1$, $x_1(t) = t$, $x_2(t) = t^2$, …,
is a linearly independent set, for if we had any relation of the form
$\alpha_0 x_0 + \alpha_1 x_1 + \cdots + \alpha_n x_n = 0$,
then we should have a polynomial identity
$\alpha_0 + \alpha_1 t + \cdots + \alpha_n t^n = 0$,
whence $\alpha_0 = \alpha_1 = \cdots = \alpha_n = 0$.

(3) As we mentioned before, the spaces $E^n$ are the prototype of what we want to study; let us examine, for example, the case n = 3. To those familiar with higher-dimensional geometry, the notion of linear dependence in this space (or, more properly speaking, in its real analogue $R^3$) has a concrete geometric meaning, which we shall only mention. In geometrical language, two vectors are linearly dependent if and only if they are collinear with the origin, and three vectors are linearly dependent if and only if they are coplanar with the origin. (If one thinks of a vector not as a point in a space but as an arrow pointing from the origin to some given point, the preceding sentence should be modified by crossing out the phrase "with the origin" both times that it occurs.) We shall presently introduce the notion of linear manifolds (or vector subspaces) in a vector space, and, in that connection, we shall occasionally use the language suggested by such geometrical considerations.
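The geometric remark in example (3) can also be checked numerically; the following sketch is an editorial illustration under the usual identification of $R^3$ with real coordinate triples. A pair of nonzero vectors is linearly dependent exactly when one is a scalar multiple of the other, which in $R^3$ is detected by a vanishing cross product.

```python
import numpy as np

# Editorial illustration: in R^3, two nonzero vectors form a linearly dependent
# set iff they are collinear with the origin, i.e., iff their cross product vanishes.
u = np.array([1.0, -2.0, 3.0])
v = np.array([-2.0, 4.0, -6.0])   # v = -2u
w = np.array([0.0, 1.0, 1.0])

print(np.cross(u, v))   # [0. 0. 0.] -> {u, v} is linearly dependent
print(np.cross(u, w))   # nonzero   -> {u, w} is linearly independent
```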
§6. Linear combinations

We shall say, whenever $x = \sum_i \alpha_i x_i$, that x is a linear combination of $\{x_i\}$; we shall use without any further explanation all the simple grammatical implications of this terminology. Thus we shall say, in case x is a linear combination of $\{x_i\}$, that x is linearly dependent on $\{x_i\}$; we shall leave to the reader the proof that if $\{x_i\}$ is linearly independent, then a necessary and sufficient condition that x be a linear combination of $\{x_i\}$ is that the enlarged set, obtained by adjoining x to $\{x_i\}$, be linearly dependent. Note that, in accordance with the definition of an empty sum, the origin is a linear combination of the empty set of vectors; it is, moreover, the only vector with this property.

The following theorem is the fundamental result concerning linear dependence.

Theorem. The set of non-zero vectors $x_1, \dots, x_n$ is linearly dependent if and only if some $x_k$, $2 \le k \le n$, is a linear combination of the preceding ones.

Proof. Let us suppose that the vectors $x_1, \dots, x_n$ are linearly dependent, and let k be the first integer between 2 and n for which $x_1, \dots, x_k$ are linearly dependent. (If worse comes to worst, our assumption assures us that k = n will do.) Then
$\alpha_1 x_1 + \cdots + \alpha_k x_k = 0$
for a suitable set of α's (not all zero); moreover, whatever the α's, we cannot have $\alpha_k = 0$, for then we should have a linear dependence relation among $x_1, \dots, x_{k-1}$, contrary to the definition of k. Hence
$x_k = -\dfrac{\alpha_1}{\alpha_k} x_1 - \cdots - \dfrac{\alpha_{k-1}}{\alpha_k} x_{k-1},$
as was to be proved. This proves the necessity of our condition; sufficiency is clear since, as we remarked before, every set containing a linearly dependent set is itself such.

§7. Bases

Definition. A (linear) basis (or a coordinate system) in a vector space V is a set x of linearly independent vectors such that every vector in V is a linear combination of elements of x. A vector space V is finite-dimensional if it has a finite basis.

Except for the occasional consideration of examples, we shall restrict our attention, throughout this book, to finite-dimensional vector spaces.

For examples of bases we turn again to the spaces P and $E^n$. In P, the set $\{x_n\}$, where $x_n(t) = t^n$, n = 0, 1, 2, …, is a basis; every polynomial is, by definition, a linear combination of a finite number of the $x_n$. Moreover P has no finite basis, for, given any finite set of polynomials, we can find a polynomial of higher degree than any of them; this latter polynomial is obviously not a linear combination of the former ones.

An example of a basis in $E^n$ is the set of vectors $x_i$, i = 1, …, n, defined by the condition that the j-th coordinate of $x_i$ is $\delta_{ij}$. (Here we use for the first time the popular Kronecker δ; it is defined by $\delta_{ij} = 1$ if j = i and $\delta_{ij} = 0$ if j ≠ i.) Thus we assert that in $E^3$ the vectors $x_1 = (1, 0, 0)$, $x_2 = (0, 1, 0)$, and $x_3 = (0, 0, 1)$ form a basis. It is easy to see that they are linearly independent; the formula
$x = (\xi_1, \xi_2, \xi_3) = \xi_1 x_1 + \xi_2 x_2 + \xi_3 x_3$
proves that every x in $E^3$ is a linear combination of them.

In a general finite-dimensional vector space V, with basis $\{x_1, \dots, x_n\}$, we know that every x can be written in the form
$x = \sum_i \xi_i x_i;$
we assert that the ξ's are uniquely determined by x. The proof of this assertion is an argument often used in the theory of linear dependence. If we had $x = \sum_i \eta_i x_i$, then we should have, by subtraction,
$\sum_i (\xi_i - \eta_i) x_i = 0.$
Since the $x_i$ are linearly independent, this implies that $\xi_i - \eta_i = 0$ for i = 1, …, n; in other words, the ξ's are the same as the η's.
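As a final editorial illustration (not in the original text), the uniqueness of the coefficients $\xi_i$ with respect to a basis can be seen computationally: writing the basis vectors of $E^3$ as the columns of a matrix B, the coordinate vector of x is the unique solution of $B\xi = x$. The particular basis below is chosen arbitrarily for the example.

```python
import numpy as np

# Editorial sketch: coordinates of x with respect to a basis of E^3. The basis
# vectors are the columns of B; since they are linearly independent, B is
# invertible and the coefficient vector xi with B @ xi = x is unique.
B = np.array([[1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=complex)   # columns form a (non-standard) basis
x = np.array([2 + 1j, 3, 1j])

xi = np.linalg.solve(B, x)
print(xi)                        # the unique coordinates of x in this basis
print(np.allclose(B @ xi, x))    # True: x = sum_i xi_i * (i-th basis vector)
```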
RECOMMENDED READING

[1] N. Bourbaki, Algèbre; Chap. II (Algèbre linéaire), Paris, 1947, and Chap. III (Algèbre multilinéaire), Paris, 1948.
[2] B. L. van der Waerden, Modern Algebra, New York, 1953.
[3] S. Banach, Théorie des opérations linéaires, Warszawa, 1932.
[4] F. Riesz and B. Sz.-Nagy, Functional Analysis, New York, 1955.
[5] P. R. Halmos, Introduction to Hilbert Space, New York, 1951.
[6] M. H. Stone, Linear Transformations in Hilbert Space, New York, 1932.
[7] R. Courant and D. Hilbert, Methods of Mathematical Physics, New York, 1953.
[8] J. von Neumann, Mathematical Foundations of Quantum Mechanics, Princeton, 1955.