
HIT Elective Course LINEAR ALGEBRA — Exam Paper with Answers (Complete Version)

LINEAR ALGEBRA AND ITS APPLICATIONS

Name: 易    Student ID: ________    Score: ________

1. Definitions:
(1) Pivot position in a matrix;
(2) Echelon form;
(3) Elementary operations;
(4) Onto mapping and one-to-one mapping;
(5) Linear independence.
2. Describe the row reduction algorithm which produces a matrix in reduced echelon form.
3. Find the 3×3 matrix that corresponds to the composite transformation of a scaling by 0.3, a rotation of 90°, and finally a translation that adds (-0.5, 2) to each point of a figure.
4. Find a basis for the null space of the matrix

   A = [ -3  6 -1  1 -7
          1 -2  2  3 -1
          2 -4  5  8 -4 ]

5. Find a basis for Col A of the matrix

   A = [  1  3  3  2 -9
         -2 -2  2 -8  2
          2  3  0  7  1
          3  4 -1 11 -8 ]

6. Let a and b be positive numbers. Find the area of the region bounded by the ellipse whose equation is x^2/a^2 + y^2/b^2 = 1.
7. Provide twenty statements for the Invertible Matrix Theorem.
8. State and prove the Gram-Schmidt process.
9. State and prove the diagonalization theorem.
10. Prove that eigenvectors corresponding to distinct eigenvalues are linearly independent.

Answers:

1. Definitions
(1) Pivot position in a matrix: A pivot position in a matrix A is a location in A that corresponds to a leading 1 in the reduced echelon form of A. A pivot column is a column of A that contains a pivot position.
(2) Echelon form: A rectangular matrix is in echelon form (or row echelon form) if it has the following three properties:
1. All nonzero rows are above any rows of all zeros.
2. Each leading entry of a row is in a column to the right of the leading entry of the row above it.
3. All entries in a column below a leading entry are zeros.
If a matrix in echelon form satisfies the following additional conditions, then it is in reduced echelon form (or reduced row echelon form):
4. The leading entry in each nonzero row is 1.
5.
Each leading 1 is the only nonzero entry in its column.

(3) Elementary operations: Elementary operations can refer to elementary row operations or elementary column operations. There are three types of elementary matrices, which correspond to three types of row operations (respectively, column operations):
1. (Replacement) Replace one row by the sum of itself and a multiple of another row.
2. (Interchange) Interchange two rows.
3. (Scaling) Multiply all entries in a row by a nonzero constant.

(4) Onto mapping and one-to-one mapping: A mapping T : R^n → R^m is said to be onto R^m if each b in R^m is the image of at least one x in R^n. A mapping T : R^n → R^m is said to be one-to-one if each b in R^m is the image of at most one x in R^n.

(5) Linear independence: An indexed set of vectors {v_1, ..., v_p} in R^n is said to be linearly independent if the vector equation

   x_1 v_1 + x_2 v_2 + ... + x_p v_p = 0

has only the trivial solution. The set {v_1, ..., v_p} is said to be linearly dependent if there exist weights c_1, ..., c_p, not all zero, such that

   c_1 v_1 + c_2 v_2 + ... + c_p v_p = 0.

2. Describe the row reduction algorithm which produces a matrix in reduced echelon form.
Solution:
Step 1: Begin with the leftmost nonzero column. This is a pivot column. The pivot position is at the top.
Step 2: Select a nonzero entry in the pivot column as a pivot. If necessary, interchange rows to move this entry into the pivot position.
Step 3: Use row replacement operations to create zeros in all positions below the pivot.
Step 4: Cover (or ignore) the row containing the pivot position and cover all rows, if any, above it. Apply Steps 1–3 to the submatrix that remains. Repeat the process until there are no more nonzero rows to modify.
Step 5: Beginning with the rightmost pivot and working upward and to the left, create zeros above each pivot. If a pivot is not 1, make it 1 by a scaling operation.
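As a concrete illustration of Steps 1–5, here is a minimal NumPy sketch of the algorithm (the function name rref, the partial-pivoting choice in Step 2, and the tolerance are my own additions, not from the exam; the matrix is the one from Question 4):

```python
import numpy as np

def rref(A, tol=1e-12):
    """Row-reduce A to reduced echelon form, following Steps 1-5 above."""
    A = A.astype(float)
    m, n = A.shape
    pivot_row = 0
    for col in range(n):                      # Step 1: scan columns left to right
        # Step 2: choose the largest entry as pivot and swap it into place
        r = pivot_row + np.argmax(np.abs(A[pivot_row:, col]))
        if abs(A[r, col]) < tol:
            continue                          # no pivot in this column
        A[[pivot_row, r]] = A[[r, pivot_row]]
        # Part of Step 5 folded in: scale so the pivot equals 1
        A[pivot_row] /= A[pivot_row, col]
        # Steps 3 and 5: create zeros below and above the pivot
        for i in range(m):
            if i != pivot_row:
                A[i] -= A[i, col] * A[pivot_row]
        pivot_row += 1                        # Step 4: recurse on the submatrix below
        if pivot_row == m:
            break
    return A

A = np.array([[-3, 6, -1, 1, -7],
              [ 1, -2, 2, 3, -1],
              [ 2, -4, 5, 8, -4]])
print(rref(A))
```

Running this on the Question 4 matrix reproduces the reduced echelon form used in the solution below.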
3. Find the 3×3 matrix that corresponds to the composite transformation of a scaling by 0.3, a rotation of 90°, and finally a translation that adds (-0.5, 2) to each point of a figure.
Solution:
If ψ = π/2, then sin ψ = 1 and cos ψ = 0. In homogeneous coordinates the three steps are

   Scale:      [0.3  0   0]        Rotate:  [0 -1  0]        Translate:  [1  0 -0.5]
               [ 0  0.3  0]                 [1  0  0]                    [0  1   2 ]
               [ 0   0   1]                 [0  0  1]                    [0  0   1 ]

The matrix for the composite transformation is the product (translation after rotation after scaling):

   [1  0 -0.5] [0 -1  0] [0.3  0   0]   [ 0  -0.3 -0.5]
   [0  1   2 ] [1  0  0] [ 0  0.3  0] = [0.3   0    2 ]
   [0  0   1 ] [0  0  1] [ 0   0   1]   [ 0    0    1 ]

4. Find a basis for the null space of the matrix

   A = [ -3  6 -1  1 -7
          1 -2  2  3 -1
          2 -4  5  8 -4 ]

Solution:
First, write the solution of Ax = 0 in parametric vector form. Row reduction of the augmented matrix [A 0] gives

   [1 -2  0 -1  3 | 0]        x_1 - 2x_2 - x_4 + 3x_5 = 0
   [0  0  1  2 -2 | 0]        x_3 + 2x_4 - 2x_5 = 0
   [0  0  0  0  0 | 0]        0 = 0

The general solution is x_1 = 2x_2 + x_4 - 3x_5, x_3 = -2x_4 + 2x_5, with x_2, x_4, and x_5 free. Hence

   x = x_2 (2, 1, 0, 0, 0) + x_4 (1, 0, -2, 1, 0) + x_5 (-3, 0, 2, 0, 1) = x_2 u + x_4 v + x_5 w   (1)

Equation (1) shows that Nul A coincides with the set of all linear combinations of u, v and w; that is, {u, v, w} generates Nul A. In fact, this construction automatically makes u, v and w linearly independent, because (1) shows that 0 = x_2 u + x_4 v + x_5 w only if the weights x_2, x_4 and x_5 are all zero. So {u, v, w} is a basis for Nul A.

5. Find a basis for Col A of the matrix

   A = [  1  3  3  2 -9
         -2 -2  2 -8  2
          2  3  0  7  1
          3  4 -1 11 -8 ]

Solution:

   A ~ [1  3  3  2 -9
        0  1  2 -1 -4
        0  0  0  0  7
        0  0  0  0  0]

so the rank of A is 3 and the pivot columns are columns 1, 2 and 5. The pivot columns of A itself (not of the echelon form, whose columns generally do not lie in Col A) form a basis for Col A:

   u = (1, -2, 2, 3),   v = (3, -2, 3, 4),   w = (-9, 2, 1, -8)
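The bases found in Questions 4 and 5 can be double-checked mechanically. A short sketch using SymPy (the variable names A4 and A5 are my own, not from the exam):

```python
from sympy import Matrix

# Question 4: a basis for Nul A (one vector per free variable)
A4 = Matrix([[-3, 6, -1, 1, -7],
             [ 1, -2, 2, 3, -1],
             [ 2, -4, 5, 8, -4]])
null_basis = A4.nullspace()
for v in null_basis:
    print(v.T)                     # the three basis vectors u, v, w

# Question 5: a basis for Col A is the set of pivot columns of A itself
A5 = Matrix([[ 1,  3,  3,  2, -9],
             [-2, -2,  2, -8,  2],
             [ 2,  3,  0,  7,  1],
             [ 3,  4, -1, 11, -8]])
_, pivots = A5.rref()              # rref() also returns the pivot column indices
col_basis = [A5.col(j) for j in pivots]
print(pivots)
```

The pivot indices returned for A5 are (0, 1, 4), i.e. columns 1, 2 and 5 in the 1-based numbering of the solution.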
6. Let a and b be positive numbers. Find the area of the region bounded by the ellipse E whose equation is x^2/a^2 + y^2/b^2 = 1.
Solution:
We claim that E is the image of the unit disk D under the linear transformation T determined by the matrix

   A = [a  0
        0  b]

because if u = (u_1, u_2), x = (x_1, x_2), and x = Au, then u_1 = x_1/a and u_2 = x_2/b. It follows that u is in the unit disk, with u_1^2 + u_2^2 ≤ 1, if and only if x is in E, with (x_1/a)^2 + (x_2/b)^2 ≤ 1. Then we have

   {area of ellipse} = {area of T(D)} = |det A| · {area of D} = ab · π(1)^2 = πab

7. Provide twenty statements for the Invertible Matrix Theorem.
Let A be a square n×n matrix. Then the following statements are equivalent; that is, for a given A, the statements are either all true or all false. (Here A* denotes the adjugate of A and |A| its determinant.)
a. A is an invertible matrix.
b. A is row equivalent to the n×n identity matrix.
c. A has n pivot positions.
d. The equation Ax = 0 has only the trivial solution.
e. The columns of A form a linearly independent set.
f. The linear transformation x ↦ Ax is one-to-one.
g. The equation Ax = b has at least one solution for each b in R^n.
h. The columns of A span R^n.
i. The linear transformation x ↦ Ax maps R^n onto R^n.
j. There is an n×n matrix C such that CA = I.
k. There is an n×n matrix D such that AD = I.
l. A^T is an invertible matrix.
m. If |A| ≠ 0, then (A^T)^{-1} = (A^{-1})^T.
n. If A and B are both invertible, then (AB)* = B* A*.
o. (A^T)* = (A*)^T.
p. If |A| ≠ 0, then (A*)^{-1} = (A^{-1})*.
q. (-A)* = (-1)^{n-1} A*.
r. If |A| ≠ 0, then (A^L)^{-1} = (A^{-1})^L (L a natural number).
s. (kA)* = k^{n-1} A*.
t. If |A| ≠ 0, then A^{-1} = (1/|A|) A*.

8. State and prove the Gram-Schmidt process.
Solution:
The Gram-Schmidt process: Given a basis {x_1, ..., x_p} for a subspace W of R^n, define

   v_1 = x_1
   v_2 = x_2 - ((x_2·v_1)/(v_1·v_1)) v_1
   v_3 = x_3 - ((x_3·v_1)/(v_1·v_1)) v_1 - ((x_3·v_2)/(v_2·v_2)) v_2
   ...
   v_p = x_p - ((x_p·v_1)/(v_1·v_1)) v_1 - ((x_p·v_2)/(v_2·v_2)) v_2 - ... - ((x_p·v_{p-1})/(v_{p-1}·v_{p-1})) v_{p-1}

Then {v_1, ..., v_p} is an orthogonal basis for W. In addition,

   Span {v_1, ..., v_k} = Span {x_1, ..., x_k}   for 1 ≤ k ≤ p.

PROOF
For 1 ≤ k ≤ p, let W_k = Span {x_1, ..., x_k}.
Set v_1 = x_1, so that Span {v_1} = Span {x_1}. Suppose, for some k < p, we have constructed v_1, ..., v_k so that {v_1, ..., v_k} is an orthogonal basis for W_k. Define

   v_{k+1} = x_{k+1} - proj_{W_k} x_{k+1}

By the Orthogonal Decomposition Theorem, v_{k+1} is orthogonal to W_k. Note that proj_{W_k} x_{k+1} is in W_k and hence also in W_{k+1}. Since x_{k+1} is in W_{k+1}, so is v_{k+1} (because W_{k+1} is a subspace and is closed under subtraction). Furthermore, v_{k+1} ≠ 0 because x_{k+1} is not in W_k = Span {x_1, ..., x_k}. Hence {v_1, ..., v_{k+1}} is an orthogonal set of nonzero vectors in the (k+1)-dimensional space W_{k+1}. By the Basis Theorem, this set is an orthogonal basis for W_{k+1}. Hence W_{k+1} = Span {v_1, ..., v_{k+1}}. When k + 1 = p, the process stops.

9. State and prove the diagonalization theorem.
Solution:
If A is symmetric, then any two eigenvectors from different eigenspaces are orthogonal (this is the key step in showing that a symmetric matrix is orthogonally diagonalizable).
PROOF
Let v_1 and v_2 be eigenvectors that correspond to distinct eigenvalues, say λ_1 and λ_2. To show that v_1·v_2 = 0, compute

   λ_1 (v_1·v_2) = (λ_1 v_1)^T v_2 = (A v_1)^T v_2      since v_1 is an eigenvector
                 = v_1^T A^T v_2 = v_1^T (A v_2)         since A^T = A
                 = v_1^T (λ_2 v_2) = λ_2 (v_1·v_2)

Hence (λ_1 - λ_2)(v_1·v_2) = 0. But λ_1 - λ_2 ≠ 0, so v_1·v_2 = 0.

10. Prove that eigenvectors corresponding to distinct eigenvalues are linearly independent.
Solution:
Let v_1, ..., v_r be eigenvectors that correspond to distinct eigenvalues λ_1, ..., λ_r of an n×n matrix A. Suppose {v_1, ..., v_r} is linearly dependent. Since v_1 is nonzero, the theorem on the characterization of linearly dependent sets says that one of the vectors in the set is a linear combination of the preceding vectors. Let p be the least index such that v_{p+1} is a linear combination of the preceding (linearly independent) vectors. Then there exist scalars c_1, ...,
c_p such that

   c_1 v_1 + ... + c_p v_p = v_{p+1}   (1)

Multiplying both sides of (1) by A and using the fact that A v_k = λ_k v_k for each k, we obtain

   c_1 A v_1 + ... + c_p A v_p = A v_{p+1}
   c_1 λ_1 v_1 + ... + c_p λ_p v_p = λ_{p+1} v_{p+1}   (2)

Multiplying both sides of (1) by λ_{p+1} and subtracting the result from (2), we have

   c_1 (λ_1 - λ_{p+1}) v_1 + ... + c_p (λ_p - λ_{p+1}) v_p = 0   (3)

Since {v_1, ..., v_p} is linearly independent, the weights in (3) are all zero. But none of the factors λ_i - λ_{p+1} are zero, because the eigenvalues are distinct. Hence c_i = 0 for i = 1, ..., p. But then (1) says that v_{p+1} = 0, which is impossible. Hence {v_1, ..., v_r} cannot be linearly dependent and therefore must be linearly independent.
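The Gram-Schmidt formulas of Question 8 translate almost line for line into code. A minimal NumPy sketch (the function name and the example basis are my own choices, not from the exam):

```python
import numpy as np

def gram_schmidt(X):
    """Given the columns of X as a basis {x_1, ..., x_p}, return V whose
    columns {v_1, ..., v_p} are an orthogonal basis for the same subspace,
    built with the formulas of Question 8."""
    X = X.astype(float)
    V = np.zeros_like(X)
    for k in range(X.shape[1]):
        v = X[:, k].copy()
        for j in range(k):                    # subtract projections onto v_1..v_{k-1}
            vj = V[:, j]
            v -= (X[:, k] @ vj) / (vj @ vj) * vj
        V[:, k] = v
    return V

# Example basis for a 3-dimensional subspace of R^4
X = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 1., 1.],
              [1., 1., 1.]])
V = gram_schmidt(X)
print(V.T @ V)   # off-diagonal entries are (numerically) zero
```

Since V.T @ V is diagonal, the columns of V are pairwise orthogonal, as the theorem guarantees.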
