
Mathematical Economics 1

1 Basic Mathematical Concepts

1.1 Concave, quasi-concave and homogeneous functions

Given two sets $X$ and $Y$, if each element of $X$ can be associated with an element of $Y$, which we denote by $f(x)$, then we say $f$ is a function from $X$ into $Y$, denoted by $f: X \to Y$. The sets $X$ and $Y$ are called the domain and range of $f$, respectively.

A set $X$ is convex if for any $x, x' \in X$, $tx + (1-t)x' \in X$ for all $t \in [0,1]$. (Note: there is no such thing as a concave set.)

A function $f: X \to \mathbb{R}$, where $X$ is a convex set, is concave (convex) if, for any $x, y \in X$ and any $0 \le \lambda \le 1$,
$$f(\lambda x + (1-\lambda)y) \ge (\le)\; \lambda f(x) + (1-\lambda) f(y).$$
If $f$ is twice continuously differentiable, then it is concave (convex) if and only if the Hessian of $f$,
$$\begin{pmatrix} f_{11}(x) & \cdots & f_{1n}(x) \\ \vdots & \ddots & \vdots \\ f_{n1}(x) & \cdots & f_{nn}(x) \end{pmatrix},$$
is negative (positive) semi-definite for all $x \in X$ (see below).

A function $f: \mathbb{R}^n_+ \to \mathbb{R}$ is quasi-concave if the upper contour sets of the function, $\{x : f(x) \ge a\}$, are convex sets for all values of $a$. A function $f$ is quasi-convex if $-f$ is quasi-concave. An alternative definition: $f$ is quasi-concave (quasi-convex) if and only if for any two distinct points $x$ and $x'$ in the domain of $f$, and for $0 < \lambda < 1$,
$$f(x) \ge f(x') \implies f(\lambda x + (1-\lambda)x') \ge f(x') \;(\le f(x)).$$
If $f$ is twice continuously differentiable, then it is quasi-concave if and only if the naturally ordered principal minors of the bordered Hessian of $f$,
$$\begin{pmatrix} 0 & f_1(x) & \cdots & f_n(x) \\ f_1(x) & f_{11}(x) & \cdots & f_{1n}(x) \\ \vdots & \vdots & \ddots & \vdots \\ f_n(x) & f_{n1}(x) & \cdots & f_{nn}(x) \end{pmatrix},$$
have the signs
$$\begin{vmatrix} 0 & f_1(x) & f_2(x) \\ f_1(x) & f_{11}(x) & f_{12}(x) \\ f_2(x) & f_{21}(x) & f_{22}(x) \end{vmatrix} \ge 0, \quad \ldots, \quad \begin{vmatrix} 0 & f_1(x) & \cdots & f_j(x) \\ f_1(x) & f_{11}(x) & \cdots & f_{1j}(x) \\ \vdots & \vdots & \ddots & \vdots \\ f_j(x) & f_{j1}(x) & \cdots & f_{jj}(x) \end{vmatrix} \;\ge 0 \;(\le 0) \text{ if } j \text{ is even (odd)}$$
for all $j \le n$.

A function $f: \mathbb{R}^n_+ \to \mathbb{R}$ is homogeneous of degree $k$ if $f(tx) = t^k f(x)$ for all $t > 0$. In economics, many useful functions are homogeneous of degree 0 or 1; for example, the demand function of a consumer is homogeneous of degree zero in prices and income.

Euler's Theorem: Suppose $f$ is a homogeneous function of degree $k$. Let $\bar{x} = tx$. Differentiating $f(tx) = t^k f(x)$ with respect to $t$ gives
$$\frac{\partial f(tx)}{\partial t} = \sum_{i=1}^n \frac{\partial f(\bar{x})}{\partial \bar{x}_i}\frac{\partial \bar{x}_i}{\partial t} = k t^{k-1} f(x).$$
The equation holds for all $t$. In particular, it is true for $t = 1$:
$$\sum_{i=1}^n \frac{\partial f(x)}{\partial x_i}\, x_i = k f(x).$$
Consider an application of the Euler theorem. Let $f(K, L)$ be a production function, where $K$ and $L$ are capital and labor, respectively. For constant returns to scale, $k = 1$; assume the price of the output is equal to one. In a competitive market, each factor price equals the value of its marginal product. The Euler theorem gives
$$f(K, L) = \frac{\partial f}{\partial K}K + \frac{\partial f}{\partial L}L = rK + wL,$$
where $r$ and $w$ are the rental rate and the wage, respectively. The total value of factor payments exactly exhausts the value of output.

1.2 Taylor Series

It is convenient to examine the properties of the extrema of a function by approximating the function by a polynomial. Given a function $f(x)$, we may approximate it at the point $x^0$ by
$$P(x) = a_0 + a_1(x - x^0) + a_2(x - x^0)^2 + a_3(x - x^0)^3 + \cdots$$
The derivatives of the polynomial are
$$\frac{dP(x)}{dx} = a_1 + 2a_2(x - x^0) + 3a_3(x - x^0)^2 + \cdots, \qquad \frac{d^2 P(x)}{dx^2} = 2a_2 + 2\cdot 3\, a_3(x - x^0) + \cdots,$$
etc. If $f$ and $P$ have the same value at $x^0$, then $a_0 = f(x^0)$. If $f$ and $P$ also have the same derivatives at $x^0$, then the coefficients $a_1, a_2, \ldots$ can be solved for, and therefore
$$f(x) = f(x^0) + (x - x^0)\frac{df}{dx}(x^0) + \frac{(x - x^0)^2}{2}\frac{d^2 f}{dx^2}(x^0) + \frac{(x - x^0)^3}{3!}\frac{d^3 f}{dx^3}(x^0) + \cdots$$
This is called the Taylor series expansion of $f$ at the point $x^0$.

Taylor Theorem: Let $f: X \to \mathbb{R}$, where $X$ is an open subset of $\mathbb{R}$ and $f$ is continuously differentiable of order $n+1$. Assume $x^0 \in X$. Then for every $h \ne 0$ with $x^0 + h \in X$, there exists $x^1 = x^0 + \theta h$ for some $\theta \in (0,1)$ such that
$$f(x^0 + h) = f(x^0) + h\frac{df}{dx}(x^0) + \cdots + \frac{h^n}{n!}\frac{d^n f}{dx^n}(x^0) + \frac{h^{n+1}}{(n+1)!}\frac{d^{n+1} f}{dx^{n+1}}(x^1).$$
The Taylor theorem tells us that $f$ can be approximated by a polynomial of degree $n$, and $\frac{h^{n+1}}{(n+1)!}\frac{d^{n+1} f}{dx^{n+1}}(x^1)$ is the error involved in using this approximation.

The theorem can be extended to multi-variable functions in the following way. Consider a multi-variable function $f$ and approximate it at $x^0 = (x^0_1, \ldots, x^0_n)$. Let $y(t) = f(x^0_1 + t h_1, \ldots, x^0_n + t h_n)$, where $h_1, \ldots, h_n$ may take arbitrary values so
that any point in the domain can be reached. Approximate $y$ at $t = 0$:
$$y(t) = y(0) + t\frac{dy}{dt}(0) + \frac{t^2}{2}\frac{d^2 y}{dt^2}(0) + \cdots$$
Note that when $t = 0$, $x = x^0$. Moreover,
$$y(0) = f(x^0), \qquad \frac{dy}{dt}(0) = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(x^0)\, h_i, \qquad \frac{d^2 y}{dt^2}(0) = \sum_{i=1}^n \sum_{j=1}^n f_{ij}(x^0)\, h_i h_j, \quad \ldots$$
Therefore
$$f(x^0 + th) \approx f(x^0) + t\sum_i f_i(x^0) h_i + \frac{t^2}{2}\sum_{i=1}^n\sum_{j=1}^n f_{ij}(x^0) h_i h_j + \text{higher order terms}.$$

1.3 Quadratic Form

Consider
$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \quad \text{and} \quad x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}.$$
Then $x'Ax = \sum_{i=1}^n \sum_{j=1}^n a_{ij} x_i x_j$ is called a quadratic form. Note that the second-order term in the Taylor series contains a quadratic form. The matrix $A$ is
(a) negative definite if $x'Ax < 0$ for all $x \ne 0$;
(b) positive definite if $x'Ax > 0$ for all $x \ne 0$;
(c) negative semi-definite if $x'Ax \le 0$ for all $x$;
(d) positive semi-definite if $x'Ax \ge 0$ for all $x$.

Given a square matrix $A$, if we form a new matrix by eliminating $k$ columns and the same-numbered $k$ rows of $A$, the determinant of this submatrix is called a minor of $A$. The naturally ordered (principal) minors (NOPM) of $A$ are
$$a_{11}, \quad \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}, \quad \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}, \quad \text{and so on.}$$
Using the naturally ordered minors, we can easily test whether a matrix is negative or positive definite. A matrix $A$ is symmetric if $a_{ij} = a_{ji}$. A symmetric matrix $A$ is
(a) positive definite if and only if the naturally ordered principal minors are all positive:
$$a_{11} > 0, \quad \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} > 0, \quad \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} > 0, \quad \text{and so on;}$$
(b) negative definite if and only if the naturally ordered principal minor of order $k$ has sign $(-1)^k$ for $k = 1, \ldots, n$:
$$a_{11} < 0, \quad \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} > 0, \quad \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} < 0, \quad \text{and so on.}$$

For optimization problems with constraints, we only want $x'Ax$ to have a definite sign for some restricted set of values of $x$. We say $A$ is positive definite subject to $bx = 0$ if $x'Ax > 0$ for all $x \ne 0$ satisfying $bx = 0$. The other definitions are extended in a natural way. In this case we have to check the bordered matrix. An $n \times n$ symmetric matrix $A$ bordered by a vector $b$ is
(c) positive definite subject to the constraint $bx = 0$ if and only if the
naturally ordered border-preserving principal minors are all negative:
$$\begin{vmatrix} 0 & b_1 & b_2 \\ b_1 & a_{11} & a_{12} \\ b_2 & a_{21} & a_{22} \end{vmatrix} < 0, \quad \begin{vmatrix} 0 & b_1 & b_2 & b_3 \\ b_1 & a_{11} & a_{12} & a_{13} \\ b_2 & a_{21} & a_{22} & a_{23} \\ b_3 & a_{31} & a_{32} & a_{33} \end{vmatrix} < 0, \quad \text{and so on;}$$
(d) negative definite subject to the constraint $bx = 0$ if and only if the naturally ordered border-preserving principal minor of order $k$ has sign $(-1)^k$ for $k = 2, \ldots, n$:
$$\begin{vmatrix} 0 & b_1 & b_2 \\ b_1 & a_{11} & a_{12} \\ b_2 & a_{21} & a_{22} \end{vmatrix} > 0, \quad \begin{vmatrix} 0 & b_1 & b_2 & b_3 \\ b_1 & a_{11} & a_{12} & a_{13} \\ b_2 & a_{21} & a_{22} & a_{23} \\ b_3 & a_{31} & a_{32} & a_{33} \end{vmatrix} < 0, \quad \text{and so on.}$$

2 Static Optimization

2.1 Unconstrained Optimization

We want to investigate the necessary and sufficient conditions for $y = f(x_1, \ldots, x_n)$ to have a maximum at some point.

First Order Necessary Condition: Let $x^* = (x^*_1, \ldots, x^*_n)$ be an interior point in the domain of the function $f$. If $x^*$ is a maximum, then by the Taylor theorem,
$$f(x^* + th) = f(x^*) + t\sum_i f_i(x^*) h_i + \frac{t^2}{2}\sum_{i=1}^n\sum_{j=1}^n f_{ij}(x^* + \theta t h)\, h_i h_j \le f(x^*)$$
for some $\theta \in (0,1)$. Or, for $t > 0$,
$$\frac{f(x^* + th) - f(x^*)}{t} = \sum_i f_i(x^*) h_i + \frac{t}{2}\sum_{i=1}^n\sum_{j=1}^n f_{ij}(x^* + \theta t h)\, h_i h_j \le 0$$
for any $h \in \mathbb{R}^n$. As $t \to 0$, $\frac{f(x^* + th) - f(x^*)}{t} \to \sum_i f_i(x^*) h_i \le 0$, and this is true for any $h$. In particular, for $h_1 > 0$ and $h_2 = \cdots = h_n = 0$, we must have $f_1(x^*) \le 0$ in order to satisfy the inequality. But for $h_1 < 0$ and $h_2 = \cdots = h_n = 0$, we must have $f_1(x^*) \ge 0$. This implies $f_1(x^*) = 0$. By the same token, this is true for all other first-order derivatives. The first order necessary condition for $y = f(x)$ to have a maximum at $x^*$ is
$$f_i(x^*) = 0 \quad \text{for all } i = 1, \ldots, n.$$
Note that when the extremum point is on the boundary of the domain, the first derivative need not be zero. For example, consider $f: \mathbb{R}_+ \to \mathbb{R}$ with $f(x) = 1 - x$. The maximum point is zero, while $\frac{df(0)}{dx} = -1$. We have a corner solution for this maximization problem.

Second Order Sufficient Condition: $x^*$ is a maximum of $y = f(x_1, \ldots, x_n)$ if
$$f_i(x^*) = 0 \text{ for all } i = 1, \ldots, n, \qquad \sum_{i=1}^n\sum_{j=1}^n f_{ij}(x^*)\, h_i h_j < 0 \text{ for all vectors } h \ne 0.$$
Proof: By the Taylor theorem,
$$f(x^* + th) = f(x^*) + t\sum_i f_i(x^*) h_i + \frac{t^2}{2}\sum_{i=1}^n\sum_{j=1}^n f_{ij}(x^* + \theta t h)\, h_i h_j.$$
If the first order condition is satisfied,
$$f(x^* + th) - f(x^*) = \frac{t^2}{2}\sum_{i=1}^n\sum_{j=1}^n f_{ij}(x^* + \theta t h)\, h_i h_j < 0$$
for small $t$. So $x^*$ is
a local maximum. By the second order condition, the Hessian matrix of $f$ is negative definite.

Suppose a function $y = f(x_1, \ldots, x_n)$ has a stationary value at $x^*$, i.e. $\frac{\partial f}{\partial x}(x^*) = 0$. If the naturally ordered principal minors (NOPM) of the Hessian alternate in sign, i.e.
$$f_{11}(x^*) < 0, \quad \begin{vmatrix} f_{11}(x^*) & f_{12}(x^*) \\ f_{21}(x^*) & f_{22}(x^*) \end{vmatrix} > 0, \quad \ldots, \quad (-1)^j \begin{vmatrix} f_{11}(x^*) & \cdots & f_{1j}(x^*) \\ \vdots & \ddots & \vdots \\ f_{j1}(x^*) & \cdots & f_{jj}(x^*) \end{vmatrix} > 0$$
for all $j = 1, \ldots, n$, then $x^*$ is a local maximum point. If the NOPM are all positive, then $x^*$ is a local minimum point.

To see why the signs of the NOPM have this pattern, consider the two-variable case $y = f(x_1, x_2)$. Take total differentials:
$$dy = f_1\, dx_1 + f_2\, dx_2$$
and
$$d^2 y = f_{11}(dx_1)^2 + 2 f_{12}\, dx_1 dx_2 + f_{22}(dx_2)^2 = f_{11}\left(dx_1 + \frac{f_{12}}{f_{11}}\, dx_2\right)^2 + \frac{f_{22} f_{11} - f_{12}^2}{f_{11}}(dx_2)^2.$$
For a maximum, we need $d^2 y < 0$ for all $dx_1$ and $dx_2$. If $f_{11} > 0$, we can choose $dx_1 \ne 0$ and $dx_2 = 0$; then $d^2 y > 0$ and we get a contradiction. Thus $f_{11} < 0$. If $f_{22} f_{11} - f_{12}^2 < 0$, then we choose $dx_1 = -\frac{f_{12}}{f_{11}}\, dx_2$; again $d^2 y > 0$ and we have a contradiction. Therefore $f_{22} f_{11} - f_{12}^2 > 0$.

Example 1: A profit-maximizing firm employs $n$ factors to produce one good. The objective function is
$$\pi = p f(x_1, \ldots, x_n) - \sum_{i=1}^n w_i x_i,$$
where $y = f(x_1, \ldots, x_n)$ is the production function. The first order necessary conditions are
$$\pi_i = p f_i - w_i = 0, \quad i = 1, \ldots, n.$$
The sufficient condition for a maximum is that the successive principal minors of
$$[\pi_{ij}] = \begin{pmatrix} p f_{11} & \cdots & p f_{1n} \\ \vdots & \ddots & \vdots \\ p f_{n1} & \cdots & p f_{nn} \end{pmatrix}$$
alternate in sign. Since $p > 0$, this implies that the successive naturally ordered principal minors of the Hessian of $f$ alternate in sign. The second order condition is satisfied for a concave production function.

2.2 Constrained Optimization

For optimization with equality constraints, the solution is usually found by the Lagrange-multiplier method. The idea is to transform the constrained optimization problem into a form to which the first order condition for unconstrained optimization can be applied. A typical maximization problem takes the following
form:
$$\text{maximize } y = f(x_1, \ldots, x_n) \quad \text{subject to} \quad g(x_1, \ldots, x_n) = 0.$$

First Order Necessary Condition: At a local maximum $x^*$, there is a $\lambda$ such that
$$L_j(x^*) = f_j(x^*) + \lambda g_j(x^*) = 0, \quad j = 1, \ldots, n,$$
where the variable $\lambda$ is known as the Lagrange multiplier.

Second Order Sufficient Condition: A sufficient condition for a maximum is
$$L_j(x^*, \lambda^*) = f_j(x^*) + \lambda^* g_j(x^*) = 0, \quad j = 1, \ldots, n,$$
together with the requirement that the matrix
$$L_{xx} = \begin{pmatrix} \frac{\partial^2 L}{\partial x_1 \partial x_1} & \cdots & \frac{\partial^2 L}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 L}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 L}{\partial x_n \partial x_n} \end{pmatrix}$$
is negative definite subject to a linear constraint, i.e. $h' L_{xx} h < 0$ for all $h$ satisfying
$$\frac{\partial g}{\partial x_1} h_1 + \cdots + \frac{\partial g}{\partial x_n} h_n = 0.$$
This can be shown by considering the Taylor series of $L$:
$$L(x^* + th) = L(x^*) + t\sum_{i=1}^n L_i(x^*) h_i + \frac{t^2}{2}\sum_{i=1}^n\sum_{j=1}^n L_{ij}(x^* + \theta t h)\, h_i h_j \le L(x^*).$$
To fully understand the first order condition, you need to know some linear algebra. See Silberberg and Suen, The Structure of Economics, for the details of the first order condition.

The second order condition can be checked by considering the bordered Hessian matrix
$$D^2 L = \begin{pmatrix} \frac{\partial^2 L}{\partial \lambda^2} & \frac{\partial^2 L}{\partial \lambda \partial x_1} & \cdots & \frac{\partial^2 L}{\partial \lambda \partial x_n} \\ \frac{\partial^2 L}{\partial \lambda \partial x_1} & \frac{\partial^2 L}{\partial x_1 \partial x_1} & \cdots & \frac{\partial^2 L}{\partial x_1 \partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 L}{\partial \lambda \partial x_n} & \frac{\partial^2 L}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 L}{\partial x_n \partial x_n} \end{pmatrix}.$$
Suppose that the first order condition is satisfied at $x^*$. It can be shown that $x^*$ is a local maximum point if (with all derivatives evaluated at $(x^*, \lambda^*)$)
$$\begin{vmatrix} 0 & \frac{\partial^2 L}{\partial \lambda \partial x_1} & \frac{\partial^2 L}{\partial \lambda \partial x_2} \\ \frac{\partial^2 L}{\partial \lambda \partial x_1} & \frac{\partial^2 L}{\partial x_1 \partial x_1} & \frac{\partial^2 L}{\partial x_1 \partial x_2} \\ \frac{\partial^2 L}{\partial \lambda \partial x_2} & \frac{\partial^2 L}{\partial x_2 \partial x_1} & \frac{\partial^2 L}{\partial x_2 \partial x_2} \end{vmatrix} > 0, \quad \begin{vmatrix} 0 & \frac{\partial^2 L}{\partial \lambda \partial x_1} & \frac{\partial^2 L}{\partial \lambda \partial x_2} & \frac{\partial^2 L}{\partial \lambda \partial x_3} \\ \frac{\partial^2 L}{\partial \lambda \partial x_1} & \frac{\partial^2 L}{\partial x_1 \partial x_1} & \frac{\partial^2 L}{\partial x_1 \partial x_2} & \frac{\partial^2 L}{\partial x_1 \partial x_3} \\ \frac{\partial^2 L}{\partial \lambda \partial x_2} & \frac{\partial^2 L}{\partial x_2 \partial x_1} & \frac{\partial^2 L}{\partial x_2 \partial x_2} & \frac{\partial^2 L}{\partial x_2 \partial x_3} \\ \frac{\partial^2 L}{\partial \lambda \partial x_3} & \frac{\partial^2 L}{\partial x_3 \partial x_1} & \frac{\partial^2 L}{\partial x_3 \partial x_2} & \frac{\partial^2 L}{\partial x_3 \partial x_3} \end{vmatrix} < 0, \quad \text{etc.}$$
The NOPM of the bordered Hessian must alternate in sign. For a minimum, all these NOPM are negative.

2.3 Nonlinear Programming

In many economic problems, we have to deal with inequality constraints rather than equality constraints. These problems can be analyzed by nonlinear programming methods:
$$\max f(x) \quad \text{subject to} \quad g_k(x) \ge 0, \quad k = 1, \ldots, m.$$
Define the Lagrange function by
$$L(x, \lambda) = f(x) + \sum_{k=1}^m \lambda_k g_k(x).$$
The set of points satisfying all the constraints, $\{x : g_k(x) \ge 0,\ k = 1, \ldots, m\}$, is called the feasible set. If $g_k(x) = 0$ at a particular $x$ in the feasible set, we say constraint $k$ is binding; otherwise, we say constraint $k$ is slack.

The Lagrange function has a saddle point if there exist $\lambda^* \ge 0$ and $x^*$ such that
$$L(x, \lambda^*) \le L(x^*, \lambda^*) \le L(x^*, \lambda) \quad \text{for all } x \text{ and all } \lambda \ge 0.$$
That is, $L$ achieves a maximum in the $x$ directions and a minimum in the $\lambda$ directions.

Saddle Point Theorem: If $L$ has a saddle point $(x^*, \lambda^*)$, then $f$ has a global maximum at $x^*$ subject to $g_k(x) \ge 0$, $k = 1, \ldots, m$.

Proof. Since $(x^*, \lambda^*)$ is a saddle point,
$$f(x) + \sum_{k=1}^m \lambda^*_k g_k(x) \le f(x^*) + \sum_{k=1}^m \lambda^*_k g_k(x^*) \le f(x^*) + \sum_{k=1}^m \lambda_k g_k(x^*).$$
From the second inequality, $\sum_{k=1}^m \lambda^*_k g_k(x^*) \le \sum_{k=1}^m \lambda_k g_k(x^*)$. Suppose $g_k(x^*) < 0$ for some $k$; then we can make $\lambda_k$ sufficiently large, thereby making $\sum_{k=1}^m \lambda_k g_k(x^*)$ small enough to violate this inequality. Hence $g_k(x^*) \ge 0$ for all $k$. Since $\lambda^*_k \ge 0$ and $g_k(x^*) \ge 0$ for all $k$, $\sum_{k=1}^m \lambda^*_k g_k(x^*) \ge 0$. But the saddle point property holds for all $\lambda \ge 0$; thus it holds for $\lambda = 0$, and so $\sum_{k=1}^m \lambda^*_k g_k(x^*) \le 0$. Combining the last two inequalities gives $\sum_{k=1}^m \lambda^*_k g_k(x^*) = 0$. Substitute this equation into the first inequality of the saddle point condition: for any feasible $x$ we have $\sum_{k} \lambda^*_k g_k(x) \ge 0$, and thus $f(x) \le f(x^*)$ for all $x$ satisfying $g_k(x) \ge 0$, $k = 1, \ldots, m$. $\square$

In general, a maximum need not be a saddle point. However, if $f$ and all $g_k$ are concave, and there exists an $\hat{x}$ such that $g_k(\hat{x}) > 0$ for all $k$ (this condition is known as the Slater constraint qualification), then the saddle point condition is also necessary for a point to be a maximum of the nonlinear programming problem. The necessity of a maximum being a saddle point under the Slater condition was first proven by Uzawa. As the proof is quite difficult, we will not show it here; if you are interested, you can find it in Silberberg and Wing.

A condition analogous to the first-order condition for classical optimization can be derived from the saddle point condition if $f$ and all $g_k$ are continuously differentiable. We have the Kuhn-Tucker conditions:
$$\frac{\partial L(x^*, \lambda^*)}{\partial x_i} = 0 \text{ for all } i; \qquad \lambda^*_k \ge 0, \quad g_k(x^*) \ge 0, \quad \lambda^*_k g_k(x^*) = 0 \text{ for all } k.$$
The conditions $\lambda^*_k g_k(x^*) = 0$ for all $k$ are called the complementary slackness conditions. The K-T conditions are useful in the search for a solution of nonlinear programming problems. In general, the Kuhn-Tucker conditions are not sufficient for the saddle point condition. If $f$ and the $g_k$ are concave functions, however, the K-T conditions imply that the Lagrange function has a saddle point.

Theorem. (i) Suppose all $f, g_k$ are differentiable. If $(x^*, \lambda^*)$ is a saddle point of $L$, it satisfies the Kuhn-Tucker conditions. (ii) Suppose $f, g_k$ are concave functions. If $(x^*, \lambda^*)$ satisfies the Kuhn-Tucker conditions, it is a saddle point of $L$.

Proof. (i) Since $\lambda^*_k \ge 0$, $g_k(x^*) \ge 0$, and $\sum_{k=1}^m \lambda^*_k g_k(x^*) = 0$, we have $\lambda^*_k g_k(x^*) = 0$ for all $k$. Furthermore, as $L(x, \lambda^*) \le L(x^*, \lambda^*)$ for all $x$, $x^*$ is a global maximum point of $L(\cdot, \lambda^*)$; thus $\frac{\partial L(x^*, \lambda^*)}{\partial x_i} = 0$ for all $i$. (ii) If $f$ and the $g_k$ are concave, then $L$ is also concave in $x$, and since $\frac{\partial L(x^*, \lambda^*)}{\partial x_i} = 0$ for all $i$,
$$L(x, \lambda^*) \le L(x^*, \lambda^*) + \frac{\partial L(x^*, \lambda^*)}{\partial x}(x - x^*) = L(x^*, \lambda^*).$$
Moreover, $g_k(x^*) \ge 0$ implies $\sum_{k=1}^m \lambda_k g_k(x^*) \ge 0$ for all $\lambda \ge 0$. Thus $\sum_{k=1}^m \lambda_k g_k(x^*) \ge \sum_{k=1}^m \lambda^*_k g_k(x^*) = 0$ and $L(x^*, \lambda^*) \le L(x^*, \lambda)$. $\square$

Combining all the results above, we obtain the following.

Theorem. Suppose all $f, g_k$ are differentiable and concave and the Slater constraint qualification holds. Then $x^*$ is a maximum if and only if there exists $\lambda^* \ge 0$ such that $(x^*, \lambda^*)$ satisfies the Kuhn-Tucker conditions.

Therefore, for concave programming, the K-T conditions are sufficient for a global maximum.
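The Kuhn-Tucker conditions are easy to verify numerically. A minimal sketch (the problem and numbers are made up for illustration, not taken from the notes): for $\max -(x-2)^2$ subject to $g(x) = 1 - x \ge 0$, the constraint binds at the candidate $x^* = 1$ with multiplier $\lambda^* = 2$.

```python
# Kuhn-Tucker check for:  max -(x-2)^2   s.t.   g(x) = 1 - x >= 0.
# Candidate solution (assumed example): x* = 1 with multiplier lam* = 2.

def f_prime(x):      # f'(x) = -2(x - 2)
    return -2.0 * (x - 2.0)

def g(x):            # constraint function g(x) = 1 - x
    return 1.0 - x

def g_prime(x):      # g'(x) = -1
    return -1.0

x_star, lam_star = 1.0, 2.0

# Stationarity: dL/dx = f'(x*) + lam* g'(x*) should vanish.
stationarity = f_prime(x_star) + lam_star * g_prime(x_star)

assert abs(stationarity) < 1e-12          # dL/dx = 0
assert g(x_star) >= 0 and lam_star >= 0   # primal and dual feasibility
assert abs(lam_star * g(x_star)) < 1e-12  # complementary slackness
print("Kuhn-Tucker conditions hold at x* = 1")
```

Since the objective and the constraint are concave, the theorem above guarantees that this candidate is in fact the global maximum.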
Moreover, when the Slater constraint qualification holds, the K-T conditions are also necessary conditions for a global maximum.

Example: Xiao Ming consumes candy $c$ and fruit $f$ and has a utility function $u(c, f) = 2\ln c + \ln f$. Suppose he can spend no more than 30 Yuan, and the prices of candy and fruit are 1 Yuan and 5 Yuan, respectively. However, his mother does not allow him to consume more than 15 units of candy. His problem can be formulated as follows:
$$\max\; 2\ln c + \ln f \quad \text{subject to} \quad c + 5f \le 30, \quad c \le 15.$$
The Lagrange function is
$$L = 2\ln c + \ln f + \lambda_1(30 - c - 5f) + \lambda_2(15 - c).$$
Then the Kuhn-Tucker conditions are
$$\frac{2}{c} - \lambda_1 - \lambda_2 = 0, \qquad \frac{1}{f} - 5\lambda_1 = 0,$$
$$\lambda_1 \ge 0, \quad \lambda_1(30 - c - 5f) = 0, \qquad \lambda_2 \ge 0, \quad \lambda_2(15 - c) = 0.$$
From the second equation, $\lambda_1 = \frac{1}{5f} > 0$; thus complementary slackness implies $30 - c - 5f = 0$. If $\lambda_2 = 0$, then $\frac{2}{c} - \frac{1}{5f} = 0$, or $c = 10f$. Then $30 - 15f = 0$, giving $f = 2$ and $c = 20$. This violates the constraint $c \le 15$. Therefore $\lambda_2 > 0$, and by complementary slackness $c = 15$. Finally, use the budget constraint to obtain $f = 3$.

2.4 Constraint Qualification

Recall that for concave programming ($f, g_k$ concave), under the Slater constraint qualification the Kuhn-Tucker condition is necessary for an optimum (i.e., if the nonlinear programming problem has an optimum, the optimum point must satisfy the Kuhn-Tucker condition). Is this also true for a problem with some non-concave constraint $g_k$? Consider an example:
$$\max f(x_1, x_2) = x_1$$
$$\text{s.t.} \quad g_1(x_1, x_2) = x_1 \ge 0, \quad g_2(x_1, x_2) = x_2 \ge 0, \quad g_3(x_1, x_2) = (1 - x_1)^3 - x_2 \ge 0.$$
Here the third constraint is not concave, and the Slater condition is satisfied, e.g. at $(0.5, 0.1)$. The Lagrangian function and the Kuhn-Tucker conditions for this problem are
$$L = x_1 + \mu\left[(1 - x_1)^3 - x_2\right] + \mu_1 x_1 + \mu_2 x_2,$$
$$\frac{\partial L}{\partial x_1} = 1 - 3\mu(1 - x_1)^2 + \mu_1 = 0, \qquad \frac{\partial L}{\partial x_2} = -\mu + \mu_2 = 0,$$
$$(1 - x_1)^3 - x_2 \ge 0, \quad \mu \ge 0, \quad \mu\left[(1 - x_1)^3 - x_2\right] = 0,$$
$$x_1 \ge 0, \quad \mu_1 \ge 0, \quad \mu_1 x_1 = 0, \qquad x_2 \ge 0, \quad \mu_2 \ge 0, \quad \mu_2 x_2 = 0.$$
Obviously, the problem has a maximum at $(1, 0)$. But for $x_1 = 1$, $\mu_1 = 0$ by complementary slackness. Then
$$\frac{\partial L}{\partial x_1} = 1 - 3\mu(1 - 1)^2 + 0 = 1 \ne 0,$$
and the Kuhn-Tucker condition is violated. So for problems with non-concave constraint functions, some other constraint qualification is needed to make the Kuhn-Tucker condition a necessary condition. We consider a constraint qualification first used by Kuhn and Tucker. Intuitively, the Kuhn-Tucker constraint qualification (KTCQ) is like this: if we move away from the optimal point $x^*$ in a direction that does not reduce the values of the effective (binding) constraints, then we must not move outside the feasible set. Formally, for any $dx$ satisfying
$$dg_k(x^*) = \frac{\partial g_k(x^*)}{\partial x_1} dx_1 + \cdots + \frac{\partial g_k(x^*)}{\partial x_n} dx_n \ge 0 \quad \text{whenever } g_k(x^*) = 0,$$
there exists a curve satisfying three properties: (1) the curve starts at $x^*$, (2) it is contained in the feasible set, and (3) it is tangent to the vector $dx$ at $x^*$. Under the KTCQ, it can be shown that the Kuhn-Tucker condition must hold at any local optimum. Since $f$ and $g$ are not concave, we can only look for a local optimum rather than a global optimum.

Note that the KTCQ fails to hold in the previous example. To see this, note that $g_1(1, 0) = 1$, $g_2(1, 0) = 0$, $g_3(1, 0) = 0$, and thus only $g_2, g_3$ are effective constraints. Evaluate the gradients of $g_2$ and $g_3$ at the optimum $(1, 0)$:
$$\frac{\partial g_2}{\partial x_1}(1, 0) = 0, \quad \frac{\partial g_2}{\partial x_2}(1, 0) = 1; \qquad \frac{\partial g_3}{\partial x_1}(1, 0) = 0, \quad \frac{\partial g_3}{\partial x_2}(1, 0) = -1.$$
We can choose $dx = (a, 0)$ or $(-b, 0)$ for any $a \ge 0$ and $b > 0$. Both vectors give
$$\frac{\partial g_k(x^*)}{\partial x_1} dx_1 + \frac{\partial g_k(x^*)}{\partial x_2} dx_2 = 0, \quad k = 2, 3.$$
But the vector $(a, 0)$ points away from the feasible set and does not satisfy the KTCQ. Surprisingly, if we add one more constraint to this example, the KTCQ may hold.
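The failure of the Kuhn-Tucker condition at $(1, 0)$ in the example above can be confirmed numerically. A minimal sketch: since $(1 - x_1)^2 = 0$ at the optimum, no choice of multipliers can make $\partial L / \partial x_1$ vanish.

```python
# Example:  max x1   s.t.  x1 >= 0,  x2 >= 0,  (1 - x1)^3 - x2 >= 0.
# The maximum is at (x1, x2) = (1, 0).  Complementary slackness forces
# mu1 = 0 there (because x1 = 1 > 0), and we check that no mu >= 0 can
# make the stationarity condition dL/dx1 = 0 hold.

x1 = 1.0  # the optimal point

def dL_dx1(mu, mu1):
    # dL/dx1 = 1 - 3*mu*(1 - x1)^2 + mu1
    return 1.0 - 3.0 * mu * (1.0 - x1) ** 2 + mu1

# With x1 = 1 the factor (1 - x1)^2 is exactly zero, so
# dL/dx1 = 1 + mu1 >= 1 for every multiplier choice.
for mu in [0.0, 1.0, 10.0, 1e6]:
    assert dL_dx1(mu, mu1=0.0) == 1.0
print("Kuhn-Tucker stationarity fails at the maximum (1, 0)")
```

The cubic constraint flattens out exactly at the optimum, which is what breaks the multiplier condition; the KTCQ discussion above explains the geometric reason.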
For example, take $g_4(x) = 2 - 2x_1 - x_2 \ge 0$. The feasible set remains intact, and $g_4$ is also an effective constraint at $(1, 0)$. But now
$$\frac{\partial g_4(1, 0)}{\partial x_1}(a) + \frac{\partial g_4(1, 0)}{\partial x_2}(0) = -2a < 0,$$
so we no longer need to consider $(a, 0)$. Thus the KTCQ holds, and the Kuhn-Tucker condition holds at $(1, 0)$. See Chiang for details.

2.5 Quasi-concave Programming

The assumption of concave functions may sometimes be too strong for some economic applications. For example, if the production technology of a firm exhibits variable returns to scale, the production function may not be concave. (In consumer theory, in many situations preferences can be represented by a concave utility function.) Luckily, the theory of concave programming can be extended to problems with quasi-concave objective and constraint functions. This kind of problem is referred to as quasi-concave programming.

For quasi-concave programming, if the constraint functions are concave and there exists an $x \ge 0$ such that $g_k(x) > 0$ for all $k$, the Kuhn-Tucker condition is a necessary condition for an optimum. However, for a quasi-concave programming problem, the Kuhn-Tucker condition may not be a sufficient condition. In that case, the solution we get from the Kuhn-Tucker condition may turn out not to be an optimum point. The following example illustrates:
$$\max (x - 1)^3 \quad \text{s.t.} \quad 2 - x \ge 0, \quad x \ge 0.$$
The Lagrangian function and the Kuhn-Tucker conditions for this problem are
$$L = (x - 1)^3 + \lambda_1(2 - x) + \lambda_2 x,$$
$$\frac{\partial L}{\partial x} = 3(x - 1)^2 - \lambda_1 + \lambda_2 = 0,$$
$$\lambda_1 \ge 0, \quad 2 - x \ge 0, \quad \lambda_1(2 - x) = 0, \qquad \lambda_2 \ge 0, \quad x \ge 0, \quad \lambda_2 x = 0.$$
Note that the constraints are linear, so the Kuhn-Tucker condition is a necessary condition. For this problem, $x = 2$ is the maximum; thus at $x = 2$ the Kuhn-Tucker condition holds. But in this problem the Kuhn-Tucker condition is not sufficient: in particular, at $x = 1$, which is not a maximum, the Kuhn-Tucker condition also holds. We would like to know when the Kuhn-Tucker condition is a sufficient condition. The following conditions can be used to check whether a solution of the Kuhn-Tucker condition is a maximum or not. Let $x^*$ satisfy the Kuhn-Tucker condition.
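A quick numerical check of this example (a sketch of the computation, with the multipliers worked out by hand): the Kuhn-Tucker system is satisfied both at $x = 2$ (with $\lambda_1 = 3$, $\lambda_2 = 0$) and at the non-maximum $x = 1$ (with $\lambda_1 = \lambda_2 = 0$), even though the objective strictly prefers $x = 2$.

```python
# KT conditions for:  max (x - 1)^3   s.t.   2 - x >= 0,   x >= 0.
def kt_holds(x, lam1, lam2, tol=1e-12):
    stationarity = 3.0 * (x - 1.0) ** 2 - lam1 + lam2
    feasible = (2.0 - x >= -tol) and (x >= -tol)
    dual = lam1 >= 0.0 and lam2 >= 0.0
    slack = abs(lam1 * (2.0 - x)) < tol and abs(lam2 * x) < tol
    return abs(stationarity) < tol and feasible and dual and slack

f = lambda x: (x - 1.0) ** 3

assert kt_holds(2.0, lam1=3.0, lam2=0.0)   # the true maximum passes
assert kt_holds(1.0, lam1=0.0, lam2=0.0)   # a non-maximum also passes
assert f(2.0) > f(1.0)                     # so KT alone is not sufficient
print("KT holds at both x = 1 and x = 2; only x = 2 maximizes")
```

The point $x = 1$ slips through because $(x - 1)^3$ is quasi-concave but has a flat inflection there; the conditions below rule such points out.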
Suppose one of the following conditions holds:

1. There exists $x$ with $x_i > 0$ for all $i$ and $g_k(x) \ge 0$ for all $k$, and $\frac{\partial f}{\partial x_i}(x^*) \ne 0$ for at least one $i$.
2. The function $f$ is twice continuously differentiable in a neighborhood of $x^*$, and $\frac{\partial f}{\partial x_i}(x^*) \ne 0$ for at least one $i$.
3. The function $f$ is concave.

Then $x^*$ must be a maximum. Condition 1 is stronger than the condition you would see in Arrow and Enthoven ("Quasi-concave Programming", Econometrica, October 1961, pp. 779-800) but has a simpler form. Note that in the previous example $\frac{\partial f}{\partial x}(1) = 0$ and $\frac{\partial f}{\partial x}(2) = 3$; the condition is satisfied at $x = 2$ but not at $x = 1$.

For quasi-concave programming, the Slater condition can still be used to ensure the necessity of the K-T condition for a maximum, provided the $g_k$ are concave for all $k$, or $\frac{\partial g_k}{\partial x_i} \ne 0$ for the effective constraints at the maximum.

A summary of the relation between a maximum and the Kuhn-Tucker condition:

1. If the objective function is concave and the constraints are linear, then the K-T condition must hold at a maximum (necessity). Moreover, any point satisfying K-T is a maximum (sufficiency).
2. If both the objective and the constraint functions are concave, any point satisfying the K-T condition must be a maximum. However, a maximum may not satisfy K-T.
3. If both the objective and the constraint functions are concave, and the Slater condition holds, then any point satisfying K-T must be a maximum, and a maximum must satisfy K-T.
4. If the KTCQ holds, any local maximum must satisfy K-T. However, some non-maximum points may also satisfy K-T.
5. If the objective function is quasi-concave, the constraints are concave, and the Slater condition holds, the K-T condition must hold at a maximum. But some non-maximum points may also satisfy K-T.
6. If both the objective and the constraint functions are quasi-concave, and condition 1, 2, or 3 above holds, any point satisfying the K-T condition is a maximum.

Application: Optimal Incentive Scheme

Consider a simple model of a principal-agent relationship. The principal wants to induce the agent to take some action that is costly to the agent. A
typical example is the manager-worker relation. The manager is often unable to observe the action of the worker but may be able to partially infer the action by observing output. The manager's problem is to design an incentive payment to the worker that induces the worker to take the best action from the viewpoint of the manager.

Suppose that there are only two possible output levels $x_1$ and $x_2$, where $x_1 < x_2$, and only two possible actions, $a$ and $b$, that can be taken by the worker. Actions $a$ and $b$ can be interpreted as shirking and working hard, respectively. The actions can influence the probability of occurrence of $x_i$, $i = 1, 2$. Let $\pi_{ia}$ and $\pi_{ib}$ be the probabilities that output level $x_i$ is observed when the worker takes action $a$ and $b$, respectively. Naturally, it is more likely to get a high output if the worker works hard, i.e. $\pi_{2a} < \pi_{2b}$ and $\pi_{1a} > \pi_{1b}$. The manager pays $s_i$ to the worker if $x_i$ is observed. The expected profit of the manager if the worker chooses action $b$ is
$$\sum_{i=1,2} (x_i - s_i)\,\pi_{ib}.$$
The worker is assumed to have an expected-utility function of income $u$, where $u' > 0$ and $u'' < 0$.
The costs (in utility) of the actions are denoted by $c_a$ and $c_b$, respectively. The manager wants the worker to choose action $b$. To achieve this, the payment scheme must satisfy the following incentive-compatibility constraint:
$$\sum_{i=1,2} \pi_{ib}\, u(s_i) - c_b \ge \sum_{i=1,2} \pi_{ia}\, u(s_i) - c_a.$$
The principal's maximization problem is
$$\max_{s_1, s_2} \sum_{i=1,2} (x_i - s_i)\,\pi_{ib}$$
subject to
$$\sum_{i=1,2} \pi_{ib}\, u(s_i) - c_b - \bar{u} \ge 0,$$
$$\sum_{i=1,2} (\pi_{ib} - \pi_{ia})\, u(s_i) - c_b + c_a \ge 0,$$
where $\bar{u}$ is the utility the worker can get elsewhere. The payment scheme must give an expected utility at least as high as $\bar{u}$, or else the worker will not participate; this is called the participation constraint. The Lagrangian function is
$$L(s, \lambda, \mu) = \sum_{i=1,2} (x_i - s_i)\,\pi_{ib} + \lambda\left(\sum_{i=1,2}\pi_{ib}\, u(s_i) - c_b - \bar{u}\right) + \mu\left(\sum_{i=1,2}(\pi_{ib} - \pi_{ia})\, u(s_i) - c_b + c_a\right).$$
We can solve the problem using the Kuhn-Tucker conditions:
$$\frac{\partial L}{\partial s_i} = -\pi_{ib} + \lambda \pi_{ib}\, u'(s_i) + \mu(\pi_{ib} - \pi_{ia})\, u'(s_i) = 0, \quad i = 1, 2, \tag{1}$$
$$\lambda \ge 0, \quad \sum_{i=1,2}\pi_{ib}\, u(s_i) - c_b - \bar{u} \ge 0, \quad \lambda\left(\sum_{i=1,2}\pi_{ib}\, u(s_i) - c_b - \bar{u}\right) = 0,$$
$$\mu \ge 0, \quad \sum_{i=1,2}(\pi_{ib} - \pi_{ia})\, u(s_i) - c_b + c_a \ge 0, \quad \mu\left(\sum_{i=1,2}(\pi_{ib} - \pi_{ia})\, u(s_i) - c_b + c_a\right) = 0.$$
Dividing (1) by $\pi_{ib}\, u'(s_i)$, we get
$$\frac{1}{u'(s_i)} = \lambda + \mu\left(1 - \frac{\pi_{ia}}{\pi_{ib}}\right), \quad i = 1, 2. \tag{2}$$
Since $\frac{\pi_{1a}}{\pi_{1b}} > 1 > \frac{\pi_{2a}}{\pi_{2b}}$, we must have $u'(s_1) \ge u'(s_2)$. As $u$ is strictly concave (i.e. $u'' < 0$), $s_1 \le s_2$. Moreover, $\mu\left(1 - \frac{\pi_{1a}}{\pi_{1b}}\right) \le 0 < \frac{1}{u'(s_1)}$; thus $\lambda > 0$. By complementary slackness,
$$\sum_{i=1,2}\pi_{ib}\, u(s_i) - c_b - \bar{u} = 0. \tag{3}$$
There are two cases.

Case 1: $c_a \ge c_b$ (workaholic). Then
$$\sum_{i=1,2}(\pi_{ib} - \pi_{ia})\, u(s_i) - c_b + c_a = (\pi_{2b} - \pi_{2a})(u(s_2) - u(s_1)) - c_b + c_a > 0.$$
By complementary slackness, $\mu = 0$. Then (2) becomes $\frac{1}{u'(s_i)} = \lambda$. This implies $u'(s_1) = u'(s_2)$, and by strict concavity of $u$, $s_1 = s_2$. As $\lambda > 0$ and $\sum_{i=1,2}\pi_{ib}\, u(s_i) - c_b - \bar{u} = u(s_1) - c_b - \bar{u} = 0$, we can solve for $s_1$ from this last equation.

Case 2: $c_a < c_b$. Suppose $\mu = 0$; then by the same argument as in Case 1, $s_1 = s_2$. Then $\sum_{i=1,2}(\pi_{ib} - \pi_{ia})\, u(s_i) - c_b + c_a = c_a - c_b < 0$, which is a contradiction, and thus $\mu > 0$. By complementary slackness,
$$\sum_{i=1,2}(\pi_{ib} - \pi_{ia})\, u(s_i) - c_b + c_a = 0. \tag{4}$$
Finally, we can use equations (3) and (4) to solve for $s_1$ and $s_2$.
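With a specific functional form, equations (3) and (4) pin down the payments explicitly. A sketch with assumed values (none of these numbers come from the notes): take $u(s) = \sqrt{s}$, $\pi_{1b} = 0.3$, $\pi_{2b} = 0.7$, $\pi_{1a} = 0.6$, $\pi_{2a} = 0.4$, $c_b = 1$, $c_a = 0.2$, $\bar{u} = 1$, so we are in Case 2 and both constraints bind. Writing $u_i = u(s_i)$, (3) and (4) become a linear system in $(u_1, u_2)$.

```python
import numpy as np

# Assumed example parameters (not from the notes); utility u(s) = sqrt(s).
pi_b = np.array([0.3, 0.7])   # output probabilities under hard work (b)
pi_a = np.array([0.6, 0.4])   # output probabilities under shirking (a)
c_b, c_a, u_bar = 1.0, 0.2, 1.0

# In case 2 (c_a < c_b) both constraints bind.  With u_i = u(s_i),
# equations (3) and (4) are LINEAR in (u_1, u_2):
#   pi_1b*u_1 + pi_2b*u_2                   = c_b + u_bar
#   (pi_1b - pi_1a)*u_1 + (pi_2b - pi_2a)*u_2 = c_b - c_a
A = np.array([pi_b, pi_b - pi_a])
rhs = np.array([c_b + u_bar, c_b - c_a])
u1, u2 = np.linalg.solve(A, rhs)

s1, s2 = u1**2, u2**2          # invert u(s) = sqrt(s)
assert u1 >= 0 and s1 < s2     # low output is punished: s_1 < s_2
print(f"s1 = {s1:.4f}, s2 = {s2:.4f}")
```

The inequality $s_1 < s_2$ confirms the analytical result above: the worker is paid more when the high output is observed, which is what gives the incentive to work hard.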
