
TASK 8/1/2020 QUANTITATIVE METHODS for ECONOMIC APPLICATIONS MATHEMATICS for ECONOMIC APPLICATIONS


Academic year: 2021


I M 1) If $z = \sqrt{2}\,(2+\sqrt{3})\,e^{\frac{\pi}{4}i} - 2\,(1+\sqrt{3})\,e^{\frac{\pi}{3}i}$, calculate $\sqrt{z}$.

Since $e^{\frac{\pi}{4}i} = \cos\frac{\pi}{4} + i\sin\frac{\pi}{4} = \frac{\sqrt{2}}{2} + i\,\frac{\sqrt{2}}{2}$ and $e^{\frac{\pi}{3}i} = \cos\frac{\pi}{3} + i\sin\frac{\pi}{3} = \frac{1}{2} + i\,\frac{\sqrt{3}}{2}$, it is:

$z = \sqrt{2}\,(2+\sqrt{3})\left(\frac{\sqrt{2}}{2} + i\,\frac{\sqrt{2}}{2}\right) - 2\,(1+\sqrt{3})\left(\frac{1}{2} + i\,\frac{\sqrt{3}}{2}\right) = (2+\sqrt{3})(1+i) - (1+\sqrt{3})(1+i\sqrt{3}) =$

$= \left(2+\sqrt{3}-1-\sqrt{3}\right) + i\left(2+\sqrt{3}-\sqrt{3}-3\right) = 1 - i\,.$

From $1-i = \sqrt{2}\left(\frac{\sqrt{2}}{2} - i\,\frac{\sqrt{2}}{2}\right) = \sqrt{2}\cdot\left(\cos\frac{7\pi}{4} + i\sin\frac{7\pi}{4}\right)$ we get:

$\sqrt{z} = \sqrt{1-i} = \sqrt[4]{2}\left[\cos\left(\frac{\frac{7}{4}\pi + 2k\pi}{2}\right) + i\sin\left(\frac{\frac{7}{4}\pi + 2k\pi}{2}\right)\right],\ 0 \le k \le 1\,;$

and so $z_1 = \sqrt[4]{2}\left(\cos\frac{7}{8}\pi + i\sin\frac{7}{8}\pi\right)$ and $z_2 = \sqrt[4]{2}\left(\cos\frac{15}{8}\pi + i\sin\frac{15}{8}\pi\right)$.

I M 2) Given the matrix $A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & k & 1 \end{pmatrix}$, determine the two values of the parameter $k$ for which the matrix admits multiple eigenvalues. For these values of $k$, check whether the corresponding matrix is diagonalizable or not.

From $\left|A - \lambda I\right| = 0$ we get:

$\begin{vmatrix} 1-\lambda & 1 & 1 \\ 1 & -\lambda & 1 \\ 1 & k & 1-\lambda \end{vmatrix} = (1-\lambda)\left(\lambda^2 - \lambda - k\right) + 2\lambda + k = -\lambda^3 + 2\lambda^2 + (k+1)\lambda = -\lambda\left(\lambda^2 - 2\lambda - (k+1)\right) = 0\,.$

The first eigenvalue is $\lambda = 0$. To get multiple eigenvalues there are two possibilities.

1) $\lambda^2 - 2\lambda - (k+1) = 0$ for $\lambda = 0 \Rightarrow k + 1 = 0 \Rightarrow k = -1$. The characteristic polynomial becomes $-\lambda\left(\lambda^2 - 2\lambda\right) = -\lambda^2\left(\lambda - 2\right) = 0 \Rightarrow \lambda_1 = \lambda_2 = 0$ and $\lambda_3 = 2$.

For $k = -1$ and $\lambda = 0$ we get $A - 0\cdot I = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & -1 & 1 \end{pmatrix}$, and since $\begin{vmatrix} 1 & 1 \\ 1 & 0 \end{vmatrix} = -1 \ne 0$ we get $\operatorname{Rank}\left(A - 0\cdot I\right) = 2$, and so $m_g = 3 - 2 = 1 \ne m_a = 2$: the matrix, for $k = -1$ and $\lambda = 0$, is not a diagonalizable one.

2) The second-degree polynomial $\lambda^2 - 2\lambda - (k+1)$ may have a double root. Solving we get $\lambda = 1 \pm \sqrt{1 + (k+1)} = 1 \pm \sqrt{k+2}$, and so we get the double root $\lambda = 1$ if $k = -2$. For $k = -2$ and $\lambda = 1$ we get $A - 1\cdot I = \begin{pmatrix} 0 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & -2 & 0 \end{pmatrix}$, and since $\begin{vmatrix} 0 & 1 \\ 1 & -1 \end{vmatrix} = -1 \ne 0$, also this time we get $\operatorname{Rank}\left(A - 1\cdot I\right) = 2$, and so $m_g = 3 - 2 = 1 \ne m_a = 2$: the matrix, for $k = -2$ and $\lambda = 1$, is not a diagonalizable one.

I M 3) Consider the linear map $f:\mathbb{R}^4 \to \mathbb{R}^4$, $\mathbf{y} = A\cdot\mathbf{x}$, for which:

$f\left(x_1, x_2, x_3, x_4\right) = \left(x_1 + 3x_2;\ 2x_1 + 2x_2;\ x_3 + 2x_4;\ 2x_3 + 4x_4\right)\,.$

Determine the dimensions of the Kernel and of the Image of this linear map, and then find a basis for the Kernel.

From $f\left(x_1, x_2, x_3, x_4\right) = \left(x_1 + 3x_2;\ 2x_1 + 2x_2;\ x_3 + 2x_4;\ 2x_3 + 4x_4\right)$ we get $A = \begin{pmatrix} 1 & 3 & 0 & 0 \\ 2 & 2 & 0 & 0 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 2 & 4 \end{pmatrix}$. By elementary operations on the rows, $R_2 \leftarrow R_2 - 2R_1$ and $R_4 \leftarrow R_4 - 2R_3$, we get:

$A = \begin{pmatrix} 1 & 3 & 0 & 0 \\ 2 & 2 & 0 & 0 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 2 & 4 \end{pmatrix} \to \begin{pmatrix} 1 & 3 & 0 & 0 \\ 0 & -4 & 0 & 0 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{pmatrix}$. Since $\begin{vmatrix} 1 & 3 & 0 \\ 0 & -4 & 0 \\ 0 & 0 & 1 \end{vmatrix} = -4 \ne 0$ we see that $\operatorname{Rank} A = 3$, and so $\operatorname{Dim}\operatorname{Im} f = 3$ and $\operatorname{Dim}\operatorname{Ker} f = 4 - 3 = 1$.

To find a basis for the Kernel we must solve the system:

$A\cdot\mathbf{x} = \mathbf{0} \Rightarrow \begin{pmatrix} 1 & 3 & 0 & 0 \\ 2 & 2 & 0 & 0 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 2 & 4 \end{pmatrix}\cdot\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} \Rightarrow \begin{cases} x_1 + 3x_2 = 0 \\ 2x_1 + 2x_2 = 0 \\ x_3 + 2x_4 = 0 \\ 2x_3 + 4x_4 = 0 \end{cases} \Rightarrow \begin{cases} x_1 + 3x_2 = 0 \\ -4x_2 = 0 \\ x_3 = -2x_4 \end{cases} \Rightarrow \begin{cases} x_1 = 0 \\ x_2 = 0 \\ x_3 = -2x_4 \end{cases}$

So the vectors belonging to the Kernel are of the form $\left(0,\ 0,\ -2k,\ k\right)$ and a basis for the Kernel may be the vector $\left(0,\ 0,\ -2,\ 1\right)$.
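The kernel computation admits a quick sketch of a check (the helper `matvec` is mine):

```python
# matrix of the linear map f, rows as in the text
A = [[1, 3, 0, 0],
     [2, 2, 0, 0],
     [0, 0, 1, 2],
     [0, 0, 2, 4]]

def matvec(M, v):
    # image of the column vector v under the matrix M
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

basis = [0, 0, -2, 1]            # proposed kernel basis vector
image_of_basis = matvec(A, basis)
```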

I M 4) Given the matrix $A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$, determine at least one matrix $B$, different from $A$, that is similar to $A$.

Similarity between matrices is satisfied if there exists a nonsingular matrix $P$ such that $A\cdot P = P\cdot B \Rightarrow B = P^{-1}\cdot A\cdot P$. If we choose $P = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$, it is $\left|P\right| = 1 \ne 0$ and so $P^{-1} = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$. So, from $B = P^{-1}\cdot A\cdot P$, we get:

$B = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}\cdot\begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}\cdot\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}\cdot\begin{pmatrix} 4 & 3 \\ 5 & 3 \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 6 & 3 \end{pmatrix}\,.$


II M 1) Given $f(x, y) = x^2 y + x y^2 - x y$ and $\mathbf{v} = \left(\cos\alpha, \sin\alpha\right)$, determine at least two values of $\alpha$ for which it results $D_{\mathbf{v}} f\left(1, 1\right) = D^2_{\mathbf{v},\mathbf{v}} f\left(1, 1\right)$.

$f(x, y) = x^2 y + x y^2 - x y$ is a twice differentiable function $\forall\,(x, y) \in \mathbb{R}^2$. So $D_{\mathbf{v}} f\left(1, 1\right) = \nabla f\left(1, 1\right)\cdot\mathbf{v}$ and $D^2_{\mathbf{v},\mathbf{v}} f\left(1, 1\right) = \mathbf{v}\cdot\mathbb{H}\left(1, 1\right)\cdot\mathbf{v}^{\mathsf T}$.

It is $\nabla f\left(x, y\right) = \left(2xy + y^2 - y;\ x^2 - x + 2xy\right) \Rightarrow \nabla f\left(1, 1\right) = \left(2, 2\right)$.

So we get: $D_{\mathbf{v}} f\left(1, 1\right) = \nabla f\left(1, 1\right)\cdot\mathbf{v} = \left(2, 2\right)\cdot\left(\cos\alpha, \sin\alpha\right) = 2\cos\alpha + 2\sin\alpha$. Since $\mathbb{H}\left(x, y\right) = \begin{pmatrix} 2y & 2x + 2y - 1 \\ 2x + 2y - 1 & 2x \end{pmatrix}$, it is $\mathbb{H}\left(1, 1\right) = \begin{pmatrix} 2 & 3 \\ 3 & 2 \end{pmatrix}$ and so:

$D^2_{\mathbf{v},\mathbf{v}} f\left(1, 1\right) = \left(\cos\alpha\ \ \sin\alpha\right)\cdot\begin{pmatrix} 2 & 3 \\ 3 & 2 \end{pmatrix}\cdot\begin{pmatrix} \cos\alpha \\ \sin\alpha \end{pmatrix} = 2\cos^2\alpha + 6\cos\alpha\sin\alpha + 2\sin^2\alpha\,.$

Therefore $D^2_{\mathbf{v},\mathbf{v}} f\left(1, 1\right) = 2 + 6\cos\alpha\cdot\sin\alpha = 2\left(1 + 3\cos\alpha\cdot\sin\alpha\right)$.

Then, having to be $D_{\mathbf{v}} f\left(1, 1\right) = D^2_{\mathbf{v},\mathbf{v}} f\left(1, 1\right)$, it will be $\cos\alpha + \sin\alpha = 1 + 3\cos\alpha\cdot\sin\alpha$, and this equality is certainly verified at least for $\alpha = 0$ and for $\alpha = \frac{\pi}{2}$.
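The two directional derivatives can be approximated by finite differences as a sanity check (the step sizes `h` are my own choice):

```python
import math

def f(x, y):
    return x * x * y + x * y * y - x * y

def D1(a, h=1e-6):
    # first directional derivative at (1,1) along (cos a, sin a), central difference
    c, s = math.cos(a), math.sin(a)
    return (f(1 + h * c, 1 + h * s) - f(1 - h * c, 1 - h * s)) / (2 * h)

def D2(a, h=1e-4):
    # second directional derivative at (1,1), second-order central difference
    c, s = math.cos(a), math.sin(a)
    return (f(1 + h * c, 1 + h * s) - 2 * f(1, 1) + f(1 - h * c, 1 - h * s)) / (h * h)
```

At $\alpha = 0$ and $\alpha = \pi/2$ both quantities equal 2, matching the closed forms $2\cos\alpha + 2\sin\alpha$ and $2 + 6\cos\alpha\sin\alpha$.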

II M 2) Solve the problem: $\text{Max/min } f(x, y) = x^2 + y^2$ u.c. $\begin{cases} y \le 1 - x^2 \\ 1 - x \le y \end{cases}$.

We rewrite the problem as: $\text{Max/min } f(x, y) = x^2 + y^2$ u.c. $\begin{cases} x^2 + y - 1 \le 0 \\ 1 - x - y \le 0 \end{cases}$.

The objective function of the problem is a continuous function, the feasible region $T$ is a compact set, and therefore there are certainly maximum and minimum values. As can be seen from the figure, it is $f(x, y) \ge 0\ \forall\,(x, y) \in T$. Using the Kuhn-Tucker conditions, we form the Lagrangian function:


ABß Cß- -"ß #œB  C # # -"B  C  " #  -#"  B  C. 1) case -" œ !ß-# œ ! À

 

 

 

 

 

 

 

A A

wB wC

œ #B œ ! œ #C œ !

Ê

B œ ! C œ ! B  C  " Ÿ !

"  B  C Ÿ !

!  !  " Ÿ !

"  !  ! Ÿ ! À

# !à !

not satisfied

in fact   Â X.

2) Case $\lambda_1 \ne 0,\ \lambda_2 = 0$:

$\begin{cases} \Lambda'_x = 2x - 2\lambda_1 x = 2x\left(1 - \lambda_1\right) = 0 \\ \Lambda'_y = 2y - \lambda_1 = 0 \\ y = 1 - x^2 \\ 1 - x \le y \end{cases} \Rightarrow \begin{cases} x = 0 \\ y = 1 \\ \lambda_1 = 2 > 0 \\ 1 \le 1 \end{cases}$, so $\left(0, 1\right)$ is a possible Maximum point; or:

$\begin{cases} \lambda_1 = 1 \\ y = \frac{\lambda_1}{2} = \frac{1}{2} \\ x^2 = 1 - y = \frac{1}{2} \\ 1 - x \le y \end{cases} \Rightarrow \begin{cases} x = \frac{\sqrt{2}}{2} \\ y = \frac{1}{2} \\ \lambda_1 = 1 > 0 \\ 1 - \frac{\sqrt{2}}{2} \le \frac{1}{2}: \text{satisfied} \end{cases}$

so $\left(\frac{\sqrt{2}}{2}, \frac{1}{2}\right)$ is a possible Maximum point, while $\begin{cases} x = -\frac{\sqrt{2}}{2} \\ y = \frac{1}{2} \\ \lambda_1 = 1 \\ 1 + \frac{\sqrt{2}}{2} \le \frac{1}{2}: \text{not satisfied} \end{cases}$, in fact $\left(-\frac{\sqrt{2}}{2}, \frac{1}{2}\right) \notin T$.

3) Case $\lambda_1 = 0,\ \lambda_2 \ne 0$:

$\begin{cases} \Lambda'_x = 2x + \lambda_2 = 0 \\ \Lambda'_y = 2y + \lambda_2 = 0 \\ y = 1 - x \\ y \le 1 - x^2 \end{cases} \Rightarrow \begin{cases} x = -\frac{\lambda_2}{2} \\ y = -\frac{\lambda_2}{2} \\ x = y = \frac{1}{2} \\ \lambda_2 = -1 \end{cases}$, and the check $\frac{1}{2} \le 1 - \frac{1}{4}$ is true; from $\lambda_2 = -1 < 0$ it follows that the point $\left(\frac{1}{2}, \frac{1}{2}\right)$ is a possible Minimum point.

4) Case $\lambda_1 \ne 0,\ \lambda_2 \ne 0$:

$\begin{cases} \Lambda'_x = 2x - 2\lambda_1 x + \lambda_2 = 0 \\ \Lambda'_y = 2y - \lambda_1 + \lambda_2 = 0 \\ y = 1 - x^2 \\ y = 1 - x \end{cases} \Rightarrow x^2 = x \Rightarrow x = 0\ \text{or}\ x = 1\,.$

For $\begin{cases} x = 0 \\ y = 1 \end{cases}$: $\lambda_2 = 0$, $\lambda_1 = 2$, the point $\left(0, 1\right)$ already studied; and:

$\begin{cases} x = 1 \\ y = 0 \\ 2 - 2\lambda_1 + \lambda_2 = 0 \\ -\lambda_1 + \lambda_2 = 0 \end{cases} \Rightarrow \begin{cases} \lambda_1 = 2 > 0 \\ \lambda_2 = 2 > 0 \end{cases}$, so $\left(1, 0\right)$ is a possible Maximum point.

Let's study the objective function $f(x, y) = x^2 + y^2$ on the points of the first constraint $y = 1 - x^2$. Since:

$f\left(x, 1 - x^2\right) = x^2 + \left(1 - x^2\right)^2 = x^4 - x^2 + 1$, it is $f'\left(x, 1 - x^2\right) = 4x^3 - 2x = 2x\left(2x^2 - 1\right) \ge 0$ if $x \ge \frac{\sqrt{2}}{2}$ (in $T$ it is $0 \le x \le 1$). Therefore the point $\left(\frac{\sqrt{2}}{2}, \frac{1}{2}\right)$ is, relative to the boundary points only, a Minimum point, contradicting the previous indication $\lambda_1 = 1 > 0$ which indicated it as a possible maximum point. So $\left(\frac{\sqrt{2}}{2}, \frac{1}{2}\right)$ is neither a maximum nor a minimum point.

The same conclusion could be reached using the bordered Hessian matrix of the Lagrangian function $\Lambda\left(x, y, \lambda_1\right) = x^2 + y^2 - \lambda_1\left(x^2 + y - 1\right)$. It is, for $\lambda_1 = 1$:

$\overline{\mathbb{H}}\left(x, y\right) = \begin{pmatrix} 0 & 2x & 1 \\ 2x & 2 - 2\lambda_1 & 0 \\ 1 & 0 & 2 \end{pmatrix} \Rightarrow \left|\overline{\mathbb{H}}\left(\frac{\sqrt{2}}{2}, \frac{1}{2}\right)\right| = \begin{vmatrix} 0 & \sqrt{2} & 1 \\ \sqrt{2} & 0 & 0 \\ 1 & 0 & 2 \end{vmatrix} = -\sqrt{2}\cdot 2\sqrt{2} = -4 < 0\,,$

and this result shows us again $\left(\frac{\sqrt{2}}{2}, \frac{1}{2}\right)$ as a minimum point.

Now let's study the objective function $f(x, y) = x^2 + y^2$ on the points of the second constraint $y = 1 - x$. Since $f\left(x, 1 - x\right) = x^2 + \left(1 - x\right)^2 = 2x^2 - 2x + 1$, it is $f'\left(x, 1 - x\right) = 4x - 2 = 2\left(2x - 1\right) \ge 0$ if $x \ge \frac{1}{2}$ (in $T$ it is $0 \le x \le 1$). Therefore we have confirmation that the point $\left(\frac{1}{2}, \frac{1}{2}\right)$ is a minimum point, a conclusion already ensured by the Weierstrass Theorem, since the point $\left(\frac{1}{2}, \frac{1}{2}\right)$ was the only candidate found for a minimum point. Here too we could confirm this using the bordered Hessian matrix of the Lagrangian function $\Lambda\left(x, y, \lambda_2\right) = x^2 + y^2 - \lambda_2\left(1 - x - y\right)$. It is:

$\overline{\mathbb{H}}\left(x, y\right) = \begin{pmatrix} 0 & -1 & -1 \\ -1 & 2 & 0 \\ -1 & 0 & 2 \end{pmatrix} \Rightarrow \left|\overline{\mathbb{H}}\left(\frac{1}{2}, \frac{1}{2}\right)\right| = \begin{vmatrix} 0 & -1 & -1 \\ -1 & 2 & 0 \\ -1 & 0 & 2 \end{vmatrix} = -4 < 0\,,$

a result that shows us again $\left(\frac{1}{2}, \frac{1}{2}\right)$ as a minimum point.

So $\left(\frac{1}{2}, \frac{1}{2}\right)$ is the minimum point, with $f\left(\frac{1}{2}, \frac{1}{2}\right) = \frac{1}{2}$; $\left(0, 1\right)$ and $\left(1, 0\right)$ are maximum points, with $f\left(0, 1\right) = f\left(1, 0\right) = 1$.
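A brute-force scan of $f$ over a fine grid of the feasible region reproduces these extrema (the grid resolution `n` is an arbitrary choice of mine):

```python
# feasible region T = {(x, y): y <= 1 - x^2, 1 - x <= y}; inside T, 0 <= x, y <= 1
best_min = best_max = None
n = 400
for i in range(n + 1):
    x = i / n
    for j in range(n + 1):
        y = j / n
        if y <= 1 - x * x and 1 - x <= y:
            v = x * x + y * y
            if best_min is None or v < best_min[0]:
                best_min = (v, x, y)   # track smallest value and its point
            if best_max is None or v > best_max[0]:
                best_max = (v, x, y)   # track largest value
```

The scan finds the minimum value 1/2 at (1/2, 1/2) and the maximum value 1 (attained at the corners (0, 1) and (1, 0)).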

II M 3) Given the equation $f(x, y) = x^3 y + x y^3 - 2xy - 2x + 2y = 0$, satisfied at $\left(1, 1\right)$, verify that an implicit function $x \to y(x)$ can be defined with it and that such function has a stationary point. Then determine the nature of this stationary point.

Since $\nabla f\left(x, y\right) = \left(3x^2 y + y^3 - 2y - 2;\ x^3 + 3xy^2 - 2x + 2\right)$, it is $\nabla f\left(1, 1\right) = \left(0, 4\right)$. Since $f'_y\left(1, 1\right) = 4 \ne 0$, it is therefore possible to define an implicit function $x \to y(x)$, for which $y'(1) = -\dfrac{f'_x\left(1, 1\right)}{f'_y\left(1, 1\right)} = -\dfrac{0}{4} = 0$. So $x = 1$ is a stationary point for the implicit function.

From $\mathbb{H}\left(x, y\right) = \begin{pmatrix} 6xy & 3x^2 + 3y^2 - 2 \\ 3x^2 + 3y^2 - 2 & 6xy \end{pmatrix}$ we get $\mathbb{H}\left(1, 1\right) = \begin{pmatrix} 6 & 4 \\ 4 & 6 \end{pmatrix}$.

From $y'' = -\dfrac{f''_{xx} + 2 f''_{xy}\, y' + f''_{yy}\left(y'\right)^2}{f'_y}$ we get $y''(1) = -\dfrac{6 + 8\cdot 0 + 6\cdot 0}{4} = -\dfrac{3}{2} < 0\,.$

From $y'(1) = 0$ and $y''(1) = -\dfrac{3}{2} < 0$ we get that $x = 1$ is a maximum point for the implicit function.

II M 4) Given the vectors $\mathbf{x} = \left(xy,\ 2 - 3y\right)$ and $\mathbf{y} = \left(x + 4,\ xy\right)$, determine if pairs $\left(x, y\right)$ exist for which the scalar product of the two vectors $\mathbf{x}\cdot\mathbf{y} = f\left(x, y\right)$ is maximum or minimum.

$f\left(x, y\right) = \mathbf{x}\cdot\mathbf{y} = \left(xy,\ 2 - 3y\right)\cdot\left(x + 4,\ xy\right) = x^2 y + 4xy + 2xy - 3xy^2$. So $f\left(x, y\right) = x^2 y + 6xy - 3xy^2$. We apply the first-order conditions:

$\nabla f\left(x, y\right) = \mathbf{0} \Rightarrow \begin{cases} f'_x = 2xy + 6y - 3y^2 = y\left(2x + 6 - 3y\right) = 0 \\ f'_y = x^2 + 6x - 6xy = x\left(x + 6 - 6y\right) = 0 \end{cases}$

and so we get four possible solutions:

$\begin{cases} x = 0 \\ y = 0 \end{cases} \cup \begin{cases} x = 0 \\ y = 2 \end{cases} \cup \begin{cases} x = -6 \\ y = 0 \end{cases} \cup \begin{cases} x = 6y - 6 \\ 12y - 12 + 6 - 3y = 0 \end{cases} \Rightarrow \begin{cases} x = -2 \\ y = \frac{2}{3} \end{cases}\,.$

For the second-order conditions we construct the Hessian matrix:

$\mathbb{H}\left(x, y\right) = \begin{pmatrix} 2y & 2x + 6 - 6y \\ 2x + 6 - 6y & -6x \end{pmatrix}\,.$

Since $\mathbb{H}\left(0, 0\right) = \begin{pmatrix} 0 & 6 \\ 6 & 0 \end{pmatrix}$, it is $\left|\mathbb{H}_2\right| = -36 < 0$: $\left(0, 0\right)$ is a saddle point.

Since $\mathbb{H}\left(0, 2\right) = \begin{pmatrix} 4 & -6 \\ -6 & 0 \end{pmatrix}$, it is $\left|\mathbb{H}_2\right| = -36 < 0$: $\left(0, 2\right)$ is a saddle point.

Since $\mathbb{H}\left(-6, 0\right) = \begin{pmatrix} 0 & -6 \\ -6 & 36 \end{pmatrix}$, it is $\left|\mathbb{H}_2\right| = -36 < 0$: $\left(-6, 0\right)$ is a saddle point.

Since $\mathbb{H}\left(-2, \frac{2}{3}\right) = \begin{pmatrix} \frac{4}{3} & -2 \\ -2 & 12 \end{pmatrix}$, it is $\left|\mathbb{H}_1\right| = \frac{4}{3} > 0$ and $\left|\mathbb{H}_2\right| = 16 - 4 = 12 > 0$, so $\left(-2, \frac{2}{3}\right)$ is a minimum point.
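The four stationary points and their classification can be verified exactly with rational arithmetic; a sketch (helper names are mine):

```python
from fractions import Fraction as Fr

def grad(x, y):
    # gradient of f(x, y) = x^2 y + 6xy - 3xy^2
    return (2 * x * y + 6 * y - 3 * y * y, x * x + 6 * x - 6 * x * y)

def hessian(x, y):
    return ((2 * y, 2 * x + 6 - 6 * y),
            (2 * x + 6 - 6 * y, -6 * x))

def det2(H):
    return H[0][0] * H[1][1] - H[0][1] * H[1][0]

# the four stationary points found above, as exact rationals
points = [(Fr(0), Fr(0)), (Fr(0), Fr(2)), (Fr(-6), Fr(0)), (Fr(-2), Fr(2, 3))]
```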
