Problem 11907
(American Mathematical Monthly, Vol.123, April 2016) Proposed by X.-Q. Chang (USA).
Let $A$ be an $n \times n$ positive-definite Hermitian matrix with minimum and maximum eigenvalues $\lambda$ and $\mu$, respectively. Prove that
$$\left(\frac{\operatorname{tr}(A)}{\mu n}+\frac{\mu n}{\operatorname{tr}(A)}\right)^{n}\le\det\left(\frac{A}{\mu}+\mu A^{-1}\right),$$
$$\left(\frac{\lambda\operatorname{tr}(A^{-1})}{n}+\frac{n}{\lambda\operatorname{tr}(A^{-1})}\right)^{n}\le\det\left(\frac{A}{\lambda}+\lambda A^{-1}\right).$$
Solution proposed by Moubinool Omarjee, Lycée Henri IV, Paris, France, and Roberto Tauraso, Dipartimento di Matematica, Università di Roma “Tor Vergata”, Italy.
Any Hermitian matrix can be diagonalized by a unitary matrix: there exists a unitary matrix $U$ such that $A = UDU^{-1}$, where $D$ is a diagonal matrix with
$$\lambda=\lambda_1\le\lambda_2\le\cdots\le\lambda_n=\mu$$
on the main diagonal. Note that $\lambda > 0$ because $A$ is positive-definite. Let $t > 0$; then
$$\det\left(\frac{A}{t}+tA^{-1}\right)=\det\left(U\left(\frac{D}{t}+tD^{-1}\right)U^{-1}\right)=\det\left(\frac{D}{t}+tD^{-1}\right)=\prod_{k=1}^{n}\left(\frac{\lambda_k}{t}+\frac{t}{\lambda_k}\right).$$
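The determinant identity above is easy to verify numerically. A minimal sketch, assuming NumPy is available; the matrix is an arbitrary random example, not one from the problem:

```python
# Check det(A/t + t A^{-1}) = prod(lam_k/t + t/lam_k) on a random example.
import numpy as np

rng = np.random.default_rng(0)
n, t = 4, 1.5

# Build a random positive-definite Hermitian matrix A = B B* + I.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B @ B.conj().T + np.eye(n)

lam = np.linalg.eigvalsh(A)          # real, positive eigenvalues of A
lhs = np.linalg.det(A / t + t * np.linalg.inv(A)).real
rhs = np.prod(lam / t + t / lam)

print(np.isclose(lhs, rhs))
```

Since $A/t + tA^{-1} = U(D/t + tD^{-1})U^{-1}$, both sides agree up to floating-point error.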
Let $f(x)=\ln\left(\frac{x}{t}+\frac{t}{x}\right)$. By inspecting the sign of the second derivative
$$f''(x)=\frac{t^4+4t^2x^2-x^4}{x^2(x^2+t^2)^2},$$
it is easy to see that $f$ is convex on $\left(0,\,t\sqrt{2+\sqrt{5}}\,\right]$. Therefore, if $\mu\le t\sqrt{2+\sqrt{5}}$, then every $\lambda_k$ lies in this interval, and by Jensen's inequality
$$\frac{1}{n}\ln\det\left(\frac{A}{t}+tA^{-1}\right)=\frac{1}{n}\sum_{k=1}^{n}f(\lambda_k)\ge f\!\left(\frac{1}{n}\sum_{k=1}^{n}\lambda_k\right)=f\!\left(\frac{\operatorname{tr}(A)}{n}\right)=\ln\left(\frac{\operatorname{tr}(A)}{tn}+\frac{tn}{\operatorname{tr}(A)}\right),$$
which implies that
$$\left(\frac{\operatorname{tr}(A)}{tn}+\frac{tn}{\operatorname{tr}(A)}\right)^{n}\le\det\left(\frac{A}{t}+tA^{-1}\right).$$
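The convexity threshold $t\sqrt{2+\sqrt{5}}$ can also be checked numerically: the numerator of $f''$ should vanish exactly there and change sign across it. A quick sketch, assuming NumPy:

```python
# For t = 1, the numerator t^4 + 4 t^2 x^2 - x^4 of f''(x) vanishes at
# x = sqrt(2 + sqrt(5)) and changes sign from positive to negative.
import numpy as np

t = 1.0
x_star = np.sqrt(2.0 + np.sqrt(5.0))      # ~ 2.058, the convexity boundary

def numerator(x):                          # numerator of f''(x)
    return t**4 + 4 * t**2 * x**2 - x**4

print(np.isclose(numerator(x_star), 0.0))  # zero at the boundary
print(numerator(0.9 * x_star) > 0)         # f convex just below it
print(numerator(1.1 * x_star) < 0)         # f concave just above it
```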
Hence the first inequality follows by taking $t=\mu$ (notice that $\sqrt{2+\sqrt{5}}>2$). Moreover, the second inequality can be obtained from the first one by replacing $A$ with $A^{-1}$ and $\mu$ with $1/\lambda$.
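As a final sanity check (an illustration, not a proof), both inequalities can be tested on a random positive-definite Hermitian matrix, assuming NumPy is available:

```python
# Spot-check both inequalities of the problem on a random example.
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B @ B.conj().T + np.eye(n)             # positive-definite Hermitian
Ainv = np.linalg.inv(A)

eig = np.linalg.eigvalsh(A)                # sorted ascending
lam, mu = eig[0], eig[-1]                  # minimum and maximum eigenvalues
trA, trAinv = np.trace(A).real, np.trace(Ainv).real

# First inequality: (tr(A)/(mu n) + mu n/tr(A))^n <= det(A/mu + mu A^{-1})
lhs1 = (trA / (mu * n) + mu * n / trA) ** n
rhs1 = np.linalg.det(A / mu + mu * Ainv).real

# Second: (lam tr(A^{-1})/n + n/(lam tr(A^{-1})))^n <= det(A/lam + lam A^{-1})
lhs2 = (lam * trAinv / n + n / (lam * trAinv)) ** n
rhs2 = np.linalg.det(A / lam + lam * Ainv).real

print(lhs1 <= rhs1, lhs2 <= rhs2)
```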