Lecture 11: Positive semidefinite matrix

Rajat Mittal, IIT Kanpur

In the last lecture a positive semidefinite matrix was defined as a symmetric matrix with non-negative eigenvalues. The original definition is that a matrix $M \in L(V)$ is positive semidefinite iff,

1. $M$ is symmetric,
2. $v^T M v \geq 0$ for all $v \in V$.

If the matrix is symmetric and $v^T M v > 0$ for all nonzero $v \in V$, then it is called positive definite. If a symmetric matrix satisfies the opposite strict inequality ($v^T M v < 0$ for all nonzero $v$), it is called negative definite. The two definitions of a positive semidefinite matrix turn out to be equivalent. In the next section we identify several equivalent characterizations of positive semidefinite matrices.

1 Equivalent definitions of positive semidefinite matrices

Theorem 1. For a symmetric $n \times n$ matrix $M \in L(V)$, the following are equivalent.

1. $v^T M v \geq 0$ for all $v \in V$.
2. All the eigenvalues of $M$ are non-negative.
3. There exists a matrix $B$ such that $B^T B = M$.
4. $M$ is the Gram matrix of vectors $u_1, \dots, u_n \in U$, where $U$ is some vector space; that is, $M_{i,j} = u_i^T u_j$ for all $i, j$.

Proof. $1 \Rightarrow 2$: Say $\lambda$ is an eigenvalue of $M$. Then there exists an eigenvector $v \neq 0$ such that $M v = \lambda v$. So
\[
0 \leq v^T M v = \lambda\, v^T v.
\]
Since $v^T v > 0$ for every nonzero $v$, it follows that $\lambda$ is non-negative.

$2 \Rightarrow 3$: Since the matrix $M$ is symmetric, it has a spectral decomposition
\[
M = \sum_i \lambda_i x_i x_i^T.
\]
Define $y_i = \sqrt{\lambda_i}\, x_i$. This definition is possible because the $\lambda_i$'s are non-negative. Then
\[
M = \sum_i y_i y_i^T.
\]
Define $B$ to be the matrix whose $i$-th row is $y_i^T$. Then it is clear that $B^T B = \sum_i y_i y_i^T = M$. From this construction, the rows of $B$ are orthogonal.

In general, any matrix of the form $B^T B$ is positive semidefinite; the matrix $B$ need not have orthogonal rows (it can even be rectangular). The representation is not unique, but for every positive semidefinite $M$ there always exists a $B$ with orthogonal rows such that $B^T B = M$. The decomposition becomes unique if we insist that $B$ itself is positive semidefinite; the positive semidefinite $B$ with $B^T B = B^2 = M$ is called the square root of $M$.

Exercise 1. Prove that the square root of a positive semidefinite matrix is unique.
Hint: Use the spectral decomposition of $M$ to construct one square root. Suppose $A$ is any square root of $M$. Then use the spectral decomposition of $A$ and show that the square root is unique (remember that the decomposition into eigenspaces is unique).

$3 \Rightarrow 4$: We are given a matrix $B$ such that $B^T B = M$. Say the columns of $B$ are $u_1, \dots, u_n$. Then, from the definition of matrix multiplication,
\[
M_{i,j} = u_i^T u_j \quad \text{for all } i, j,
\]
so $M$ is the Gram matrix of $u_1, \dots, u_n$.

Exercise 2. Show that for a positive semidefinite matrix $M \in L(V)$ there exist $v_1, \dots, v_n \in V$ such that $M$ is the Gram matrix of $v_1, \dots, v_n$.

$4 \Rightarrow 1$: Suppose $M$ is the Gram matrix of vectors $u_1, \dots, u_n$. Then
\[
x^T M x = \sum_{i,j} M_{i,j}\, x_i x_j = \sum_{i,j} x_i x_j \,(u_i^T u_j),
\]
where $x_i$ is the $i$-th entry of the vector $x$. Define $y = \sum_i x_i u_i$; then
\[
0 \leq y^T y = \sum_{i,j} x_i x_j\, (u_i^T u_j) = x^T M x.
\]
Hence $x^T M x \geq 0$ for all $x$.

Exercise 3. Prove that $2 \Rightarrow 1$ and $3 \Rightarrow 1$ directly.

Remark: A matrix $M$ of the form $M = \sum_i x_i x_i^T$ is positive semidefinite (Exercise: prove it), even if the $x_i$'s are not orthogonal to each other.

Remark: A matrix of the form $y x^T$ is a rank one matrix. It has rank one because all of its columns are scalar multiples of $y$. Conversely, every rank one matrix can be expressed in this form.

Exercise 4. A rank one matrix $y x^T$ is positive semidefinite iff $y$ is a positive scalar multiple of $x$.
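To make the four equivalent conditions of Theorem 1 concrete, here is a minimal numerical sketch (not part of the original notes; it assumes NumPy and a specific $2 \times 2$ example matrix chosen for illustration). It checks the eigenvalue condition, builds a matrix $B$ with $B^T B = M$ exactly as in the $2 \Rightarrow 3$ step, recovers $M$ as a Gram matrix, and samples the quadratic form.

```python
import numpy as np

# A small symmetric example matrix (it happens to be positive semidefinite).
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Condition 2: all eigenvalues are non-negative (eigh is for symmetric matrices).
eigvals, eigvecs = np.linalg.eigh(M)
assert np.all(eigvals >= -1e-12)

# Condition 3: following the 2 => 3 step, take the rows of B to be sqrt(lambda_i) * x_i^T.
B = np.diag(np.sqrt(eigvals)) @ eigvecs.T
assert np.allclose(B.T @ B, M)

# Condition 4: M is the Gram matrix of the columns u_1, ..., u_n of B, i.e. M_ij = u_i^T u_j.
u = [B[:, i] for i in range(B.shape[1])]
G = np.array([[ui @ uj for uj in u] for ui in u])
assert np.allclose(G, M)

# Condition 1: x^T M x >= 0 for randomly sampled vectors x.
for _ in range(1000):
    x = np.random.randn(2)
    assert x @ M @ x >= -1e-12
```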
2 Some examples

– The $n \times n$ identity matrix is positive semidefinite. It has rank $n$, all its eigenvalues are $1$, and every vector is an eigenvector. It is the only symmetric matrix whose eigenvalues are all $1$ (prove it).
– The all-ones matrix $J$ ($n \times n$) is a rank one positive semidefinite matrix. It has one eigenvalue equal to $n$ and the rest are zero.
– The matrix
\[
M = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}
\]
is positive semidefinite, because the quadratic form is $x^T M x = (x_1 - x_2)^2$, where $x_1, x_2$ are the two components of $x$.
– Suppose a symmetric matrix $M$ has maximum eigenvalue $\lambda$. Then the matrix $\lambda_0 I - M$, where $\lambda_0 \geq \lambda$, is positive semidefinite.

3 Composition of semidefinite matrices

– The direct sum matrix
\[
A \oplus B = \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}
\]
is positive semidefinite iff both $A$ and $B$ are positive semidefinite. This is most easily seen by looking at the quadratic form $x^T (A \oplus B) x$: divide $x$ into $x_1$ and $x_2$ of the required dimensions, then
\[
x^T (A \oplus B) x = x_1^T A x_1 + x_2^T B x_2.
\]
– The tensor product $A \otimes B$ is positive semidefinite iff $A$ and $B$ are both positive semidefinite or both are negative semidefinite. This follows from the fact that, given the eigenvalues $\lambda_1, \dots, \lambda_n$ of $A$ and $\mu_1, \dots, \mu_m$ of $B$, the eigenvalues of $A \otimes B$ are $\lambda_i \mu_j$ for all $i, j$.
– The sum of two positive semidefinite matrices is positive semidefinite.
– The product of two positive semidefinite matrices need not be positive semidefinite.

Exercise 5. Give an example of two positive semidefinite matrices whose product is not positive semidefinite.

– The Hadamard (entrywise) product $A \circ B$ of two positive semidefinite matrices $A$ and $B$ is also positive semidefinite. Since $A$ and $B$ are positive semidefinite, they are the Gram matrices of some vectors $u_1, \dots, u_n$ and $v_1, \dots, v_n$ respectively. The Hadamard product $A \circ B$ is then the Gram matrix of the vectors $u_i \otimes v_i$, and hence positive semidefinite.
– The inverse of a positive definite matrix is positive definite. The eigenvalues of the inverse are the inverses of the eigenvalues.
– The matrix $P^T M P$ is positive semidefinite for any matrix $P$ if $M$ is positive semidefinite.
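The closure properties listed above can be spot-checked numerically. The following is a small sketch (not from the original notes; it assumes NumPy and two example matrices $A$ and $B$ chosen only for illustration) that verifies the direct sum, tensor product, sum, and Hadamard product claims by inspecting eigenvalues.

```python
import numpy as np

def is_psd(M, tol=1e-10):
    """Check positive semidefiniteness of a symmetric matrix via its eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

# Two example positive semidefinite matrices.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = np.array([[1.0, 1.0],
              [1.0, 1.0]])
assert is_psd(A) and is_psd(B)

# Direct sum A ⊕ B: the block-diagonal matrix with blocks A and B.
direct_sum = np.block([[A, np.zeros_like(A)],
                       [np.zeros_like(B), B]])
assert is_psd(direct_sum)

# Tensor (Kronecker) product A ⊗ B: its eigenvalues are the products lambda_i * mu_j.
assert is_psd(np.kron(A, B))

# Sum and Hadamard (entrywise) product of PSD matrices are again PSD.
assert is_psd(A + B)
assert is_psd(A * B)   # entrywise product, i.e. A ∘ B
```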