title: Eigenvalues and eigenvectors
---
# 6. Eigenvalues and eigenvectors
The lecture on eigenvalues and eigenvectors consists of the following parts:

- [6.1. Eigenvalue equations in linear algebra](#61-eigenvalue-equations-in-linear-algebra)
- [6.2. Eigenvalue equations in quantum mechanics](#62-eigenvalue-equations-in-quantum-mechanics)

and at the end of the lecture notes, there is a set of corresponding exercises:

- [6.3. Problems](#63-problems)

***
The contents of this lecture are summarised in the following **video**:
- [Eigenvalues and eigenvectors](https://www.dropbox.com/s/n6hb5cu2iy8i8x4/linear_algebra_09.mov?dl=0)
*The total length of the video: ~3 minutes 30 seconds*
***
In the previous lecture, we discussed a number of *operator equations*, which have the form
$$
\hat{A}|\psi\rangle=|\varphi\rangle \, ,
$$
where $|\psi\rangle$ and $|\varphi\rangle$ are state vectors
belonging to the Hilbert space of the system $\mathcal{H}$.
!!! info "Eigenvalue equation"
    A specific class of operator equations, which appear frequently in quantum mechanics, consists of equations of the form
    $$
    \hat{A}|\psi\rangle= \lambda_{\psi}|\psi\rangle \, ,
    $$
    where $\lambda_{\psi}$ is a scalar (in general complex). These are equations where the action of the operator $\hat{A}$
    on the state vector $|\psi\rangle$ returns *the same state vector* multiplied by the scalar $\lambda_{\psi}$.
    Such operator equations are known as *eigenvalue equations* and are of great importance for the description of quantum systems.

In this lecture, we present the main ingredients of these equations and how we can apply them to quantum systems.
## 6.1. Eigenvalue equations in linear algebra
First of all, let us review eigenvalue equations in linear algebra. Assume that we have a (square) matrix $A$ with dimensions $n\times n$ and $\vec{v}$ is a column vector in $n$ dimensions. The corresponding eigenvalue equation will be of the form
$$
A \vec{v} =\lambda \vec{v} \, ,
$$
with $\lambda$ being a scalar number (real or complex, depending on the type
of vector space). We can express the previous equation in terms of its components,
assuming as usual some specific choice of basis, by using
the rules of matrix multiplication:
!!! info "Eigenvalue equation: Eigenvalue and Eigenvector"
    $$
    \sum_{j=1}^n A_{ij} v_j = \lambda v_i \, .
    $$
    The scalar $\lambda$ is known as the *eigenvalue* of the equation, while the vector $\vec{v}$ is known as the associated *eigenvector*.
    The key feature of such equations is that applying a matrix $A$ to the vector $\vec{v}$ returns *the original vector* up to an overall rescaling, $\lambda \vec{v}$.

!!! warning "Number of solutions"
    In general, there will be multiple solutions to the eigenvalue equation $A \vec{v} =\lambda \vec{v}$, each one characterised by a specific eigenvalue and its eigenvector. Note that in some cases one has *degenerate solutions*, whereby two or more eigenvectors share the same eigenvalue.
!!! tip "Characteristic equation:"
In order to determine the eigenvalues of the matrix $A$, we need to evaluate the solutions of the so-called *characteristic equation*
of the matrix $A$, defined as
$$
{\rm det}\left( A-\lambda \mathbb{I} \right)=0 \, ,
$$
where $\mathbb{I}$ is the identity matrix of dimensions $n\times n$, and ${\rm det}$ is the determinant.
This relation follows from the eigenvalue equation in terms of components
$$
\sum_{j=1}^n A_{ij} v_j = \lambda v_i \, ,\quad \to \quad \sum_{j=1}^n A_{ij} v_j - \sum_{j=1}^n\lambda \delta_{ij} v_j =0 \, ,\quad \to \quad \sum_{j=1}^n\left( A_{ij} - \lambda \delta_{ij}\right) v_j =0 \, .
\begin{align}
\sum_{j=1}^n A_{ij} v_j &= \lambda v_i \, , \\
\to \quad \sum_{j=1}^n A_{ij} v_j - \sum_{j=1}^n\lambda \delta_{ij} v_j &=0 \, ,\\
\to \quad \sum_{j=1}^n\left( A_{ij} - \lambda \delta_{ij}\right) v_j &=0 \, .
\end{align}
$$
Therefore the eigenvalue condition can be written as a set of coupled linear equations
Therefore, the eigenvalue condition can be written as a set of coupled linear equations
$$
\sum_{j=1}^n\left( A_{ij} - \lambda \delta_{ij}\right) v_j =0 \, , \qquad i=1,2,\ldots,n\, ,
$$
which only admit non-trivial solutions if the determinant of the matrix $A-\lambda\mathbb{I}$ vanishes
(the so-called Cramer condition), thus leading to the characteristic equation.
(the so-called Cramer's condition), thus leading to the characteristic equation.
Once we have solved the characteristic equation, we end up with $n$ eigenvalues $\lambda_k$, $k=1,\ldots,n$.
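
In practice, beyond small matrices one solves the characteristic equation numerically. The following minimal sketch (assuming Python with NumPy, which these notes do not otherwise rely on; the matrix is an arbitrary illustration) builds the characteristic polynomial of a small matrix and recovers the eigenvalues as its roots:

```python
import numpy as np

# A small symmetric matrix used purely for illustration (not from the lecture).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly returns the coefficients of the characteristic polynomial
# det(lambda*I - A), highest power first: here lambda^2 - 4*lambda + 3.
coeffs = np.poly(A)

# The eigenvalues are the roots of the characteristic polynomial ...
print(np.roots(coeffs))         # approximately [3., 1.]

# ... and agree with the dedicated eigenvalue routine.
print(np.linalg.eigvals(A))     # approximately [3., 1.]
```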
The determinant of a matrix depends on the dimensionality $n$ of the vector space. For a matrix belonging to a vector space in $n=2$ dimensions, it is given by
$$
{\rm det}\left( A \right) = \left| \begin{array}{cc} A_{11} & A_{12} \\ A_{21} & A_{22} \end{array} \right|
= A_{11}A_{22} - A_{12}A_{21} \, ,
$$
while the corresponding expression for a matrix belonging to a vector space in $n=3$ dimensions is given, in terms of the previous expression, as
$$
{\rm det}\left( A \right) = \left| \begin{array}{ccc} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22}
& A_{23} \\ A_{31} & A_{32}
& A_{33} \end{array} \right| =
\begin{array}{c}
+ A_{11} \left| \begin{array}{cc} A_{22} & A_{23} \\ A_{32} & A_{33} \end{array} \right| \\
- A_{12} \left| \begin{array}{cc} A_{21} & A_{23} \\ A_{31} & A_{33} \end{array} \right| \\
+ A_{13} \left| \begin{array}{cc} A_{21} & A_{22} \\ A_{31} & A_{32} \end{array} \right|
\end{array}
$$
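
To make the cofactor expansion concrete, here is a short sketch (again assuming NumPy; the helper names are our own) that expands a $3\times 3$ determinant in $2\times 2$ minors along the first row and checks the result against the library routine:

```python
import numpy as np

def det2(m):
    # 2x2 determinant: A11*A22 - A12*A21.
    return m[0, 0] * m[1, 1] - m[0, 1] * m[1, 0]

def det3(a):
    # Cofactor expansion along the first row, with alternating signs +, -, +,
    # exactly as in the displayed formula.
    return (+ a[0, 0] * det2(a[np.ix_([1, 2], [1, 2])])
            - a[0, 1] * det2(a[np.ix_([1, 2], [0, 2])])
            + a[0, 2] * det2(a[np.ix_([1, 2], [0, 1])]))

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

print(det3(A))           # -3.0
print(np.linalg.det(A))  # approximately -3.0
```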
!!! check "Example"
    Let us illustrate how to compute eigenvalues and eigenvectors by considering an $n=2$ vector space.
    Consider the following matrix
    $$
    A = \left( \begin{array}{cc} 1 & 2 \\ -1 & 4 \end{array} \right) \, ,
    $$
    which has the following associated characteristic equation
    $$
    {\rm det}\left( A-\lambda \mathbb{I} \right) = \left| \begin{array}{cc} 1-\lambda & 2 \\ -1 & 4-\lambda \end{array} \right| = (1-\lambda)(4-\lambda)+2 = \lambda^2 -5\lambda + 6=0 \, .
    $$
    This is a quadratic equation, which we know how to solve exactly; the two eigenvalues are $\lambda_1=3$ and $\lambda_2=2$.
    Next, we can determine the associated eigenvectors $\vec{v}_1$ and $\vec{v}_2$. For the first one, the equation to solve is
    $$
    \left( \begin{array}{cc} 1 & 2 \\ -1 & 4 \end{array} \right)
    \left( \begin{array}{c} v_{1,1} \\ v_{1,2} \end{array} \right)=\lambda_1
    \left( \begin{array}{c} v_{1,1} \\ v_{1,2} \end{array} \right) = 3 \left( \begin{array}{c} v_{1,1} \\ v_{1,2} \end{array} \right) \, ,
    $$
    from which we find the condition $v_{1,1}=v_{1,2}$.
    An important property of eigenvalue equations is that the eigenvectors are only fixed up to an *overall normalisation constant*.
    This should be clear from the definition: if a vector $\vec{v}$ satisfies $A\vec{v}=\lambda\vec{v}$,
    then the vector $\vec{v}'=c \vec{v}$, with $c$ some constant, will also satisfy the same equation. We thus find that the eigenvalue $\lambda_1$ has an associated eigenvector
    $$
    \vec{v}_1 = \left( \begin{array}{c} 1 \\ 1 \end{array} \right) \, ,
    $$
    and indeed one can check that
    $$
    A\vec{v}_1 = \left( \begin{array}{cc} 1 & 2 \\ -1 & 4 \end{array} \right)
    \left( \begin{array}{c} 1 \\ 1 \end{array} \right) = \left( \begin{array}{c} 3 \\ 3 \end{array} \right)=
    3 \vec{v}_1 \, ,
    $$
    as we intended to demonstrate.

!!! note "Exercise"
    As an exercise, try to obtain the expression of the eigenvector corresponding to the second eigenvalue $\lambda_2=2$.
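
Readers who wish to cross-check the example (and the exercise above) numerically can do so with a few lines of NumPy; this is a verification sketch, not part of the derivation itself:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [-1.0, 4.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose *columns*
# are the corresponding eigenvectors, normalised to unit length.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # approximately [3., 2.], possibly in a different order

# Each column v indeed satisfies A v = lambda v; for lambda = 3 the column is
# proportional to (1, 1), illustrating that eigenvectors are fixed only up to
# an overall constant.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(lam, v, np.allclose(A @ v, lam * v))  # True for each pair
```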
## 6.2. Eigenvalue equations in quantum mechanics
We can now extend the ideas of eigenvalue equations from linear algebra to the case of quantum mechanics.
The starting point is the eigenvalue equation for the operator $\hat{A}$,
$$
\hat{A}|\psi_k\rangle= \lambda_{\psi_k}|\psi_k\rangle \, , \quad k =1,\ldots, n \, .
$$
In order to determine the eigenvalues and eigenvectors of a given operator $\hat{A}$, we will have to solve the
corresponding eigenvalue problem for this operator, which we called above the *characteristic equation*.
This is most efficiently done in the matrix representation of this operator, where the above operator equation can be expressed in terms of its components as
$$
\begin{pmatrix} A_{11} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix} \begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix}= \lambda_{\psi_k}\begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix} \, .
$$
As discussed above, this condition is identical to solving a set of linear equations
of the form
$$
\begin{pmatrix} A_{11}- \lambda_{\psi_k} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi_k} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi_k} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix}
\begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix}=0 \, .
$$
!!! info "Cramer's rule"
This set of linear equations only has a non-trivial set of solutions provided that
the determinant of the matrix vanishes, as follows from the Cramer's condition:
$$
{\rm det} \begin{pmatrix} A_{11}- \lambda_{\psi} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix}=
\left| \begin{array}{cccc}A_{11}- \lambda_{\psi} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi} & \ldots \\\vdots & \vdots & \vdots & \end{array} \right| = 0
$$
which in general will have $n$ independent solutions, which we label as $\lambda_{\psi,k}$.
Once we have solved for the $n$ eigenvalues $\{ \lambda_{\psi,k} \}$, we can insert each
of them in the original eigenvalue equation and determine the components of each of the eigenvectors,
$$
|\psi_1\rangle = \begin{pmatrix} \psi_{1,1} \\ \psi_{1,2} \\ \psi_{1,3} \\ \vdots \end{pmatrix} \,, \quad
|\psi_2\rangle = \begin{pmatrix} \psi_{2,1} \\ \psi_{2,2} \\ \psi_{2,3} \\ \vdots \end{pmatrix} \,, \quad \ldots \, , |\psi_n\rangle = \begin{pmatrix} \psi_{n,1} \\ \psi_{n,2} \\ \psi_{n,3} \\ \vdots \end{pmatrix} \, .
$$
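
The "insert each eigenvalue back in" step can also be sketched numerically (assuming NumPy; the helper below is our own and extracts the null space of $A-\lambda\mathbb{I}$ from a singular value decomposition):

```python
import numpy as np

def eigenvector_for(A, lam):
    # The eigenvector components solve (A - lam*I) psi = 0, i.e. psi spans the
    # null space of A - lam*I. The right-singular vector belonging to the
    # (numerically) zero singular value gives a unit-norm solution.
    _, _, vh = np.linalg.svd(A - lam * np.eye(A.shape[0]))
    return vh[-1].conj()

A = np.array([[1.0, 2.0],
              [-1.0, 4.0]])

for lam in np.linalg.eigvals(A):
    psi = eigenvector_for(A, lam)
    print(lam, psi, np.allclose(A @ psi, lam * psi))  # True for each eigenvalue
```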
!!! tip "Orthogonality of eigenvectors"
An important property of eigenvalue equations is that if you have two eigenvectors
$ |\psi_i\rangle$ and $ |\psi_j\rangle$ that have associated *different* eigenvalues,
$\lambda_{\psi_i} \ne \lambda_{\psi_j} $, then these two eigenvectors are orthogonal to each
other, that is
$$
\langle \psi_j | \psi_i\rangle =0 \, \quad {\rm for} \quad {i \ne j} \, .
$$
This property is extremely important, since it suggest that we could use the eigenvectors
of an eigenvalue equation as a *set of basis elements* for this Hilbert space.
Recall from the discussions of eigenvalue equations in linear algebra that
the eigenvectors $|\psi_i\rangle$ are defined *up to an overall normalisation constant*. Clearly, if $|\psi_i\rangle$ is a solution of $\hat{A}|\psi_i\rangle = \lambda_{\psi_i}|\psi_i\rangle$
then $c|\psi_i\rangle$ will also be a solution, with $c$ being a constant. In the context of quantum mechanics, we need to choose this overall rescaling constant to ensure that the eigenvectors are normalised, that is, they satisfy
$$
\langle \psi_i | \psi_i\rangle = 1 \, \quad {\rm for~all}~i \, .
$$
With such a choice of normalisation, one says that the set of eigenvectors
is *orthonormal*.
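
The following minimal check (assuming NumPy; the Hermitian matrix is an arbitrary example with distinct eigenvalues, not taken from the lecture) illustrates both statements at once: the normalised eigenvectors satisfy $\langle \psi_j | \psi_i\rangle = \delta_{ij}$.

```python
import numpy as np

# An arbitrary Hermitian matrix with distinct eigenvalues (1 and 4).
H = np.array([[2.0, 1.0 + 1.0j],
              [1.0 - 1.0j, 3.0]])

# np.linalg.eigh is the eigensolver for Hermitian matrices; the columns of
# `vecs` are eigenvectors normalised to <psi_i|psi_i> = 1.
vals, vecs = np.linalg.eigh(H)

# The Gram matrix of inner products <psi_j|psi_i> is the identity.
gram = vecs.conj().T @ vecs
print(np.allclose(gram, np.eye(2)))  # True: the eigenvectors are orthonormal
```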
!!! tip "Eigenvalue spectrum and degeneracy"
The set of all eigenvalues of an operator is called the *eigenvalue spectrum* of an operator. Note that different eigenvectors can also have the same eigenvalue. If this is the case the eigenvalue is said to be *degenerate*.
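
As a tiny illustration (assuming NumPy), the diagonal matrix below has the degenerate eigenvalue $1$, shared by two linearly independent eigenvectors:

```python
import numpy as np

A = np.diag([1.0, 1.0, 2.0])
print(np.linalg.eigvals(A))  # [1., 1., 2.]: the eigenvalue 1 is doubly degenerate
```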
***
## 6.3. Problems

1. *Eigenvalues and eigenvectors I*
    Find the characteristic polynomial and eigenvalues for each of the following matrices:
    $$A=\begin{pmatrix} 5&3\\2&10 \end{pmatrix}\, \quad
    B=\begin{pmatrix} 7i&-1\\2&6i \end{pmatrix} \, \quad C=\begin{pmatrix} 2&0&-1\\0&3&1\\1&0&4 \end{pmatrix}$$
2. *Hamiltonian*
    The Hamiltonian for a two-state system is given by
    $$H=\begin{pmatrix} \omega_1&\omega_2\\ \omega_2&\omega_1\end{pmatrix} \, .$$
    A basis for this system is
    $$|{0}\rangle=\begin{pmatrix}1\\0 \end{pmatrix}\, ,\quad|{1}\rangle=\begin{pmatrix}0\\1 \end{pmatrix} \, .$$
    Find the eigenvalues and eigenvectors of the Hamiltonian $H$, and express the eigenvectors in terms of $\{|0 \rangle,|1\rangle \}$.
3. *Eigenvalues and eigenvectors II*
    Find the eigenvalues and eigenvectors of the matrices
    $$A=\begin{pmatrix} -2&-1&-1\\6&3&2\\0&0&1 \end{pmatrix}\, \quad B=\begin{pmatrix} 1&1&2\\2&2&2\\-1&-1&-1 \end{pmatrix} \, .$$
4. *The Hadamard gate*
    In one of the problems of the previous section, we discussed that an important operator used in quantum computation is the *Hadamard gate*, which is represented by the matrix
    $$\hat{H}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix} \, .$$
    Determine the eigenvalues and eigenvectors of this operator.
5. *Hermitian matrix*
    Show that the Hermitian matrix
    $$\begin{pmatrix} 0&0&i\\0&1&0\\-i&0&0 \end{pmatrix}$$
    has only two distinct real eigenvalues, and find an orthonormal set of three eigenvectors.
6. *Orthogonality of eigenvectors*
    Confirm, by explicit calculation, that the eigenvalues of the real, symmetric matrix
    $$\begin{pmatrix} 2&1&2\\1&2&2\\2&2&1 \end{pmatrix}$$
    are real, and its eigenvectors are orthogonal.