Commit 2aa75764 authored by Maciej Topyla

Update src/6_eigenvectors_QM.md

parent ffaa81c4
1 merge request: !23 Addressing #16
Pipeline #120532 passed
@@ -107,7 +107,9 @@
$$
!!! check "Example"
Let us illustrate how to compute eigenvalues and eigenvectors by considering an $n=2$ vector space.
Consider the following matrix
$$
A = \left( \begin{array}{cc} 1 & 2 \\ -1 & 4 \end{array} \right) \, ,
$$
@@ -115,17 +117,20 @@
$$
{\rm det}\left( A-\lambda\cdot I \right) = \left| \begin{array}{cc} 1-\lambda & 2 \\ -1 & 4-\lambda \end{array} \right| = (1-\lambda)(4-\lambda)+2 = \lambda^2 -5\lambda + 6=0 \, .
$$
This is a quadratic equation which we know how to solve exactly; the two eigenvalues are $\lambda_1=3$ and $\lambda_2=2$.
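As a quick cross-check of this result, here is a minimal sketch assuming NumPy is available; it confirms that the roots of the characteristic polynomial coincide with the eigenvalues returned by a direct numerical diagonalisation:

```python
import numpy as np

# The example matrix from above
A = np.array([[1, 2],
              [-1, 4]])

# Roots of the characteristic polynomial lambda^2 - 5*lambda + 6 = 0
print(np.roots([1, -5, 6]))      # [3. 2.] (up to ordering)

# Eigenvalues computed directly from the matrix (ordering may differ)
print(np.linalg.eigvals(A))
```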
Next, we can determine the associated eigenvectors $\vec{v}_1$ and $\vec{v}_2$. For the first one, the equation to solve is
$$
\left( \begin{array}{cc} 1 & 2 \\ -1 & 4 \end{array} \right)
\left( \begin{array}{c} v_{1,1} \\ v_{1,2} \end{array} \right)=\lambda_1
\left( \begin{array}{c} v_{1,1} \\ v_{1,2} \end{array} \right) = 3 \left( \begin{array}{c} v_{1,1} \\ v_{1,2} \end{array} \right)
$$
from which we find the condition that $v_{1,1}=v_{1,2}$.
An important property of eigenvalue equations is that the eigenvectors are only fixed up to an *overall normalisation*.
This should be clear from the definition: if a vector $\vec{v}$ satisfies $A\vec{v}=\lambda\vec{v}$,
then the vector $\vec{v}'=c\,\vec{v}$, with $c$ some constant, will also satisfy the same equation. We thus find that the eigenvalue $\lambda_1$ has an associated eigenvector
$$
\vec{v}_1 = \left( \begin{array}{c} 1 \\ 1 \end{array} \right) \, ,
$$
@@ -135,14 +140,14 @@
$$
A\vec{v}_1 = \left( \begin{array}{cc} 1 & 2 \\ -1 & 4 \end{array} \right)
\left( \begin{array}{c} 1 \\ 1 \end{array} \right) = \left( \begin{array}{c} 3 \\ 3 \end{array} \right) = 3 \vec{v}_1 \, ,
$$
as we intended to demonstrate.
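The same check can be done numerically; a small sketch assuming NumPy is available. Note that `np.linalg.eig` returns unit-normalised eigenvectors as columns, which differ from $(1,1)$ only by the overall constant discussed above:

```python
import numpy as np

A = np.array([[1, 2],
              [-1, 4]])
v1 = np.array([1, 1])

# Explicit check of the eigenvalue equation: A v1 = 3 v1
print(A @ v1)   # [3 3]

# eig returns the eigenvalues and unit-normalised eigenvectors (as columns)
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)
print(eigenvectors)
```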
!!! note "Exercise"
As an exercise, try to obtain the expression of the eigenvector
corresponding to the second eigenvalue $\lambda_2=2$.
## 6.2. Eigenvalue equations in quantum mechanics
We can now extend the ideas of eigenvalue equations from linear algebra to the case of quantum mechanics.
The starting point is the eigenvalue equation for the operator $\hat{A}$,
@@ -157,10 +162,10 @@
$$
\hat{A}|\psi_k\rangle = \lambda_{\psi_k}|\psi_k\rangle \, , \quad k = 1,\ldots, n \, .
$$
In order to determine the eigenvalues and eigenvectors of a given operator $\hat{A}$, we will have to solve the
corresponding eigenvalue problem for this operator, which we called above the *characteristic equation*.
This is most efficiently done in the matrix representation of this operator, where
the above operator equation can be expressed in terms of its components as
$$
\begin{pmatrix} A_{11} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix} \begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix}= \lambda_{\psi_k}\begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix} \, , \quad k=1,\ldots,n \, .
$$
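For finite $n$, this component form is exactly what numerical routines diagonalise. As an illustration, here is a minimal sketch assuming NumPy is available; the $3\times 3$ matrix below is an arbitrary example standing in for the components $A_{ij}$:

```python
import numpy as np

# Arbitrary 3x3 matrix standing in for the components A_ij of the operator
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# eig returns the eigenvalues lambda_k and the component vectors psi_{k,i}
# as the columns of psi; for Hermitian operators np.linalg.eigh is preferable
lam, psi = np.linalg.eig(A)

# Each column k satisfies the matrix eigenvalue equation A psi_k = lambda_k psi_k
for k in range(len(lam)):
    print(np.allclose(A @ psi[:, k], lam[k] * psi[:, k]))   # True
```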
@@ -171,13 +176,15 @@
$$
\begin{pmatrix} A_{11}- \lambda_{\psi_k} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi_k} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi_k} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix}
\begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix}=0 \, , \quad k=1,\ldots,n \, .
$$
!!! info "Cramer's rule"
This set of linear equations only has a non-trivial set of solutions provided that
the determinant of the matrix vanishes, as follows from the Cramer condition:
$$
{\rm det} \begin{pmatrix} A_{11}- \lambda_{\psi} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix}=
\left| \begin{array}{cccc}A_{11}- \lambda_{\psi} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi} & \ldots \\\vdots & \vdots & \vdots & \end{array} \right| = 0
$$
which in general has $n$ independent solutions, labelled $\lambda_{\psi,k}$.
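For small $n$, this determinant condition can also be written down and solved symbolically; a minimal sketch assuming SymPy is available, reusing the $2\times 2$ matrix from the example above:

```python
import sympy as sp

lam = sp.symbols('lamda')
A = sp.Matrix([[1, 2],
               [-1, 4]])

# Characteristic equation det(A - lambda*I) = 0
char_poly = (A - lam * sp.eye(2)).det()
print(sp.expand(char_poly))       # lamda**2 - 5*lamda + 6
print(sp.solve(char_poly, lam))   # [2, 3] -> the eigenvalues lambda_{psi,k}
```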
Once we have solved for the $n$ eigenvalues $\{ \lambda_{\psi,k} \}$, we can insert each
of them into the original eigenvalue equation and determine the components of each of the eigenvectors.