**`.gitignore`**

```
*~
.ipynb_checkpoints
site
docs
*.pyc
__pycache__
```
**`.gitlab-ci.yml`**

```diff
@@ -7,9 +7,8 @@ stages:
 build lectures:
   stage: build
   before_script:
-    - pip install -U mkdocs mkdocs-material python-markdown-math notedown
+    - pip install -U mkdocs mkdocs-material python-markdown-math
   script:
-    - python execute.py
     - mkdocs build
   artifacts:
     paths:
@@ -37,10 +36,10 @@ build lectures:
     ## Create the SSH directory and give it the right permissions
     - mkdir -p ~/.ssh
     - chmod 700 ~/.ssh
-    - ssh-keyscan tnw-tn1.tudelft.net >> ~/.ssh/known_hosts
+    - ssh-keyscan qt4.tudelft.net >> ~/.ssh/known_hosts
     - chmod 644 ~/.ssh/known_hosts
   script:
-    - "rsync -rv site/* mathforquantum@tnw-tn1.tudelft.net:$DEPLOY_PATH"
+    - "rsync -rv site/* uploader@qt4.tudelft.net:$DEPLOY_PATH"

 deploy master version:
   <<: *prepare_deploy
@@ -72,7 +71,7 @@ undeploy test version:
     DEPLOY_PATH: "test_builds/$CI_COMMIT_REF_NAME"
   script:
     - mkdir empty/
-    - "rsync -rlv --delete empty/ mathforquantum@tnw-tn1.tudelft.net:$DEPLOY_PATH"
+    - "rsync -rlv --delete empty/ uploader@qt4.tudelft.net:$DEPLOY_PATH"
   environment:
     name: $CI_COMMIT_REF_NAME
     action: stop
```
**`README.md`**

```diff
-# mathforquantum
-Lecture notes and teaching material used for the Delft University of Technology course awesome course.
+# Mathematics for Quantum Physics
+Lecture notes and teaching material used for the Delft University of Technology course TN3105.
 The compiled materials are available at https://mathforquantum.quantumtinkerer.tudelft.nl
@@ -8,3 +7,41 @@ The compiled materials are available at https://mathforquantum.quantumtinkerer.tudelft.nl
```
This repository is based on a template for publishing lecture notes, developed
by Anton Akhmerov, who also hosts such repositories for other courses.
# Version
This is a minimal stable version of the website from the branch "enabling search", based on mkdocs-material, with a functioning website-wide search (without support for inline Jupyter notebook conversion by Thebe).
# HOWTOs
## How to add new material to the lecture notes
1. First, create a new merge request. In this way, your edits
will be pushed to a separate folder, and will not directly appear on the website.
Detailed information on how to create a merge request can be found
[here](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html), but in most cases these two simple steps are sufficient:
    - create a new branch in the repository on GitLab (either using the GitLab UI, or on the command line followed by a push to GitLab)
    - at the top of the GitLab page you will see a blue "Create merge request" button associated with your new branch. Fill out the information, and don't forget to
start the title of the merge request with "Draft:"
2. Write the new material using [markdown](https://en.wikipedia.org/wiki/Markdown#Example). The markdown files are stored in the `src` folder and have the
ending `.md`. In particular, in markdown you can
- write math using latex syntax. `$...$` is used for math in the text,
`$$...$$` for separate equations.
- highlight certain blocks using the `!!!` syntax. For example, use
```
!!! check "Example: optional title"
The text of the example (could have math in it
$f(x)$), which must be indented by 4 spaces
```
Other useful blocks are `!!! warning` and `!!! info`.
3. Place figures in `docs/figures`
4. If you added a new markdown file that should be linked in the index, you need
to add it to `mkdocs.yml` under the `nav:` entry.
5. Whenever you push a commit to the branch/merge request, it will automatically be deployed to a preview webpage. This process may take a few minutes. You can find the preview webpage by going to your merge request: at the top there will be a box labelled "Pipeline", and in that box a button "View app". Clicking "View app" will bring you to the preview webpage.
6. When you are done with the merge request, remove "Draft:" from the title, and notify an instructor.
---
title: Complex Numbers
---
# 1. Complex Numbers
The lecture on complex numbers consists of three parts, each with its own video:
- [1.1. Definition and basic operations](#11-definition-and-basic-operations)
- [1.2. Complex functions](#12-complex-functions)
- [1.3. Differentiation and integration](#13-differentiation-and-integration)
**Total video length: 38 minutes and 53 seconds**
## 1.1. Definition and basic operations
<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/fLMdaMuEp8s?rel=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
### Complex numbers
!!! info "Definition I"
Complex numbers are numbers of the form $$z = a + b {\rm i}.$$
Here $\rm i$ is the square root of -1: $${\rm i} = \sqrt{-1},$$
or equivalently: $${\rm i}^2 = -1.$$
Usual operations on numbers have their natural extension for complex
numbers, as we shall see below.
Some useful definitions:
!!! info "Definition II"
For a complex number $z = a + b {{\rm i}}$, $a$ is called the *real part*, and $b$ the *imaginary part*.
!!! info "Complex conjugate"
The *complex conjugate* $z^*$ of $z = a + b {{\rm i}}$ is defined as
$$z^* = a - b{{\rm i}},$$
i.e., taking the complex conjugate means flipping the sign of the imaginary part.
### Addition
!!! info "Addition"
For two complex numbers, $z_1 = a_1 + b_1 {{\rm i}}$ and $z_2 = a_2 + b_2 {{\rm i}}$,
the sum $w = z_1 + z_2$ is given as
$$w = w_1 + w_2 {{\rm i}}= (a_1 + a_2) + (b_1 + b_2) {{\rm i}}$$
where the parentheses in the rightmost expression have been added to group the real and the imaginary part. A consequence of this definition is that the sum of a complex number and its complex conjugate is real:
$$z + z^* = a + b {{\rm i}}+ a - b {{\rm i}}= 2a,$$ i.e., this results in twice the real part of $z$.
Similarly, subtracting $z^*$ from $z$ yields $$z - z^* = a + b {{\rm i}} - a + b {{\rm i}}= 2b{\rm i},$$ i.e., twice the imaginary part of $z$ (times $\rm i$).
### Multiplication
!!! info "Multiplication"
For the same two complex numbers $z_1$ and $z_2$ as above, their product is calculated as
$$w = z_1 z_2 = (a_1 + b_1 {{\rm i}}) (a_2 + b_2 {{\rm i}}) = (a_1 a_2 - b_1 b_2) + (a_1 b_2 + a_2 b_1) {{\rm i}},$$
where the parentheses have again been used to indicate the real and imaginary parts.
A consequence of this definition is that the product of a complex number
$z = a + b {{\rm i}}$ with its conjugate is real:
$$z z^* = (a+b{{\rm i}})(a-b{{\rm i}}) = a^2 + b^2.$$
The square root of this number is called the *norm* $|z|$ of $z$:
$$|z| = \sqrt{z z^*} = \sqrt{a^2 + b^2}.$$
### Division
The quotient $z_1/z_2$ of two complex numbers $z_1$ and $z_2$ defined above can be evaluated by multiplying the numerator and denominator by the complex conjugate of $z_2$:
!!! info "Division"
$$\frac{z_1}{z_2} = \frac{z_1 z_2^*}{z_2 z_2^*} = \frac{(a_1 a_2 + b_1 b_2) + (-a_1 b_2 + a_2 b_1) {{\rm i}}}{a_2^2 + b_2^2}.$$
Try this yourself!
!!! check "Example:"
$$\begin{align}
\frac{1 + 2{\rm i}}{1 - 2{\rm i}} &= \frac{(1 + 2{\rm i})(1 + 2{\rm i})}{1^2 + 2^2} = \frac{1+4{\rm i} -4}{5}\\
& = -\frac{3}{5} + {\rm i} \frac{4}{5}
\end{align}$$
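These arithmetic rules are built into Python's `complex` type, so manipulations like the one above are easy to check numerically. A minimal sketch verifying the worked example (the variable names are illustrative):

```python
# Python's built-in complex type; 1j denotes the imaginary unit i.
z1 = 1 + 2j
z2 = 1 - 2j

# The quotient from the example above: (1 + 2i)/(1 - 2i) = -3/5 + (4/5)i
print(z1 / z2)                # (-0.6+0.8j)

# A number times its conjugate is real and equals |z|^2 = a^2 + b^2
print(z1 * z1.conjugate())    # (5+0j)
print(abs(z1) ** 2)           # 5.0 (up to floating-point rounding)
```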
### Visualization: the complex plane
Complex numbers can be rendered on a two-dimensional (2D) plane, the
*complex plane*. This plane is spanned by two unit vectors, one
horizontal representing the real number 1 and the vertical
unit vector representing ${\rm i}$.
<figure markdown>
![image](figures/complex_numbers_5_0.svg)
<figcaption>The norm of $z$ is the length of its vector spanned in the complex plane.</figcaption>
</figure>
#### Addition in the complex plane
Adding two numbers in the complex plane corresponds to adding their
respective horizontal and vertical components:
<figure markdown>
![image](figures/complex_numbers_8_0.svg)
<figcaption>The sum of two complex numbers is found as the diagonal of a parallelogram spanned by the vectors of those two numbers.</figcaption>
</figure>
## 1.2. Complex functions
<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/7XtR_wDSqRc?rel=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Real functions can (most of the time) be written in terms of a Taylor series expanded around a point $x_{0}$:
$$f(x) = \sum \limits_{n=0}^{\infty} \frac{f^{(n)}(x_{0})}{n!} (x-x_{0})^{n}$$
We can write something similar for complex functions by replacing the *real* variable $x$ with its *complex* counterpart $z$:
$$f(z) = \sum \limits_{n=0}^{\infty} \frac{f^{(n)}(x_{0})}{n!} (z-x_{0})^{n}$$
For this course, the most important function is the *complex exponential function*, at which we will have a closer look below.
### The complex exponential function
The complex exponential is used *extremely often*.
It occurs in Fourier transforms and it is very convenient for doing calculations involving cosines and sines.
It also makes many common operations on complex numbers a lot easier to perform.
!!! info "The exponential function and Euler identity"
The exponential function $f(z) = \exp(z) = e^z$ is defined as:
$$\exp(z) = e^{x + {\rm i}y} = e^{x} e^{{\rm i} y} = e^{x} \left( \cos y + {\rm i} \sin y\right).$$
The last expression is called the *Euler identity*.
!!! note "**Exercise**"
Check that this function obeys
$$\exp(z_1) \exp(z_2) = \exp(z_1 + z_2).$$
*You will need sum and difference formulas of cosine and sine.*
### The polar form
A complex number $z$ can be represented by two real numbers, $a$ and $b$, which correspond to the real and imaginary part of the complex number.
Another representation of $z$ is a *vector* in the complex plane with a horizontal component that corresponds to the real part of $z$ and a vertical component that corresponds to the imaginary part of $z$.
It is also possible to characterize that vector by its *length* and *direction*, where the latter can be represented by the
angle that the vector makes with the horizontal axis:
<figure markdown>
![image](figures/complex_numbers_10_0.svg)
<figcaption>The angle with the horizontal axis is denoted by $\varphi$
like in the case of conventional polar coordinates,
but in the context of complex numbers, this angle is called the <b>argument</b>.</figcaption>
</figure>
!!! info "Polar form of complex numbers"
A complex number can be represented either by its real and imaginary part
corresponding to the Cartesian coordinates in the complex plane,
or by its *norm* and its *argument* corresponding to polar coordinates.
The norm is the length of the vector, and the argument is the angle it makes with the horizontal axis.
We can conclude that for a complex number $z = a + b {\rm i}$, its real and imaginary parts
can be expressed in polar coordinates as $$a = |z| \cos\varphi$$ $$b = |z| \sin\varphi$$
!!! info "Inverse equations"
The inverse equations are $$|z| = \sqrt{a^2 + b^2}$$
$$\varphi = \arctan(b/a)$$ for $a>0$.
In general:
$$\varphi = \begin{cases} \arctan(b/a) &{\rm for ~} a>0; \\
\pi + \arctan(b/a) & {\rm for ~} a<0 {\rm ~ and ~} b>0;\\
-\pi + \arctan(b/a) &{\rm for ~} a<0 {\rm ~ and ~} b<0. \end{cases}$$
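In numerical work this case distinction is handled by the two-argument arctangent; for instance, Python's `math.atan2` and `cmath.polar` implement the convention $-\pi < \varphi \leq \pi$ used above. A small sketch:

```python
import cmath
import math

z = -1 + 1j   # a < 0 and b > 0, so the plain arctan needs the +pi correction

# atan2(b, a) implements the piecewise definition above in a single call
print(math.atan2(z.imag, z.real))             # 2.356..., i.e. 3*pi/4
print(math.pi + math.atan(z.imag / z.real))   # same value, via the case above

# cmath.polar returns the pair (|z|, phi) directly
norm, phi = cmath.polar(z)
print(norm, phi)                              # sqrt(2), 3*pi/4
```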
It turns out that by using the magnitude $|z|$ and phase $\varphi$, we can write any complex number as
$$z = |z| e^{{\rm i} \varphi}$$
By increasing $\varphi$ by $2 \pi$, we make a full circle around the origin and reach the same point on the complex plane. In other words, by adding $2 \pi$ to the argument of $z$, we get the same complex number $z$!
As a result, the argument $\varphi$ is defined up to $2 \pi$, and we are free to make any choice we like, such as in the examples in the figure below:
<figure markdown>
![image](figures/complex_numbers_11_0.svg)
<figcaption> $-\pi < \varphi < \pi$ (left) and (right) $-\frac{\pi}{2} < \varphi < \frac{3 \pi}{2}$ </figcaption>
</figure>
Some useful values of the complex exponential to know by heart are:
!!! tip "Useful identities:"
$$e^{2{\rm i } \pi} = 1$$
$$e^{{\rm i} \pi} = -1 $$
$$e^{{\rm i} \pi/2} = {\rm i}$$
From the first expression, it also follows that
$$e^{{\rm i} (y + 2\pi n)} = e^{{\rm i}y} {\rm ~ for ~} n \in \mathbb{Z}$$
As a result, $y$ is only defined up to $2\pi$.
Furthermore, we can define the sine and cosine in terms of complex exponentials:
!!! info "Complex sine and cosine"
$$\cos(x) = \frac{e^{{\rm i} x} + e^{-{\rm i} x}}{2}$$
$$\sin(x) = \frac{e^{{\rm i} x} - e^{-{\rm i} x}}{2{\rm i}}$$
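These two identities follow from adding and subtracting the Euler identity for $e^{{\rm i}x}$ and $e^{-{\rm i}x}$, and are easy to verify numerically, for instance (the test value is arbitrary):

```python
import cmath
import math

x = 0.7  # an arbitrary real test value

# Both differences should vanish up to floating-point rounding
print(math.cos(x) - (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2)
print(math.sin(x) - (cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j)
```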
Most operations on complex numbers become easier when complex numbers are converted to their *polar form* using the complex exponential.
Some functions and operations, which are common in real analysis, can be easily derived for their complex counterparts by substituting the real variable $x$ with the complex variable $z$ in its polar form:
!!! info "Examples of some complex functions stated using polar form"
$$z^{n} = \left(r e^{{\rm i} \varphi}\right)^{n} = r^{n} e^{{\rm i} n \varphi}$$
$$\sqrt[n]{z} = \sqrt[n]{r e^{{\rm i} \varphi} } = \sqrt[n]{r} e^{{\rm i}\varphi/n} $$
$$\log(z) = \log \left(r e^{{\rm i} \varphi}\right) = \log(r) + {\rm i} \varphi$$
$$z_{1}z_{2} = r_{1} e^{{\rm i} \varphi_{1}} r_{2} e^{{\rm i} \varphi_{2}} = r_{1} r_{2} e^{{\rm i} (\varphi_{1} + \varphi_{2})}$$
The polar form lets us see immediately that, for example, as a result of multiplication, the norm of the new number is the *product* of the norms of the multiplied numbers, and its argument is the *sum* of their arguments.
In the complex plane, this looks as follows:
<figure markdown>
![image](figures/complex_numbers_12_0.svg)
<figcaption></figcaption>
</figure>
!!! check "Example: Find all solutions solving $z^4 = 1$."
Of course, we know that $z = \pm 1$ are two solutions, but which other solutions are possible? We take a systematic approach:
$$\begin{align} z = e^{{\rm i} \varphi} & \Rightarrow z^4 = e^{4{\rm i} \varphi} = 1 \\
& \Leftrightarrow 4 \varphi = n 2 \pi \\
& \Leftrightarrow \varphi = 0, \varphi = \frac{\pi}{2}, \varphi = -\frac{\pi}{2}, \varphi = \pi \\
& \Leftrightarrow z = 1, z = i, z = -i, z = -1 \end{align}$$
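More generally, the $n$ solutions of $z^n = 1$ are $z_k = e^{2\pi {\rm i} k/n}$ with $k = 0, 1, \ldots, n-1$. A short numerical sketch for the $n = 4$ case above:

```python
import cmath

n = 4
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
for z in roots:
    # every root should satisfy z**4 = 1 up to floating-point rounding
    print(f"z = {z:.3f},  z**4 = {z**4:.3f}")
# in order k = 0, 1, 2, 3 this prints z = 1, i, -1, -i
```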
## 1.3. Differentiation and integration
<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/JyftSqmmVdU?rel=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
**We only consider differentiation and integration over *real* variables.**
We can then regard the complex ${\rm i}$ as another constant, and use our usual differentiation and integration rules:
!!! info "Differentiation and Integration rules"
$$\frac{d}{d\varphi} e^{{\rm i} \varphi} = e^{{\rm i} \varphi} \frac{d}{d\varphi} ({\rm i} \varphi) ={\rm i} e^{{\rm i} \varphi} .$$
$$\int_{0}^{\pi} e^{{\rm i} \varphi} d\varphi = \frac{1}{{\rm i}} \left[ e^{{\rm i} \varphi} \right]_{0}^{\pi} = -{\rm i}(-1 -1) = 2 {\rm i}$$
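Since ${\rm i}$ is just a constant here, both results can be cross-checked by ordinary numerical methods. A minimal sketch approximating the integral with the trapezoidal rule (the grid size is an arbitrary choice):

```python
import numpy as np

# Approximate the integral of exp(i*phi) over [0, pi];
# the analytic result derived above is 2i.
phi, dphi = np.linspace(0.0, np.pi, 100_001, retstep=True)
values = np.exp(1j * phi)
integral = np.sum((values[:-1] + values[1:]) / 2) * dphi  # trapezoidal rule
print(integral)   # approximately 0 + 2j
```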
## 1.4. Bonus: the complex exponential function and trigonometry
Let us show some tricks in the following examples where the simple properties of the exponential
function help in re-deriving trigonometric identities.
!!! example "Properties of the complex exponential function I"
Take $|z_1| = |z_2| = 1$, and $\arg{(z_1)} = \varphi_1$ and
$\arg{(z_2)} = \varphi_2$.
It is easy to see that $z_i = \exp({\rm i} \varphi_i)$ for $i=1, 2$. Then:
$$z_1 z_2 = \exp[{\rm i} (\varphi_1 + \varphi_2)].$$
The left hand side can be written as
$$\begin{align}
z_1 z_2 & = \left[ \cos(\varphi_1) + {\rm i} \sin(\varphi_1) \right] \left[ \cos(\varphi_2) + {\rm i} \sin(\varphi_2) \right] \\
& = \cos\varphi_1 \cos\varphi_2 - \sin\varphi_1 \sin\varphi_2 + {\rm i} \left( \cos\varphi_1 \sin\varphi_2 +
\sin\varphi_1 \cos\varphi_2 \right).
\end{align}$$
Also, the right hand side can be written as
$$\exp[{\rm i} (\varphi_1 + \varphi_2)] = \cos(\varphi_1 + \varphi_2) + {\rm i} \sin(\varphi_1 + \varphi_2).$$
Comparing the two expressions, equating their real and imaginary parts, we find
$$\cos(\varphi_1 + \varphi_2) = \cos\varphi_1 \cos\varphi_2 - \sin\varphi_1 \sin\varphi_2;$$
$$\sin(\varphi_1 + \varphi_2) = \cos\varphi_1 \sin\varphi_2 +
\sin\varphi_1 \cos\varphi_2.$$
Note that we used the Euler formula in order to derive these trigonometric identities.
The point is to show you that you can use the properties of the complex exponential to quickly find the form of trigonometric formulas, which are often easily forgotten.
!!! example "Properties of the complex exponential function II"
In this example, let's see what we can learn from the derivative of the exponential function:
$$\frac{d}{d\varphi} \exp({\rm i} \varphi) = {\rm i} \exp({\rm i} \varphi) .$$
Writing out the exponential in terms of cosine and sine, we see that
$$\cos'\varphi + {\rm i} \sin'\varphi = {\rm i} \cos\varphi - \sin\varphi.$$
where the prime $'$ denotes the derivative as usual. Equating real and imaginary parts leads to
$$\cos'\varphi = - \sin\varphi;$$
$$\sin'\varphi = \cos\varphi.$$
## 1.5. Summary
1. A complex number $z$ has the form $$z = a + b \rm i$$ where $a$ and
$b$ are both real, and $\rm i^2 = -1$. The real number $a$ is called
the *real part* of $z$ and $b$ is the *imaginary part*. Two complex
numbers can be added, subtracted and multiplied straightforwardly.
The quotient of two complex numbers $z_1=a_1 + \rm i b_1$ and
$z_2=a_2 + \rm i b_2$ is
$$\frac{z_1}{z_2} = \frac{z_1 z_2^*}{z_2 z_2^*} = \frac{(a_1 a_2 + b_1 b_2) + (-a_1 b_2 + a_2 b_1) {{\rm i}}}{a_2^2 + b_2^2}.$$
2. Complex numbers can also be characterised by their *norm*
$|z|=\sqrt{a^2+b^2}$ and *argument* $\varphi$. These parameters
correspond to polar coordinates in the complex plane. For a complex
number $z = a + b {\rm i}$, its real and imaginary parts can be
expressed as $$a = |z| \cos\varphi$$ $$b = |z| \sin\varphi$$ The
inverse equations are $$|z| = \sqrt{a^2 + b^2}$$
$$\varphi = \begin{cases} \arctan(b/a) &{\rm for ~} a>0; \\
\pi + \arctan(b/a) & {\rm for ~} a<0 {\rm ~ and ~} b>0;\\
-\pi + \arctan(b/a) &{\rm for ~} a<0 {\rm ~ and ~} b<0.
\end{cases}$$
The complex number itself then becomes
$$z = |z| e^{{\rm i} \varphi}$$
3. The most important complex function for us is the complex exponential function, which simplifies many operations on complex numbers
$$\exp(z) = e^{x + {\rm i}y} = e^{x} \left( \cos y + {\rm i} \sin y\right).$$
where $y$ is defined up to $2 \pi$.
The $\sin$ and $\cos$ can be rewritten in terms of this complex exponential as
$$\cos(x) = \frac{e^{{\rm i} x} + e^{-{\rm i} x}}{2}$$
$$\sin(x) = \frac{e^{{\rm i} x} - e^{-{\rm i} x}}{2{\rm i}}$$
Because we only consider *differentiation* and *integration* over *real variables*, the usual rules apply:
$$\frac{d}{d\varphi} e^{{\rm i} \varphi} = e^{{\rm i} \varphi} \frac{d}{d\varphi} ({\rm i} \varphi) ={\rm i} e^{{\rm i} \varphi} .$$
$$\int_{0}^{\pi} e^{{\rm i} \varphi} d\varphi = \frac{1}{{\rm i}} \left[ e^{{\rm i} \varphi} \right]_{0}^{\pi} = -{\rm i}(-1 -1) = 2 {\rm i}$$
## 1.6. Problems
1. [:grinning:] Given $a=1+2\rm i$ and $b=-3+4\rm i$, calculate and draw in the complex plane the numbers:
1. $a+b$,
2. $ab$,
3. $b/a$.
2. [:grinning:] Evaluate:
1. $\rm i^{1/4}$,
2. $\left(1+\rm i \sqrt{3}\right)^{1/2}$,
3. $\exp(2\rm i^3)$.
3. [:grinning:] Find the three 3rd roots of $1$ and ${\rm i}$. <br/>
(i.e. all possible solutions to the equations $x^3 = 1$ and $x^3 = {\rm i}$, respectively).
4. [:grinning:] *Quotients*<br/>
1. Find the real and imaginary part of $$ \frac{1+ {\rm i}}{2+3{\rm i}} \, .$$
2. Evaluate for real $a$ and $b$:$$\left| \frac{a+b\rm i}{a-b\rm i} \right| \, .$$
5. [:sweat:] For any given complex number $z$, we can take the inverse $\frac{1}{z}$.
1. Visualize taking the inverse in the complex plane.
2. What geometric operation does taking the inverse correspond to? <br/>
(Hint: first consider what geometric operation $\frac{1}{z^*}$ corresponds to.)
6. [:grinning:] *Differentiation and integration* <br/>
1. Compute $$\frac{d}{dt} e^{{\rm i} (kx-\omega t)},$$
2. Calculate the real part of $$\int_0^\infty e^{-\gamma t +\rm i \omega t} dt$$
($k$, $x$, $\omega$, $t$ and $\gamma$ are real; $\gamma$ is positive).
7. [:smirk:] Compute the following integral by making use of the Euler identity:
$$\int_{0}^{\pi}\cos(x)\sin(2x)dx$$
---
title: Vector Spaces
---
# 3. Vector spaces
The lecture on vector spaces consists of **three parts**:
- [3.1. Definition and basis dependence](#31-definition-and-basis-dependence)
- [3.2. Properties of a vector space](#32-properties-of-a-vector-space)
- [3.3. Matrix representation of vectors](#33-matrix-representation-of-vectors)
and at the end of this lecture note, there is a set of corresponding exercises
- [3.4 Problems](#34-problems)
---
The contents of this lecture are summarised in the following **videos**:
1. [Vector spaces: Introduction](https://www.dropbox.com/s/evytrbb55fgrcze/linear_algebra_01.mov?dl=0)
2. [Operations in vector spaces](https://www.dropbox.com/s/1530xb7zbuhwu6u/linear_algebra_02.mov?dl=0)
3. [Properties of vector spaces](https://www.dropbox.com/s/5lwkxd8lw5uwri9/linear_algebra_03.mov?dl=0)
**Total video length: ~16 minutes**
## 3.1. Definition and basis dependence
A vector $\vec{v}$ is a mathematical object characterised by both a **magnitude** and a **direction**, that is, an orientation in a given space.
We can express a vector in terms of its individual **components**. Let's assume we have an $n$-dimensional space, meaning that the vector $\vec{v}$ can be oriented in different ways along each of $n$ dimensions. The expression of $\vec{v}$ in terms of its components is
$$\vec{v} = (v_1, v_2,\ldots, v_n) \, .$$
We will denote by ${\mathcal V}^n$ the **vector space** composed of all possible vectors of the above form.
The components of a vector, $\{ v_i \}$, can be **real numbers** or **complex numbers**,
depending on whether we have a real or a complex vector space.
!!! info "Vector basis"
Note that the above expression of $\vec{v}$ in terms of its components assumes that we are using a specific **basis**.
It is important to recall that the same vector can be expressed in terms of different bases.
A **vector basis** is a set of $n$ vectors that can be used to generate all the elements of a vector space.
For example, a possible basis of ${\mathcal V}^n$ could be denoted by $\vec{a}_1,\vec{a}_2,\ldots,\vec{a_n}$,
and we can write a generic vector $\vec{v}$ as
$$\vec{v} = (v_1, v_2, \ldots, v_n) = v_1 \vec{a}_1 + v_2 \vec{a}_2 + \ldots + v_n \vec{a}_n \, .$$
However, one could choose a different basis, denoted by $\vec{b}_1,\vec{b}_2,\ldots,\vec{b_n}$, where the same vector would be expressed in terms of a different set of components
$$ \vec{v} = (v'_1, v'_2, \ldots, v'_n) = v'_1 \vec{b}_1 + v'_2 \vec{b}_2 + \ldots + v'_n \vec{b}_n \, .$$
Thus, while the vector remains the same, the values of its components depend on the specific choice of basis.
The most common basis is the **Cartesian basis**, where for example for $n=3$:
$$\vec{a}_1 = (1, 0, 0) \, ,\qquad \vec{a}_2 = (0, 1, 0)\, ,\qquad \vec{a}_3 = (0, 0, 1) \, .$$
!!! warning ""
The elements of a vector basis must be **linearly independent** from one another, meaning
that none of them can be expressed as a linear combination of the other basis vectors.
We can consider one example in the two-dimensional real vector space $\mathbb{R}^2$, namely the $(x,y)$ coordinate plane, shown below.
<figure markdown>
![image](figures/3_vector_spaces_1.jpg)
<figcaption></figcaption>
</figure>
In this figure, you can see how the same vector $\vec{v}$ can be expressed in two different bases. In the first one (left panel), the Cartesian basis is used and its components are $\vec{v}=(2,2)$. In the second basis (right panel), the components are different, namely $\vec{v}=(2.4 ,0.8)$, while the magnitude and direction of the vector remain unchanged.
For many problems, both in mathematics and in physics, the appropriate choice of the vector space basis may significantly simplify the
solution process.
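This basis dependence is easy to check explicitly: projecting the same vector onto a second orthonormal basis changes its components but not its norm. A minimal sketch with NumPy, where the rotated basis (the angle `theta`) is an arbitrary example choice:

```python
import numpy as np

v = np.array([2.0, 2.0])     # components of v in the Cartesian basis

# A second orthonormal basis, obtained by rotating the Cartesian basis
# by an arbitrarily chosen angle theta
theta = np.pi / 6
b1 = np.array([np.cos(theta), np.sin(theta)])
b2 = np.array([-np.sin(theta), np.cos(theta)])

# Components in the new basis are the projections of v onto b1 and b2
v_prime = np.array([v @ b1, v @ b2])

print(v_prime)                                      # different components...
print(np.linalg.norm(v), np.linalg.norm(v_prime))   # ...but the same norm
```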
## 3.2. Properties of a vector space
You might already be familiar with a number of **operations** between vectors; in this course, let us review the essential operations needed to start working with quantum mechanics:
!!! info "Addition"
We can add two vectors to produce a third vector, $$\vec{a} + \vec{b}= \vec{c}.$$
As with the addition of scalars, vector addition is commutative, $$\vec{a} + \vec{b} = \vec{b} + \vec{a}.$$
Vector addition is carried out component by component,
$$ \vec{c} = \vec{a} + \vec{b} = (a_1 + b_1, a_2 + b_2, \ldots, a_n + b_n) = (c_1, c_2, \ldots, c_n).$$
!!! info "Scalar multiplication"
We can multiply a vector by a scalar number (either real or complex) to produce another vector, $$\vec{c} = \lambda \vec{a}.$$
Addition and scalar multiplication of vectors are both *associative* and *distributive*, so the following relations hold
$$\begin{align} &1. \qquad (\lambda \mu) \vec{a} = \lambda (\mu \vec{a}) = \mu (\lambda \vec{a})\\
&2. \qquad \lambda (\vec{a} + \vec{b}) = \lambda \vec{a} + \lambda \vec{b}\\
&3. \qquad (\lambda + \mu)\vec{a} = \lambda \vec{a} +\mu \vec{a} \end{align}$$
### Vector products
In addition to multiplying a vector by a scalar, as mentioned above, one can also multiply two vectors among them.
There are two types of vector products: one where the end result is a scalar (so just a number), and one where the end result is another vector.
!!! info "Scalar product of vectors"
The scalar product of vectors is given by $$ \vec{a}\cdot \vec{b} = a_1b_1 + a_2b_2 + \ldots + a_nb_n \, .$$
Note that since the scalar product is just a number, its value will not depend on the specific
basis in which we express the vectors: the scalar product is said to be *basis-independent*. The scalar product is also found via
$$\vec{a} \cdot \vec{b} = |\vec{a}||\vec{b}| \cos \theta$$ with $\theta$ the angle between the vectors.
!!! info "Cross product"
The vector product (or cross product) between two vectors $\vec{a}$ and $\vec{b}$ is given by
$$ \vec{a}\times \vec{b} = |\vec{a}||\vec{b}|\sin\theta \hat{n}$$
where $|\vec{a}|=\sqrt{ \vec{a}\cdot\vec{a} }$ (and likewise for $|\vec{b}|$) is the norm of the vector $\vec{a}$, $\theta$ is the angle between the two vectors, and $\hat{n}$ is a unit vector which is *perpendicular* to the plane that contains $\vec{a}$ and $\vec{b}$.
Note that this cross-product can only be defined in *three-dimensional vector spaces*. The resulting vector
$\vec{c}=\vec{a}\times \vec{b} $ will have as components $c_1 = a_2b_3-a_3b_2$, $c_2= a_3b_1 - a_1b_3$, and $c_3= a_1b_2 - a_2b_1$.
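Both products are available directly in NumPy, which makes the geometric relations above easy to verify; a brief sketch with arbitrary example vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 4.0])

# Scalar product: a single number, equal to |a||b|cos(theta)
dot = np.dot(a, b)
print(dot / (np.linalg.norm(a) * np.linalg.norm(b)))   # cos(theta)

# Cross product: a vector with the components given above,
# perpendicular to both a and b
c = np.cross(a, b)
print(c)                             # [ 8.  2. -6.]
print(np.dot(c, a), np.dot(c, b))    # both 0: c is perpendicular to a and b
```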
### Unit vector and orthonormality
!!! info "Unit vector"
A special vector is the **unit vector**, which has a norm of 1 *by definition*. A unit vector is often denoted with a hat, rather than an arrow ($\hat{i}$ instead of $\vec{i}$). To find the unit vector in the direction of an arbitrary vector $\vec{v}$, we divide by the norm: $$\hat{v} = \frac{\vec{v}}{|\vec{v}|}$$
!!! info "Orthonormality"
Two vectors are said to be **orthonormal** if they are perpendicular (orthogonal) *and* both are unit vectors.
Now we are ready to define in a more formal way what vector spaces are,
an essential concept for the description of quantum mechanics.
### The main properties
The main properties of **vector spaces** are the following:
!!! info ""
A vector space is **closed under vector addition**.
This property means that if two arbitrary vectors $\vec{a}$ and $\vec{b}$
are elements of a given vector space ${\mathcal V}^n$,
then their sum must also be an element of the same vector space
$$\vec{a}, \vec{b} \in {\mathcal V}^n, \qquad \vec{c} = (\vec{a} + \vec{b}) \in {\mathcal V}^n \, ,\qquad \forall\,\, \vec{a}, \vec{b} \,.$$
!!! info ""
A vector space is **closed under scalar multiplication**.
This property means that when we multiply an arbitrary vector $\vec{a}$,
an element of the vector space ${\mathcal V}^n$, by a general scalar $\lambda$, the result is another vector which also belongs to the same vector space $$\vec{a} \in {\mathcal V}^n, \qquad \vec{c} = \lambda \vec{a}
\in {\mathcal V}^n \qquad \forall\,\, \vec{a},\lambda \, .$$
The property that a vector space is closed under scalar multiplication and vector addition is also known as the **closure condition**.
!!! info ""
There exists a **null element** $\vec{0}$ such that $\vec{a}+\vec{0} =\vec{0}+\vec{a}=\vec{a} $.
!!! info ""
**Inverse element**: for each vector $\vec{a} \in \mathcal{V}^n$ there exists another
element of the same vector space, $-\vec{a}$, such that their addition results
in the null element, $\vec{a} + ( -\vec{a}) = \vec{0}$. This element is called the **inverse element**.
A vector space often comes equipped with various multiplication operations between vectors, such as the scalar product mentioned above
(also known as *inner product*), but also many other operations such as *vector product* or *tensor product*. There are also many other properties, but for what we are interested in right now, these are sufficient.
## 3.3. Matrix representation of vectors
It is advantageous to represent vectors with a notation suitable for matrix manipulation and operations. As we will show in the next lectures, the operations involving states in quantum systems can be expressed in the language of linear algebra.
First of all, let us remind ourselves how we express vectors in the standard Euclidean space. In two dimensions, the position of a point $\vec{r}$ when making explicit the Cartesian basis vectors reads
$$ \vec{r}=x \hat{i}+y\hat{j} \, .$$
As mentioned above, the unit vectors $\hat{i}$ and $\hat{j}$ form an *orthonormal basis* of this vector space, and we call $x$ and $y$ the *components* of $\vec{r}$ with respect to the directions spanned by the basis vectors.
Recall also that the choice of basis vectors is not unique: we can use any other pair of orthonormal unit vectors $\hat{i}'$ and $\hat{j}'$, and express the vector $\vec{r}$ in terms of these new basis vectors as
$$ \vec{r}=x'\hat{i}'+y'\hat{j}'=x\hat{i}+y\hat{j} \, ,$$
with $x'\neq x$ and $y'\neq y$. So, while the vector itself does not depend on the basis, the values of its components are basis dependent.
We can also express the vector $\vec{r}$ in the following form
$$ \vec{r} = \begin{pmatrix}x\\y\end{pmatrix} \, ,$$
which is known as a *column vector*. Note that this notation assumes a specific choice of basis vectors, which is left implicit; the notation displays only the components along this specific basis.
For instance, if we had chosen another set of basis vectors $\hat{i}'$ and $\hat{j}'$, the components would be $x'$ and $y'$, and the corresponding column vector representing the same vector $\vec{r}$ in such case would be given by
$$ \vec{r}= \begin{pmatrix}x'\\y'\end{pmatrix}.$$
We also know that Euclidean space is equipped with a scalar product between vectors.
The scalar product $\vec{r_1}\cdot\vec{r_2}$ of two vectors in 2D Euclidean space is given by
$$ \vec{r_1}\cdot\vec{r_2}=r_1\,r_2\,\cos\theta \, ,$$
where $r_1$ and $r_2$ indicate the *magnitude* (length) of the vectors and $\theta$ indicates their relative angle. Note that the scalar product of two vectors is just a number, and thus it must be *independent of the choice of basis*.
The same scalar product can also be expressed in terms of components of $\vec{r_1}$ and $\vec{r_2}$. When using the $\{ \hat{i}, \hat{j} \}$ basis, the scalar product will be given by
$$ \vec{r_1}\cdot\vec{r_2}=x_1\,x_2\,+\,y_1\,y_2 \, .$$
Note that the same result would be obtained if the basis $\{ \hat{i}', \hat{j}' \}$
had been chosen instead
$$ \vec{r_1}\cdot\vec{r_2}=x_1'\,x_2'\,+\,y_1'\,y_2' \, .$$
The scalar product of two vectors can also be expressed, taking into
account the properties of matrix multiplication, in the following form
$$ \vec{r_1}\cdot\vec{r_2} = \begin{pmatrix}x_1, y_1\end{pmatrix}\begin{pmatrix}x_2\\y_2\end{pmatrix} = x_1x_2+y_1y_2 \, ,$$
where here we say that the vector $\vec{r_1}$ is represented by a *row vector*.
Therefore, we see that the scalar product of vectors in Euclidean space can be expressed as the matrix multiplication of row and column vectors. The same formalism, as we will see in the next class, can be applied for the case of Hilbert spaces in quantum mechanics.
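In this matrix language, the row vector is the transpose of the column vector, and the scalar product is a $1\times 2$ times $2\times 1$ matrix multiplication. A short NumPy sketch of this correspondence (with example components):

```python
import numpy as np

r1 = np.array([[1.0], [2.0]])   # column vector, a 2x1 matrix
r2 = np.array([[3.0], [4.0]])

# (row vector) times (column vector) gives a 1x1 matrix: the scalar product
scalar = r1.T @ r2
print(scalar)          # [[11.]], i.e. x1*x2 + y1*y2 = 1*3 + 2*4
print(scalar.item())   # 11.0, extracted as a plain number
```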
***
## 3.4. Problems
**1)** [:grinning:] Find a unit vector parallel to the sum of $\vec{r}_1$ and $\vec{r}_2$, where we have defined
$$\vec{r}_1=2\vec{i}+4\vec{j}-5\vec{k} \, , $$ and $$\vec{r}_2=\vec{i}+2\vec{j}+3\vec{k} \, .$$
***
**2)** [:grinning:] The vectors $\vec{a}$ and $\vec{b}$ are written in parametric form,
as functions of the parameter $t$, as follows:
$$\vec{a}=3t^3\,\vec{i}-2t\,\vec{j}+t^2\,\vec{k}$$ and $$\vec{b}=3\sin{t}\,\vec{i}+2\cos{t}\,\vec{k}$$
Evaluate the following derivatives with respect to the parameter $t$:
**(a)** $ d(\vec{a}\cdot\vec{b}) / dt$.
**(b)** $d \left( \vec{a} \times \vec{b}\right)/dt$.
***
**3)** [:sweat:] Three non-zero vectors $\vec{a}$, $\vec{b}$ and $\vec{c}$ are such that $(\vec{a}+\vec{b})$ is perpendicular to $(\vec{a}+\vec{c})$ and $(\vec{a}-\vec{b})$ is perpendicular to $(\vec{a}-\vec{c})$. Show that $\vec{a}$ is perpendicular to $\vec{b}+\vec{c}$. If the magnitude of the vectors $\vec{a}$, $\vec{b}$ and $\vec{c}$ are in the ratio 1:2:4, find the angle between $\vec{b}$ and $\vec{c}$.
***
**4)** [:grinning:] Find the vector product $\vec{b} \times \vec{c}$ and the triple product $\vec{a}\cdot(\vec{b} \times \vec{c})$, where these three vectors are defined as
$$\vec{a}=\vec{i}+4\vec{j}+\vec{k}\,,$$ and $$\vec{b}=-\vec{i}+2\vec{j}+2\vec{k}\,,$$ and $$\vec{c}=2\vec{i}-\vec{k}\,.$$
***
---
title: Eigenvalues and eigenvectors
---
# 6. Eigenvalues and eigenvectors
The lecture on eigenvalues and eigenvectors consists of the following parts:
- [6.1. Eigenvalue equations in linear algebra](#61-eigenvalue-equations-in-linear-algebra)
- [6.2. Eigenvalue equations in quantum mechanics](#62-eigenvalue-equations-in-quantum-mechanics)
and at the end of the lecture notes, there is a set of corresponding exercises:
- [6.3. Problems](#63-problems)
***
The contents of this lecture are summarised in the following **video**:
- [Eigenvalues and eigenvectors](https://www.dropbox.com/s/n6hb5cu2iy8i8x4/linear_algebra_09.mov?dl=0)
*Total video length: ~3 minutes 30 seconds*
***
In the previous lecture, we discussed a number of *operator equations*, which have the form
$$
\hat{A}|\psi\rangle=|\varphi\rangle \, ,
$$
where $|\psi\rangle$ and $|\varphi\rangle$ are state vectors
belonging to the Hilbert space of the system $\mathcal{H}$.
!!! info "Eigenvalue equation:"
A specific class of operator equations, which appear frequently in quantum mechanics, consists of equations of the form
$$
\hat{A}|\psi\rangle= \lambda_{\psi}|\psi\rangle \, ,
$$
where $\lambda_{\psi}$ is a scalar (in general complex). These are equations where the action of the operator $\hat{A}$
on the state vector $|\psi\rangle$ returns *the same state vector* multiplied by the scalar $\lambda_{\psi}$.
This type of operator equation is known as an *eigenvalue equation*; such equations are of great importance for the description of quantum systems.
In this lecture, we present the main ingredients of these equations and how we can apply them to quantum systems.
## 6.1. Eigenvalue equations in linear algebra
First of all, let us review eigenvalue equations in linear algebra. Assume that we have a (square) matrix $A$ of dimensions $n\times n$ and that $\vec{v}$ is a column vector in $n$ dimensions. The corresponding eigenvalue equation is of the form
$$
A \vec{v} =\lambda \vec{v} \, ,
$$
with $\lambda$ being a scalar number (real or complex, depending on the type
of vector space). We can express the previous equation in terms of its components,
assuming as usual some specific choice of basis, by using
the rules of matrix multiplication:
!!! info "Eigenvalue equation: Eigenvalue and Eigenvector"
$$
\sum_{j=1}^n A_{ij} v_j = \lambda v_i \, .
$$
The scalar $\lambda$ is known as the *eigenvalue* of the equation, while the vector $\vec{v}$ is known as the associated *eigenvector*.
The key feature of such equations is that applying a matrix $A$ to the vector $\vec{v}$ returns *the original vector* up to an overall rescaling, $\lambda \vec{v}$.
!!! warning "Number of solutions"
In general, there will be multiple solutions to the eigenvalue equation $A \vec{v} =\lambda \vec{v}$, each one characterised by a specific eigenvalue and its associated eigenvector. Note that in some cases one has *degenerate solutions*, whereby a given matrix has two or more eigenvectors with the same eigenvalue.
!!! tip "Characteristic equation:"
In order to determine the eigenvalues of the matrix $A$, we need to evaluate the solutions of the so-called *characteristic equation*
of the matrix $A$, defined as
$$
{\rm det}\left( A-\lambda \mathbb{I} \right)=0 \, ,
$$
where $\mathbb{I}$ is the identity matrix of dimensions $n\times n$, and ${\rm det}$ is the determinant.
This relation follows from the eigenvalue equation in terms of components
$$
\begin{align}
\sum_{j=1}^n A_{ij} v_j &= \lambda v_i \, , \\
\to \quad \sum_{j=1}^n A_{ij} v_j - \sum_{j=1}^n\lambda \delta_{ij} v_j &=0 \, ,\\
\to \quad \sum_{j=1}^n\left( A_{ij} - \lambda \delta_{ij}\right) v_j &=0 \, .
\end{align}
$$
Therefore, the eigenvalue condition can be written as a set of coupled linear equations
$$
\sum_{j=1}^n\left( A_{ij} - \lambda \delta_{ij}\right) v_j =0 \, , \qquad i=1,2,\ldots,n\, ,
$$
which only admit non-trivial solutions if the determinant of the matrix $A-\lambda\mathbb{I}$ vanishes
(the so-called Cramer's condition), thus leading to the characteristic equation.
Once we have solved the characteristic equation, we end up with $n$ eigenvalues $\lambda_k$, $k=1,\ldots,n$.
We can then determine the corresponding eigenvector
$$
\vec{v}_k = \left( \begin{array}{c} v_{k,1} \\ v_{k,2} \\ \vdots \\ v_{k,n} \end{array} \right) \, ,
$$
by solving the corresponding system of linear equations
$$
\sum_{j=1}^n\left( A_{ij} - \lambda_k \delta_{ij}\right) v_{k,j} =0 \, , \qquad i=1,2,\ldots,n\, .
$$
Let us remind ourselves that in $n=2$ dimensions the determinant of a matrix
is evaluated as
$$
{\rm det}\left( A \right) = \left| \begin{array}{cc} A_{11} & A_{12} \\ A_{21} & A_{22} \end{array} \right|
= A_{11}A_{22} - A_{12}A_{21} \, ,
$$
while for a matrix belonging to a vector space in $n=3$ dimensions,
the determinant is given in terms of $2\times 2$ determinants as
$$
{\rm det}\left( A \right) = \left| \begin{array}{ccc} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{array} \right| =
A_{11} \left| \begin{array}{cc} A_{22} & A_{23} \\ A_{32} & A_{33} \end{array} \right|
- A_{12} \left| \begin{array}{cc} A_{21} & A_{23} \\ A_{31} & A_{33} \end{array} \right|
+ A_{13} \left| \begin{array}{cc} A_{21} & A_{22} \\ A_{31} & A_{32} \end{array} \right| \, .
$$
!!! check "Example"
Let us illustrate how to compute eigenvalues and eigenvectors by considering an $n=2$ vector space.
Consider the following matrix
$$
A = \left( \begin{array}{cc} 1 & 2 \\ -1 & 4 \end{array} \right) \, ,
$$
whose characteristic equation reads
$$
{\rm det}\left( A-\lambda \mathbb{I} \right) = \left| \begin{array}{cc} 1-\lambda & 2 \\ -1 & 4-\lambda \end{array} \right| = (1-\lambda)(4-\lambda)+2 = \lambda^2 -5\lambda + 6=0 \, .
$$
This is a quadratic equation which we know how to solve exactly; the two eigenvalues are $\lambda_1=3$ and $\lambda_2=2$.
Next, we can determine the associated eigenvectors $\vec{v}_1$ and $\vec{v}_2$. For the first one, the equation to solve is
$$
\left( \begin{array}{cc} 1 & 2 \\ -1 & 4 \end{array} \right)
\left( \begin{array}{c} v_{1,1} \\ v_{1,2} \end{array} \right)=\lambda_1
\left( \begin{array}{c} v_{1,1} \\ v_{1,2} \end{array} \right) = 3 \left( \begin{array}{c} v_{1,1} \\ v_{1,2} \end{array} \right)
$$
from which we find the condition $v_{1,1}=v_{1,2}$.
An important property of eigenvalue equations is that the eigenvectors are only fixed up to an *overall normalisation constant*.
This should be clear from its definition: if a vector $\vec{v}$ satisfies $A\vec{v}=\lambda\vec{v} $,
then the vector $\vec{v}'=c \vec{v}$ with $c$ some constant will also satisfy the same equation. So then we find that the eigenvalue $\lambda_1$ has an associated eigenvector
$$
\vec{v}_1 = \left( \begin{array}{c} 1 \\ 1 \end{array} \right) \, ,
$$
and indeed one can check that
$$
A\vec{v}_1 = \left( \begin{array}{cc} 1 & 2 \\ -1 & 4 \end{array} \right)
\left( \begin{array}{c} 1 \\ 1 \end{array} \right) = \left( \begin{array}{c} 3 \\ 3 \end{array} \right)=
3 \vec{v}_1 \, ,
$$
as we intended to demonstrate.
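The same example can be reproduced with a standard numerical routine. A minimal check with NumPy (note that `numpy.linalg.eig` returns the eigenvectors normalised to unit length, and in no guaranteed order):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [-1.0, 4.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # 3. and 2., in no guaranteed order

# The columns of `eigenvectors` are the eigenvectors; check A v = lambda v
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))   # True
```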
!!! note "Exercise"
As an exercise, try to obtain the expression of the eigenvector
corresponding to the second eigenvalue $\lambda_2=2$.
## 6.2. Eigenvalue equations in quantum mechanics
We can now extend the ideas of eigenvalue equations from linear algebra to the case of quantum mechanics.
The starting point is the eigenvalue equation for the operator $\hat{A}$,
$$
\hat{A}|\psi\rangle= \lambda_{\psi}|\psi\rangle \, ,
$$
where the state vector $|\psi\rangle$ is the eigenvector of the equation
and $ \lambda_{\psi}$ is the corresponding eigenvalue, in general a complex scalar.
In general this equation will have multiple solutions, which for a Hilbert space $\mathcal{H}$ with $n$ dimensions can be labelled as
$$
\hat{A}|\psi_k\rangle= \lambda_{\psi_k}|\psi_k\rangle \, , \quad k =1,\ldots, n \, .
$$
In order to determine the eigenvalues and eigenvectors of a given operator $\hat{A}$, we have to solve the
corresponding eigenvalue problem for this operator, via what we called above the *characteristic equation*.
This is most efficiently done in the matrix representation of this operator, where
the above operator equation can be expressed in terms of its components as
$$
\begin{pmatrix} A_{11} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix} \begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix}= \lambda_{\psi_k}\begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix} \, .
$$
As discussed above, this condition is identical to solving a set of linear equations
of the form
$$
\begin{pmatrix} A_{11}- \lambda_{\psi_k} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi_k} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi_k} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix}
\begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix}=0 \, .
$$
!!! info "Cramer's rule"
This set of linear equations only has a non-trivial set of solutions provided that
the determinant of the matrix vanishes, as follows from Cramer's condition:
$$
{\rm det} \begin{pmatrix} A_{11}- \lambda_{\psi} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix}=
\left| \begin{array}{cccc}A_{11}- \lambda_{\psi} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi} & \ldots \\\vdots & \vdots & \vdots & \end{array} \right| = 0
$$
which in general has $n$ independent solutions, which we label as $\lambda_{\psi_k}$.
Once we have found the $n$ eigenvalues $\{ \lambda_{\psi_k} \}$, we can insert each
of them into the original eigenvalue equation and determine the components of each of the eigenvectors,
which we can express as column vectors
$$
|\psi_1\rangle = \begin{pmatrix} \psi_{1,1} \\ \psi_{1,2} \\ \psi_{1,3} \\ \vdots \end{pmatrix} \,, \quad
|\psi_2\rangle = \begin{pmatrix} \psi_{2,1} \\ \psi_{2,2} \\ \psi_{2,3} \\ \vdots \end{pmatrix} \,, \quad \ldots \, , |\psi_n\rangle = \begin{pmatrix} \psi_{n,1} \\ \psi_{n,2} \\ \psi_{n,3} \\ \vdots \end{pmatrix} \, .
$$
!!! tip "Orthogonality of eigenvectors"
An important property of eigenvalue equations is that if you have two eigenvectors
$ |\psi_i\rangle$ and $ |\psi_j\rangle$ that have associated *different* eigenvalues,
$\lambda_{\psi_i} \ne \lambda_{\psi_j} $, then these two eigenvectors are orthogonal to each
other, that is
$$
\langle \psi_j | \psi_i\rangle =0 \, \quad {\rm for} \quad {i \ne j} \, .
$$
This property is extremely important, since it suggests that we could use the eigenvectors
of an eigenvalue equation as a *set of basis elements* for this Hilbert space.
Recall from the discussions of eigenvalue equations in linear algebra that
the eigenvectors $|\psi_i\rangle$ are defined *up to an overall normalisation constant*. Clearly, if $|\psi_i\rangle$ is a solution of $\hat{A}|\psi_i\rangle = \lambda_{\psi_i}|\psi_i\rangle$
then $c|\psi_i\rangle$ will also be a solution, with $c$ being a constant. In the context of quantum mechanics, we need to choose this overall rescaling constant to ensure that the eigenvectors are normalised, thus they satisfy
$$
\langle \psi_i | \psi_i\rangle = 1 \, \quad {\rm for~all}~i \, .
$$
With such a choice of normalisation, one says that the eigenvectors form
an *orthonormal* set.
!!! tip "Eigenvalue spectrum and degeneracy"
The set of all eigenvalues of an operator is called the *eigenvalue spectrum* of an operator. Note that different eigenvectors can also have the same eigenvalue. If this is the case the eigenvalue is said to be *degenerate*.
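For Hermitian operators, both the orthonormality of the eigenvectors and a degenerate spectrum can be seen numerically. A small sketch with NumPy, using the Hermitian matrix from problem 5 below as an example:

```python
import numpy as np

# The Hermitian matrix from problem 5 below
H = np.array([[0, 0, 1j],
              [0, 1, 0],
              [-1j, 0, 0]])

# numpy.linalg.eigh is specialised to Hermitian matrices: it returns real
# eigenvalues and an orthonormal set of eigenvectors (as columns)
eigenvalues, eigenvectors = np.linalg.eigh(H)
print(eigenvalues)   # [-1.  1.  1.]: the eigenvalue 1 is degenerate

# The eigenvectors form an orthonormal set
print(np.allclose(eigenvectors.conj().T @ eigenvectors, np.eye(3)))   # True
```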
***
## 6.3. Problems
1. *Eigenvalues and eigenvectors I*
Find the characteristic polynomial and eigenvalues for each of the following matrices,
$$A=\begin{pmatrix} 5&3\\2&10 \end{pmatrix}\, \quad
B=\begin{pmatrix} 7i&-1\\2&6i \end{pmatrix} \, \quad C=\begin{pmatrix} 2&0&-1\\0&3&1\\1&0&4 \end{pmatrix}$$
2. *Hamiltonian*
The Hamiltonian for a two-state system is given by
$$H=\begin{pmatrix} \omega_1&\omega_2\\ \omega_2&\omega_1\end{pmatrix}$$
A basis for this system is
$$|{0}\rangle=\begin{pmatrix}1\\0 \end{pmatrix}\, ,\quad|{1}\rangle=\begin{pmatrix}0\\1 \end{pmatrix}$$
Find the eigenvalues and eigenvectors of the Hamiltonian $H$, and express the eigenvectors in terms of $\{|0 \rangle,|1\rangle \}$.
3. *Eigenvalues and eigenvectors II*
Find the eigenvalues and eigenvectors of the matrices
$$A=\begin{pmatrix} -2&-1&-1\\6&3&2\\0&0&1 \end{pmatrix}\, \quad B=\begin{pmatrix} 1&1&2\\2&2&2\\-1&-1&-1 \end{pmatrix}$$
4. *The Hadamard gate*
In one of the problems of the previous section we discussed that an important operator used in quantum computation is the *Hadamard gate*, which is represented by the matrix:
$$\hat{H}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix} \, .$$
Determine the eigenvalues and eigenvectors of this operator.
5. *Hermitian matrix*
Show that the Hermitian matrix
$$\begin{pmatrix} 0&0&i\\0&1&0\\-i&0&0 \end{pmatrix}$$
has only two distinct (real) eigenvalues, and find an orthonormal set of three eigenvectors.
6. *Orthogonality of eigenvectors*
Confirm, by explicit calculation, that the eigenvalues of the real, symmetric matrix
$$\begin{pmatrix} 2&1&2\\1&2&2\\2&2&1 \end{pmatrix}$$
are real, and its eigenvectors are orthogonal.