diff --git a/docs/1_complex_numbers.md b/1_complex_numbers.md
similarity index 100%
rename from docs/1_complex_numbers.md
rename to 1_complex_numbers.md
diff --git a/docs/2_coordinates.md b/2_coordinates.md
similarity index 100%
rename from docs/2_coordinates.md
rename to 2_coordinates.md
diff --git a/docs/3_vector_spaces.md b/3_vector_spaces.md
similarity index 100%
rename from docs/3_vector_spaces.md
rename to 3_vector_spaces.md
diff --git a/docs/4_vector_spaces_QM.md b/4_vector_spaces_QM.md
similarity index 100%
rename from docs/4_vector_spaces_QM.md
rename to 4_vector_spaces_QM.md
diff --git a/docs/5_operators_QM.md b/5_operators_QM.md
similarity index 100%
rename from docs/5_operators_QM.md
rename to 5_operators_QM.md
diff --git a/6_eigenvectors_QM.md b/6_eigenvectors_QM.md
new file mode 100644
index 0000000000000000000000000000000000000000..c1bf01c5c533c252ab2e84fb8b86c93ebf503712
--- /dev/null
+++ b/6_eigenvectors_QM.md
@@ -0,0 +1,269 @@
+---
+title: Eigenvalues and eigenvectors
+---
+
+# 6. Eigenvalues and eigenvectors
+
+The lecture on eigenvalues and eigenvectors consists of the following parts:
+
+- [6.1. Eigenvalue equations in linear algebra](#61-eigenvalue-equations-in-linear-algebra)
+
+- [6.2. Eigenvalue equations in quantum mechanics](#62-eigenvalue-equations-in-quantum-mechanics)
+
+and at the end of the lecture notes, there is a set of corresponding exercises:
+
+- [6.3. Problems](#63-problems)
+
+***
+
+The contents of this lecture are summarised in the following **video**:
+
+- [Eigenvalues and eigenvectors](https://www.dropbox.com/s/n6hb5cu2iy8i8x4/linear_algebra_09.mov?dl=0)
+
+*The total length of the videos: ~3 minutes 30 seconds*
+
+***
+
+In the previous lecture, we discussed a number of *operator equations*, which have the form
+$$
+\hat{A}|\psi\rangle=|\varphi\rangle \, ,
+$$
+where $|\psi\rangle$ and $|\varphi\rangle$ are state vectors
+belonging to the Hilbert space of the system $\mathcal{H}$.
+
+!!! info "Eigenvalue equation:"
+    A specific class of operator equations, which appear frequently in quantum mechanics, consists of equations in the form
+    $$
+    \hat{A}|\psi\rangle= \lambda_{\psi}|\psi\rangle \, ,
+    $$
+    where $\lambda_{\psi}$ is a scalar (in general complex). These are equations where the action of the operator $\hat{A}$
+    on the state vector $|\psi\rangle$ returns *the same state vector* multiplied by the scalar $\lambda_{\psi}$. 
+    This type of operator equation is known as an *eigenvalue equation*; such equations are of great importance for the description of quantum systems.
+
+In this lecture, we present the main ingredients of these equations and how we can apply them to quantum systems.
+
+##6.1. Eigenvalue equations in linear algebra
+
+First of all, let us review eigenvalue equations in linear algebra. Assume that we have a (square) matrix $A$ with dimensions $n\times n$ and that $\vec{v}$ is a column vector in $n$ dimensions. The corresponding eigenvalue equation will be of the form
+$$
+A \vec{v} =\lambda \vec{v} \, ,
+$$
+with $\lambda$ being a scalar number (real or complex, depending on the type
+of vector space). We can express the previous equation in terms of its components,
+assuming as usual some specific choice of basis, by using
+the rules of matrix multiplication:
+
+!!! info "Eigenvalue equation: Eigenvalue and Eigenvector"
+    $$
+    \sum_{j=1}^n A_{ij} v_j = \lambda v_i \, .
+    $$
+    The scalar $\lambda$ is known as the *eigenvalue* of the equation, while the vector $\vec{v}$ is known as the associated *eigenvector*.
+    The key feature of such equations is that applying a matrix $A$ to the vector $\vec{v}$ returns *the original vector* up to an overall rescaling, $\lambda \vec{v}$. 
+
+!!! warning "Number of solutions"
+    In general, there will be multiple solutions to the eigenvalue equation $A \vec{v} =\lambda \vec{v}$, each one characterised by a specific eigenvalue and its associated eigenvector. Note that in some cases one has *degenerate solutions*, whereby two or more linearly independent eigenvectors of a given matrix share the same eigenvalue.
+
+!!! tip "Characteristic equation:"
+    In order to determine the eigenvalues of the matrix $A$, we need to evaluate the solutions of the so-called *characteristic equation*
+    of the matrix $A$, defined as
+    $$
+    {\rm det}\left( A-\lambda \mathbb{I} \right)=0 \, ,
+    $$
+    where $\mathbb{I}$ is the identity matrix of dimensions $n\times n$, and ${\rm det}$ is the determinant.
+
+This relation follows from the eigenvalue equation in terms of components
+$$
+\begin{align}
+\sum_{j=1}^n A_{ij} v_j &= \lambda v_i \, , \\
+\to \quad \sum_{j=1}^n A_{ij} v_j - \sum_{j=1}^n\lambda \delta_{ij} v_j &=0 \, ,\\
+\to \quad \sum_{j=1}^n\left( A_{ij} - \lambda \delta_{ij}\right) v_j &=0 \, .
+\end{align}
+$$
+Therefore, the eigenvalue condition can be written as a set of coupled linear equations
+$$
+\sum_{j=1}^n\left( A_{ij} - \lambda \delta_{ij}\right) v_j =0 \, , \qquad i=1,2,\ldots,n\, ,
+$$
+which only admit non-trivial solutions if the determinant of the matrix $A-\lambda\mathbb{I}$ vanishes
+(the so-called Cramer's condition), thus leading to the characteristic equation.
+
+Once we have solved the characteristic equation, we end up with $n$ eigenvalues $\lambda_k$, $k=1,\ldots,n$.
+  
+We can then determine the corresponding eigenvector
+$$
+\vec{v}_k = \left( \begin{array}{c} v_{k,1}  \\ v_{k,2} \\ \vdots \\ v_{k,n} \end{array} \right) \, ,
+$$
+by solving the corresponding system of linear equations
+$$
+\sum_{j=1}^n\left( A_{ij} - \lambda_k \delta_{ij}\right) v_{k,j} =0 \, , \qquad i=1,2,\ldots,n\, .
+$$
+
+Let us remind ourselves that in $n=2$ dimensions the determinant of a matrix
+is evaluated as
+$$
+{\rm det}\left( A \right) = \left|  \begin{array}{cc} A_{11}  & A_{12} \\ A_{21}  &  A_{22} \end{array} \right|
+= A_{11}A_{22} - A_{12}A_{21} \, ,
+$$
+while the corresponding expression for a matrix acting on a vector
+space in $n=3$ dimensions is given, in terms of $2\times 2$ determinants, as
+$$
+{\rm det}\left( A \right) = \left|  \begin{array}{ccc} A_{11}  & A_{12}  & A_{13} \\ A_{21}  &  A_{22}
+&  A_{23} \\ A_{31}  &  A_{32}
+&  A_{33}  \end{array} \right| = 
+\begin{array}{c} 
++ A_{11} \left|  \begin{array}{cc} A_{22}  & A_{23} \\ A_{32}  &  A_{33} \end{array} \right| \\
+- A_{12} \left|  \begin{array}{cc} A_{21}  & A_{23} \\ A_{31}  &  A_{33} \end{array} \right| \\
++ A_{13} \left|  \begin{array}{cc} A_{21}  & A_{22} \\ A_{31}  &  A_{32} \end{array} \right|
+\end{array}
+$$
+
+!!! check "Example"
+    Let us illustrate how to compute eigenvalues and eigenvectors by considering an $n=2$-dimensional vector space. 
+    
+    Consider the following matrix
+    $$
+    A = \left( \begin{array}{cc} 1  &  2 \\ -1  &  4 \end{array} \right) \, ,
+    $$
+    whose characteristic equation reads
+    $$
+    {\rm det}\left( A-\lambda\cdot I \right)  = \left| \begin{array}{cc} 1-\lambda  &  2 \\ -1  &  4-\lambda \end{array} \right| = (1-\lambda)(4-\lambda)+2 = \lambda^2 -5\lambda + 6=0 \, .
+    $$
+    This is a quadratic equation which we know how to solve exactly; the two eigenvalues are $\lambda_1=3$ and $\lambda_2=2$.
+
+    Next, we can determine the associated eigenvectors $\vec{v}_1$ and $\vec{v}_2$. For the first one, the equation to solve is
+    $$
+    \left( \begin{array}{cc} 1  &  2 \\ -1  &  4 \end{array} \right)
+    \left( \begin{array}{c} v_{1,1}  \\ v_{1,2}  \end{array} \right)=\lambda_1
+    \left( \begin{array}{c} v_{1,1}  \\ v_{1,2}  \end{array} \right) = 3 \left( \begin{array}{c} v_{1,1}  \\ v_{1,2}  \end{array} \right) 
+    $$
+    from which we find the condition $v_{1,1}=v_{1,2}$. 
+    
+    An important property of eigenvalue equations is that the eigenvectors are only fixed up to an *overall normalisation constant*. 
+    
+    This should be clear from its definition: if a vector $\vec{v}$ satisfies $A\vec{v}=\lambda\vec{v} $,
+    then the vector $\vec{v}'=c \vec{v}$ with $c$ some constant will also satisfy the same equation. So then we find that the eigenvalue $\lambda_1$ has an associated eigenvector
+    $$
+    \vec{v}_1 = \left( \begin{array}{c} 1   \\ 1 \end{array} \right) \, ,
+    $$
+    and indeed one can check that
+    $$
+    A\vec{v}_1 = \left( \begin{array}{cc} 1  &  2 \\ -1  &  4 \end{array} \right)
+    \left( \begin{array}{c} 1   \\ 1 \end{array} \right) = \left( \begin{array}{c} 3  \\ 3 \end{array} \right)=
+    3 \vec{v}_1 \, ,
+    $$
+    as we intended to demonstrate.
+
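+As a cross-check of such hand calculations, eigenvalues and eigenvectors can also be computed numerically. A minimal sketch using NumPy (note that `np.linalg.eig` returns unit-normalised eigenvectors as the *columns* of its second output, in no guaranteed order):
+
+```python
+import numpy as np
+
+A = np.array([[1, 2],
+              [-1, 4]])
+
+# eigenvalues and eigenvectors (as columns) of A
+eigenvalues, eigenvectors = np.linalg.eig(A)
+print(eigenvalues)  # 3 and 2, possibly in a different order
+
+# each eigenpair satisfies A v = lambda v up to numerical precision
+for lam, v in zip(eigenvalues, eigenvectors.T):
+    print(np.allclose(A @ v, lam * v))  # True, True
+```
+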
+!!! note "Exercise"
+    As an exercise, try to obtain the expression of the eigenvector
+    corresponding to the second eigenvalue $\lambda_2=2$.
+
+
+##6.2. Eigenvalue equations in quantum mechanics
+
+We can now extend the ideas of eigenvalue equations from linear algebra to the case of quantum mechanics.
+The starting point is the eigenvalue equation for the operator $\hat{A}$,
+$$
+\hat{A}|\psi\rangle= \lambda_{\psi}|\psi\rangle \, ,
+$$
+where the vector state $|\psi\rangle$ is the eigenvector of the equation
+and $ \lambda_{\psi}$ is the corresponding eigenvalue, in general a complex scalar.
+    
+In general this equation will have multiple solutions, which for a Hilbert space $\mathcal{H}$ with $n$ dimensions can be labelled as
+$$
+\hat{A}|\psi_k\rangle= \lambda_{\psi_k}|\psi_k\rangle \, , \quad k =1,\ldots, n \, .
+$$
+  
+In order to determine the eigenvalues and eigenvectors of a given operator $\hat{A}$, we will have to solve the
+corresponding eigenvalue problem for this operator, i.e. what we called above the *characteristic equation*.
+This is most efficiently done in the matrix representation of this operator, where
+the above operator equation can be expressed in terms of its components as
+$$
+\begin{pmatrix} A_{11} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix} \begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix}=  \lambda_{\psi_k}\begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix} \, .
+$$
+
+As discussed above, this condition is identical to solving a set of linear equations
+of the form
+$$
+\begin{pmatrix} A_{11}- \lambda_{\psi_k} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi_k} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi_k} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix}
+\begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix}=0 \, .
+$$
+
+!!! info "Cramer's rule"
+    This set of linear equations only has a non-trivial set of solutions provided that
+    the determinant of the matrix vanishes, as follows from Cramer's condition:
+    $$
+    {\rm det} \begin{pmatrix} A_{11}- \lambda_{\psi} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix}=
+    \left|  \begin{array}{cccc}A_{11}- \lambda_{\psi} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi} & \ldots \\\vdots & \vdots & \vdots & \end{array} \right| = 0
+    $$
+    which in general will have $n$ independent solutions, labelled $\lambda_{\psi,k}$.
+
+Once we have found the $n$ eigenvalues $\{ \lambda_{\psi,k} \} $, we can insert each
+of them in the original eigenvalue equation and determine the components of each of the eigenvectors,
+which we can express as column vectors
+$$
+|\psi_1\rangle = \begin{pmatrix} \psi_{1,1} \\  \psi_{1,2} \\  \psi_{1,3} \\ \vdots \end{pmatrix} \,, \quad
+|\psi_2\rangle = \begin{pmatrix} \psi_{2,1} \\  \psi_{2,2} \\  \psi_{2,3} \\ \vdots \end{pmatrix} \,, \quad \ldots \, , |\psi_n\rangle = \begin{pmatrix} \psi_{n,1} \\  \psi_{n,2} \\  \psi_{n,3} \\ \vdots \end{pmatrix} \, .
+$$
+
+!!! tip "Orthogonality of eigenvectors"
+    An important property of eigenvalue equations is that if you have two eigenvectors
+    $ |\psi_i\rangle$ and $ |\psi_j\rangle$ that have associated *different* eigenvalues,
+    $\lambda_{\psi_i} \ne \lambda_{\psi_j}  $, then these two eigenvectors are orthogonal to each
+    other, that is
+    $$
+    \langle \psi_j | \psi_i\rangle =0 \, \quad {\rm for} \quad {i \ne j} \, .
+    $$
+    This property is extremely important, since it suggests that we could use the eigenvectors
+    of an eigenvalue equation as a *set of basis elements* for this Hilbert space.
+
+Recall from the discussions of eigenvalue equations in linear algebra that
+the eigenvectors $|\psi_i\rangle$ are defined *up to an overall normalisation constant*. Clearly, if $|\psi_i\rangle$ is a solution of $\hat{A}|\psi_i\rangle = \lambda_{\psi_i}|\psi_i\rangle$
+then $c|\psi_i\rangle$ will also be a solution, with $c$ being a constant. In the context of quantum mechanics, we need to choose this overall rescaling constant to ensure that the eigenvectors are normalised, so that they satisfy
+$$
+\langle \psi_i | \psi_i\rangle = 1 \, \quad {\rm for~all}~i \, .
+$$
+With such a choice of normalisation, one says that the eigenvectors form an
+*orthonormal* set: they are mutually orthogonal and each has unit norm.
+
+!!! tip "Eigenvalue spectrum and degeneracy"
+    The set of all eigenvalues of an operator is called the *eigenvalue spectrum* of an operator. Note that different eigenvectors can also have the same eigenvalue. If this is the case the eigenvalue is said to be *degenerate*.
+
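+The orthonormality just discussed can be illustrated numerically. A minimal sketch (assuming NumPy) using `np.linalg.eigh`, which is specialised to Hermitian matrices and returns real eigenvalues together with an orthonormal set of eigenvectors:
+
+```python
+import numpy as np
+
+# an example Hermitian operator in matrix representation (chosen for illustration)
+A = np.array([[2, 1j],
+              [-1j, 2]])
+
+w, V = np.linalg.eigh(A)  # eigenvalues w (real, in ascending order), eigenvector columns V
+print(w)  # [1. 3.]
+
+# the columns of V form an orthonormal set: V^dagger V = identity
+print(np.allclose(V.conj().T @ V, np.eye(2)))  # True
+```
+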
+***
+
+##6.3. Problems
+
+1. *Eigenvalues and eigenvectors I* 
+
+    Find the characteristic polynomial and eigenvalues for each of the following matrices,
+    $$A=\begin{pmatrix} 5&3\\2&10 \end{pmatrix}\,  \quad
+    B=\begin{pmatrix} 7i&-1\\2&6i \end{pmatrix} \, \quad C=\begin{pmatrix} 2&0&-1\\0&3&1\\1&0&4 \end{pmatrix}$$
+
+2. *Hamiltonian*
+
+    The Hamiltonian for a two-state system is given by 
+    $$H=\begin{pmatrix} \omega_1&\omega_2\\  \omega_2&\omega_1\end{pmatrix}$$
+    A basis for this system is 
+    $$|{0}\rangle=\begin{pmatrix}1\\0  \end{pmatrix}\, ,\quad|{1}\rangle=\begin{pmatrix}0\\1  \end{pmatrix}$$
+    Find the eigenvalues and eigenvectors of the Hamiltonian $H$, and express the eigenvectors in terms of $\{|0 \rangle,|1\rangle \}$.
+
+3. *Eigenvalues and eigenvectors II*
+
+    Find the eigenvalues and eigenvectors of the matrices
+    $$A=\begin{pmatrix} -2&-1&-1\\6&3&2\\0&0&1 \end{pmatrix}\, \quad B=\begin{pmatrix} 1&1&2\\2&2&2\\-1&-1&-1 \end{pmatrix} \, .$$
+
+4. *The Hadamard gate*
+
+    In one of the problems of the previous section we discussed that an important operator used in quantum computation is the *Hadamard gate*, which is represented by the matrix:
+    $$\hat{H}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix} \, .$$
+    Determine the eigenvalues and eigenvectors of this operator.
+
+5. *Hermitian matrix*
+
+    Show that the Hermitian matrix
+    $$\begin{pmatrix} 0&0&i\\0&1&0\\-i&0&0 \end{pmatrix}$$
+    has only two distinct real eigenvalues, and find an orthonormal set of three eigenvectors.
+
+6. *Orthogonality of eigenvectors*
+
+    Confirm, by explicit calculation, that the eigenvalues of the real, symmetric matrix
+    $$\begin{pmatrix} 2&1&2\\1&2&2\\2&2&1 \end{pmatrix}$$
+    are real, and its eigenvectors are orthogonal.
diff --git a/7_differential_equations_1.md b/7_differential_equations_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..ea9d3d4429a24455123d18cccaf7cc3b6788453b
--- /dev/null
+++ b/7_differential_equations_1.md
@@ -0,0 +1,931 @@
+---
+title: Differential Equations
+---
+
+#7. Differential equations: Part 1
+
+The first lecture on differential equations consists of three parts, each with a video embedded in the paragraph:
+
+- [7.1. First examples of differential equations](#71-first-examples-of-differential-equations-definitions-and-strategies)
+- [7.2. Theory of systems of differential equations](#72-theory-of-systems-of-differential-equations)
+- [7.3. Solving homogeneous linear system with constant coefficients](#73-solving-homogeneous-linear-system-with-constant-coefficients)
+
+**Total video length: 1 hour 15 minutes 4 seconds**
+
+and at the end of the lecture notes, there is a set of corresponding exercises:
+
+- [7.4. Problems](#74-problems)
+
+***
+
+##7.1. First examples of differential equations: Definitions and strategies
+
+<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/IUr38H4dcWI?rel=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+
+###7.1.1. Definitions
+
+A differential equation or DE is any equation which involves both a function and a
+derivative of that function. In this lecture, we will be focusing on 
+*Ordinary Differential Equations* (ODEs), meaning that our equations will involve 
+functions of one independent variable and hence any derivatives will be full 
+derivatives. Equations which involve a function of several independent variables
+and their partial derivatives are called *Partial Differential Equations* (PDEs); they will 
+be introduced in the follow-up lecture. 
+
+We consider functions $x(t)$ and define $\dot{x}(t)=\frac{dx}{dt}$, 
+$x^{(n)}(t)=\frac{d^{n}x}{dt^{n}}$. An $n$*-th* order differential equation is 
+an equation of the form:
+
+$$x^{(n)}(t) = f(x^{(n-1)}(t), \cdots, x(t), t).$$ 
+
+Typically, $n \leq 2$. Such an equation will usually be presented with a set of 
+initial conditions,
+
+$$x^{(n-1)}(t_{0}) = x^{(n-1)}_{0}, \cdots, x(t_0)=x_0. $$
+
+This is because to fully specify the solution of an $n$*-th* order differential 
+equation, $n$ initial conditions are necessary (we need to specify the values of the first $n-1$ derivatives
+of $x(t)$ as well as the value of the function $x(t)$ itself at some $t_0$). To understand why we need 
+initial conditions, look at the following example.
+
+!!! check "Example: Initial conditions"
+
+    Consider the following calculus problem,
+    
+    $$\dot{x}(t)=t. $$ 
+
+    By integrating, one finds that the solution to this equation is 
+
+    $$x(t) = \frac{1}{2}t^2 + c,$$
+
+    where $c$ is an integration constant. In order to specify the integration 
+    constant, an initial condition is needed. For instance, if we know that 
+    $x(2)=4$, we can plug this into the equation to get 
+
+    $$\frac{1}{2}\cdot 4 + c = 4, $$
+
+    which implies that $c=2$. 
+    
+Essentially, initial conditions are needed when solving differential equations so
+that the unknowns resulting from integration may be determined.
+
+!!! info "Terminology for Differential Equations"
+
+    1. If a differential equation does not explicitly contain the 
+        independent variable $t$, it is called an *autonomous equation*.
+    2. If the largest derivative in a differential equation is of the first order, 
+        i.e. $n=1$, then the equation is called a first order differential 
+        equation.
+    3. Often you will see differential equations presented using $y(x)$ 
+        instead of $x(t)$. This is just a different nomenclature. 
+            
+In this course, we will be focusing on *Linear Differential Equations*, meaning 
+that we consider differential equations $x^{(n)}(t) = f(x^{(n-1)}(t), \cdots, x(t), t)$
+where the function $f$ is a linear polynomial function of the unknown function
+$x(t)$. A simple way to spot a non-linear differential equation is to look for 
+non-linear terms, such as $x(t) \cdot \dot{x}(t)$ or $x^{(n)}(t) \cdot x^{(2)}(t)$. 
+
+Often, we will be dealing with several coupled differential equations. In this 
+situation, we can write the entire system of differential equations as a vector 
+equation, involving a linear operator. For a system of $m$ equations, denote 
+
+$$\vec{x}(t) = \begin{bmatrix}
+x_1(t) \\
+\vdots \\
+x_{m}(t) \\
+\end{bmatrix}.$$
+
+A system of first order linear equations is then written as 
+
+$$\dot{\vec{x}}(t) = \vec{f}(\vec{x}(t),t) $$
+
+with the initial condition $\vec{x}(t_0) = \vec{x}_0$.
+
+###7.1.2. Basic examples and strategies for a (single) first-order differential equation
+
+Before focusing on systems of first order equations, we will first consider 
+exemplary cases of single first-order equations with only one unknown function $x(t)$.
+Here, we can distinguish three important cases.
+
+#### Type 1: $\dot{x}(t) = f(t)$
+
+The simplest type of differential equation is the type usually learned about in the 
+integration portion of a calculus course. Such equations have the form,
+
+$$\dot{x}(t) = f(t). $$
+
+If $F(t)$ is an anti-derivative of $f(t)$, i.e. $\dot{F}=f$, then the solutions
+to this type of equation are 
+
+$$x(t) = F(t) + c. $$
+
+
+!!! info "What is the antiderivative?"
+    
+    You may know the antiderivative $F(t)$ of a function $f(t)$ under a different name -
+    it is the same as the indefinite integral: $F(t) = \int f(t) dt$. Remember that taking
+    an integral is essentially the opposite of differentiation, and indeed taking an integral
+    means finding a function $F(t)$ such that $\dot{F}(t) = \frac{dF}{dt} = \frac{d}{dt}
+    \int f(t) dt = f(t)$. In the context of differential equations we prefer to call this the
+    antiderivative as solving the differential equation means essentially undoing the derivative.
+
+    Note that the antiderivative is only defined up to a constant (as is the indefinite integral).
+    In practice, you will thus find some particular expression for $F(t)$ through integration. To
+    capture all possible solutions, don't forget the integration constant $c$ in the expression
+    above!
+
+!!! check "Example"
+
+    Given the equation
+    
+    $$\dot{x}(t)=t, $$
+    
+    one finds by integrating that the solution is $x(t) = \frac{1}{2}t^2 + c$. 
+    
+
+#### Type 2: $\dot{x}(t) = f(x(t))$
+
+The previous example was easy, as the function $x(t)$ did not enter in the right-hand side.
+A second important case that we can solve explicitly is when the right-hand side is some
+function of $x(t)$:
+    
+$$\dot{x}(t)=f(x(t)).$$ 
+    
+This implies that $\frac{\dot{x}(t)}{f(x)} = 1$. Let $F(x)$ be the 
+anti-derivative of $\frac{1}{f(x)}$. Then, by making use of the chain rule: 
+    
+$$\frac{d}{dt} F(x(t)) = \frac{dx}{dt}\,\frac{dF}{dx} = \frac{\dot{x}(t)}{f(x(t))} = 1$$
+    
+$$\Leftrightarrow F(x(t)) = t + c.$$
+    
+From this, we notice that if we can solve for $x(t)$, then we have the 
+solution! Having a specific form for the function $f(x)$ can often make it 
+possible to solve either implicitly or explicitly for the function $x(t)$.
+
+!!! check "Example"
+
+    Given the equation
+    
+    $$\dot{x} = \lambda x, $$
+    
+    re-write the equation to be in the form 
+    
+    $$\frac{\dot{x}}{\lambda x} = 1.$$
+    
+    Now, applying the same process as shown just above, let $f(x)=\lambda x$ 
+    and $F(x)$ be the anti-derivative of $\frac{1}{f(x)}$. Integrating 
+    allows us to find the form of this anti-derivative. 
+    
+    $$F(x):= \int \frac{dx}{\lambda x} = \frac{1}{\lambda}\log{\lambda x} $$
+    
+    Now, making use of the general solution we also have that $F(x(t)) =t+c$. 
+    These two equations can be combined to form an equation for $x(t)$,
+    
+    $$\log(\lambda x)  = \lambda t + c$$
+    $$x(t) = \frac{1}{\lambda} e^c e^{\lambda t} $$
+    $$x(t) = c_0 e^{\lambda t}$$
+    
+    where in the last line we defined a new constant $c_0 =\frac{1}{\lambda}e^c$.
+    Given an initial condition, we could immediately determine this constant $c_0$.
+    
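+Separable equations of this type can also be solved symbolically, which provides a useful check of the calculation above. A minimal sketch with SymPy's `dsolve`:
+
+```python
+import sympy as sp
+
+t, lam = sp.symbols('t lambda')
+x = sp.Function('x')
+
+# solve x'(t) = lambda * x(t)
+sol = sp.dsolve(sp.Eq(x(t).diff(t), lam * x(t)), x(t))
+print(sol)  # Eq(x(t), C1*exp(lambda*t))
+```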
+
+#### Type 3: $\dot{x}(t) = g(t) f(x(t))$
+
+So far we have considered only DE's where the right-hand side is either a function of $t$
+*or* of $x(t)$. We can still solve a more generic case, if we can separate the two dependencies
+as:
+
+$$\dot{x}(t)=g(t)f(x(t)).$$
+
+This type of differential equation is called a first order differential equation 
+with non-constant coefficients. If $f(x(t))$ is linear in $x$ then it is also 
+said to be a linear equation.  
+
+This equation can be re-written to isolate the coefficient function $g(t)$:
+
+$$\frac{\dot{x}(t)}{f(x(t))} = g(t). $$
+
+Now, define $F(x)$ to be the anti-derivative of $\frac{1}{f(x)}$, and $G(t)$ to
+be the anti-derivative of $g(t)$. Without showing again the use of chain rule on
+the left side of the equation, we have
+
+$$\frac{d}{dt} F(x(t)) = g(t) $$
+$$\Rightarrow F(x(t)) = G(t) + c $$
+
+Given this form of a general solution, the knowledge of specific functions $f, g$ would
+make it possible to solve for $x(t)$. 
+
+!!! check "Example"
+
+    Let us apply the above strategy to the following equation,
+    
+    $$\dot{x}= t x^2 .$$
+    
+    The strategy indicates that we should define $f(x)=x^2$ and $g(t)=t$. 
+    As before, we can re-arrange the equation into the form:
+    
+    $$\frac{\dot{x}}{x^2} = t. $$
+    
+    It is then necessary to find $F(x)$, the anti-derivative of $\frac{1}{f(x)}$,
+    or the left hand side of the above equation, as well as $G(t)$, the 
+    anti-derivative of $g(t)$, or the right hand side of the previous equation.
+    
+    By integrating, one finds
+    
+    $$F(x) = - \frac{1}{x} $$
+    $$G(t)=\frac{1}{2}t^2 + c. $$
+    
+    Accordingly then, the intermediate equation we have is
+    
+    $$- \frac{1}{x} = \frac{1}{2} t^2 + c. $$
+    
+    At this point, it is possible to solve for $x(t)$ by re-arrangement
+    
+    $$x(t)= \frac{-2}{t^2 + c_0}, $$
+    
+    where in the last line we have defined $c_0 = 2c$. Once again, specification
+    of an initial condition would enable determination of $c_0$ directly. To see 
+    this, suppose $x(0) = 2$. By inserting this into the equation for $x(t)$, we get
+    
+    $$2 = \frac{-2}{c_0} $$
+    $$ \Rightarrow c_0 = -1.$$
+    
+    When solved for $c_0$, with the choice of initial condition $x(0)=2$, the 
+    full equation for $x(t)$ becomes 
+    
+    $$x(t)=\frac{-2}{t^2 -1}. $$
+
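+The same example, including the initial condition, can be checked symbolically. A minimal SymPy sketch:
+
+```python
+import sympy as sp
+
+t = sp.symbols('t')
+x = sp.Function('x')
+
+# solve x'(t) = t * x(t)^2 subject to x(0) = 2
+sol = sp.dsolve(sp.Eq(x(t).diff(t), t * x(t)**2), x(t), ics={x(0): 2})
+print(sol)  # Eq(x(t), -2/(t**2 - 1)), possibly written in an equivalent form
+```
+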
+!!! check "Example: First order linear differential equation with general non-constant coefficient function"
+
+    Let us apply the above strategy of dealing with non-constant coefficient functions
+    to the more general equation
+
+    $$\dot{x}= g(t) \cdot x. $$
+
+    This equation suggests that we first define $f(x)=x$ and then find $F(x)$ and 
+    $G(t)$, the anti-derivatives of $\frac{1}{f(x)}$ and $g(t)$, respectively. By doing
+    so, we determine that 
+
+    $$F(x) = \log(x) \, .$$
+
+    Following the protocol, we subsequently arrive at the equation
+
+    $$\log(x) = G(t) + c.$$
+
+    Exponentiating and defining $c_0:=e^c$ delivers the equation for $x(t)$,
+
+    $$x(t)= c_0 e^{G(t)} .$$
+    
+So far, we have only considered first order differential equations. If we consider
+extending the strategies which we have developed to higher order equations such as
+
+$$x^{(2)}(t)=f(x), $$
+
+with $f(x)$ being a linear function, then our work will swiftly become more tedious. Later on,
+we will develop a general theory for linear equations which will enable us to 
+tackle such higher order equations. For now, we move on to considering systems 
+of coupled first order linear DE's. 
+
+##7.2. Theory of systems of differential equations
+
+<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/4VoSMc08nQA?rel=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+
+
+An intuitive presentation of a system of coupled first order differential 
+equations can be given by a phase portrait. Before demonstrating such a portrait,
+let us introduce a useful notation for working with systems of DE's. Several
+coupled DE's can be written down concisely as a single vector equation:
+
+$$\dot{\vec{x}}=\vec{f}(\vec{x}). $$
+
+In such an equation, the vector $\dot{\vec{x}}$ is the rate of change of a vector 
+quantity: for example, the velocity, which is the rate of change of the position 
+vector. The term $\vec{f}(\vec{x})$ describes a vector field, which has one vector 
+per point $\vec{x}$. This type of equation can also be extended to include a time 
+varying vector field, $\vec{f}(\vec{x},t)$. 
+
+In the phase portrait below, the velocities of the cars are determined by 
+the vector field $\vec{f}(\vec{x})$, where their velocity corresponds to the slope of 
+each arrow. The position of each of the little cars is determined by an initial 
+condition. Since the field lines do not cross and the cars begin on different 
+field lines, they will remain on different field lines. 
+
+![image](figures/Phase_portrait_with_cars.png)
+
+!!! info "Properties of a system of 1st order linear DEs"
+    If $\vec{f}(\vec{x})$ is not *crazy*, for example if it is continuous and 
+    differentiable, then it is possible to prove the following two properties for 
+    a system of first order linear DE's
+
+    1. **Existence of solution**: For any specified initial condition, there is a solution.
+    2. **Uniqueness of solution**: Any point $\vec{x}(t)$ is uniquely determined by the
+        initial condition and the equation i.e. we know where each point "came from"
+        $\vec{x}(t'<t)$. 
+
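+A phase portrait like the one above is straightforward to generate numerically. A sketch using Matplotlib's `quiver`, for an assumed example field $\vec{f}(\vec{x}) = (x_2, -x_1)$:
+
+```python
+import numpy as np
+import matplotlib.pyplot as plt
+
+# grid of points in the (x1, x2) plane
+x1, x2 = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))
+
+# example vector field f(x) = (x2, -x1): one arrow per grid point
+f1, f2 = x2, -x1
+
+plt.quiver(x1, x2, f1, f2)
+plt.xlabel('$x_1$')
+plt.ylabel('$x_2$')
+plt.show()
+```
+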
+###7.2.1. Systems of linear first order differential equations
+
+####7.2.1.1. Homogeneous systems
+
+Any homogeneous system of first order linear DE's can be written in the form 
+
+$$\dot{\vec{x}} = A(t) \vec{x} \, ,$$
+
+where $A$ is a linear operator. The system is called homogeneous because it 
+does not contain any additional term which is not dependent on $\vec{x}$ (for 
+example an additive constant or an additional function depending only on $t$). 
+
+!!! info "Linearity of a system of DEs"
+    An important property of such a system is *linearity*, which has the following 
+    implications
+
+    1. If $\vec{x}(t)$ is a solution, then $c \vec{x}(t)$ is a solution too, for any constant $c$.
+    2. If $\vec{x}(t)$ and $\vec{y}(t)$ are both solutions, then so is $a \vec{x}(t)+ b \vec{y}(t)$,
+    where $a$ and $b$ are both constants. 
+
+These properties have special importance for modelling physical systems, due to
+the principle of superposition which is especially important in quantum physics,
+as well as electromagnetism and fluid dynamics. For example, in electromagnetism, 
+when there are four charges arranged in a square acting on a test charge 
+located within the square, it is sufficient to sum the individual forces in 
+order to find the total force. Physically, this is the principle of superposition, 
+and mathematically, superposition is linearity and applies to linear models.
+
+!!! info "General Solution"
+
+    For a system of $n$ linear first order DE's with $n \times n$ linear operator 
+    $A(t)$, the general solution can be written as 
+
+    $$\vec{x}(t) = c_1 \vec{\phi}_1 (t) + c_2 \vec{\phi}_2 (t) + \cdots + c_n \vec{\phi}_n (t),$$
+
+    where $\{\vec{\phi}_1 (t), \vec{\phi}_2(t), \cdots, \vec{\phi}_n (t) \}$ are $n$ independent solutions which form a basis for the solution space, and $c_1, c_2, \cdots c_n$ are constants. 
+
+    $\{\vec{\phi}_1 (t), \vec{\phi}_2(t), \cdots, \vec{\phi}_n (t) \}$ are a basis if and 
+    only if they are linearly independent for fixed $t$:
+
+    $$\det \big{(}\vec{\phi}_1 (t) | \vec{\phi}_2 (t) | \cdots | \vec{\phi}_n (t) \big{)} \neq 0.$$
+
+    If this condition holds for one $t$, it holds for all $t$.
+
+####7.2.1.2. Inhomogeneous systems
+
+Compared to the homogeneous equation, an inhomogeneous equation has an 
+additional term, which may be a function of the independent variable. 
+
+$$ \dot{\vec{x}}(t) = A(t) \vec{x}(t) + \vec{b}(t).$$
+
+!!! info "Relation between a solutions of a homogeneous and inhomogeneous equations" 
+    There is a simple connection between the general solution of an inhomogeneous 
+    equation and the corresponding homogeneous equation. If $\vec{\psi}_1$ and $\vec{\psi}_2$
+    are two solutions of the inhomogeneous equation, then their difference is a 
+    solution of the homogeneous equation 
+
+    $$(\dot{\vec{\psi}_1}-\dot{\vec{\psi}_2}) = A(t) (\vec{\psi}_1 - \vec{\psi}_2). $$
+
+    The general solution of the inhomogeneous equation can be written in terms of 
+    the basis of solutions for the homogeneous equation, plus one particular solution
+    to the inhomogeneous equation,
+
+    $$\vec{x}(t) = \vec{\psi}(t) + c_1 \vec{\phi}_1 (t) + c_2 \vec{\phi}_2 (t) + \cdots + c_n \vec{\phi}_n (t). $$
+
+    In the above equation, $\{\vec{\phi}_1 (t), \vec{\phi}_2(t), \cdots, \vec{\phi}_n (t) \}$
+    form a basis for the solution space of the homogeneous equation and $\vec{\psi}(t)$
+    is a particular solution of the inhomogeneous system. 
+
+!!! tip "Strategy of finding the solution of the inhomogeneous equation"
+    Now we need a strategy for finding the solution of the inhomogeneous equation. 
+    Begin by making an ansatz that $\vec{x}(t)$ can be written as a linear combination 
+    of the basis functions for the homogeneous system, with coefficients that are 
+    functions of the independent variable.
+    
+    1. Ansatz:
+        $$\vec{x}(t) = c_1(t) \vec{\phi}_1 (t)+ c_2(t) \vec{\phi}_2(t) + \cdots + c_n(t) \vec{\phi}_n (t) $$
+
+    2. Define the vector $\vec{c}(t)$ and matrix $\vec{\Phi}(t)$ as
+
+        $$\vec{c}(t) = \begin{bmatrix}
+        c_1(t) \\
+        \vdots \\
+        c_n(t) \\
+        \end{bmatrix} $$
+        $$\vec{\Phi}(t) = \big{(} \vec{\phi}_1 (t) | \cdots | \vec{\phi}_n (t) \big{)} $$
+
+    3. With these definitions, it is possible to re-write the ansatz for $\vec{x}(t)$,
+
+        $$ \vec{x}(t) = \vec{\Phi}(t) \vec{c}(t).$$
+
+    4. Using the Leibniz rule, we then have the following expanded equation,
+
+        $$\dot{\vec{x}}(t) = \dot{\vec{\Phi}}(t) \vec{c}(t) + \vec{\Phi}(t) \dot{\vec{c}}(t).$$
+
+    5. Substituting the new expression into the differential equation gives,
+
+        $$\dot{\vec{\Phi}}(t) \vec{c}(t) + \vec{\Phi}(t) \dot{\vec{c}}(t) = A(t) \vec{\Phi}(t) \vec{c}(t) + \vec{b}(t) $$
+        $$\vec{\Phi}(t) \dot{\vec{c}}(t) = \vec{b}(t). $$
+
+        In order to cancel terms in the previous line, we made use of the fact that $\vec{\Phi}(t)$ solves the homogeneous equation $\dot{\vec{\Phi}} = A \vec{\Phi}$.
+        
+    6. By way of inverting and integrating, we can write the equation for the coefficient vector $\vec{c}(t)$
+
+        $$\vec{c}(t) = \int \vec{\Phi}^{-1}(t) \vec{b}(t) dt.$$
+
+    7. With access to a concrete form of the coefficient vector, we can then write down the particular solution,
+
+        $$\vec{\psi}(t)= \vec{\Phi}(t) \cdot \int \vec{\Phi}^{-1}(t) \vec{b}(t) dt .$$
+
+!!! check "Example: Inhomogeneous first order linear differential equation"
+
+    The technique for solving a system of inhomogeneous equations also works for a 
+    single inhomogeneous equation. Let us apply the technique to the equation
+
+    $$ \dot{x} = \lambda x + a. $$
+
+    In this particular inhomogeneous equation, the inhomogeneous term is the constant $b(t)=a$. As discussed in
+    an earlier example, the solution to the homogeneous equation is
+    $c e^{\lambda t}$. Hence, we define $\phi(t)=e^{\lambda t}$ and make the ansatz
+
+    $$\psi(t) = c(t) e^{\lambda t}. $$
+
+    Solving for $c(t)$ results in 
+
+    $$c(t) = \int e^{- \lambda t} a  dt$$
+    $$c(t) = - \frac{ a }{\lambda} e^{- \lambda t} $$
+
+    Overall then, the solution (which can be easily verified by substitution) is 
+
+    $$\psi(t) = - \frac{a}{\lambda}.  $$ 
+    
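+Combining the homogeneous solution with this particular solution gives the general solution $x(t) = c\, e^{\lambda t} - a/\lambda$, which can again be verified symbolically. A minimal SymPy sketch:
+
+```python
+import sympy as sp
+
+t, lam, a = sp.symbols('t lambda a')
+x = sp.Function('x')
+
+# solve x'(t) = lambda * x(t) + a
+sol = sp.dsolve(sp.Eq(x(t).diff(t), lam * x(t) + a), x(t))
+print(sol)  # Eq(x(t), C1*exp(lambda*t) - a/lambda), up to rewriting
+```
+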
+##7.3. Solving homogeneous linear system with constant coefficients
+
+<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/GGIDjgUpsH8?rel=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+
+The type of equation under consideration in this section looks like 
+
+$$ \dot{\vec{x}}(t) = A \vec{x}(t),$$
+
+where, throughout this section, $A$ will be a constant matrix. It is possible 
+to define a formal solution using the *matrix exponential*, 
+$\vec{x}(t) = e^{A t}\, \vec{x}_0$.
+
+!!! info "Definition: Matrix Exponential"
+
+    Before defining the matrix exponential, recall the definition of the regular 
+    exponential function in terms of Taylor series,
+    
+    $$e^{x} = \overset{\infty}{\underset{n=0}{\Sigma}} \frac{x^n}{n!},$$
+    
+    in which it is agreed that $0!=1$. The matrix exponential is defined in 
+    exactly the same way, only now instead of taking powers of a number or 
+    function, powers of a matrix are calculated with
+    
+    $$e^{A} = \overset{\infty}{\underset{n=0}{\Sigma}} \frac{{A}^n}{n!}.$$
+     
+    It is important to use caution when translating the properties of the normal exponential function over to the matrix exponential, because not all of the regular properties hold generally. In particular,      
+    $$e^{X + Y} \neq e^{X} e^{Y},$$
+    
+    unless it happens that 
+     
+    $$[X, Y] = 0.$$
+     
+    The condition stated on the previous line, $[X, Y] = 0$, is called *commutativity* of $X$ and $Y$, and it is sufficient for the property to hold. Recall that in general, matrices are not commutative so such a condition is only met for particular choices of matrices. The property of *non-commutativity* (what happens when the condition is not met) is of central importance in the mathematical structure of quantum mechanics. *For example, mathematically, non-commutativity is responsible for the Heisenberg uncertainty relations.*
+     
+    On the other hand, one property that does hold, is that $e^{- A t}$ is the inverse of the matrix exponential of $A$. 
+    
+    Furthermore, it is possible to derive the derivative of the matrix exponential by making use of the Taylor series formulation,
+    $$\begin{align}
+    \frac{d}{dt} e^{A t} &= \frac{d}{dt} \overset{\infty}{\underset{n=0}{\Sigma}} \frac{(A t)^n}{n!} \\
+    &= \overset{\infty}{\underset{n=0}{\Sigma}} \frac{1}{n!} \frac{d}{dt} (A t)^n \\
+    &= \overset{\infty}{\underset{n=1}{\Sigma}} \frac{n A}{n!}(A t)^{n-1} \\
+    &= \overset{\infty}{\underset{n=1}{\Sigma}} \frac{A}{(n-1)!}(A t)^{n-1} \\
+    &= \overset{\infty}{\underset{n=0}{\Sigma}} \frac{A}{n!}(A t)^n \\
+    &= A e^{A t}.
+    \end{align}$$
+
+Armed with the matrix exponential and its derivative, 
+$\frac{d}{dt} e^{A t} = A e^{A t}$, it is simple to verify that 
+the matrix exponential solves the differential equation. 
+
+!!! info "Properties of the solution using the matrix exponential:"
+    1. The columns of $e^{A t}$ form a basis for the solution space.
+    2. Accounting for initial conditions, the full solution of the equation is $\vec{x}(t) = e^{A t} {\vec{x}}_{0}$, which satisfies the initial condition $\vec{x}(0) = e^{A 0}{\vec{x}}_0 = \mathbb{I} {\vec{x}}_{0} = {\vec{x}}_{0}$ (here $\mathbb{I}$ is the $n\times n$ identity matrix).
+
+Next, we will discuss how to determine a solution in practice, beyond the 
+formal solution just presented. 
+
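+Numerically, the formal solution $e^{At}\,\vec{x}_0$ can be evaluated directly with a matrix-exponential routine. A sketch using SciPy's `expm`, for the same matrix and initial condition as in the worked example of Case 1 below:
+
+```python
+import numpy as np
+from scipy.linalg import expm
+
+A = np.array([[0.0, -1.0],
+              [1.0, 0.0]])
+x0 = np.array([1.0, 0.0])
+
+t = 1.5
+x_t = expm(A * t) @ x0  # solution of x' = A x at time t
+print(x_t)              # approximately [cos(t), sin(t)]
+```
+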
+### Case 1: $A$ is diagonalizable
+
+For an $n \times n$ matrix $A$, denote the $n$ distinct eigenvectors as $\{\vec{v}_1, \cdots, \vec{v}_n \}$. By definition, the eigenvectors satisfy the equation
+
+$$A \vec{v}_i = \lambda_i \vec{v}_i, \qquad \forall i \in \{1, \cdots, n \}. $$
+
+Here, we consider the case of $n$ distinct eigenvalues, in which case 
+the $n$ eigenvectors are linearly independent and form a basis for $\mathbb{R}^{n}$. 
+
+!!! info "Strategy for finding solution when $A$ is diagonizable"
+
+    1.  To solve the equation $\dot{\vec{x}}(t) = A \vec{x}(t)$, define a set of scalar functions $\{u_{1}(t), \cdots u_{n}(t) \}$ and make the following ansatz:
+        $$\vec{\phi}_{i}(t) = u_{i}(t) \vec{v}_{i}.$$
+    2.  Then, by differentiating,
+        $$\dot{\vec{\phi}_i}(t) = \dot{u_i}(t) \vec{v}_{i}.$$
+
+    3.  The above equation can be combined with the differential equation for 
+        $\vec{\phi}_{i}(t)$, 
+        $$\dot{\vec{\phi}_{i}}(t)=A \vec{\phi}_{i}(t) \, ,$$
+        to derive the following equations,
+
+        $$\dot{u_i}(t) \vec{v}_{i} = A u_{i}(t) \vec{v}_{i}$$
+        $$\dot{u_i}(t) \vec{v}_{i} = u_{i}(t) \lambda_{i} \vec{v}_{i} $$
+        $$\vec{v}_{i} (\dot{u_i}(t) - \lambda_i u_{i}(t)) = 0, $$
+
+        where in the second last line, we make use of the fact that $\vec{v}_i$ is an eigenvector of $A$. 
+    4.  The obtained relation implies that 
+
+        $$\dot{u_i}(t) = \lambda_i u_{i}(t).$$
+
+        This is a simple differential equation, of the type dealt with in the Type 2 example above. 
+    5.  The solution is found to be
+
+        $$u_{i}(t) = c_i e^{\lambda_i t},$$
+
+        with $c_i$ being a constant. 
+    6.  The general solution is found by adding all $n$ of the
+        solutions $\vec{\phi}_{i}(t)$,
+
+        $$\vec{x}(t) = c_{1} e^{\lambda_1 t} \vec{v}_{1} + c_{2} e^{\lambda_2 t} \vec{v}_{2} + \cdots + c_{n} e^{\lambda_n t} \vec{v}_{n}.$$
+
+        and the vectors $\{e^{\lambda_1 t} \vec{v}_{1}, \cdots, e^{\lambda_n t} \vec{v}_{n} \}$
+        form a basis for the solution space since $\det(\vec{v}_1 | \cdots | \vec{v}_n) \neq 0$
+        (the $n$ eigenvectors are linearly independent). 
+
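+The steps of this strategy translate directly into a short numerical routine. A sketch (assuming NumPy) that builds the solution from the eigendecomposition of $A$:
+
+```python
+import numpy as np
+
+def solve_diagonalizable(A, x0, t):
+    """Solve x' = A x with x(0) = x0, assuming A is diagonalizable."""
+    lam, V = np.linalg.eig(A)         # eigenvalues and eigenvector columns
+    c = np.linalg.solve(V, x0)        # coefficients c_i in the eigenbasis
+    return (V * np.exp(lam * t)) @ c  # sum_i c_i e^{lam_i t} v_i
+
+A = np.array([[0.0, -1.0], [1.0, 0.0]])
+x0 = np.array([1.0, 0.0])
+print(solve_diagonalizable(A, x0, np.pi / 2).real)  # approx. [0, 1], i.e. [cos, sin] at t = pi/2
+```
+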
+!!! check "Example: Homogeneous first order linear system with diagonalizable constant coefficient matrix"
+
+    Define the matrix
+    $$A = \begin{bmatrix}
+    0 & -1 \\
+    1 & 0 
+    \end{bmatrix},$$
+    and consider the DE 
+
+    $$\dot{\vec{x}}(t) = A \vec{x}(t), \quad \vec{x}_0 = \begin{bmatrix}
+    1 \\
+    0 
+    \end{bmatrix}. $$
+ 
+    To proceed by following the solution technique, we determine the eigenvalues of 
+    $A$, 
+ 
+    $$\det {\begin{bmatrix} 
+    -\lambda & -1 \\
+    1 & - \lambda \\
+    \end{bmatrix}} = \lambda^2 + 1 = 0. $$
+    
+    By solving the characteristic polynomial, one finds the two eigenvalues 
+    $\lambda_{\pm} = \pm i$. 
+    
+    Focusing first on the positive eigenvalue, we can determine the first 
+    eigenvector,
+    
+    $$\begin{bmatrix} 
+    0 & -1 \\
+    1 & 0 \\
+    \end{bmatrix} \begin{bmatrix}
+    a \\
+    b\\
+    \end{bmatrix} = i \begin{bmatrix}
+    a \\
+    b \\
+    \end{bmatrix}.$$
+
+    A solution to this eigenvector equation is given by $a=1$, $b=-i$, altogether
+    implying that
+    $$\lambda_1=i, \vec{v}_{1} = \begin{bmatrix} 
+    1 \\
+    -i \\
+    \end{bmatrix}.$$
+    
+    As for the second eigenvalue, $\lambda_{2} = -i$, we can solve the analogous
+    eigenvector equation to determine
+    $$\vec{v}_{2} = \begin{bmatrix}
+    1 \\
+    i \\
+    \end{bmatrix}.$$
+    
+    Hence, two independent solutions of the differential equation are:
+    
+    $$\vec{\phi}_{1} = e^{i t}\begin{bmatrix}
+    1 \\
+    -i \\
+    \end{bmatrix}, \vec{\phi}_{2}  = e^{-i t} \begin{bmatrix}
+    1 \\
+    i \\
+    \end{bmatrix}.$$
+ 
+    Before we can obtain the general solution of the equation, we must find coefficients for the linear combination of the two solutions which would satisfy the initial condition. To this end, we must solve:
+    
+    $$c_1 \vec{\phi}_{1}(t) + c_2 \vec{\phi}_{2}(t) = 
+    \begin{bmatrix}
+    1 \\
+    0 \\
+    \end{bmatrix}$$
+    
+    $$\begin{bmatrix}
+    c_1 + c_2 \\
+    -i c_1 + i c_2 \\
+    \end{bmatrix} = \begin{bmatrix} 
+    1 \\
+    0 \\
+    \end{bmatrix}.$$
+    
+    The second row of the vector equation for $c_1, c_2$ implies that $c_1=c_2$. 
+    The first row then implies that $c_1=c_2=\frac{1}{2}$. 
+    
+    Overall then, the general solution of the DE can be summarized as
+    
+    $$\vec{x}(t) = \begin{bmatrix}
+    \frac{1}{2}(e^{i t} + e^{-i t}) \\
+    \frac{1}{2 i}(e^{i t} - e^{-i t}) \\
+    \end{bmatrix} = \begin{bmatrix}
+    \cos(t) \\
+    \sin(t) \\
+    \end{bmatrix}. $$
+
+### Case 2: $A$ ($2 \times 2$) is defective
+
+In this case, we consider the situation where $\det(A- \lambda I)$ 
+has a root $\lambda$ with multiplicity 2, but only one eigenvector $\vec{v}_1$. 
+
+!!! check "Example: Matrix with eigenvalue of multiplicity 2 and only a single eigenvector. (Part 1)"
+    
+    Consider the matrix
+    
+    $$A = \begin{bmatrix}
+    1 & 1 \\
+    0 & 1 \\
+    \end{bmatrix}$$
+    
+    The characteristic polynomial can be found by evaluating
+    
+    $$\det \big{(} \begin{bmatrix}
+    1-\lambda & 1 \\
+    0 & 1-\lambda \\
+    \end{bmatrix} \big{)} = 0$$
+    $$(1-\lambda)^2 = 0$$
+    
+    Hence, the matrix $A$ has the single eigenvalue $\lambda=1$ with multiplicity 2. As for finding an eigenvector, we solve
+    
+    $$\begin{bmatrix} 
+    1 & 1 \\
+    0 & 1 \\
+    \end{bmatrix} \begin{bmatrix} 
+    a \\
+    b \\
+    \end{bmatrix} = \begin{bmatrix} 
+    a \\
+    b \\
+    \end{bmatrix}$$
+    $$\begin{bmatrix} 
+    a+b \\
+    b \\
+    \end{bmatrix} = \begin{bmatrix} 
+    a \\
+    b \\
+    \end{bmatrix}.$$
+    
+    These equations, $a+b=a$ and $b=b$, imply that $b=0$ while $a$ can be chosen arbitrarily, for example as $a=1$. Then, the only eigenvector is
+    
+    $$\vec{v}_1 = \begin{bmatrix} 
+    1 \\
+    0 \\
+    \end{bmatrix}.$$
+    
+What is the problem in this case? Since there are $n$ equations to be solved and an $n \times n$ linear operator $A$, the solution space for the equation requires a basis of $n$ solutions. In this case however, there are $n-1$ eigenvectors, so we cannot use only these eigenvectors in forming a basis for 
+the solution space. 
+
+!!! info "Strategy for finding a solution when $A (2 $ by $ 2)$ is defective"
+
+    1. Suppose that we have a system of $2$ coupled equations, so that $A$ is a $2 \times 2$ matrix, which has eigenvalue $\lambda_1$ with multiplicity $2$. As in the previous section, we can form one solution using the single eigenvector $\vec{v}_1$,
+
+        $$\vec{\phi}_1(t) = e^{\lambda_1 t} \vec{v}_1.$$
+
+    2. To determine the second linearly independent solution, make the following ansatz:
+
+        $$\vec{\phi}_2(t) = t e^{\lambda_1 t} \vec{v}_1 + e^{\lambda_1 t} \vec{v}_2.$$
+
+    3. With this ansatz, it is then necessary to determine an appropriate vector $\vec{v}_2$ such that $\vec{\phi}_2(t)$ is really a solution of this problem. To achieve that, take the derivative of $\vec{\phi}_2(t)$,
+
+        $$\dot{\vec{\phi}_2}(t) = e^{\lambda_1 t} \vec{v}_1 + \lambda_1 t e^{\lambda_1 t} \vec{v}_1 + \lambda_1 e^{\lambda_1 t} \vec{v}_2 $$
+
+    4. Also, write the matrix equation for $\vec{\phi}_2(t)$,
+
+        $$A \vec{\phi}_2(t) = A t e^{\lambda_1 t} \vec{v}_1 + A e^{\lambda_1 t} \vec{v}_2 $$
+        $$A \vec{\phi}_2(t) = \lambda_1 t e^{\lambda_1 t} \vec{v}_1 + A e^{\lambda_1 t}\vec{v}_2$$
+
+    5. Since $\vec{\phi}_2(t)$ must solve the equation $\dot{\vec{\phi}}_2(t) = A \vec{\phi}_2(t)$, we can combine and simplify the previous equations to write 
+
+        $$A \vec{v}_2 - \lambda_1  \vec{v}_2 = \vec{v}_1$$
+        $$(A- \lambda_1 I) \vec{v}_2 = \vec{v}_1 $$
+
+    6. With this condition, it is possible to write the general solution as
+
+        $$\vec{x}(t) = c_1  e^{\lambda_1 t} \vec{v}_1 + c_2(t e^{\lambda_1 t} \vec{v}_1 + e^{\lambda_1 t} \vec{v}_2).$$
+
+!!! check "Example: Continuation of the example with $A$ defective (Part 2)"
+    
+    Now, our task is to apply the condition derived just above in order to solve for $\vec{v}_2$,
+    
+    $$\begin{bmatrix}
+    1-1 & 1 \\
+    0 & 1-1 \\
+    \end{bmatrix} \begin{bmatrix}
+    a \\
+    b \\
+    \end{bmatrix} = \begin{bmatrix}
+    1 \\
+    0 \\
+    \end{bmatrix}$$
+    $$\begin{bmatrix} 
+    b \\
+    0 \\
+    \end{bmatrix} = \begin{bmatrix} 
+    1 \\
+    0 \\
+    \end{bmatrix}$$
+    
+    Hence, $b=1$ and $a$ is undetermined, so may be taken as $a=0$. Then, 
+    $$\vec{v}_{2} = \begin{bmatrix} 0 \\1 \end{bmatrix}.$$ 
+    
+    Overall then, the general solution is 
+    
+    $$\vec{x}(t) = c_1 e^t \begin{bmatrix} 
+    1 \\
+    0 \\
+    \end{bmatrix} + c_2 e^t \big{(} t \begin{bmatrix} 
+    1 \\
+    0 \\ 
+    \end{bmatrix}  + \begin{bmatrix} 
+    0 \\
+    1 \\
+    \end{bmatrix}\big{)}.$$
+
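+The eigenvector and generalised eigenvector of a defective matrix can also be read off from its Jordan form. A minimal SymPy sketch for the example above (the columns of $P$ are $\vec{v}_1$ and $\vec{v}_2$):
+
+```python
+import sympy as sp
+
+A = sp.Matrix([[1, 1],
+               [0, 1]])
+
+P, J = A.jordan_form()
+print(J)  # Matrix([[1, 1], [0, 1]]): a single Jordan block with eigenvalue 1
+print(P)  # columns: the eigenvector v1 and the generalised eigenvector v2
+```
+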
+### Bonus case 3: Higher multiplicity eigenvalues
+
+In this case, we consider the situation where the matrix $A$ has an 
+eigenvalue $\lambda$ with multiplicity $m>2$, and only one eigenvector $\vec{v}$ 
+corresponding to $\lambda$, $(A - \lambda I)\vec{v}=0$. Notice here 
+that $A$ must be at least an $m \times m$ matrix. 
+
+To solve such a situation, we will expand upon the result of the previous 
+section and define the vectors $\vec{v}_2$ through $\vec{v}_{m}$ by
+
+$$(A- \lambda I) \vec{v}_2 = \vec{v}_1$$
+$$\vdots$$
+$$(A- \lambda I) \vec{v}_m = \vec{v}_{m-1}.$$
+
+Then, the subset of the basis of solutions corresponding to eigenvalue $\lambda$
+is formed by the vectors
+
+$$\vec{\phi}_{k}(t) = e^{\lambda t} \big{(} \frac{t^{k-1}}{(k-1)!}\vec{v}_1 + \cdots + t \vec{v}_{k-1} + \vec{v}_{k} \big{)} \quad \forall k \in \{1, \cdots, m \}.$$
+
+To prove this, first take the derivative of $\vec{\phi}_{k}(t)$,
+
+$$\dot{\vec{\phi}}_{k}(t) = \lambda \vec{\phi}_{k}(t) + e^{\lambda t} \big{(} \frac{t^{k-2}}{(k-2)!}\vec{v}_1 + \cdots + \vec{v}_{k-1} \big{)}.$$
+
+Then, for comparison, multiply $\vec{\phi}_k(t)$ by $A$
+
+$$\begin{align} 
+A \vec{\phi}_k (t) &= e^{\lambda t} \big{(} \frac{t^{k-1}}{(k-1)!}\lambda \vec{v}_1 + \frac{t^{k-2}}{(k-2)!} A \vec{v}_2 + \cdots + A \vec{v}_{k-1} + A \vec{v}_k \big{)}\\
+&= \lambda \vec{\phi}_k (t) + e^{\lambda t} \big{(} \frac{t^{k-2}}{(k-2)!}(A- \lambda I)\vec{v}_2 + \cdots + t (A- \lambda I)\vec{v}_{k-1} + (A- \lambda I)\vec{v}_{k}  \big{)}\\
+&=  \lambda \vec{\phi}_k (t) + e^{\lambda t} \big{(} \frac{t^{k-2}}{(k-2)!} \vec{v}_1 + \cdots + t \vec{v}_{k-2} + \vec{v}_{k-1} \big{)}\\
+&= \dot{\vec{\phi}}_{k}(t).
+\end{align}$$
+
+Notice that in the second last line we made use of the relations 
+$(A- \lambda I)\vec{v}_{i} = \vec{v}_{i-1}$. 
+
+This completes the proof since we have demonstrated that $\vec{\phi}_{k}(t)$ is a solution of the DE.
+
+*** 
+##7.4. Problems
+
+1. [:grinning:] Solve:
+
+    (a)  $\dot{x}(t) = t^4$
+
+    (b)  $\dot{x}(t) = \sin(t)$
+
+2. [:grinning:] Solve, subject to the initial condition $x(0)=\frac{1}{2}$:
+
+    (a) $\dot{x}(t) = x^2$
+
+    (b) $\dot{x}(t) = t x$
+
+    (c) $\dot{x}(t) = t x^{4}$
+
+3. [:smirk:] Solve, subject to the given initial condition:
+
+    (a) $\dot{x}(t)=-\tan(x)\sin(x)$, subject to $x(0)=1$. 
+
+    (b) $\dot{x}(t)=\frac{1}{3} x^2+3$, subject to $x(0)=3$.
+
+    Hint: it is fine if you use a computer algebra program to solve the integrals for these problems.
+
+4. [:smirk:] Solve the following equation and list all possible solutions:
+
+    $$\dot{x}=\cos^2(x)$$
+
+    Hint: $\int \frac{1}{\cos^2(x)} dx = \tan(x) $
+      
+5. [:grinning:] Identify which of the following systems of equations is linear.
+
+    *Note that you do not need to solve them!*  
+
+    (a) $$\dot{x_1}= t x_1 -t x_2$$
+        $$\dot{x}_2 = x_1 x_2 - x_2$$
+
+    (b) $$\dot{x}_1 = e^{-t}x_1$$
+        $$\dot{x}_2 = \sqrt{t + \cos(t)-1}x_1 + \frac{\sin(t)}{t^2+t-1}x_2$$
+
+    (c) $$x^{(2)}_1 x_1 + \dot{x}_1 = 8 x_2$$
+        $$\dot{x}_2=5tx_2 + x_1$$
+
+6. [:grinning:] Take the system of equations:
+
+    $$\dot{x}_1 = \frac{1}{2} (t-1)x_1 + \frac{1}{2} (t+1)x_2$$
+
+    $$\dot{x}_2 = \frac{1}{2}(t+1)x_1 + \frac{1}{2}(t-1)x_2.$$
+
+    Show that 
+
+    $$\vec{\Phi}_1(t) = \begin{bmatrix} 
+    e^{- t} \\
+    -e^{- t} \\
+    \end{bmatrix}$$
+    and
+    $$\vec{\Phi}_2(t)=\begin{bmatrix}
+    e^{\frac{1}{2}(t^2)} \\
+    e^{\frac{1}{2}(t^2)} \\
+    \end{bmatrix}$$
+
+    constitute a basis for the solution space of this system of equations. 
+    To this end, first verify that they are indeed solutions and then that 
+    they form a basis. 
+
+7. [:grinning:] Take the system of equations: 
+
+    $$\dot{x}_1=x_1$$
+
+    $$\dot{x}_2=x_1.$$
+
+    Re-write this system of equations into the general form
+
+    $$\dot{\vec{x}} = A \vec{x}$$
+
+    and then find the general solution. Specify the general solution for the 
+    following initial conditions
+
+    (a) $$\vec{x}(0) = \begin{bmatrix} 
+        1 \\
+        0 \\
+        \end{bmatrix}$$
+
+    (b) $$\vec{x}(0) = \begin{bmatrix}
+        0 \\
+        1 \\ 
+        \end{bmatrix}$$
+
+8. [:smirk:] Find the general solution of 
+
+    $$\begin{bmatrix}
+    \dot{x}_1 \\
+    \dot{x}_2 \\
+    \dot{x}_3 \\
+    \end{bmatrix} = \begin{bmatrix} 
+    1 & 1 & 0 \\
+    1 & 1 & 0 \\
+    0 & 0 & 3 \\
+    \end{bmatrix} \begin{bmatrix} 
+    x_1 \\
+    x_2 \\
+    x_3 \\
+    \end{bmatrix}.$$
+
+    Then, specify the solution for the initial conditions 
+
+    (a) $$\begin{bmatrix} 
+        0 \\
+        0 \\
+        1 \\
+        \end{bmatrix}$$
+
+    (b) $$\begin{bmatrix}
+        1 \\
+        0 \\
+        0 \\
+        \end{bmatrix}$$
+
+9. [:sweat:] Find the general solution of the system of equations:
+
+    $$\dot{x}_1 = 3 x_1 + x_2$$
+    $$\dot{x}_2 = - x_1 + x_2$$  
+
+ 
+
diff --git a/8_differential_equations_2.md b/8_differential_equations_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..33cae3e72fb2b1bb86e08a1b3773b2cb8296b4aa
--- /dev/null
+++ b/8_differential_equations_2.md
@@ -0,0 +1,632 @@
+---
+title: Differential Equations 2
+---
+
+#8. Differential equations: Part 2
+
+The second lecture on differential equations consists of three parts, each with their own video:
+
+- [8.1. Higher order linear differential equations](#81-higher-order-linear-differential-equations)
+- [8.2. Partial differential equations: Separation of variables](#82-partial-differential-equations-separation-of-variables)
+- [8.3. Self-adjoint differential operators](#83-self-adjoint-differential-operators)
+
+**Total video length: 1 hour 9 minutes**
+
+and at the end of the lecture notes, there is a set of corresponding exercises:
+
+- [8.4. Problems](#84-problems)
+
+***
+
+##8.1. Higher order linear differential equations
+
+<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/ucvIiLgJ2i0?rel=0" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+
+###8.1.1 Definitions
+
+In the previous lecture, we focused on first order linear differential equations
+and systems of such equations. In this lecture, we switch focus to DE's 
+which involve higher derivatives of the function that we would like to solve for. To
+facilitate this shift, we are going to change notation. 
+
+!!! warning "Change of notation"
+    In the previous lecture, we wrote differential equations for $x(t)$. In this lecture we will write DE's 
+    of $y(x)$, where $y$ is the unknown function and $x$ is the independent variable. 
+    For this purpose, we make the following definitions,
+
+    $$y' = \frac{dy}{dx}, \ y'' = \frac{d^2 y}{dx^2}, \ \cdots, \ y^{(n)} = \frac{d^n y}{dx^n}.$$
+
+    In the new notation, a linear $n$-th order differential equation with constant
+    coefficients reads 
+
+    $$y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0. $$
+
+!!! info "Linear combination of solutions are still solutions"
+
+    Note that, like it was the case for first order linear DE's, the property of 
+    linearity once again means that if $y_{1}(x)$ and $y_{2}(x)$ are both 
+    solutions, and $a$ and $b$ are constants, 
+    
+    $$a y_{1}(x) + b y_{2}(x)$$
+    
+    then a linear combination of the solutions is also a solution.
+
+###8.1.2 Mapping to a linear system of first-order DEs
+
+In order to solve a higher order linear DE, we will present a trick that makes it
+possible to map the problem of solving a single $n$-th order linear DE into a
+related problem of solving a system of $n$ first order linear DE's. 
+
+To begin, define:
+
+$$y_{1} = y, \ y_{2} = y', \ \cdots, \ y_{n} = y^{(n-1)}.$$
+
+Then, the differential equation can be re-written as
+
+$$\begin{split}
+y_1 ' & = y_2 \\
+y_2 ' & = y_3 \\
+& \vdots \\
+y_{n-1} '& = y_{n} \\
+y_{n} ' & = - a_{0} y_{1} - a_{1} y_{2} - \cdots - a_{n-1} y_{n}.
+\end{split}$$
+
+Notice that these $n$ equations together form a linear first order system, of which the 
+first $n-1$ equations are trivial. Note that this trick can be used to 
+reduce any system of $n$-th order linear DE's to a larger system of first order 
+linear DE's. 
+
+Since we already discussed the method of solution for first order linear 
+systems, we will outline the general solution to this system. As before, the 
+general solution will be the linear combination of $n$ linearly independent 
+solutions $f_{i}(x)$, $i \in \{1, \cdots, n \}$, which make up a basis for 
+the solution space. Thus, the general solution has the form
+
+$$y(x) = c_1 f_1 (x) + c_2 f_2 (x) + \cdots + c_n f_{n}(x). $$
+
+!!! info "Wronskian"
+    To check that the $n$ solutions form a basis, it is sufficient to verify
+
+    $$ \det \begin{bmatrix} 
+    f_1(x) & \cdots & f_{n}(x) \\
+    f_1 ' (x) & \cdots & f_{n}'(x) \\
+    \vdots & \vdots & \vdots \\
+    f^{(n-1)}_{1} (x) & \cdots & f^{(n-1)}_{n} (x) \\
+    \end{bmatrix}  \neq 0.$$
+
+    The determinant in the preceding line is called the *Wronskian* or *Wronski determinant*.
+
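+As a quick illustration (a sketch assuming SymPy; the functions $\sin(kx)$ and $\cos(kx)$ anticipate the example worked out below), the Wronskian can be evaluated symbolically:
+
+```python
+import sympy as sp
+
+x, k = sp.symbols('x k', positive=True)
+f = [sp.sin(k * x), sp.cos(k * x)]
+
+# Wronski matrix: row i contains the i-th derivatives of the candidate basis.
+W = sp.Matrix([[sp.diff(fj, x, i) for fj in f] for i in range(len(f))])
+print(sp.simplify(W.det()))  # -k: nonzero, so sin(kx) and cos(kx) form a basis
+```
+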
+###8.1.3. General solution
+
+To determine particular solutions, we need to find the eigenvalues of 
+
+$$A = \begin{bmatrix} 
+0 & 1 & 0 & \cdots & 0 \\
+0 & 0 & 1 & \cdots & 0 \\
+\vdots & \vdots & \vdots & \cdots & \vdots \\
+0 & 0 & 0 & \cdots & 1 \\
+-a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \\
+\end{bmatrix}.$$
+
+It is possible to show that 
+
+$$\det(A - \lambda I) = (-1)^n P(\lambda),$$
+
+in which $P(\lambda)$ is the characteristic polynomial of the system matrix $A$,
+
+$$P(\lambda) = \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_0.$$
+
+
+??? info "Proof of $\det(A - \lambda I) = -P(\lambda)$"
+
+    As we demonstrate below, the proof relies on the co-factor expansion
+    technique for calculating a determinant. 
+
+    $$\begin{align} (-1)^n \det(A - \lambda I) &=  \det \begin{bmatrix} 
+    \lambda & -1 & 0 & \cdots & 0 \\
+    0 & \lambda & -1 & \cdots & 0 \\
+    \vdots & \vdots & \vdots & \cdots & \vdots \\
+    a_0 & a_1 & a_2 & \cdots & a_{n-1} + \lambda \\
+    \end{bmatrix} \\
+    &= \lambda \det \begin{bmatrix}
+    \lambda & -1 & 0 & \cdots & 0 \\
+    0 & \lambda & -1 & \cdots & 0 \\
+    \vdots & \vdots & \vdots & \cdots & \vdots \\
+    a_1 & a_2 & a_3 & \cdots & a_{n-1} + \lambda \\
+    \end{bmatrix} + (-1)^{n+1}a_0 \det \begin{bmatrix} 
+    -1 & 0 & 0 & \cdots & 0 \\
+    \lambda & -1 & 0 & \cdots & 0 \\
+    \vdots & \vdots & \vdots & \cdots & \vdots \\
+    0 & 0 & \cdots & \lambda & -1 \\
+    \end{bmatrix} \\
+    &= \lambda \det \begin{bmatrix}
+    \lambda & -1 & 0 & \cdots & 0 \\
+    0 & \lambda & -1 & \cdots & 0 \\
+    \vdots & \vdots & \vdots & \cdots & \vdots \\
+    a_1 & a_2 & a_3 & \cdots & a_{n-1} + \lambda \\
+    \end{bmatrix} + (-1)^{n+1} a_0 (-1)^{n-1} \\
+    &= \lambda \det \begin{bmatrix}
+    \lambda & -1 & 0 & \cdots & 0 \\
+    0 & \lambda & -1 & \cdots & 0 \\
+    \vdots & \vdots & \vdots & \cdots & \vdots \\
+    a_1 & a_2 & a_3 & \cdots & a_{n-1} + \lambda \\
+    \end{bmatrix} + a_0 \\
+    &= \lambda (\lambda (\lambda \cdots + a_2) + a_1) + a_0 \\
+    &= P(\lambda).
+    \end{align}$$
+    
+    In the second last line of the proof, we indicated that the method of
+    co-factor expansion demonstrated above is repeated an additional $n-2$ times.
+    This completes the proof. 
+
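+The sign convention can be checked quickly for a small case (a sketch assuming SymPy, here with $n=3$; not part of the lecture material):
+
+```python
+import sympy as sp
+
+lam, a0, a1, a2 = sp.symbols('lambda a0 a1 a2')
+
+# Companion matrix of y''' + a2*y'' + a1*y' + a0*y = 0 (n = 3).
+A = sp.Matrix([[0, 1, 0],
+               [0, 0, 1],
+               [-a0, -a1, -a2]])
+
+det = (A - lam * sp.eye(3)).det()
+P = lam**3 + a2 * lam**2 + a1 * lam + a0
+print(sp.expand(det - (-1)**3 * P))  # 0, confirming det(A - lambda*I) = (-1)^n P(lambda)
+```
+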
+With the characteristic polynomial, it is possible to write the differential 
+equation as 
+
+$$P(\frac{d}{dx})y(x) = 0.$$
+
+To determine solutions, we need to find $\lambda_i$ such that $P(\lambda_i) = 0$. 
+By the fundamental theorem of algebra, we know that $P(\lambda)$ can be written 
+as
+
+$$P(\lambda) = \overset{l}{\underset{k=1}{\prod}} (\lambda - \lambda_k)^{m_k}.$$
+
+In the previous equation, the $\lambda_k$ are the $l$ distinct roots of the polynomial, and $m_k$
+is the multiplicity of each root. Note that the multiplicities satisfy 
+$\overset{l}{\underset{k=1}{\Sigma}} m_k = n$. 
+
+If the multiplicity of each eigenvalue is one, the solutions which form the 
+basis are given by
+
+$$f_{1}(x) = e^{\lambda_1 x}, \ f_{2}(x) = e^{\lambda_2 x}, \ \cdots, \ f_{n}(x) = e^{\lambda_n x}.$$
+
+If there are eigenvalues with multiplicity greater than one, the solutions
+which form the basis are given by
+
+$$e^{\lambda_1 x}, \ x e^{\lambda_1 x} , \ \cdots, \ x^{m_{1}-1} e^{\lambda_1 x}, \ \text{etc.}$$
+
+??? info "Proof that basis solutions to $P(\frac{d}{dx})y(x) = 0$ are given by $f_{k}(x) = x^{m_{k}-1} e^{\lambda_k x}$"
+
+    In order to prove that the basis solutions of the differential equation, rewritten using the characteristic polynomial as 
+    $$P(\frac{d}{dx})y(x) = 0 \, ,$$
+    are given by the general formula
+    $$f_{k}(x) = x^{m_{k}-1} e^{\lambda_k x} \, ,$$ 
+    which takes into account the multiplicity of each eigenvalue, let us first recollect some definitions:
+
+    1. A linear $n$-th order differential equation with constant coefficients reads 
+        $$y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0. $$
+
+    2. The general solution will be a linear combination of $n$ linearly independent solutions $f_{i}(x)$, $i \in \{1, \cdots, n \}$, which make up a basis for the solution space. Thus, the general solution has the form:
+        $$y(x) = c_1 f_1 (x) + c_2 f_2 (x) + \cdots + c_n f_{n}(x). $$
+            
+    3.  The key to finding a suitable basis is to rewrite the DE acting on each basis solution, using the characteristic polynomial with the differential operator as its argument:
+        $$P(\frac{d}{dx})f_{k}(x) = 0$$
+        and thus, in the general form using the fundamental theorem of algebra: 
+        $$ P(\frac{d}{dx}) f_{k}(x) = \Biggl( \overset{l}{\underset{k=1}{\prod}} \left(\frac{d}{dx} - \lambda_k \right)^{m_k} \Biggr) f_{k}(x) = 0 \, .$$
+
+    4. The solutions to this equation are given as:
+        $$f_{k}(x) = e^{\lambda_k x} \qquad (1 \leq k \leq l \leq n) \, ,$$
+        and for each eigenvalue $\lambda_{k}$ with multiplicity greater than one, $m_k>1$, there is a subset of $m_k$ solutions corresponding to that eigenvalue:
+        $$e^{\lambda_k x}, \ x e^{\lambda_k x} , \ \cdots, \ x^{m_{k}-1} e^{\lambda_k x}.$$
+        These solve the differential equation above in the general form: 
+        $$ P(\frac{d}{dx}) x^{m_{k}-1} e^{\lambda_k x} = \Biggl( \overset{l}{\underset{k=1}{\prod}} \left(\frac{d}{dx} - \lambda_k \right)^{m_k} \Biggr) x^{m_{k}-1} e^{\lambda_k x} = 0 \, .$$
+    
+    5.  The solutions given above form a basis if their Wronskian is non-zero (for solutions of a linear DE with continuous coefficients, the Wronskian is either identically zero or nowhere zero, so it is enough to check this at a single point):
+        $$ \det \begin{bmatrix} 
+        f_1(x) & \cdots & f_{n}(x) \\
+        f_1 ' (x) & \cdots & f_{n}'(x) \\
+        \vdots & \vdots & \vdots \\
+        f^{(n-1)}_{1} (x) & \cdots & f^{(n-1)}_{n} (x) \\
+        \end{bmatrix}  \neq 0 \, ,$$
+        and correspondingly, if any eigenvalue has a multiplicity higher than one: 
+        $$ \det \begin{bmatrix} 
+        f_1(x) & \cdots & f_{k}(x) &x f_{k}(x) & \cdots & x^{m_{k}-1} f_{k}(x)& \cdots & f_{l}(x) \\
+        f_1 ' (x) & \cdots & f_{k}(x)' &[x f_{k}(x)]' & \cdots & [x^{m_{k}-1} f_{k}(x)]' & \cdots & f_{l}'(x) \\
+        \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
+        f^{(n-1)}_{1} (x) & \cdots & f^{(n-1)}_{k}(x) &[x f_{k}(x)]^{(n-1)} & \cdots & [x^{m_{k}-1} f_{k}(x)]^{(n-1)}& \cdots & f^{(n-1)}_{l} (x) \\
+        \end{bmatrix}  \neq 0 \, .$$
+        Computation of the Wronskian can quickly become a tedious task in general. In this case, however, we can easily observe that the basis functions are linearly independent, because it is not possible to obtain any of the solutions from a linear combination of the others!
+        
+        For example, $x e^{\lambda_{1}x}$ cannot be obtained from $x^2 e^{\lambda_{1}x}, \, e^{\lambda_{1}x}, \, x e^{\lambda_{2}x}, \cdots \, .$
+
+
+!!! check "Example: Second order homogeneous linear DE with constant coefficients"
+
+    Consider the equation 
+    
+    $$y'' + Ey = 0.$$ 
+    
+    The characteristic polynomial of this equation is 
+    
+    $$P(\lambda) = \lambda^2 + E.$$
+    
+    There are three cases for the possible solutions, depending upon the value 
+    of $E$.
+    
+    **Case 1: $E>0$**
+    For ease of notation, define $E=k^2$ for some constant $k$. The 
+    characteristic polynomial can then be factored as
+    
+    $$P(\lambda) = (\lambda+ i k)(\lambda - i k). $$
+    
+    Following our formulation for the solution, the two basis functions for the 
+    solution space are 
+    
+    $$f_1(x) = e^{i k x}, \ f_2(x)=e^{- i k x}.$$
+    
+    Alternatively, the trigonometric functions can serve as basis functions, 
+    since they are linear combinations of $f_1$ and $f_2$ which remain linearly
+    independent,
+    
+    $$\tilde{f_1}(x)=\cos(kx), \tilde{f_2}(x)=\sin(kx).$$
+    
+    **Case 2: $E<0$**
+    This time, define $E=-k^2$, for constant $k$. The characteristic polynomial 
+    can then be factored as 
+    
+    $$P(\lambda) = (\lambda+ k)(\lambda -  k).$$
+
+    The two basis functions for this solution are then 
+    
+    $$f_1(x)=e^{k x}, \ f_2(x) = e^{-k x}.$$
+    
+    **Case 3: $E=0$**
+    In this case, there is a repeated eigenvalue (equal to $0$), since the 
+    characteristic polynomial reads
+    
+    $$P(\lambda) = (\lambda-0)^2.$$
+    
+    Hence, the basis functions for the solution space read 
+    
+    $$f_1(x)=e^{0 x} = 1, \ f_{2}(x) = x e^{0 x} = x. $$
+
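+The three cases can also be reproduced with a computer algebra system (a minimal sketch assuming SymPy; the values $E = 4, -4, 0$ are arbitrary representatives of each case):
+
+```python
+import sympy as sp
+
+x = sp.symbols('x')
+y = sp.Function('y')
+
+for E in (4, -4, 0):  # one representative value for each of the three cases
+    sol = sp.dsolve(sp.Eq(y(x).diff(x, 2) + E * y(x), 0), y(x))
+    print(E, sol.rhs)
+
+# E = 4:  C1*sin(2*x) + C2*cos(2*x)   (oscillating basis, Case 1)
+# E = -4: C1*exp(-2*x) + C2*exp(2*x)  (growing/decaying basis, Case 2)
+# E = 0:  C1 + C2*x                   (repeated eigenvalue, Case 3)
+```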
+
+##8.2. Partial differential equations: Separation of variables
+
+<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/I4ghpYsFLFY?rel=0" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+
+###8.2.1. Definitions and examples
+
+A partial differential equation (PDE) is an equation involving a function of two or 
+more independent variables and derivatives of said function. These equations
+are classified similarly to ordinary differential equations (the subject of
+our earlier study). For example, they are called linear if no terms such as
+
+$$\frac{\partial y(x,t)}{\partial x} \cdot \frac{\partial y(x,t)}{\partial t} \quad \text{or} \quad \frac{\partial^2 y(x,t)}{\partial x^2} \, y(x,t)$$ 
+
+occur. A PDE can be classified as $n$-th order according to the highest 
+derivative order of either variable occurring in the equation. For example, the 
+equation
+
+$$\frac{\partial^3 f(x,t)}{\partial x^3} + \frac{\partial f(x,t)}{\partial t} = 5$$
+
+is a $3^{rd}$ order equation because of the third derivative with respect to $x$
+in the equation.
+
+To provide some context, let us stress that PDEs are of fundamental importance in physics, 
+especially in quantum physics. In particular, the Schrödinger equation, 
+which is of central importance in quantum physics, is a partial differential 
+equation with respect to time and space. This equation is essential 
+because it describes the evolution in time and space of the entire description
+of a quantum system $\psi(x,t)$, which is known as the wave function. 
+
+For a free particle in one dimension, the Schrödinger equation is 
+
+$$i \hbar \frac{\partial \psi(x,t)}{\partial t} = - \frac{\hbar^2}{2m} \frac{\partial^2 \psi(x,t)}{\partial x^2}. $$
+
+When we studied ODEs, an initial condition was necessary in order to fully 
+specify a solution. Similarly, in the study of PDEs an initial condition is 
+required, but now boundary conditions are needed as well. Going back to the 
+intuitive discussion from the lecture on ODEs, each of these conditions is 
+necessary in order to specify an integration constant that occurs in solving 
+the equation. In partial differential equations at least one such constant will
+arise from the time derivative and likewise at least one from the spatial 
+derivative. 
+
+For the Schrödinger equation, we could supply the initial condition
+$$\psi(x,0)=\psi_0(x)$$
+together with the boundary conditions
+$$\psi(0,t) = \psi(L, t) = 0 \, .$$
+
+This particular set of boundary conditions corresponds to a particle in a box,
+a situation which is used as the base model for many derivations in quantum 
+physics. 
+
+Another example of a partial differential equation common in physics is the 
+Laplace equation
+
+$$\frac{\partial^2 \phi(x,y)}{\partial x^2}+\frac{\partial^2 \phi(x,y)}{\partial y^2}=0.$$
+
+In quantum physics, Laplace's equation is important for the study of the hydrogen
+atom. In three dimensions and using spherical coordinates, the solutions to 
+Laplace's equation are special functions called spherical harmonics. In the 
+context of the hydrogen atom, these functions describe the wave function of the 
+system and a unique spherical harmonic function corresponds to each distinct set
+of quantum numbers.
+
+In the study of PDEs, there are no comprehensive treatment methods to the same 
+extent as there are for ODEs. There are several techniques which can be applied 
+to solving these equations and the choice of technique must be tailored to the
+equation at hand. Hence, we focus on some specific examples that are common in
+physics.
+
+###8.2.2. Separation of variables
+
+Let us focus on the one-dimensional Schrödinger equation of a free particle:
+
+$$i \hbar \frac{\partial \psi(x,t)}{\partial t} = - \frac{\hbar^2}{2m} \frac{\partial^2 \psi(x,t)}{\partial x^2}. $$
+
+To attempt a solution, we will make a *separation ansatz*,
+
+$$\psi(x,t)=\phi(x) f(t).$$
+
+!!! info "Separation ansatz"
+    The separation ansatz is a restrictive ansatz, not a fully general one. In
+    general, for such a treatment to be valid, an equation and the boundary 
+    conditions given with it have to fulfill certain properties. In this course
+    however, you will only be asked to use this technique when it is suitable.
+    
+!!! info "General procedure for the separation of variables:"
+
+    1. Substituting the separation ansatz into the PDE,
+
+        $$i \hbar \frac{\partial \phi(x)f(t)}{\partial t} = - \frac{\hbar^2}{2m} \frac{\partial^2 \phi(x)f(t)}{\partial x^2} $$
+        $$i \hbar \dot{f}(t) \phi(x) = - \frac{\hbar^2}{2m} \phi''(x)f(t). $$
+
+        Notice that in the above equation the derivatives on $f$ and $\phi$ can each be
+        written as ordinary derivatives, $\dot{f}=\frac{df(t)}{dt}$, 
+        $\phi''(x)=\frac{d^2 \phi}{dx^2}$. This is so because each one is a function of 
+        only one variable. 
+
+    2. Next, divide both sides of the equation by $\psi(x,t)=\phi(x) f(t)$,
+
+        $$i \hbar \frac{\dot{f}(t)}{f(t)} = - \frac{\hbar^2}{2m} \frac{\phi''(x)}{\phi(x)} = \text{constant} := \lambda. $$
+
+        In the previous line we concluded that each part of the equation must be equal 
+        to a constant, which we defined as $\lambda$. This follows because the left hand
+        side of the equation only depends on the time coordinate $t$, whereas 
+        the right hand side only depends on the spatial coordinate $x$. If we have 
+        two functions $a(x)$ and $b(t)$ such that 
+        $a(x)=b(t) \quad \forall x, t  \in \mathbb{R}$, then $a(x)=b(t)=\text{const}$.
+    3. The constant we defined, $\lambda$, is called a *separation constant*. With it, 
+        we can break the spatial and time dependent parts of the equation into two separate equations,
+
+        $$i \hbar \dot{f}(t) = \lambda f(t)$$
+
+        $$-\frac{\hbar^2}{2m} \phi''(x) = \lambda \phi(x) .$$
+
+    To summarize, this process has broken one partial differential equation into two
+    ordinary differential equations of different variables. In order to do this, we 
+    needed to introduce a separation constant, which remains to be determined.
+
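+As a consistency check of the separation procedure (a sketch assuming SymPy, and anticipating the spatial solution $\phi(x)=\sin(kx)$ found in the next subsection), one can verify symbolically that a separated product with matching separation constant solves the original PDE:
+
+```python
+import sympy as sp
+
+x, t, k, m, hbar = sp.symbols('x t k m hbar', positive=True)
+
+lam = hbar**2 * k**2 / (2 * m)  # separation constant fixed by the spatial equation
+psi = sp.sin(k * x) * sp.exp(-sp.I * lam * t / hbar)  # phi(x) * f(t)
+
+lhs = sp.I * hbar * sp.diff(psi, t)            # i*hbar d(psi)/dt
+rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)  # -(hbar^2/2m) d^2(psi)/dx^2
+print(sp.simplify(lhs - rhs))  # 0: the separated product solves the PDE
+```
+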
+###8.2.3. Boundary and eigenvalue problems
+
+Continuing on with the Schrödinger equation example from the previous 
+section, let us focus on the spatial part
+
+$$-\frac{\hbar^2}{2m} \phi''(x) = \lambda \phi(x),$$
+$$\phi(0)=\phi(L)=0.$$
+
+This has the form of an eigenvalue equation, in which $\lambda$ is the 
+eigenvalue, $- \frac{\hbar^2}{2m} \frac{d^2}{dx^2}[\cdot]$ is the linear 
+operator and $\phi(x)$ is the eigenfunction. 
+
+Notice that this ordinary differential equation is specified 
+along with its boundary conditions. Note that in contrast to an initial value
+problem, a boundary value problem does not always have a solution. For example, 
+in the figure below, regardless of the initial slope, the curves never reach $0$
+when $x=L$. 
+
+![image](figures/DE2_1.png)
+
+For boundary value problems like this, there are only solutions for particular 
+eigenvalues $\lambda$. Coming back to the example, it turns out that solutions
+only exist for $\lambda>0$. 
+
+*This can be shown quickly, feel free to try it!*
+
+For simplicity, define $k^2:= \frac{2m \lambda}{\hbar^2}$. The equation then 
+reads
+
+$$\phi''(x)+k^2 \phi(x)=0.$$
+
+Two linearly independent solutions to this equation are 
+
+$$\phi_{1}(x)=\sin(k x), \ \phi_{2}(x) = \cos(k x).$$
+
+The solution to this homogeneous equation is then 
+
+$$\phi(x)=c_1 \phi_1(x)+c_2 \phi_2(x).$$
+
+The eigenvalue, $\lambda$, as well as one of the constant coefficients, can be 
+determined using the boundary conditions. 
+
+$$
+\begin{align}\phi(0) &=0 \ \Rightarrow \ \phi(x)=c_1 \sin(k x), \ c_2=0. \\
+\phi(L) &=0 \ \Rightarrow \ 0=c_1 \sin(k L) 
+\end{align} \, .
+$$
+
+In turn, using the properties of the $\sin(\cdot)$ function, it is now possible
+to find the allowed values of $k$ and hence also $\lambda$. The previous 
+equation implies, 
+
+$$k L = n \pi, \quad n  \in  \mathbb{N},$$
+
+$$\lambda_n = \frac{\hbar^2}{2m}\left(\frac{n \pi}{L} \right)^2.$$
+
+The values $\lambda_n$ are the eigenvalues. Now that we have determined 
+$\lambda$, it enters into the time equation, $i \hbar \dot{f}(t) = \lambda f(t)$
+only as a constant. We can therefore simply solve,
+
+$$\dot{f}(t) = -i \frac{\lambda}{\hbar} f(t)$$
+
+$$f(t) = A e^{\frac{-i \lambda t}{\hbar}}.$$
+
+In the previous equation, the coefficient $A$ can be determined if the original
+PDE is supplied with an initial condition. 
+
+Putting the solutions to the two ODEs together and redefining 
+$\tilde{A}=A \cdot c_1$, we arrive at the solutions for the PDE,
+
+$$\psi_n(x,t) = \tilde{A}_n e^{-i \frac{\lambda_n t}{\hbar}} \sin(\frac{n \pi x}{L}).$$
+
+Notice that there is one solution $\psi_{n}(x,t)$ for each natural number $n$. 
+These are very special solutions, each oscillating in time at the single frequency $\lambda_n/\hbar$, which are important in the context of physics. We will next discuss how to 
+obtain the general solution in our example. 
+
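+The statement that solutions exist only for special eigenvalues can also be seen numerically with a "shooting" check (a minimal sketch assuming SciPy, with $\hbar = m = L = 1$): integrate the spatial ODE from $x=0$ with $\phi(0)=0$ and inspect whether $\phi(L)$ vanishes.
+
+```python
+import numpy as np
+from scipy.integrate import solve_ivp
+
+hbar = m = L = 1.0
+
+def phi_at_L(lam):
+    """Integrate -(hbar^2/2m) phi'' = lam*phi with phi(0)=0, phi'(0)=1; return phi(L)."""
+    k2 = 2 * m * lam / hbar**2
+    sol = solve_ivp(lambda x, y: [y[1], -k2 * y[0]], (0, L), [0.0, 1.0], rtol=1e-8)
+    return sol.y[0, -1]
+
+lam1 = hbar**2 / (2 * m) * (np.pi / L)**2  # the analytic eigenvalue lambda_1
+print(phi_at_L(lam1))        # ~0: the boundary condition phi(L)=0 is met
+print(phi_at_L(1.5 * lam1))  # clearly nonzero: no solution for this lambda
+```
+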
+##8.3. Self-adjoint differential operators
+
+<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/p4MHW0yMMvY?rel=0" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+
+###8.3.1. Connection to Hilbert spaces
+
+As hinted earlier, it is possible to re-write the previous equation by 
+defining a linear operator, $L$, acting on the space of functions which satisfy
+$\phi(0)=\phi(L)=0$:
+
+$$L[\cdot]:= \frac{- \hbar^2}{2m} \frac{d^2}{dx^2}[\cdot]. $$
+
+Then, the ODE can be written as 
+
+$$L[\phi]=\lambda \phi.$$
+
+This equation looks exactly like, and it turns out to be, an eigenvalue equation!
+
+!!! info "Connecting function spaces to Hilbert spaces"
+    
+    Recall that a space of functions can be transformed into a Hilbert space by 
+    equipping it with an inner product,
+    
+    $$\langle f, g \rangle = \int^{L}_{0} dx \, f^*(x) g(x) \, . $$
+    
+    This inner product is also useful for demonstrating that particular 
+    operators are *Hermitian* (the term "Hermitian" is defined precisely below).
+    Hermitian operators have a set of very convenient properties, including 
+    purely real eigenvalues and eigenfunctions that form an orthonormal basis. 
+    
+The *nicest* type of operators for many practical purposes are Hermitian 
+operators. In quantum physics, for example, all physical operators must be 
+Hermitian. 
+
+!!! info "Hermiticity of an operator"
+    Denote a Hilbert space $\mathcal{H}$. An operator $H: \mathcal{H} \mapsto \mathcal{H}$ is said to be Hermitian if it satisfies
+    $$\langle f, H g \rangle = \langle H f, g \rangle \quad \forall \ f, g \in \mathcal{H}.$$
+
+Now, we would like to investigate whether the operator we have been working with,
+$L$, satisfies the criterion of being Hermitian over the function space 
+$\phi(0)=\phi(L)=0$ equipped with the inner product defined above (i.e. it is a
+Hilbert space).
+
+1.  First, denote this Hilbert space by $\mathcal{H}_{0}$ and consider two functions $f, g \in \mathcal{H}_0$. Then, we can investigate
+    $$\langle f, L g \rangle = \frac{- \hbar^2}{2m} \int^{L}_{0} dx \, f^*(x) \frac{d^2}{dx^2}g(x).$$
+
+2. In the next step, integrate by parts,
+    $$
+    \langle f, L g \rangle = \frac{+ \hbar^2}{2m} ( \int^{L}_{0} dx \frac{d f^*}{dx} \frac{d g}{dx} - [f^*(x)\frac{d g}{dx}] \big{|}^{L}_{0} ) \, .
+    $$
+    The boundary term vanishes due to the boundary conditions $f(0)=f(L)=0$, which directly imply $f^*(0)=f^*(L)=0$. 
+3. Now, integrate by parts a second time,
+    $$\langle f, L g \rangle = \frac{- \hbar^2}{2m} (\int^{L}_{0} dx \frac{d^2 f^*}{dx^2} g(x) - [\frac{d f^*}{dx} g(x)] \big{|}^{L}_{0} ).$$
+    As before, the boundary term vanishes due to the boundary conditions $g(0)=g(L)=0$. 
+    After dropping the boundary term, the remaining expression on the right hand side is precisely $\langle L f, g \rangle$. 
+4. Therefore,
+    $$\langle f, L g \rangle=\langle L f, g \rangle. $$
+
+
+Thus, we demonstrated that $L$ is a Hermitian operator on the space $\mathcal{H}_0$. As a Hermitian operator, $L$ has the property that its eigenfunctions form an orthonormal basis for the space $\mathcal{H}_0$. Hence, it is possible to expand any function $f \in \mathcal{H}_0$ in terms of the eigenfunctions of $L$.
+
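+This Hermiticity, and the reality of the spectrum, can also be checked numerically (a sketch assuming NumPy, with $\hbar = m = L = 1$; not part of the lecture material): discretizing $L$ on a grid with the boundary conditions built in produces a symmetric matrix whose lowest eigenvalues approach the $\lambda_n$ found above.
+
+```python
+import numpy as np
+
+hbar = m = L = 1.0
+N = 200
+dx = L / N
+
+# Finite-difference version of -(hbar^2/2m) d^2/dx^2 on the interior grid points,
+# with phi(0) = phi(L) = 0 built into the matrix.
+main = 2.0 * np.ones(N - 1)
+off = -1.0 * np.ones(N - 2)
+Lmat = hbar**2 / (2 * m * dx**2) * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))
+
+print(np.allclose(Lmat, Lmat.T))  # True: the discretized operator is symmetric
+numeric = np.linalg.eigvalsh(Lmat)[:3]
+exact = [hbar**2 / (2 * m) * (n * np.pi / L)**2 for n in (1, 2, 3)]
+print(numeric, exact)             # the lowest eigenvalues approximate lambda_n
+```
+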
+!!! info "Connection to quantum states"
+    
+    Recall that a quantum state $|\phi\rangle$ can be written in an orthonormal 
+    basis $\{ |u_n\rangle \}$ as 
+    $$|\phi\rangle = \underset{n}{\Sigma} \langle u_n | \phi \rangle\, |u_n\rangle.$$ 
+    
+    In the case of Hermitian operators, their eigenfunctions play the role of the orthonormal basis. In the context of our running example,
+    the 1D Schrödinger equation of a free particle, the eigenfunctions 
+    $\sin(\frac{n \pi x}{L})$ play the role of the basis functions $|u_n\rangle$.
+    
+To close our running example, consider the initial condition 
+$\psi(x,0) = \psi_{0}(x)$. Since the eigenfunctions $\sin(\frac{n \pi x}{L})$ 
+form a basis, we can now write the general solution to the problem as 
+
+$$\psi(x,t)  = \overset{\infty}{\underset{n}{\Sigma}} c_n e^{-i \frac{\lambda_n t}{\hbar}} \sin(\frac{n \pi x}{L}),$$
+
+where the coefficients $c_n$ are the Fourier (sine-series) coefficients 
+of the initial condition,
+
+$$c_n:= \frac{2}{L}\int^{L}_{0} dx \, \sin(\frac{n \pi x}{L}) \psi_{0}(x). $$
+    
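+In practice, the coefficients $c_n$ are obtained by carrying out this integral; a minimal numerical sketch (assuming NumPy, with $L = 1$ and a hypothetical initial condition $\psi_0(x) = x(L-x)$, neither of which is prescribed by the lecture):
+
+```python
+import numpy as np
+
+L = 1.0
+N = 2000
+dx = L / N
+x = np.linspace(0, L, N, endpoint=False) + dx / 2  # midpoint grid on (0, L)
+psi0 = x * (L - x)  # example initial condition, vanishing at both walls
+
+# c_n = (2/L) * integral_0^L sin(n*pi*x/L) * psi0(x) dx, via a midpoint sum
+c = [2 / L * np.sum(np.sin(n * np.pi * x / L) * psi0) * dx for n in range(1, 6)]
+print(np.round(c, 6))  # even-n coefficients vanish: psi0 is symmetric about L/2
+```
+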
+###8.3.2. General recipe for separable PDEs
+
+!!! tip "General recipe for separable PDEs" 
+
+    1. Make the separation ansatz to obtain separate ordinary differential 
+        equations.
+    2. Choose which equation to treat as the eigenvalue equation. This will depend 
+        upon the boundary conditions. Additionally, verify that the linear 
+        differential operator $L$ in the eigenvalue equation is Hermitian.
+    3. Solve the eigenvalue equation. Substitute the eigenvalues into the other 
+        equations and solve those too. 
+    4. Use the orthonormal basis functions to write down the solution corresponding 
+        to the specified initial and boundary conditions. 
+
+One natural question is: *"What if the operator $L$ from step 2 is not Hermitian?"*
+
+- It is possible to try and make it Hermitian by working on a Hilbert space equipped with a different inner product. This means that one can consider modifications to the definition of $\langle \cdot, \cdot \rangle$ such that $L$ is Hermitian with respect to the modified inner product. This type of technique falls under the umbrella of *Sturm-Liouville Theory*, which forms the foundation for a lot of the analysis that can be done analytically on PDEs.
+
+Another question is of course: *"What if the equation is not separable?"*
+
+- One possible approach is to try working in a different coordinate system. There are a few more analytic techniques available. However, in many situations, it becomes necessary to work with numerical methods of solution.
+
+*** 
+
+##8.4. Problems
+
+1.  [:grinning:] Which of the following equations for $y(x)$ are linear?
+
+    1. $y''' - y'' + x \cos(x) y' + y - 1 = 0$
+
+    2. $y''' + 4 x y' - \cos(x) y = 0$
+
+    3. $y'' + y y' = 0$
+
+    4. $y'' + e^x y' - x y = 0$
+
+2.  [:grinning:] Find the general solution to the equation 
+
+    $$y'' - 4 y' + 4 y = 0. $$
+
+    Show explicitly by computing the Wronski determinant that the 
+    basis for the solution space is actually linearly independent. 
+
+3.  [:grinning:] Find the general solution to the equation 
+
+    $$y''' - y'' + y' - y = 0.$$
+
+    Then find the solution to the initial conditions $y''(0) =0$, $y'(0)=1$, $y(0)=0$. 
+
+4.  [:smirk:] Take the Laplace equation in 2D:
+
+    $$\frac{\partial^2 \phi(x,y)}{\partial x^2} + \frac{\partial^2 \phi(x,y)}{\partial y^2} = 0.$$
+
+    1.  Make a separation ansatz $\phi(x,y) = f(x)g(y)$ and write 
+        down the resulting ordinary differential equations.
+
+    2.  Now assume the boundary conditions $\phi(0,y) = \phi(L,y) =0$ for all $y$, i.e. $f(0)=f(L)=0$. Find all solutions $f(x)$ and the corresponding eigenvalues.
+    3.  Finally, for each eigenvalue, find the general solution $g(y)$ for this eigenvalue. Combine this with all solutions $f(x)$ to write down the general solution (we know from the lecture that the operator $\frac{d^2}{dx^2}$ is Hermitian - you can thus directly assume that the solutions form an orthogonal basis). 
+
+
+5.  [:smirk:] Consider the following partial differential equations, and try to make a separation ansatz $h(x,y)=f(x)g(y)$. What do you observe in each case? (Only attempt the separation, do not solve the problem fully)
+    
+    1.  $\frac{\partial h(x,y)}{\partial x} + x \frac{\partial h(x,y)}{\partial y} = 0. $
+
+    2.  $\frac{\partial h(x,y)}{\partial x} + \frac{\partial h(x,y)}{\partial y} + xy\,h(x,y) = 0$
+
+6.  [:sweat:] We consider the Hilbert space of functions $f(x)$ defined for $x \in [0,L]$ with $f(0)=f(L)=0$. 
+
+    Which of the following operators $\mathcal{L}$ on this space is Hermitian?
+
+    1.  $\mathcal{L}_1 f(x) = A(x) \frac{d^2 f}{dx^2}$
+
+    2.  $\mathcal{L}_2 f(x) = \frac{d}{dx} \big{(} A(x) \frac{df}{dx} \big{)}$
diff --git a/docs/6_eigenvectors_QM.md b/docs/6_eigenvectors_QM.md
deleted file mode 100644
index 557814a6701d375c763e36ef4b9bcbd2facb3f1c..0000000000000000000000000000000000000000
--- a/docs/6_eigenvectors_QM.md
+++ /dev/null
@@ -1,245 +0,0 @@
----
-title: Eigenvalues and eigenvectors
----
-
-# Eigenvalues and eigenvectors
-
-The lecture on eigenvalues and eigenvectors consists of the following parts:
-
-- [Eigenvalue equations in linear algebra](#Eigenvalue-equations-linear-algebra)
-
-- [Eigenvalue equations in quantum mechanics](#Eigenvalue-equations-quantum-mechanics)
-
-and at the end of the lecture one can find the corresponding [Problems](#problems)
-
-The contents of this lecture are summarised in the following **videos**:
-
-- [6_eigenvectors_QM_video1](https://www.dropbox.com/s/n6hb5cu2iy8i8x4/linear_algebra_09.mov?dl=0)
-
-In the previous lecture we discussed a number of *operator equations*, which are equations of the form
-$$
-\hat{A}|\psi\rangle=|\varphi\rangle \, ,
-$$
-where $|\psi\rangle$ and $|\varphi\rangle$ are state vectors
-belonging to the Hilbert space of the system $\mathcal{H}$.
-
-A specific class of operator equations, that appear frequently
-in quantum mechanics, are equations of the form
-$$
-\hat{A}|\psi\rangle= \lambda_{\psi}|\psi\rangle \, ,
-$$
-where $\lambda_{\psi}$ is a scalar (in general complex). These are equations where the action of the operator $\hat{A}$
-on the state vector $|\psi\rangle$ returns *the same state vector*
-multiplied by the scalar $\lambda_{\psi}$. This type of operator equations are known as *eigenvalue equations*,
-and are of great importance for the description of quantum systems.
-    
-In this lecture we present the main ingredients of these equations
-and how we can apply them to quantum systems.
-
-## Eigenvalue equations in linear algebra
-
-First of all let us review eigenvalue equations in linear algebra. Assume that we have a (square) matrix $A$ with dimensions $n\times n$ and $\vec{v}$ is a column vector in $n$ dimensions. The corresponding eigenvalue equation will be of form
-$$
-A \vec{v} =\lambda \vec{v} .
-$$
-with $\lambda$ being a scalar number (real or complex, depending on the type
-of vector space). We can express the previous equation in terms of its components,
-assuming as usual some specific choice of basis, by using
-the rules of matrix multiplication,
-$$
-\sum_{j=1}^n A_{ij} v_j = \lambda v_i \, .
-$$
-The scalar $\lambda$ is known as the *eigenvalue* of the equation, while the vector $\vec{v}$ is known as the associated *eigenvector*.
- 
-The key feature of such equations is that applying a matrix $A$ to the vector $\vec{v}$ returns *the original vector* up to an overall rescaling, $\lambda \vec{v}$. In general there will be multiple solutions to the eigenvalue equation $A \vec{v} =\lambda \vec{v}$, each one characterised by a specific eigenvalue and eigenvector. Note that in some cases one has *degenerate solutions*, whereby two or more eigenvectors of a given matrix share the same eigenvalue.
-
-In order to determine the eigenvalues of the matrix $A$, we need to evaluate the solutions of the so-called *characteristic equation*
-of the matrix $A$, defined as
-$$
-{\rm det}\left( A-\lambda \mathbb{I} \right)=0 \, ,
-$$
-where $\mathbb{I}$ is the identity matrix of dimensions $n\times n$,
-and ${\rm det}$ is the determinant.
-
-This relation follows from the eigenvalue equation in terms of components
-$$
-\sum_{j=1}^n A_{ij} v_j = \lambda v_i \, ,\quad \to \quad \sum_{j=1}^n A_{ij} v_j - \sum_{j=1}^n\lambda \delta_{ij} v_j =0 \, ,\quad \to \quad \sum_{j=1}^n\left( A_{ij} - \lambda \delta_{ij}\right) v_j =0 \, .
-$$
-Therefore the eigenvalue condition can be written as a set of coupled linear equations
-$$
-\sum_{j=1}^n\left( A_{ij} - \lambda \delta_{ij}\right) v_j =0 \, , \qquad i=1,2,\ldots,n\, ,
-$$
-which only admit non-trivial solutions if the determinant of the matrix $A-\lambda\mathbb{I}$ vanishes
-(the so-called Cramer condition), thus leading to the characteristic equation.
-
-Once we have solved the characteristic equation, we end up with $n$ eigenvalues $\lambda_k$, $k=1,\ldots,n$.
-  
-We can then determine the corresponding eigenvector
-$$
-\vec{v}_k = \left( \begin{array}{c} v_{k,1}  \\ v_{k,2} \\ \vdots \\ v_{k,n} \end{array} \right) \, ,
-$$
-by solving the corresponding system of linear equations
-$$
-\sum_{j=1}^n\left( A_{ij} - \lambda_k \delta_{ij}\right) v_{k,j} =0 \, , \qquad i=1,2,\ldots,n\, ,
-$$
-
-Let us remind ourselves that in $n=2$ dimensions the determinant of  a matrix
-is evaluated as
-$$
-{\rm det}\left( A \right) = \left|  \begin{array}{cc} A_{11}  & A_{12} \\ A_{21}  &  A_{22} \end{array} \right|
-= A_{11}A_{22} - A_{12}A_{21} \, ,
-$$
-while the corresponding expression for a matrix belonging to a vector
-space in $n=3$ dimensions will be given in terms of the previous expression
-$$
-{\rm det}\left( A \right) = \left|  \begin{array}{ccc} A_{11}  & A_{12}  & A_{13} \\ A_{21}  &  A_{22}
-&  A_{23} \\ A_{31}  &  A_{32}
-&  A_{33}  \end{array} \right| = A_{11} \left|  \begin{array}{cc} A_{22}  & A_{23}
-\\ A_{32}  &  A_{33} \end{array} \right|
-- A_{12} \left|  \begin{array}{cc} A_{21}  & A_{23} \\ A_{31}  &  A_{33} \end{array} \right|
-+ A_{13} \left|  \begin{array}{cc} A_{21}  & A_{22} \\ A_{31}  &  A_{32} \end{array} \right|
-$$
-
-Let us illustrate how to compute eigenvalues and eigenvectors by considering a $n=2$ vector space. Consider the following matrix
-$$
-A = \left( \begin{array}{cc} 1  &  2 \\ -1  &  4 \end{array} \right) \, ,
-$$
-which has associated the following characteristic equation
-$$
-{\rm det}\left( A-\lambda\cdot I \right)  = \left| \begin{array}{cc} 1-\lambda  &  2 \\ -1  &  4-\lambda \end{array} \right| = (1-\lambda)(4-\lambda)+2 = \lambda^2 -5\lambda + 6=0 \, .
-$$
-This is a quadratic equation which we know how to solve exactly, and we find
-that the two eigenvalues are $\lambda_1=3$ and $\lambda_2=2$.
-
-Next we can determine the associated eigenvectors $\vec{v}_1$ and $\vec{v}_2$. For the first one the equation that needs to be solved is
-$$
-\left( \begin{array}{cc} 1  &  2 \\ -1  &  4 \end{array} \right)
-\left( \begin{array}{c} v_{1,1}  \\ v_{1,2}  \end{array} \right)=\lambda_1
-\left( \begin{array}{c} v_{1,1}  \\ v_{1,2}  \end{array} \right) = 3 \left( \begin{array}{c} v_{1,1}  \\ v_{1,2}  \end{array} \right) 
-$$
-from where we find the condition that $v_{1,1}=v_{1,2}$: an important property of eigenvalue equations is that the eigenvectors are only fixed up to an  *overall normalisation constant*. This should be clear from their definition: if a vector $\vec{v}$ satisfies $A\vec{v}=\lambda\vec{v} $,
-then the vector $\vec{v}'=c \vec{v}$ with $c$ some constant will also satisfy the same equation. So we find that the eigenvalue $\lambda_1$ has the associated eigenvector
-$$
-\vec{v}_1 = \left( \begin{array}{c} 1   \\ 1 \end{array} \right) \, ,
-$$
-and indeed one can check that
-$$
-A\vec{v}_1 = \left( \begin{array}{cc} 1  &  2 \\ -1  &  4 \end{array} \right)
-\left( \begin{array}{c} 1   \\ 1 \end{array} \right) = \left( \begin{array}{c} 3  \\ 3 \end{array} \right)=
-3 \vec{v}_1 \, ,
-$$
-as we wanted to demonstrate. As an exercise, you can try to obtain the expression of the eigenvector
-corresponding to the second eigenvalue $\lambda_2=2$.
-
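-For completeness, the same eigenpair can be found numerically (a small sketch assuming NumPy; not part of the original notes):
-
-```python
-import numpy as np
-
-A = np.array([[1, 2],
-              [-1, 4]])
-
-evals, evecs = np.linalg.eig(A)
-print(evals)  # 3 and 2, the roots of the characteristic equation
-
-# The eigenvector for lambda = 3 is proportional to (1, 1), up to normalisation.
-print(evecs[:, np.argmax(evals)])
-```
-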
-
-## Eigenvalue equations in quantum mechanics
-
-We can now extend the ideas of eigenvalue equations from linear algebra to the case of quantum mechanics.
-The starting point is the eigenvalue equation for the operator $\hat{A}$,
-$$
-\hat{A}|\psi\rangle= \lambda_{\psi}|\psi\rangle \, ,
-$$
-where the vector state $|\psi\rangle$ is the eigenvector of the equation
-and $ \lambda_{\psi}$ is the corresponding eigenvalue, in general a complex scalar.
-    
-In general this equation will have multiple solutions, which for a Hilbert space $\mathcal{H}$ with $n$ dimensions can be labelled as
-$$
-\hat{A}|\psi_k\rangle= \lambda_{\psi_k}|\psi_k\rangle \, , \quad k =1,\ldots, n \, .
-$$
-  
-In order to determine the eigenvalues and eigenvectors of a given operator $\hat{A}$ we will have to solve the
-corresponding eigenvalue problem for this  operator, which we called above the *characteristic equation*.
-This is most efficiently done in the matrix representation of this operator, where
-the previous operator equation can be expressed in terms of its components as
-$$
-\begin{pmatrix} A_{11} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix} \begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix}=  \lambda_{\psi_k}\begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix} \, , \quad k=1,\ldots,n \, .
-$$
-
-As discussed above, this condition is identical to solving a set of linear equations
-of the form
-$$
-\begin{pmatrix} A_{11}- \lambda_{\psi_k} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi_k} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi_k} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix}
-\begin{pmatrix} \psi_{k,1}\\\psi_{k,2}\\\psi_{k,3} \\\vdots\end{pmatrix}=0 \, , \quad k=1,\ldots,n \, .
-$$
-This set of linear equations only has a non-trivial set of solutions provided that
-the determinant of the matrix vanishes, as follows from the Cramer condition:
-$$
-{\rm det} \begin{pmatrix} A_{11}- \lambda_{\psi} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi} & \ldots \\\vdots & \vdots & \vdots & \end{pmatrix}=
-\left|  \begin{array}{cccc}A_{11}- \lambda_{\psi} & A_{12} & A_{13} & \ldots \\ A_{21} & A_{22}- \lambda_{\psi} & A_{23} & \ldots\\A_{31} & A_{32} & A_{33}- \lambda_{\psi} & \ldots \\\vdots & \vdots & \vdots & \end{array} \right| = 0
-$$
-which in general will have $n$ independent solutions, which we label as $\lambda_{\psi,k}$.
-
-Once we have found the $n$ eigenvalues $\{ \lambda_{\psi,k} \} $, we can insert each
-of them in the original eigenvalue equation and determine the components of each of the eigenvectors,
-which we can express as column vectors
-$$
-|\psi_1\rangle = \begin{pmatrix} \psi_{1,1} \\  \psi_{1,2} \\  \psi_{1,3} \\ \vdots \end{pmatrix} \,, \quad
-|\psi_2\rangle = \begin{pmatrix} \psi_{2,1} \\  \psi_{2,2} \\  \psi_{2,3} \\ \vdots \end{pmatrix} \,, \quad \ldots \, , |\psi_n\rangle = \begin{pmatrix} \psi_{n,1} \\  \psi_{n,2} \\  \psi_{n,3} \\ \vdots \end{pmatrix} \, .
-$$
-
-An important property of eigenvalue equations is that if you have two eigenvectors
-$ |\psi_i\rangle$ and $ |\psi_j\rangle$ that have associated *different* eigenvalues,
-$\lambda_{\psi_i} \ne \lambda_{\psi_j}  $, then these two eigenvectors are orthogonal to each
-other, that is
-$$
-\langle \psi_j | \psi_i\rangle =0 \, \quad {\rm for} \quad {i \ne j} \, .
-$$
-This property is extremely important, since it suggests that we could use the eigenvectors
-of an eigenvalue equation as a *set of basis elements* for this Hilbert space.
-
-Recall from the discussions of eigenvalue equations in linear algebra that
-the eigenvectors $|\psi_i\rangle$ are defined *up to an overall normalisation constant*. Clearly, if $|\psi_i\rangle$ is a solution of $\hat{A}|\psi_i\rangle = \lambda_{\psi_i}|\psi_i\rangle$
-then $c|\psi_i\rangle$ will also be a solution, with $c$ some constant. In the context of quantum mechanics, we need to choose this overall rescaling constant
-to ensure that the eigenvectors are normalised, that is, that they satisfy
-$$
-\langle \psi_i | \psi_i\rangle = 1 \, \quad {\rm for~all}~i \, .
-$$
-With such a choice of normalisation, one says that the set of eigenvectors
-is *orthonormal*.
-
-The set of all the eigenvalues of an operator is called *eigenvalue spectrum* of the operator. Note that different eigenvectors can also have the same eigenvalue. If this is the case the eigenvalue is said to be *degenerate*.
-
-***
-
-##Problems
-
-**1)** *Eigenvalues and Eigenvectors* 
-
-Find the characteristic polynomial and eigenvalues for each of the following matrices,
-
-$$A=\begin{pmatrix} 5&3\\2&10 \end{pmatrix}\,  \quad
-B=\begin{pmatrix} 7i&-1\\2&6i \end{pmatrix} \, \quad C=\begin{pmatrix} 2&0&-1\\0&3&1\\1&0&4 \end{pmatrix}$$
-
-**2)** The Hamiltonian for a two-state system is given by 
-$$H=\begin{pmatrix} \omega_1&\omega_2\\  \omega_2&\omega_1\end{pmatrix}$$
-A basis for this system is 
-$$|{0}\rangle=\begin{pmatrix}1\\0  \end{pmatrix}\, ,\quad|{1}\rangle=\begin{pmatrix}0\\1  \end{pmatrix}$$
-
-Find the eigenvalues and eigenvectors of the Hamiltonian $H$, and express the eigenvectors in terms of $\{|0 \rangle,|1\rangle \}$
-
-
-**3)** Find the eigenvalues and eigenvectors of the matrices
-
-$$A=\begin{pmatrix} -2&-1&-1\\6&3&2\\0&0&1 \end{pmatrix}\,  \quad B=\begin{pmatrix} 1&1&2\\2&2&2\\-1&-1&-1 \end{pmatrix} $$.
-
-**4)** *The Hadamard gate*
-
-In one of the problems of the previous section we discussed that an important operator used in quantum computation is the *Hadamard gate*, which is represented by the matrix:
-$$\hat{H}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix} \, .$$
-Determine the eigenvalues and eigenvectors of this operator.
-
-**5)** Show that the Hermitian matrix
-
-$$\begin{pmatrix} 0&0&i\\0&1&0\\-i&0&0 \end{pmatrix}$$
-
-has only two distinct (real) eigenvalues, and find an orthonormal set of three eigenvectors.
-
-**6)** 
-Confirm, by explicit calculation, that the eigenvalues of the real, symmetric matrix
-
-$$\begin{pmatrix} 2&1&2\\1&2&2\\2&2&1 \end{pmatrix}$$
-
-are real, and its eigenvectors are orthogonal.
-
-  
-  
diff --git a/docs/7_differential_equations_1.md b/docs/7_differential_equations_1.md
deleted file mode 100644
index a3b41a06e4ee2d0379d433c09c5e54b3dad37b07..0000000000000000000000000000000000000000
--- a/docs/7_differential_equations_1.md
+++ /dev/null
@@ -1,916 +0,0 @@
----
-title: Differential Equations
----
-
-# Differential equations 1
-
-The first lecture on differential equations consists of three parts, each with their own video:
-
-- [First examples of differential equations](#first-examples-of-differential-equations-definitions-and-strategies)
-- [Theory of systems of first-order differential equations](#theory-of-systems-of-differential-equations)
-- [Solving homogeneous first-order differential equations with constant coefficients](#solving-homogeneous-linear-system-with-constant-coefficients)
-
-**Total video length: 1 hour 15 minutes 4 seconds**
-
-## First examples of differential equations: Definitions and strategies
-
-<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/IUr38H4dcWI?rel=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
-
-### Definitions
-
-A differential equation or DE is any equation which involves both a function and some 
-derivative of that function. In this lecture we will be focusing on 
-*Ordinary Differential Equations* (ODEs), meaning that our equations will involve 
-functions of one independent variable and hence any derivatives will be full 
-derivatives. Equations which involve a function of several independent variables
-and their partial derivatives are called *Partial Differential Equations* (PDEs), and will 
-be introduced in the follow-up lecture. 
-
-We consider functions $x(t)$ and define $\dot{x}(t)=\frac{dx}{dt}$, 
-$x^{(n)}(t)=\frac{d^{n}x}{dt^{n}}$. An $n$*-th* order differential equation is 
-an equation of the form
-
-$$x^{(n)}(t) = f(x^{(n-1)}(t), \cdots, x(t), t).$$ 
-
-Typically, $n \leq 2$. Such an equation will usually be presented with a set of 
-initial conditions,
-
-$$x^{(n-1)}(t_{0}) = x^{(n-1)}_{0}, \cdots, x(t_0)=x_0. $$
-
-This is because to fully specify the solution to an $n$*-th* order differential 
-equation, $n$ initial conditions are necessary. To understand why we need 
-initial conditions, look at the following example.
-
-!!! check "Example: Initial conditions"
-
-    Consider the following calculus problem,
-    
-    $$\dot{f}(x)=x. $$ 
-
-    By integrating, one finds that the solution to this equation is 
-
-    $$f(x) = \frac{1}{2}x^2 + c,$$
-
-    where $c$ is an integration constant. In order to specify the integration 
-    constant, an initial condition is needed. For instance, if we know that when 
-    $x=2$ then $f(2)=4$, we can plug this into the equation to get 
-
-    $$\frac{1}{2}\cdot 4 + c = 4, $$
-
-    which implies that $c=2$. 
-    
-Essentially initial conditions are needed when solving differential equations so
-that unknowns resulting from integration may be determined.
-
-!!! info "Terminology for Differential Equations"
-
-    1. If a differential equation does not explicitly contain the 
-        independent variable $t$, it is called an *autonomous equation*.
-    2. If the largest derivative in a differential equation is of first order, 
-        i.e. $n=1$, then the equation is called a first order differential 
-        equation.
-    3. Often you will see differential equations presented using $y(x)$ 
-        instead of $x(t)$. This is just a different nomenclature. 
-            
-In this course we will be focusing on *Linear Differential Equations*, meaning 
-that we consider differential equations $x^{(n)}(t) = f(x^{(n-1)}(t), \cdots, x(t), t)$
-where the function $f$ is a linear polynomial function of the unknown function
-$x(t)$ and its derivatives. A simple way to spot a non-linear differential equation is to look for 
-non-linear terms, such as $x(t) \cdot \dot{x}(t)$ or $x^{(n)}(t) \cdot x^{(2)}(t)$. 
-
-Often, we will be dealing with several coupled differential equations. In this 
-situation we can write the entire system of differential equations as a vector 
-equation, involving a linear operator. For a system of $m$ equations, denote 
-
-$$\vec{x}(t) = \begin{bmatrix}
-x_1(t) \\
-\vdots \\
-x_{m}(t) \\
-\end{bmatrix}.$$
-
-A system of first order linear equations is then written as 
-
-$$\dot{\vec{x}}(t) = \vec{f}(\vec{x}(t),t) $$
-
-with initial condition $\vec{x}(t_0) = \vec{x}_0$.
-
-### Basic examples and strategies
-
-The simplest type of differential equation is the type learned about in the 
-integration portion of a calculus course. Such equations have the form,
-
-$$\dot{x}(t) = f(t). $$
-
-When $F(t)$ is an anti-derivative of $f(t)$ i.e. $\dot{F}=f$, then the solutions
-to this type of equation are 
-
-$$x(t) = F(t) + c. $$
-
-!!! check "Example: First order linear differential equation with constant coefficients"
-
-    Given the equation
-    
-    $$\dot{x}(t)=t, $$
-    
-    one finds by integrating that the solution is $\frac{1}{2}t^2 + c$. 
-    
-For first order differential equations of the form below, it is possible to use the 
-concept of an anti-derivative from calculus to write a general solution in 
-terms of the independent variable.
-    
-$$\dot{x}(t)=f(x(t)).$$ 
-    
-This implies that $\frac{\dot{x}(t)}{f(x)} = 1$. Let $F(x)$ be the 
-anti-derivative of $\frac{1}{f(x)}$. Then, making use of the chain rule
-    
-$$\frac{\dot{x}(t)}{f(x(t))} = \frac{dx}{dt}\,\frac{dF}{dx} = \frac{d}{dt} F(x(t)) = 1$$
-    
-$$\Leftrightarrow F(x(t)) = t + c.$$
-    
-From this we notice that if we can solve for $x(t)$ then we have the 
-solution! Having a specific form for the function $f(x)$ often makes it 
-possible to solve either implicitly or explicitly for the function $x(t)$.
-
-!!! check "Example: Autonomous first order linear differential equation with constant coefficients"
-
-    Given the equation
-    
-    $$\dot{x} = \lambda x, $$
-    
-    we re-write the equation to be in the form 
-    
-    $$\frac{\dot{x}}{\lambda x} = 1.$$
-    
-    Now, applying the same process worked through above, let $f(x)=\lambda x$ 
-    and $F(x)$ be the anti-derivative of the $\frac{1}{f(x)}$. Integrating 
-    allows us to find the form of this anti-derivative. 
-    
-    $$F(x):= \int \frac{dx}{\lambda x} = \frac{1}{\lambda}\log(\lambda x) $$
-    
-    Now, making use of the general solution we also have that $F(x(t)) =t+c$. 
-    These two equations can be combined to form an equation for $x(t)$,
-    
-    $$\log(\lambda x)  = \lambda t + \lambda c$$
-    $$x(t) = \frac{1}{\lambda} e^{\lambda c} e^{\lambda t} $$
-    $$x(t) = c_0 e^{\lambda t}$$
-    
-    where in the last line we defined a new constant $c_0 =\frac{1}{\lambda}e^{\lambda c}$.
-    Given an initial condition, we could immediately determine this constant $c_0$.
-    
-So far we have considered only DE's with constant coefficients, but it is very 
-common to encounter equations such as the following,
-
-$$\dot{x}(t)=g(t)f(x(t)).$$
-
-This type of differential equation is called a first order differential equation 
-with non-constant coefficients. If $f(x(t))$ is linear in $x$ then it is also 
-said to be a linear equation.  
-
-This equation can be re-written to isolate the coefficient function $g(t)$,
-
-$$\frac{\dot{x}(t)}{f(x(t))} = g(t). $$
-
-Now, define $F(x)$ to be the anti-derivative of $\frac{1}{f(x)}$, and $G(t)$ to
-be the anti-derivative of $g(t)$. Without showing again the use of chain rule on
-the left side of the equation, we have
-
-$$\frac{d}{dt} F(x(t)) = g(t) $$
-$$\Rightarrow F(x(t)) = G(t) + c $$
-
-Given this form of general solution, knowledge of specific functions $f, g$ would
-make it possible to solve for $x(t)$. 
-
-!!! check "Example: First order linear differential equation with coefficient t"
-
-    Let us apply the above strategy to the following equation,
-    
-    $$\dot{x}= t x^2 .$$
-    
-    Comparison to the strategy indicates that we should define $f(x)=x^2$ and 
-    $g(t)=t$. As before, we can re-arrange the equation
-    
-    $$\frac{\dot{x}}{x^2} = t. $$
-    
-    It is then necessary to find $F(x)$, the anti-derivative of $\frac{1}{f(x)}$,
-    or the left hand side of the above equation, as well as $G(t)$, the 
-    anti-derivative of $g(t)$, or the right hand side of the previous equation.
-    
-    Integrating, one finds
-    
-    $$F(x) = - \frac{1}{x} $$
-    $$G(t)=\frac{1}{2}t^2 + c. $$
-    
-    Accordingly then, the equation we have is
-    
-    $$- \frac{1}{x} = \frac{1}{2} t^2 + c. $$
-    
-    At this point, it is possible to solve for $x(t)$ by re-arrangement
-    
-    $$x(t)= \frac{-2}{t^2 + c_0}, $$
-    
-    where in the last line we have defined $c_0 = 2c$. Once again, specification
-    of an initial condition would allow determination of $c_0$ directly. To see 
-    this, suppose $x(0) = 2$. Inserting this into the equation for $x(t)$ we have
-    
-    $$2 = \frac{-2}{c_0} $$
-    $$ \Rightarrow c_0 = -1.$$
-    
-    Having solved for $c_0$, with the choice of initial condition $x(0)=2$, the 
-    full equation for $x(t)$ is 
-    
-    $$x(t)=\frac{-2}{t^2 -1}. $$
-
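-The result can be cross-checked with a computer algebra system (a sketch assuming SymPy; not part of the original notes):
-
-```python
-import sympy as sp
-
-t = sp.symbols('t')
-x = sp.Function('x')
-
-# Solve x' = t*x^2 with the initial condition x(0) = 2 used above.
-sol = sp.dsolve(sp.Eq(x(t).diff(t), t * x(t)**2), x(t), ics={x(0): 2})
-print(sol)  # equivalent to x(t) = -2/(t**2 - 1)
-```
-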
-!!! check "Example: First order linear differential equation with general non-constant coefficient function"
-
-    Let us apply the strategy for dealing with non-constant coefficient functions
-    to the more general equation
-
-    $$\dot{x}= g(t) \cdot x. $$
-
-    This equation suggests that we first define $f(x)=x$ and then find $F(x)$ and 
-    $G(t)$, the anti-derivatives of $\frac{1}{f(x)}$ and $g(t)$, respectively. Doing
-    so, we determine 
-
-    $$F(x) = \log(x) $$
-
-    Continuing to follow the protocol, we arrive at the equation
-
-    $$\log(x) = G(t) + c.$$
-
-    Exponentiating and defining $c_0:=e^c$, we obtain an equation for $x(t)$,
-
-    $$x(t)= c_0 e^{G(t)} .$$
-    
-So far we have only considered first order differential equations. If we consider
-extending the strategies we have developed to higher order equations such as
-
-$$x^{(2)}(t)=f(x), $$
-
-with $f(x)$ a linear function, then our work will swiftly become tedious. Later on
-we will develop the general theory for linear equations which will allow us to 
-tackle such higher order equations. For now, we move on to considering systems 
-of coupled first order linear DE's. 
-
-## Theory of systems of differential equations
-
-<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/4VoSMc08nQA?rel=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
-
-
-An intuitive presentation of a system of coupled first order differential 
-equations can be given by a phase portrait. Before demonstrating such a portrait,
-let us introduce notation that is useful for working with systems of DE's. Several
-coupled DE's can be written down concisely as a single vector equation
-
-$$\dot{\vec{x}}=\vec{f}(\vec{x}). $$
-
-In such an equation the vector $\dot{\vec{x}}$ is the rate of change of a vector 
-quantity, for example the velocity which is the rate of change of the position 
-vector. The term $\vec{f}(\vec{x})$ describes a vector field, which has one vector 
-per point $\vec{x}$. This type of equation can also be extended to include a time 
-varying vector field, $\vec{f}(\vec{x},t)$. 
-
-In the phase portrait below, the velocity of each little car is determined by 
-the vector field $\vec{f}(\vec{x})$, where the velocity corresponds to the slope of 
-each arrow. The position of each of the little cars is determined by an initial 
-condition. Since the field lines do not cross, and the cars begin on different 
-field lines, they will remain on different field lines. 
-
-![image](figures/Phase_portrait_with_cars.png)
-
-If $\vec{f}(\vec{x})$ is not crazy, for example if it is continuous and 
-differentiable, then it is possible to prove the following two properties for 
-a system of first order linear DE's
-
-1. *Existence of solution*: For any specified initial condition, there is a solution
-2. *Uniqueness of solution*: Any point $\vec{x}(t)$ is uniquely determined by the
-    initial condition and the equation i.e. we know where each point "came from"
-    $\vec{x}(t'<t)$. 
-
-## Systems of linear first order differential equations
-
-### Homogeneous systems
-
-Any homogeneous system of first order linear DE's can be written in the form 
-
-$$\dot{\vec{x}} = A(t) \vec{x}, $$
-
-where $A$ is a linear operator. The system is called homogeneous because it 
-does not contain an additional term which is not dependent on $\vec{x}$ (for 
-example an additive constant or an additional function depending only on t). 
-
-An important property of such a system is *linearity*, which has the following 
-implications
-
-1. If $\vec{x}(t)$ is a solution then $c \vec{x}(t)$ is as well, for any constant $c$
-2. If $\vec{x}(t)$ and $\vec{y}(t)$ are both solutions, then so is $a \vec{x}(t)+ b \vec{y}(t)$,
-   where $a$ and $b$ are both constants. 
-
-These properties have special importance for modelling physical systems, due to
-the principle of superposition, which is especially important in quantum physics,
-as well as electromagnetism and fluid dynamics. For example in electromagnetism 
-when there are four charges arranged in a square acting on a test charge 
-located within the square, it is sufficient to sum the individual forces in 
-order to find the total force. Physically, this is the principle of superposition, 
-and mathematically superposition is linearity and applies to linear models. 
-
-### General Solution ###
-
-For a system of $n$ linear first order DE's with $n \times n$ linear operator 
-$A(t)$, the general solution can be written as 
-
-$$\vec{x}(t) = c_1 \vec{\phi}_1 (t) + c_2 \vec{\phi}_2 (t) + \cdots + c_n \vec{\phi}_n (t),$$
-
-where $\{\vec{\phi}_1 (t), \vec{\phi}_2(t), \cdots, \vec{\phi}_n (t) \}$ are $n$ 
-independent solutions which form a basis for the solution space, and 
-$c_1, c_2, \cdots c_n$ are constants. 
-
-$\{\vec{\phi}_1 (t), \vec{\phi}_2(t), \cdots, \vec{\phi}_n (t) \}$ are a basis if and 
-only if they are linearly independent for fixed $t$:
-
-$$\det \big{(}\vec{\phi}_1 (t) | \vec{\phi}_2 (t) | \cdots | \vec{\phi}_n (t) \big{)} \neq 0.$$
-
-If this condition holds for one $t$, it holds for all $t$.
-
-### Inhomogeneous systems
-
-In addition to the homogeneous equation, an inhomogeneous equation has an 
-additional term, which may be a function of the independent variable, 
-
-$$ \dot{\vec{x}}(t) = A(t) \vec{x}(t) + \vec{b}(t).$$
-
-There is a simple connection between the general solution of an inhomogeneous 
-equation and the corresponding homogeneous equation. If $\vec{\psi}_1$ and $\vec{\psi}_2$
-are two solutions of the inhomogeneous equation, then their difference is a 
-solution of the homogeneous equation 
-
-$$(\dot{\vec{\psi}_1}-\dot{\vec{\psi}_2}) = A(t) (\vec{\psi}_1 - \vec{\psi}_2). $$
-
-The general solution of the inhomogeneous equation can be written in terms of 
-the basis of solutions for the homogeneous equation, plus one particular solution
-to the inhomogeneous equation,
-
-$$\vec{x}(t) = \vec{\psi}(t) + c_1 \vec{\phi}_1 (t) + c_2 \vec{\phi}_2 (t) + \cdots + c_n \vec{\phi}_n (t). $$
-
-In the above equation, $\{\vec{\phi}_1 (t), \vec{\phi}_2(t), \cdots, \vec{\phi}_n (t) \}$
-form a basis for the solution space of the homogeneous equation and $\vec{\psi}(t)$
-is a particular solution of the inhomogeneous system. 
-
-Now we need a strategy for finding the solution of the inhomogeneous equation. 
-Begin by making an ansatz that $\vec{x}(t)$ can be written as a linear combination 
-of the basis functions for the homogeneous system, with coefficients that are 
-functions of the independent variable. Ansatz:
-
-$$\vec{x}(t) = c_1(t) \vec{\phi}_1 (t)+ c_2(t) \vec{\phi}_2(t) + \cdots + c_n(t) \vec{\phi}_n (t) $$
-
-Define the vector $\vec{c}(t)$ and matrix $\vec{\Phi}(t)$ as
-
-$$\vec{c}(t) = \begin{bmatrix}
-c_1(t) \\
-\vdots \\
-c_n(t) \\
-\end{bmatrix} $$
-$$\vec{\Phi}(t) = \big{(} \vec{\phi}_1 (t) | \cdots | \vec{\phi}_n (t) \big{)} $$
-
-With these definitions, it is possible to re-write the ansatz for $\vec{x}(t)$,
-
-$$ \vec{x}(t) = \vec{\Phi}(t) \vec{c}(t).$$
-
-Using the Leibniz rule, we then have the following expanded equation,
-
-$$\dot{\vec{x}}(t) = \dot{\vec{\Phi}}(t) \vec{c}(t) + \vec{\Phi}(t) \dot{\vec{c}}(t).$$
-
-Substituting the new expression into the differential equation gives,
-
-$$\dot{\vec{\Phi}}(t) \vec{c}(t) + \vec{\Phi}(t) \dot{\vec{c}}(t) = A(t) \vec{\Phi}(t) \vec{c}(t) + \vec{b}(t) $$
-$$\vec{\Phi}(t) \dot{\vec{c}}(t) = \vec{b}(t). $$
-
-In order to cancel terms in the previous line we made use of the fact that 
-$\vec{\Phi}(t)$ solves the homogeneous equation $\dot{\vec{\Phi}} = A \vec{\Phi}$.
-By way of inverting and integrating, we can write an equation for the coefficient
-vector $\vec{c}(t)$
-
-$$\vec{c}(t) = \int \vec{\Phi}^{-1}(t) \vec{b}(t) dt.$$
-
-With access to a concrete form of the coefficient vector, we can then write down
-the particular solution,
-
-$$\vec{\psi}(t)= \vec{\Phi}(t) \cdot \int \vec{\Phi}^{-1}(t) \vec{b}(t) dt .$$
-
-!!! check "Example: Inhomogeneous first order linear differential equation"
-
-    The technique for solving a system of inhomogeneous equations also works for a 
-    single inhomogeneous equation. Let us apply the technique to the equation
-
-    $$ \dot{x} = \lambda x + a. $$
-
-    In this particular inhomogeneous equation, the inhomogeneous term is the 
-    constant function $b(t)=a$. As discussed in
-    an earlier example, the solution to the homogeneous equation is
-    $c e^{\lambda t}$. Hence we define $\phi(t)=e^{\lambda t}$ and make the ansatz
-
-    $$\psi(t) = c(t) e^{\lambda t}. $$
-
-    Solving for $c(t)$ results in 
-
-    $$c(t) = \int e^{- \lambda t} a  dt$$
-    $$c(t) = \frac{- a }{\lambda} e^{- \lambda t} $$
-
-    Overall then, the solution (which can be easily verified by substitution) is 
-
-    $$\psi(t) = - \frac{a}{\lambda}.  $$ 
-    
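-As a quick numerical sanity check of this example, here is a minimal sketch 
-(assuming `numpy` and `scipy` are available; the parameter values are 
-illustrative choices) comparing the analytic solution with a direct numerical 
-integration:
-
-```python
-import numpy as np
-from scipy.integrate import solve_ivp
-
-lam, a, x0 = -0.5, 2.0, 1.0            # illustrative parameters
-t = np.linspace(0.0, 10.0, 101)
-
-# numerical integration of x' = lam * x + a
-sol = solve_ivp(lambda t, x: lam * x + a, (t[0], t[-1]), [x0], t_eval=t)
-
-# analytic solution: homogeneous part plus the particular solution -a/lam
-x_exact = (x0 + a / lam) * np.exp(lam * t) - a / lam
-
-print(np.max(np.abs(sol.y[0] - x_exact)))  # small, set by solver tolerance
-```
-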
-## Solving homogeneous linear system with constant coefficients
-
-<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/GGIDjgUpsH8?rel=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
-
-The type of equation under consideration in this section looks like 
-
-$$ \dot{\vec{x}}(t) = A \vec{x}(t),$$
-
-where throughout the section $A$ will be a constant matrix. It is possible 
-to write down a formal solution using the *matrix exponential*, 
-$\vec{x}(t) = e^{A t} \vec{x}_0$.
-
-!!! info "Definition: Matrix Exponential"
-
-    Before defining the matrix exponential, recall the definition of the regular 
-    exponential function in terms of Taylor series,
-    
-    $$e^{x} = \overset{\infty}{\underset{n=0}{\Sigma}} \frac{x^n}{n!},$$
-    
-    in which it is agreed that $0!=1$. The matrix exponential is defined in 
-    exactly the same way, only now instead of taking powers of a number or 
-    function, powers of a matrix are calculated 
-    
-     $$e^{A} = \overset{\infty}{\underset{n=0}{\Sigma}} \frac{{A}^n}{n!}.$$
-     
-     It is important to use caution when translating the properties of the normal
-     exponential function over to the matrix exponential, because not all of the
-     regular properties hold generally. In particular, 
-     
-     $$e^{X + Y} \neq e^{X} e^{Y},$$
-     
-     unless it happens that 
-     
-     $$[X, Y] = 0.$$
-     
-     The condition stated on the previous line, that $X$ and $Y$ *commute*, is 
-     sufficient for this property to hold. Recall that in general, matrices are not
-     commutative, so the condition is only met for particular choices of 
-     matrices. The property of *non-commutativity* (what happens when the 
-     condition is not met) is of central importance in the mathematical 
-     structure of quantum mechanics. For example, mathematically, 
-     non-commutativity is responsible for the Heisenberg uncertainty relations.
-     
-     On the other hand, one property that does hold is that $e^{- A t}$ is 
-     the inverse of the matrix exponential $e^{A t}$. 
-     
-     Furthermore, it is possible to derive the derivative of the matrix 
-     exponential by making use of the Taylor series formulation,
-     
-     $$\frac{d}{dt} e^{A t} = \frac{d}{dt} \overset{\infty}{\underset{n=0}{\Sigma}} \frac{(A t)^n}{n!} $$
-     $$\frac{d}{dt} e^{A t} = \overset{\infty}{\underset{n=0}{\Sigma}} \frac{1}{n!} \frac{d}{dt} (A t)^n$$
-     $$\frac{d}{dt} e^{A t} = \overset{\infty}{\underset{n=0}{\Sigma}} \frac{n A}{n!}(A t)^{n-1}$$
-     $$\frac{d}{dt} e^{A t} = \overset{\infty}{\underset{n=1}{\Sigma}} \frac{A}{(n-1)!}(A t)^{n-1}$$
-     $$\frac{d}{dt} e^{A t} = \overset{\infty}{\underset{n=0}{\Sigma}} \frac{A}{n!}(A t)^n$$
-     $$\frac{d}{dt} e^{A t} = A e^{A t}.$$
-
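-The failure of $e^{X+Y}=e^{X}e^{Y}$ for non-commuting matrices is easy to see 
-numerically. Below is a minimal sketch (using `scipy.linalg.expm`, which 
-computes the matrix exponential; the matrices `X` and `Y` are illustrative 
-choices):
-
-```python
-import numpy as np
-from scipy.linalg import expm
-
-# two matrices that do not commute: [X, Y] != 0
-X = np.array([[0.0, 1.0], [0.0, 0.0]])
-Y = np.array([[0.0, 0.0], [1.0, 0.0]])
-
-print(X @ Y - Y @ X)          # nonzero commutator
-print(expm(X + Y))            # differs from ...
-print(expm(X) @ expm(Y))      # ... this product
-
-# e^{-X} is indeed the inverse of e^{X}
-print(expm(X) @ expm(-X))     # identity matrix (up to rounding)
-```
-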
-Armed with the matrix exponential and its derivative, 
-$\frac{d}{dt} e^{A t} = A e^{A t}$, it is simple to verify that 
-the matrix exponential solves the differential equation. Properties of this 
-solution are:
-
-1. The columns of $e^{A t}$ form a basis for the solution space.
-2. Accounting for initial conditions, the full solution of the equation is 
-   $\vec{x}(t) = e^{A t} {\vec{x}}_{0}$, which satisfies the initial condition 
-   $\vec{x}(0) = e^{A \cdot 0}{\vec{x}}_0 = I {\vec{x}}_{0} = {\vec{x}}_{0}$ (here $I$ is the $n\times n$ identity matrix).
-
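-As an illustration of the formal solution, a small sketch (again using 
-`scipy.linalg.expm`; the matrix and initial condition are chosen to match the 
-worked example further below):
-
-```python
-import numpy as np
-from scipy.linalg import expm
-
-A = np.array([[0.0, -1.0], [1.0, 0.0]])    # constant coefficient matrix
-x0 = np.array([1.0, 0.0])                  # initial condition x(0)
-
-for t in (0.0, 0.5, 1.0):
-    x_t = expm(A * t) @ x0
-    # for this A the solution is (cos t, sin t), see the example below
-    print(x_t, np.array([np.cos(t), np.sin(t)]))
-```
-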
-Next we will discuss how to determine a solution in practice, beyond the 
-formal solution just presented. 
-
-### Case 1: **A** diagonalizable
-
-For an $n \times n$ matrix $A$, denote its $n$ eigenvectors as 
-$\{\vec{v}_1, \cdots, \vec{v}_n \}$. By definition, the eigenvectors satisfy the 
-equation
-
-$$A \vec{v}_i = \lambda_i \vec{v}_i, \ \forall i \in \{1, \cdots, n \}. $$
-
-Here we consider the diagonalizable case, in which the $n$ eigenvectors are 
-linearly independent and hence form a basis for $\mathbb{R}^{n}$. 
-
-To solve the equation $\dot{\vec{x}}(t) = A \vec{x}(t)$, define a set of scalar 
-functions $\{u_{1}(t), \cdots u_{n}(t) \}$ and make the following ansatz:
-
-$$\vec{\phi}_{i}(t) = u_{i}(t) \vec{v}_{i}.$$
-
-Then, by differentiating,
-
-$$\dot{\vec{\phi}}_i(t) = \dot{u}_i(t) \vec{v}_{i}.$$
-
-The above equation can be combined with the differential equation for 
-$\vec{\phi}_{i}(t)$, $\dot{\vec{\phi}_{i}}(t)=A \vec{\phi}_{i}(t)$, to derive the 
-following equations,
-
-$$\dot{u}_i(t) \vec{v}_{i} = A u_{i}(t) \vec{v}_{i}$$
-$$\dot{u}_i(t) \vec{v}_{i} = u_{i}(t) \lambda_{i} \vec{v}_{i} $$
-$$\vec{v}_{i} (\dot{u}_i(t) - \lambda_i u_{i}(t)) = 0, $$
-
-where in the second line we made use of the fact that $\vec{v}_i$ is an eigenvector
-of $A$. The obtained relation implies that 
-
-$$\dot{u}_i(t) = \lambda_i u_{i}(t).$$
-
-This is a simple differential equation, of the type dealt with in an earlier 
-example. The solution is found to be
-
-$$u_{i}(t) = c_i e^{\lambda_i t},$$
-
-with $c_i$ some constant. The general solution is found as the linear 
-combination of all $n$ solutions $\vec{\phi}_{i}(t)$,
-
-$$\vec{x}(t) = c_{1} e^{\lambda_1 t} \vec{v}_{1} + c_{2} e^{\lambda_2 t} \vec{v}_{2} + \cdots + c_{n} e^{\lambda_n t} \vec{v}_{n},$$
-
-where the vectors $\{e^{\lambda_1 t} \vec{v}_{1}, \cdots, e^{\lambda_n t} \vec{v}_{n} \}$
-form a basis for the solution space, since $\det(\vec{v}_1 | \cdots | \vec{v}_n) \neq 0$
-(the $n$ eigenvectors are linearly independent). 
-
-!!! check "Example: Homogeneous first order linear system with diagonalizable constant coefficient matrix"
-
-    Define the matrix
-    $$A = \begin{bmatrix}
-    0 & -1 \\
-    1 & 0 
-    \end{bmatrix},$$
-    and consider the DE 
-
-    $$\dot{\vec{x}}(t) = A \vec{x}(t), \quad \vec{x}(0) = \vec{x}_0 = \begin{bmatrix}
-    1 \\
-    0 
-    \end{bmatrix}. $$
- 
-    To proceed following the solution technique, we determine the eigenvalues of 
-    $A$, 
- 
-    $$\det {\begin{bmatrix} 
-    -\lambda & -1 \\
-    1 & - \lambda \\
-    \end{bmatrix}} = \lambda^2 + 1 = 0. $$
-    
-    By solving the characteristic polynomial, one finds the two eigenvalues 
-    $\lambda_{\pm} = \pm i$. 
-    
-    Focusing first on the positive eigenvalue, we can determine the first 
-    eigenvector,
-    
-    $$\begin{bmatrix} 
-    0 & -1 \\
-    1 & 0 \\
-    \end{bmatrix} \begin{bmatrix}
-    a \\
-    b\\
-    \end{bmatrix} = i \begin{bmatrix}
-    a \\
-    b \\
-    \end{bmatrix}.$$
-
-    A solution to this eigenvector equation is given by $a=1$, $b=-i$, altogether
-    implying that
-    $$\lambda_1=i, \vec{v}_{1} = \begin{bmatrix} 
-    1 \\
-    -i \\
-    \end{bmatrix}.$$
-    
-    As for the second eigenvalue, $\lambda_{2} = -i$, we can solve the analogous
-    eigenvector equation to determine
-    $$\vec{v}_{2} = \begin{bmatrix}
-    1 \\
-    i \\
-    \end{bmatrix}.$$
-    
-    Hence two independent solutions of the differential equation are:
-    
-    $$\vec{\phi}_{1} = e^{i t}\begin{bmatrix}
-    1 \\
-    -i \\
-    \end{bmatrix}, \vec{\phi}_{2}  = e^{-i t} \begin{bmatrix}
-    1 \\
-    i \\
-    \end{bmatrix}.$$
- 
-    To obtain the solution satisfying the initial condition, the only thing 
-    still missing is the determination of the coefficients of the linear 
-    combination of the two basis solutions. To this end,
-    we must solve 
-    
-    $$c_1 \vec{\phi}_{1}(t) + c_2 \vec{\phi}_{2}(t) = \begin{bmatrix}
-    1 \\
-    0 \\
-    \end{bmatrix}$$
-    $$\begin{bmatrix}
-    c_1 + c_2 \\
-    -i c_1 + i c_2 \\
-    \end{bmatrix} = \begin{bmatrix} 
-    1 \\
-    0 \\
-    \end{bmatrix}.$$
-    
-    The second row of the vector equation for $c_1, c_2$ implies that $c_1=c_2$. 
-    The first row then implies that $c_1=c_2=\frac{1}{2}$. 
-    
-    Overall then, the solution of the initial value problem reads
-    
-    $$\vec{x}(t) = \begin{bmatrix}
-    \frac{1}{2}(e^{i t} + e^{-i t}) \\
-    \frac{1}{2 i}(e^{i t} - e^{-i t}) \\
-    \end{bmatrix} = \begin{bmatrix}
-    \cos(t) \\
-    \sin(t) \\
-    \end{bmatrix}. $$
-
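-A compact numerical cross-check of this example is possible with `numpy` (a 
-sketch; the evaluation time `t` is an arbitrary choice):
-
-```python
-import numpy as np
-
-A = np.array([[0.0, -1.0], [1.0, 0.0]])
-x0 = np.array([1.0, 0.0])
-
-# eigendecomposition: the columns of V are the eigenvectors v_i
-lam, V = np.linalg.eig(A)          # lam contains i and -i
-
-# coefficients c_i of the initial condition: solve V c = x0
-c = np.linalg.solve(V, x0)
-
-t = 0.7
-x_t = (V * np.exp(lam * t)) @ c    # sum_i c_i e^{lam_i t} v_i
-print(x_t.real, np.array([np.cos(t), np.sin(t)]))   # the two agree
-```
-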
-### Case 2: **A** $2\times 2$, defective
-
-In this case we consider the situation where $\det(A- \lambda I)$ 
-has a root $\lambda$ with multiplicity 2, but only one eigenvector $\vec{v}_1$. 
-
-!!! check "Example: Matrix with eigenvalue of multiplicity 2 and only a single eigenvector."
-    
-    Consider the matrix
-    
-    $$A = \begin{bmatrix}
-    1 & 1 \\
-    0 & 1 \\
-    \end{bmatrix}$$
-    
-    The characteristic polynomial can be found by evaluating
-    
-    $$\det \big{(} \begin{bmatrix}
-    1-\lambda & 1 \\
-    0 & 1-\lambda \\
-    \end{bmatrix} \big{)} = 0$$
-    $$(1-\lambda)^2 = 0$$
-    
-    Hence the matrix $A$ has the single eigenvalue $\lambda=1$ with 
-    multiplicity 2. As for finding an eigenvector, we solve
-    
-    $$\begin{bmatrix} 
-    1 & 1 \\
-    0 & 1 \\
-    \end{bmatrix} \begin{bmatrix} 
-    a \\
-    b \\
-    \end{bmatrix} = \begin{bmatrix} 
-    a \\
-    b \\
-    \end{bmatrix}$$
-    $$\begin{bmatrix} 
-    a+b \\
-    b \\
-    \end{bmatrix} = \begin{bmatrix} 
-    a \\
-    b \\
-    \end{bmatrix}.$$
-    
-    These equations, $a+b=a$ and $b=b$, imply that $b=0$ and that $a$ can be chosen 
-    arbitrarily, for example as $a=1$. Then, the only eigenvector is
-    
-    $$\vec{v}_1 = \begin{bmatrix} 
-    1 \\
-    0 \\
-    \end{bmatrix}.$$
-    
-What is the problem in this case? Since there are $n$ equations to be solved and
-an $n \times n$ linear operator $A$, the solution space for the equation 
-requires a basis of $n$ solutions. In this case however there are fewer than $n$ 
-linearly independent eigenvectors, so the eigenvectors alone cannot form a basis 
-for the solution space. 
-
-Suppose that we have a system of $2$ coupled equations, so that $A$ is a 
-$2 \times 2$ matrix, which has eigenvalue $\lambda_1$ with multiplicity $2$. As 
-in the previous section, we can form one solution using the single eigenvector 
-$\vec{v}_1$,
-
-$$\vec{\phi}_1(t) = e^{\lambda_1 t} \vec{v}_1.$$
-
-To determine a second, linearly independent solution, make the following ansatz:
-
-$$\vec{\phi}_2(t) = t e^{\lambda_1 t} \vec{v}_1 + e^{\lambda_1 t} \vec{v}_2.$$
-
-With this ansatz it is then necessary to determine an appropriate vector $\vec{v}_2$
-such that $\vec{\phi}_2(t)$ is really a solution of the problem. To this end, take
-the derivative of $\vec{\phi}_2(t)$,
-
-$$\dot{\vec{\phi}_2}(t) = e^{\lambda_1 t} \vec{v}_1 + \lambda_1 t e^{\lambda_1 t} \vec{v}_1 + \lambda_1 e^{\lambda_1 t} \vec{v}_2 $$
-
-Also, write the matrix equation for $\vec{\phi}_2(t)$,
-
-$$A \vec{\phi}_2(t) = A t e^{\lambda_1 t} \vec{v}_1 + A e^{\lambda_1 t} \vec{v}_2 $$
-$$A \vec{\phi}_2(t) = \lambda_1 t e^{\lambda_1 t} \vec{v}_1 + e^{\lambda_1 t}A \vec{v}_2$$
-
-Since $\vec{\phi}_2(t)$ must solve the equation 
-$\dot{\vec{\phi}}_2(t) = A \vec{\phi}_2(t)$, we can combine and simplify the 
-previous equations to write 
-
-$$A \vec{v}_2 - \lambda_1  \vec{v}_2 = \vec{v}_1$$
-$$(A- \lambda_1 I) \vec{v}_2 = \vec{v}_1 $$
-
-With this condition it is possible to write the general solution,
-
-$$\vec{x}(t) = c_1  e^{\lambda_1 t} \vec{v}_1 + c_2(t e^{\lambda_1 t} \vec{v}_1 + e^{\lambda_1 t} \vec{v}_2).$$
-
-!!! check "Example: Continuation of example with $A$ defective (previous example)"
-    
-    Now it is our task to apply the condition in order to solve for $\vec{v}_2$,
-    
-    $$\begin{bmatrix}
-    1-1 & 1 \\
-    0 & 1-1 \\
-    \end{bmatrix} \begin{bmatrix}
-    a \\
-    b \\
-    \end{bmatrix} = \begin{bmatrix}
-    1 \\
-    0 \\
-    \end{bmatrix}$$
-    $$\begin{bmatrix} 
-    b \\
-    0 \\
-    \end{bmatrix} = \begin{bmatrix} 
-    1 \\
-    0 \\
-    \end{bmatrix}$$
-    
-    Hence, $b=1$ and $a$ is undetermined, so may be taken as $a=0$. Then, 
-    $$\vec{v}_{2} = \begin{bmatrix} 0 \\1 \end{bmatrix}.$$ 
-    
-    Overall then, the general solution is 
-    
-    $$\vec{x}(t) = c_1 e^t \begin{bmatrix} 
-    1 \\
-    0 \\
-    \end{bmatrix} + c_2 e^t \big{(} t \begin{bmatrix} 
-    1 \\
-    0 \\ 
-    \end{bmatrix}  + \begin{bmatrix} 
-    0 \\
-    1 \\
-    \end{bmatrix}\big{)}.$$
-
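-A short numerical check of this defective example (a sketch using 
-`scipy.linalg.expm`; the evaluation time is arbitrary):
-
-```python
-import numpy as np
-from scipy.linalg import expm
-
-A = np.array([[1.0, 1.0], [0.0, 1.0]])   # the defective matrix from the example
-v1 = np.array([1.0, 0.0])                # eigenvector
-v2 = np.array([0.0, 1.0])                # generalized eigenvector
-
-print(np.allclose((A - np.eye(2)) @ v2, v1))    # (A - I) v2 = v1: True
-
-# phi_2(t) = t e^t v1 + e^t v2 solves the DE with phi_2(0) = v2,
-# so it must coincide with the exact propagator applied to v2
-t = 0.3
-phi2 = t * np.exp(t) * v1 + np.exp(t) * v2
-print(np.allclose(expm(A * t) @ v2, phi2))      # True
-```
-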
-### Bonus case 3: Higher multiplicity eigenvalues
-
-In this case we consider the situation where the matrix $A$ has an 
-eigenvalue $\lambda$ with multiplicity $m>2$, and only one eigenvector $\vec{v}$ 
-corresponding to $\lambda$, $(A - \lambda I)\vec{v}=0$. In this case notice 
-that $A$ must be at least an $m \times m$ matrix. 
-
-To solve such a situation, we will expand upon the result of the previous 
-section and define the vectors $\vec{v}_2$ through $\vec{v}_{m}$ by
-
-$$(A- \lambda I) \vec{v}_2 = \vec{v}_1$$
-$$\vdots$$
-$$(A- \lambda I) \vec{v}_m = \vec{v}_{m-1}.$$
-
-Then, the subset of the basis of solutions corresponding to eigenvalue $\lambda$
-is formed by the vectors
-
-$$\vec{\phi}_{k}(t) = e^{\lambda t} \big{(} \frac{t^{k-1}}{(k-1)!}\vec{v}_1 + \cdots + t \vec{v}_{k-1} + \vec{v}_{k} \big{)} \ \forall k \in \{1, \cdots, m \}.$$
-
-To prove this, first take the derivative of $\vec{\phi}_{k}(t)$,
-
-$$\dot{\vec{\phi}}_{k}(t) = \lambda \vec{\phi}_{k}(t) + e^{\lambda t} \big{(} \frac{t^{k-2}}{(k-2)!}\vec{v}_1 + \cdots + \vec{v}_{k-1} \big{)}.$$
-
-Then, for comparison, multiply $\vec{\phi}_k(t)$ by $A$,
-
-$$A \vec{\phi}_k (t) = e^{\lambda t} \big{(} \frac{t^{k-1}}{(k-1)!}\lambda \vec{v}_1 + \frac{t^{k-2}}{(k-2)!} A \vec{v}_2 + \cdots + A \vec{v}_{k-1} + A \vec{v}_k \big{)}$$
-$$A \vec{\phi}_k (t) = \lambda \vec{\phi}_k (t) + e^{\lambda t} \big{(} \frac{t^{k-2}}{(k-2)!}(A- \lambda I)\vec{v}_2 + \cdots + t (A- \lambda I)\vec{v}_{k-1} + (A- \lambda I)\vec{v}_{k}  \big{)}$$
-$$A \vec{\phi}_k (t) =  \lambda \vec{\phi}_k (t) + e^{\lambda t} \big{(} \frac{t^{k-2}}{(k-2)!} \vec{v}_1 + \cdots + t \vec{v}_{k-2} + \vec{v}_{k-1} \big{)}$$
-$$A \vec{\phi}_k (t) = \dot{\vec{\phi}}_{k}(t).$$
-
-Notice that in the second-to-last line we made use of the relations 
-$(A- \lambda I)\vec{v}_{i} = \vec{v}_{i-1}$. This completes the proof, 
-since we have demonstrated that $\vec{\phi}_{k}(t)$ is a solution of the DE.
-
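-In practice, such a chain of generalized eigenvectors can be generated with a 
-computer algebra system. A minimal sketch with `sympy` (the $3\times 3$ matrix 
-is an illustrative choice):
-
-```python
-import sympy as sp
-
-# a 3x3 matrix with a single eigenvalue (lambda = 2) of multiplicity 3
-A = sp.Matrix([[2, 1, 0],
-               [0, 2, 1],
-               [0, 0, 2]])
-
-# jordan_form returns P whose columns are the eigenvector and the
-# generalized eigenvectors, i.e. the chain (A - 2I) v_k = v_{k-1}
-P, J = A.jordan_form()
-v1, v2, v3 = P[:, 0], P[:, 1], P[:, 2]
-print((A - 2 * sp.eye(3)) * v2 == v1)   # True
-print((A - 2 * sp.eye(3)) * v3 == v2)   # True
-```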
-
-## Problems
-
-1. [:grinning:] Solve:
-
-    (a)  $\dot{x}(t) = t^4$
-
-    (b)  $\dot{x}(t) = \sin(t)$
-
-2. [:grinning:] Solve, subject to the initial condition $x(0)=\frac{1}{2}$
-
-    (a) $\dot{x}(t) = x^2$
-
-    (b) $\dot{x}(t) = t x$
-
-    (c) $\dot{x}(t) = t x^{4}$
-
-3. [:smirk:] Solve, subject to the given initial condition
-
-    (a) $\dot{x}(t)=-\tan(x)\sin(x)$, subject to $x(0)=1$. 
-
-    (b) $\dot{x}(t)=\frac{1}{3} x^2+3$, subject to $x(0)=3$.
-
-    Hint: it is fine if you use a computer algebra program to solve the integrals for these problems.
-
-4. [:smirk:] Solve the following equation and list all possible solutions
-
-    $$\dot{x}=\cos^2(x)$$
-
-    Hint: $\int \frac{1}{\cos^2(x)} dx = \tan(x) $
-      
-5. [:grinning:] Identify which of the following systems of equations is linear.
-   *Note that you do not need to solve them!*  
-
-    (a) $$\dot{x}_1= t x_1 -t x_2$$
-        $$\dot{x}_2 = x_1 x_2 - x_2$$
-
-    (b) $$\dot{x}_1 = e^{-t}x_1$$
-        $$\dot{x}_2 = \sqrt{t + \cos(t)-1}x_1 + \frac{\sin(t)}{t^2+t-1}x_2$$
-
-    (c) $$x^{(2)}_1 x_1 + \dot{x}_1 = 8 x_2$$
-        $$\dot{x}_2=5tx_2 + x_1$$
-
-6. [:grinning:] Take the system of equations
-
-    $$\dot{x}_1 = \frac{1}{2} (t-1)x_1 + \frac{1}{2} (t+1)x_2$$
-
-    $$\dot{x}_2 = \frac{1}{2}(t+1)x_1 + \frac{1}{2}(t-1)x_2.$$
-
-    Show that 
-
-    $$\vec{\Phi}_1(t) = \begin{bmatrix} 
-    e^{- t} \\
-    -e^{- t} \\
-    \end{bmatrix}$$
-    and
-    $$\vec{\Phi}_2(t)=\begin{bmatrix}
-    e^{\frac{1}{2}(t^2)} \\
-    e^{\frac{1}{2}(t^2)} \\
-    \end{bmatrix}$$
-
-    constitute a basis for the solution space of this system of equations. 
-    To this end, first verify that they are indeed solutions and then that 
-    they form a basis. 
-
-7. [:grinning:] Take the system of equations 
-
-    $$\dot{x}_1=x_1$$
-
-    $$\dot{x}_2=x_1.$$
-
-    Re-write this system of equations into the general form
-
-    $$\dot{\vec{x}} = A \vec{x}$$
-
-    and then find the general solution. Specify the general solution for the 
-    following initial conditions
-
-    (a) $$\vec{x}(0) = \begin{bmatrix} 
-        1 \\
-        0 \\
-        \end{bmatrix}$$
-
-    (b) $$\vec{x}(0) = \begin{bmatrix}
-        0 \\
-        1 \\ 
-        \end{bmatrix}$$
-
-8. [:smirk:] Find the general solution of 
-
-    $$\begin{bmatrix}
-    \dot{x}_1 \\
-    \dot{x}_2 \\
-    \dot{x}_3 \\
-    \end{bmatrix} = \begin{bmatrix} 
-    1 & 1 & 0 \\
-    1 & 1 & 0 \\
-    0 & 0 & 3 \\
-    \end{bmatrix} \begin{bmatrix} 
-    x_1 \\
-    x_2 \\
-    x_3 \\
-    \end{bmatrix}.$$
-
-    Then, specify the solution for the initial conditions 
-
-    (a) $$\begin{bmatrix} 
-        0 \\
-        0 \\
-        1 \\
-        \end{bmatrix}$$
-
-    (b) $$\begin{bmatrix}
-        1 \\
-        0 \\
-        0 \\
-        \end{bmatrix}$$
-
-9. [:sweat:] Find the general solution of the system of equations
-
-    $$\dot{x}_1 = 3 x_1 + x_2$$
-    $$\dot{x}_2 = - x_1 + x_2$$  
-
- 
-
diff --git a/docs/8_differential_equations_2.md b/docs/8_differential_equations_2.md
deleted file mode 100644
index 1bfac80c53eaf87b793ad9325ff5a7ce80b19ef8..0000000000000000000000000000000000000000
--- a/docs/8_differential_equations_2.md
+++ /dev/null
@@ -1,592 +0,0 @@
----
-title: Differential Equations 2
----
-
-# Differential equations 2
-
-The lecture on differential equations consists of three parts, each with their own video:
-
-- [Higher order linear differential equations](#higher-order-linear-differential-equations)
-- [Partial differential equations: Separation of variables](#partial-differential-equations-separation-of-variables)
-- [Self-adjoint differential operators](#self-adjoint-differential-operators)
-
-**Total video length:  1 hour  9 minutes**
-
-## Higher order linear differential equations
-
-<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/ucvIiLgJ2i0?rel=0" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
-
-### Definitions
-
-In the previous lecture, we focused on first order linear differential equations
-as well as systems of such equations. In this lecture we switch focus to DE's 
-which involve higher derivatives of the function we would like to solve for. To
-facilitate this change, we are going to change notation. In the previous lecture
-we wrote differential equations for $x(t)$. In this lecture we will write DE's 
-for $y(x)$, where $y$ is an unknown function and $x$ is the independent variable. 
-For this purpose we make the following definitions,
-
-$$y' = \frac{dy}{dx}, \ y'' = \frac{d^2 y}{dx^2}, \ \cdots, \ y^{(n)} = \frac{d^n y}{dx^n}.$$
-
-In the new notation, a linear $n$-th order differential equation with constant
-coefficients reads 
-
-$$y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0. $$
-
-!!! info "Linear combination of solutions are still solutions"
-
-    Note that as was the case for first order linear DE's, the propery of 
-    linearity once again means that if $y_{1}(x)$ and $y_{2}(x)$ are both 
-    solutions, and $a$ and $b$ are constants, 
-    
-    $$a y_{1}(x) + b y_{2}(x)$$
-    
-    then linear combination of the solutions is also a solution.
-
-### Mapping to a linear system of first-order DEs
-
-In order to solve a higher order linear DE we will present a trick that makes it
-possible to map the problem of solving a single $n$-th order linear DE into a
-related problem of solving a system of $n$ first order linear DE's. 
-
-To begin, define:
-
-$$y_{1} = y, \ y_{2} = y', \ \cdots, \ y_{n} = y^{(n-1)}.$$
-
-Then, the differential equation can be re-written as
-
-$$\begin{split}
-y_1 ' & = y_2 \\
-y_2 ' & = y_3 \\
-& \vdots \\
-y_{n-1} '& = y_{n} \\
-y_{n} ' & = - a_{0} y_{1} - a_{1} y_{2} - \cdots - a_{n-1} y_{n}.
-\end{split}$$
-
-Notice that together these $n$ equations form a linear first order system, the 
-first $n-1$ equations of which are trivial. Note that this trick can be used to 
-reduce any system of $n$-th order linear DE's to a larger system of first order 
-linear DE's. 
-
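-As a concrete illustration of this mapping, here is a minimal sketch (the 
-equation $y'' + 3y' + 2y = 0$ and the initial data are illustrative choices) 
-that builds the companion system and integrates it with the matrix exponential 
-from the previous lecture:
-
-```python
-import numpy as np
-from scipy.linalg import expm
-
-# y'' + 3 y' + 2 y = 0  mapped to  (y1, y2)' = A (y1, y2), y1 = y, y2 = y'
-a0, a1 = 2.0, 3.0
-A = np.array([[0.0,  1.0],
-              [-a0, -a1]])
-
-y0 = np.array([1.0, 0.0])     # y(0) = 1, y'(0) = 0
-t = 1.0
-y_t = expm(A * t) @ y0        # first component is y(t)
-
-# exact solution: roots -1 and -2 give y(x) = 2 e^{-x} - e^{-2x}
-print(y_t[0], 2 * np.exp(-t) - np.exp(-2 * t))
-```
-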
-Since we have already discussed the method of solution for first order linear 
-systems, we will only outline the general solution to this system. As before, the 
-general solution will be the linear combination of $n$ linearly independent 
-solutions $f_{i}(x)$, $i \in \{1, \cdots, n \}$, which make up a basis for 
-the solution space. That is, the general solution has the form
-
-$$y(x) = c_1 f_1 (x) + c_2 f_2 (x) + \cdots + c_n f_{n}(x). $$
-
-To check that the $n$ solutions form a basis, it is sufficient to verify
-
-$$ \det \begin{bmatrix} 
-f_1(x) & \cdots & f_{n}(x) \\
-f_1 ' (x) & \cdots & f_{n}'(x) \\
-\vdots & \vdots & \vdots \\
-f^{(n-1)}_{1} (x) & \cdots & f^{(n-1)}_{n} (x) \\
-\end{bmatrix}  \neq 0.$$
-
-The determinant in the preceding line is called the *Wronski determinant*.
-
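-The Wronski determinant is easily evaluated symbolically. A small sketch with 
-`sympy` (for the illustrative equation $y'' + 3y' + 2y = 0$ used above, with 
-basis candidates $e^{-x}$ and $e^{-2x}$):
-
-```python
-import sympy as sp
-
-x = sp.symbols('x')
-
-# candidate basis of solutions for y'' + 3 y' + 2 y = 0
-f1, f2 = sp.exp(-x), sp.exp(-2 * x)
-
-# nonzero Wronski determinant <=> the solutions are linearly independent
-W = sp.wronskian([f1, f2], x)
-print(sp.simplify(W))     # -exp(-3*x), nonzero for all x
-```
-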
-### General solution
-
-To determine particular solutions, we need to find the eigenvalues of 
-
-$$A = \begin{bmatrix} 
-0 & 1 & 0 & \cdots & 0 \\
-0 & 0 & 1 & \cdots & 0 \\
-\vdots & \vdots & \vdots & \cdots & \vdots \\
--a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \\
-\end{bmatrix}.$$
-
-It is possible to show that 
-
-$$\det(A - \lambda I) = (-1)^{n} P(\lambda),$$
-
-in which $P(\lambda)$ is the characteristic polynomial of the system matrix $A$,
-
-$$P(\lambda) = \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_0.$$
-
-
-!!! info "Proof of $\det(A - \lambda I) = -P(\lambda)$"
-
-    As we demonstrate below, the proof relies on the co-factor expansion
-    technique for calculating a determinant. 
-
-    $$- \det(A - \lambda I) = \begin{bmatrix} 
-    \lambda & -1 & 0 & \cdots & 0 \\
-    0 & \lambda & -1 & \cdots & 0 \\
-    \vdots & \vdots & \vdots & \cdots & \vdots \\
-    a_0 & a_1 & a_2 & \cdots & a_{n-1} + \lambda \\
-    \end{bmatrix} $$
-    $$- \det(A - \lambda I) =  \lambda \det \begin{bmatrix}
-    \lambda & -1 & 0 & \cdots & 0 \\
-    0 & \lambda & -1 & \cdots & 0 \\
-    \vdots & \vdots & \vdots & \cdots & \vdots \\
-    a_1 & a_2 & a_3 & \cdots & a_{n-1} + \lambda \\
-    \end{bmatrix} + (-1)^{n+1}a_0 \det \begin{bmatrix} 
-    -1 & 0 & 0 & \cdots & 0 \\
-    \lambda & -1 & 0 & \cdots & 0 \\
-    \vdots & \vdots & \vdots & \cdots & \vdots \\
-    0 & 0 & \cdots & \lambda & -1 \\
-    \end{bmatrix}$$
-    $$- \det(A - \lambda I) = \lambda \det \begin{bmatrix}
-    \lambda & -1 & 0 & \cdots & 0 \\
-    0 & \lambda & -1 & \cdots & 0 \\
-    \vdots & \vdots & \vdots & \cdots & \vdots \\
-    a_1 & a_2 & a_3 & \cdots & a_{n-1} + \lambda \\
-    \end{bmatrix} + (-1)^{n+1} a_0 (-1)^{n-1}$$
-    $$- \det(A - \lambda I) = \lambda \det \begin{bmatrix}
-    \lambda & -1 & 0 & \cdots & 0 \\
-    0 & \lambda & -1 & \cdots & 0 \\
-    \vdots & \vdots & \vdots & \cdots & \vdots \\
-    a_1 & a_2 & a_3 & \cdots & a_{n-1} + \lambda \\
-    \end{bmatrix} + a_0$$
-    $$- \det(A - \lambda I) = \lambda (\lambda (\lambda \cdots + a_2) + a_1)
-      + a_0$$
-    $$- \det(A - \lambda I) = P(\lambda).$$
-
-    In the second last line of the proof we indicated that the method of
-    co-factor  expansion demonstrated is repeated an additional $n-2$ times.
-    This completes the proof. 
-
-With the characteristic polynomial, it is possible to write the differential 
-equation as 
-
-$$P(\frac{d}{dx})y(x) = 0.$$
-
-To determine solutions, we need to find $\lambda_i$ such that $P(\lambda_i) = 0$. 
-By the fundamental theorem of algebra, we know that $P(\lambda)$ can be written 
-as
-
-$$P(\lambda) = \overset{l}{\underset{k=1}{\Pi}} (\lambda - \lambda_k)^{m_k}.$$
-
-In the previous equation, the $\lambda_k$, $k \in \{1, \cdots, l\}$, are the distinct 
-roots of the polynomial, and $m_k$
-is the multiplicity of each root. Note that the multiplicities satisfy 
-$\overset{l}{\underset{k=1}{\Sigma}} m_k = n$. 
-
-If the multiplicity of each eigenvalue is one, then the solutions which form the 
-basis are given as:
-
-$$f(x) = e^{\lambda_1 x}, \ e^{\lambda_2 x}, \ \cdots, \ e^{\lambda_n x}.$$
-
-If there are eigenvalues with multiplicity greater than one, then the solutions
-which form the basis are given as 
-
-$$f(x) = e^{\lambda_1 x}, \ x e^{\lambda_1 x} , \ \cdots, \ x^{m_{1}-1} e^{\lambda_1 x}, \ \text{etc.}$$
-
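-The roots and multiplicities, and from them the basis functions, can be read 
-off with a computer algebra system. A small sketch with `sympy` (the polynomial 
-$(\lambda-1)^2(\lambda+2)$ is an illustrative choice):
-
-```python
-import sympy as sp
-
-lam, x = sp.symbols('lambda x')
-
-# a characteristic polynomial with a repeated root
-P = sp.expand((lam - 1)**2 * (lam + 2))
-
-# roots() returns a {root: multiplicity} dictionary
-for root, m in sp.roots(P, lam).items():
-    for k in range(m):
-        print(x**k * sp.exp(root * x))   # e^x, x e^x, e^{-2x}
-```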
-
-!!! check "Example: Second order homogeneous linear DE with constant coefficients"
-
-    Consider the equation 
-    
-    $$y'' + Ey = 0.$$ 
-    
-    The characteristic polynomial of this equation is 
-    
-    $$P(\lambda) = \lambda^2 + E.$$
-    
-    There are three cases for the possible solutions, depending upon the value 
-    of $E$.
-    
-    **Case 1: $E>0$**
-    For ease of notation, define $E=k^2$ for some constant $k$. The 
-    characteristic polynomial can then be factored as
-    
-    $$P(\lambda) = (\lambda+ i k)(\lambda - i k). $$
-    
-    Following our formulation for the solution, the two basis functions for the 
-    solution space are 
-    
-    $$f_1(x) = e^{i k x}, \ f_2(x)=e^{- i k x}.$$
-    
-    Alternatively, the trigonometric functions can serve as basis functions, 
-    since they are linear combinations of $f_1$ and $f_2$ which remain linearly
-    independent,
-    
-    $$\tilde{f_1}(x)=\cos(kx), \tilde{f_2}(x)=\sin(kx).$$
-    
-    **Case 2: $E<0$**
-    This time, define $E=-k^2$, for constant $k$. The characteristic polynomial 
-    can then be factored as 
-    
-    $$P(\lambda) = (\lambda+ k)(\lambda -  k).$$
-
-    The two basis functions for this solution are then 
-    
-    $$f_1(x)=e^{k x}, \ f_2(x) = e^{-k x}.$$
-    
-    **Case 3: $E=0$**
-    In this case, there is a repeated eigenvalue (equal to $0$), since the 
-    characteristic polynomial reads
-    
-    $$P(\lambda) = (\lambda-0)^2.$$
-    
-    Hence the basis functions for the solution space read 
-    
-    $$f_1(x)=e^{0 x} = 1, \ f_{2}(x) = x e^{0 x} = x. $$
-
-
-## Partial differential equations: Separation of variables
-
-<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/I4ghpYsFLFY?rel=0" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
-
-### Definitions and examples
-
-A partial differential equation (PDE) is an equation involving a function of two or 
-more independent variables and derivatives of said function. These equations
-are classified similarly to ordinary differential equations (the subject of
-our earlier study). For example, they are called linear if no terms such as
-
-$$\frac{\partial y(x,t)}{\partial x} \cdot \frac{\partial y(x,t)}{\partial t} \quad \text{or} \quad \frac{\partial^2 y(x,t)}{\partial x^2} y(x,t)$$ 
-
-occur. A PDE is classified as $n$-th order according to the highest 
-derivative order of either variable occurring in the equation. For example, the 
-equation
-
-$$\frac{\partial^3 f(x,t)}{\partial x^3} + \frac{\partial f(x,t)}{\partial t} = 5$$
-
-is a third order equation because of the third derivative with respect to $x$
-in the equation.
-
-To begin, we demonstrate that PDE's are of fundamental importance in physics, 
-especially in quantum physics. In particular, the Schrödinger equation, 
-which is of central importance in quantum physics is a partial differential 
-equation with respect to time and space. This equation is very important 
-because it describes the evolution in time and space of the entire description
-of a quantum system $\psi(x,t)$, which is known as the wavefunction. 
-
-For a free particle in one dimension, the Schrödinger equation is 
-
-$$i \hbar \frac{\partial \psi(x,t)}{\partial t} = - \frac{\hbar^2}{2m} \frac{\partial^2 \psi(x,t)}{\partial x^2}. $$
-
-When we studied ODEs, an initial condition was necessary in order to fully 
-specify a solution. Similarly, in the study of PDEs an initial condition is 
-required, but now boundary conditions are also required. Going back to the 
-intuitive discussion from the lecture on ODEs, each of these conditions is 
-necessary in order to specify an integration constant that occurs in solving 
-the equation. In partial differential equations, at least one such constant will
-arise from the time derivative and likewise at least one from the spatial 
-derivative. 
-
-For the Schrödinger equation, we could supply the initial condition
-$$\psi(x,0)=\psi_0(x),$$
-together with the boundary conditions
-$$\psi(0,t) = \psi(L, t) = 0.$$
-
-This particular set of boundary conditions corresponds to a particle in a box,
-a situation which is used as the base model for many derivations in quantum 
-physics. 
-
-Another example of a partial differential equation common in physics is the 
-Laplace equation
-
-$$\frac{\partial^2 \phi(x,y)}{\partial x^2}+\frac{\partial^2 \phi(x,y)}{\partial y^2}=0.$$
-
-In quantum physics Laplace's equation is important for the study of the hydrogen
-atom. In three dimensions and using spherical coordinates, the solutions to 
-Laplace's equation are special functions called spherical harmonics. In the 
-context of the hydrogen atom, these functions describe the wave function of the 
-system and a unique spherical harmonic function corresponds to each distinct set
-of quantum numbers.
-
-In the study of PDEs there is not a comprehensive overall treatment to the same 
-extent as there is for ODEs. There are several techniques which can be applied 
-to solving these equations, but the choice of technique must be tailored to the
-equation at hand. Hence we focus on some specific examples that are common in
-physics.
-
-### Separation of variables
-
-Let us focus on the one dimensional Schrödinger equation of a free particle
-
-$$i \hbar \frac{\partial \psi(x,t)}{\partial t} = - \frac{\hbar^2}{2m} \frac{\partial^2 \psi(x,t)}{\partial x^2}. $$
-
-To attempt a solution, we will make a *separation ansatz*,
-
-$$\psi(x,t)=\phi(x) f(t).$$
-
-!!! info "Separation ansatz"
-    The separation ansatz is a restrictive ansatz, not a fully general one. In
-    general, for such a treatment to be valid an equation and the boundary 
-    conditions given with it have to fulfill certain properties. In this course
-    however you will only be asked to use this technique when it is suitable.
-    
-Substituting the separation ansatz into the PDE,
-
-$$i \hbar \frac{\partial \phi(x)f(t)}{\partial t} = - \frac{\hbar^2}{2m} \frac{\partial^2 \phi(x)f(t)}{\partial x^2} $$
-$$i \hbar \dot{f}(t) \phi(x) = - \frac{\hbar^2}{2m} \phi''(x)f(t). $$
-
-Notice that in the above equation the derivatives on $f$ and $\phi$ can each be
-written as ordinary derivatives, $\dot{f}=\frac{df(t)}{dt}$, 
-$\phi''(x)=\frac{d^2 \phi}{dx^2}$. This is so because each is only a function of 
-one variable. 
-
-Next, divide both sides of the equation through by $\psi(x,t)=\phi(x) f(t)$,
-
-$$i \hbar \frac{\dot{f}(t)}{f(t)} = - \frac{\hbar^2}{2m} \frac{\phi''(x)}{\phi(x)} = constant := \lambda. $$
-
-In the previous line we concluded that each part of the equation must be equal 
-to a constant, which we defined as $\lambda$. This follows because the left hand
-side of the equation only depends on the time coordinate $t$, whereas 
-the right hand side only depends on the spatial coordinate $x$. If we have 
-two functions $a(x)$ and $b(t)$ such that 
-$a(x)=b(t) \ \forall x, \ t \ \in \ \mathbb{R}$, then $a(x)=b(t)=\text{const}$.
-
-The constant we defined, $\lambda$, is called a *separation constant*. With it, we 
-can break the spatial and time dependent parts of the equation into two separate
-equations,
-
-$$i \hbar \dot{f}(t) = \lambda f(t)$$
-
-$$-\frac{\hbar^2}{2m} \phi''(x) = \lambda \phi(x) .$$
-
-To summarize, this process has broken one partial differential equation into two
-ordinary differential equations of different variables. In order to do this, we 
-needed to introduce a separation constant, which remains to be determined.
-
-### Boundary and eigenvalue problems
-
-Continuing on with the Schrödinger equation example from the previous 
-section, let us focus on 
-
-$$-\frac{\hbar^2}{2m} \phi''(x) = \lambda \phi(x),$$
-$$\phi(0)=\phi(L)=0.$$
-
-This has the form of an eigenvalue equation, in which $\lambda$ is the 
-eigenvalue, $- \frac{\hbar^2}{2m} \frac{d^2}{dx^2}[\cdot]$ is the linear 
-operator and $\phi(x)$ is the eigenfunction. 
-
-Notice that when stating the ordinary differential equation, it is specified 
-along with its boundary conditions. Note that in contrast to an initial value
-problem, a boundary value problem does not always have a solution. For example, 
-in the figure below, regardless of the initial slope, the curves never reach $0$
-when $x=L$. 
-
-![image](figures/DE2_1.png)
-
-For boundary value problems like this, there are only solutions for particular 
-eigenvalues $\lambda$. Coming back to the example, it turns out that solutions
-only exist for $\lambda>0$ (this can be shown quickly, feel free to try it!). 
-Define for simplicity $k^2:= \frac{2m \lambda}{\hbar^2}$. The equation then 
-reads
-
-$$\phi''(x)+k^2 \phi(x)=0.$$
-
-Two linearly independent solutions to this equation are 
-
-$$\phi_{1}(x)=\sin(k x), \ \phi_{2}(x) = \cos(k x).$$
-
-The solution to this homogeneous equation is then 
-
-$$\phi(x)=c_1 \phi_1(x)+c_2 \phi_2(x).$$
-
-The eigenvalue, $\lambda$ as well as one of the constant coefficients can be 
-determined using the boundary conditions. 
-
-$$\phi(0)=0 \ \Rightarrow \ \phi(x)=c_1 \sin(k x), \ c_2=0.$$
-
-$$\phi(L)=0 \ \Rightarrow \ 0=c_1 \sin(k L) .$$
-
-In turn, using the properties of the $\sin(\cdot)$ function, it is now possible
-to find the allowed values of $k$ and hence also $\lambda$. The previous 
-equation implies, 
-
-$$k L = n \pi, \ n \ \in \ \mathbb{N},$$
-
-$$\lambda_n = \frac{1}{2m} \big{(}\frac{n \pi \hbar}{L} \big{)}^2.$$
-
-The values $\lambda_n$ are the eigenvalues. Now that we have determined 
-$\lambda$, it enters into the time equation, $i \hbar \dot{f}(t) = \lambda f(t)$
-only as a constant. We can hence simply solve,
-
-$$\dot{f}(t) = -i \frac{\lambda}{\hbar} f(t)$$
-
-$$f(t) = A e^{\frac{-i \lambda t}{\hbar}}.$$
-
-In the previous equation, the coefficient $A$ can be determined if the original
-PDE was supplied with an initial condition. 
-
-Putting the solutions to the two ODEs together and redefining 
-$\tilde{A}=A \cdot c_1$, we arrive at the solutions for the PDE,
-
-$$\psi_n(x,t) = \tilde{A}_n e^{-i \frac{\lambda_n t}{\hbar}} \sin(\frac{n \pi x}{L}).$$
-
-Notice that there is one solution $\psi_{n}(x,t)$ for each natural number $n$. 
-These are still very special solutions. We will begin discussing next how to 
-obtain the general solution in our example. 
-
-
-## Self-adjoint differential operators
-
-<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/p4MHW0yMMvY?rel=0" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
-
-### Connection to Hilbert spaces
-
-As we hinted was possible earlier, let us re-write the previous equation by 
-defining a linear operator, $L$, acting on the space of functions which satisfy
-$\phi(0)=\phi(L)=0$:
-
-$$L[\cdot]:= \frac{- \hbar^2}{2m} \frac{d^2}{dx^2}[\cdot]. $$
-
-Then, the ODE can be written as 
-
-$$L[\phi]=\lambda \phi.$$
-
-This equation looks exactly like, and turns out to be, an eigenvalue equation!
-
-!!! info "Connecting function spaces to Hilbert spaces"
-    
-    Recall that a space of functions can be transformed into a Hilbert space by 
-    equipping it with an inner product,
-    
-    $$\langle f, g \rangle = \int^{L}_{0} dx f^*(x) g(x). $$
-    
-    This inner product also has utility in demonstrating that particular 
-    operators are *Hermitian*. The term Hermitian is precisely defined below.
-    Of considerable interest is that Hermitian operators have a set of nice 
-    properties, including real eigenvalues and orthonormal eigenfunctions. 
-    
-The nicest type of operators for many practical purposes are Hermitian 
-operators. In quantum physics, for example, any operator corresponding to a 
-physical observable must be Hermitian. Denote a Hilbert space $\mathcal{H}$. An operator 
-$H: \mathcal{H} \mapsto \mathcal{H}$ is said to be Hermitian if it satisfies
-
-$$\langle f, H g \rangle = \langle H f, g \rangle \ \forall \ f, \ g \ \in \ \mathcal{H}.$$
-
-Now, we would like to investigate whether the operator we have been working with,
-$L$, satisfies the criterion of being Hermitian over the space of functions with 
-$\phi(0)=\phi(L)=0$, equipped with the above defined inner product (i.e. it is a
-Hilbert space). Denote this Hilbert space $\mathcal{H}_{0}$ and let 
-$f, \ g \ \in \ \mathcal{H}_0$ denote two functions from the Hilbert space.
-Then, we can investigate
-
-$$\langle f, L g \rangle = \frac{- \hbar^2}{2m} \int^{L}_{0} dx f^*(x) \frac{d^2}{dx^2}g(x).$$
-
-As a first step, it is possible to do integration by parts in the integral,
-
-$$\langle f, L g \rangle = \frac{+ \hbar^2}{2m} ( \int^{L}_{0} dx \frac{d f^*}{dx} \frac{d g}{dx} - [f^*(x)\frac{d g}{dx}] \big{|}^{L}_{0} )$$
-
-The boundary term vanishes, due to the boundary conditions $f(0)=f(L)=0$, 
-which directly imply $f^*(0)=f^*(L)=0$. Now, integrate by parts a second time,
-
-$$\langle f, L g \rangle = \frac{- \hbar^2}{2m} (\int^{L}_{0} dx \frac{d^2 f^*}{dx^2} g(x) - [\frac{d f^*}{dx} g(x)] \big{|}^{L}_{0} ).$$
-
-As before, the boundary term vanishes, due to the boundary conditions 
-$g(0)=g(L)=0$. Upon cancelling the boundary term however, the expression on 
-the right hand side, contained in the integral is simply 
-$\langle L f, g \rangle$. Therefore,
-
-$$\langle f, L g \rangle=\langle L f, g \rangle. $$
-
-We have demonstrated that $L$ is a Hermitian operator on the space 
-$\mathcal{H}_0$. As a Hermitian operator, $L$ has the property that its 
-eigenfunctions form an orthonormal basis for the space $\mathcal{H}_0$. Hence it
-is possible to expand any function $f \ \in \ \mathcal{H}_0$ in terms of
-the eigenfunctions of $L$. 
-
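-This integration-by-parts argument can also be checked symbolically. A minimal 
-sketch with `sympy` (the two real test functions are illustrative choices 
-obeying the boundary conditions):
-
-```python
-import sympy as sp
-
-x, L, hbar, m = sp.symbols('x L hbar m', positive=True)
-
-# two illustrative real functions with f(0) = f(L) = 0
-f = x * (L - x)
-g = sp.sin(sp.pi * x / L)
-
-def L_op(u):
-    # the operator L = -(hbar^2 / 2m) d^2/dx^2
-    return -hbar**2 / (2 * m) * sp.diff(u, x, 2)
-
-lhs = sp.integrate(f * L_op(g), (x, 0, L))   # <f, L g> (f is real)
-rhs = sp.integrate(L_op(f) * g, (x, 0, L))   # <L f, g>
-print(sp.simplify(lhs - rhs))                # 0, as expected for Hermitian L
-```
-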
-!!! info "Connection to quantum states"
-    
-    Recall that a quantum state $|\phi\rangle$ can be written in an orthonormal 
-    basis $\{ |u_n\rangle \}$ as 
-    $$|\phi\rangle = \underset{n}{\Sigma} \langle u_n | \phi \rangle\, |u_n\rangle.$$ 
-    
-    In terms of hermitian operators and their eigenfunctions, the eigenfunctions
-    play the role of the orthonormal basis. In reference to our running example,
-    the 1D Schrödinger equation of a free particle, the eigenfunctions 
-    $\sin(\frac{n \pi x}{L})$ play the role of the basis functions $|u_n\rangle$.
-    
-To close our running example, consider the initial condition 
-$\psi(x,0) = \psi_{0}(x)$. Since the eigenfunctions $\sin(\frac{n \pi x}{L})$ 
-form a basis, we can now write the general solution to the problem as 
-
-$$\psi(x,t)  = \overset{\infty}{\underset{n=1}{\Sigma}} c_n e^{-i \frac{\lambda_n t}{\hbar}} \sin(\frac{n \pi x}{L}),$$
-
-where in the above the coefficients $c_n$ are the Fourier sine coefficients of 
-the initial condition,
-
-$$c_n:= \frac{2}{L} \int^{L}_{0} dx \sin(\frac{n \pi x}{L}) \psi_{0}(x). $$
-    
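-To make the construction concrete, here is a numerical sketch of the 
-particle-in-a-box solution (units with $\hbar = m = 1$; the box size, mode 
-count, and initial profile are illustrative choices):
-
-```python
-import numpy as np
-
-L_box = 1.0
-x = np.linspace(0.0, L_box, 400)
-dx = x[1] - x[0]
-
-psi0 = x * (L_box - x)                                # illustrative initial state
-psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0)**2) * dx)   # normalize
-
-n = np.arange(1, 50)
-basis = np.sin(np.outer(n, np.pi * x / L_box))   # rows: sin(n pi x / L)
-c = (2.0 / L_box) * (basis @ psi0) * dx          # Fourier sine coefficients
-lam = (n * np.pi / L_box)**2 / 2.0               # lambda_n with hbar = m = 1
-
-t = 0.1
-psi_t = (c * np.exp(-1j * lam * t)) @ basis      # superpose the modes
-print(np.sum(np.abs(psi_t)**2) * dx)             # norm is conserved, ~1
-```
-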
-### General recipe for separable PDEs
-
-1. Make the separation ansatz to obtain separate ordinary differential 
-    equations.
-2. Choose which equation to treat as the eigenvalue equation. This will depend 
-    upon the boundary conditions. Additionally, verify that the linear 
-    differential operator $L$ in the eigenvalue equation is Hermitian. 
-3. Solve the eigenvalue equation. Substitute the eigenvalues into the other 
-    equations and solve those too. 
-4. Use the orthonormal basis functions to write down the solution corresponding 
-    to the specified initial and boundary conditions. 
-
-One natural question is: what if the operator $L$ from step 2 is not Hermitian? 
-It is possible to try to make it Hermitian by working on a Hilbert space 
-equipped with a different inner product. This means one can consider 
-modifications to the definition of $\langle \cdot, \cdot \rangle$ such that $L$
-is Hermitian with respect to the modified inner product. This type of technique 
-falls under the umbrella of *Sturm-Liouville theory*, which forms the foundation
-for much of the analysis that can be done analytically on PDEs. 
-
-Another natural question is: what if the equation is not separable? One 
-possible approach is to try working in a different coordinate system. There are 
-a few more analytic techniques available; however, in many situations it becomes
-necessary to work with numerical methods of solution. 
-
-## Problems
-
-1.  [:grinning:] Which of the following equations for $y(x)$ is linear?
-
-    (a) $y''' - y'' + x \cos(x) y' + y - 1 = 0$
-
-    (b) $y''' + 4 x y' - \cos(x) y = 0$
-
-    (c) $y'' + y y' = 0$
-
-    (d) $y'' + e^x y' - x y = 0$
-
-2.  [:grinning:] Find the general solution to the equation 
-
-    $$y'' - 4 y' + 4 y = 0. $$
-
-    Show explicitly by computing the Wronski determinant that the 
-    basis for the solution space is actually linearly independent. 
-
-3.  [:grinning:] Find the general solution to the equation 
-
-    $$y''' - y'' + y' - y = 0.$$
-
-    Then find the solution to the initial conditions $y''(0) =0$, $y'(0)=1$, $y(0)=0$. 
-
-4.  [:smirk:] Take the Laplace equation in 2D:
-
-    $$\frac{\partial^2 \phi(x,y)}{\partial x^2} + \frac{\partial^2 \phi(x,y)}{\partial y^2} = 0.$$
-
-    (a) Make a separation ansatz $\phi(x,y) = f(x)g(y)$ and write 
-        down the resulting ordinary differential equations.
-
-    (b) Now assume that the boundary conditions $\phi(0,y) = \phi(L,y) =0$ for 
-    	all $y$, i.e. $f(0)=f(L)=0$. Find all solutions $f(x)$ and the
-	corresponding eigenvalues.
-
-    (c) Finally, for each eigenvalue, find the general solution $g(y)$ for
-    	this eigenvalue. Combine this with all solutions $f(x)$ to write down
-	the general solution (we know from the lecture that the operator
-	$\frac{d^2}{dx^2}$ is Hermitian - you can thus directly assume that
-	the solutions form an orthogonal basis). 
-
-5.  [:smirk:] Consider the following partial differential equations, and try to make a separation ansatz $h(x,y)=f(x)g(y)$. What do you observe in each case? (Only attempt the separation, do not solve the problem fully)
-
-    (a) $\frac{\partial h(x,y)}{\partial x} + x \frac{\partial h(x,y)}{\partial y} = 0. $
-
-    (b) $\frac{\partial h(x,y)}{\partial x} + \frac{\partial h(x,y)}{\partial y} + xy\,h(x,y) = 0$
-
-6.  [:sweat:] We consider the Hilbert space of functions $f(x)$ defined
-    for $x \in [0,L]$ with $f(0)=f(L)=0$. 
-
-    Which of the following operators on this space is Hermitian?
-
-    (a) $L = A(x) \frac{d^2 f}{dx^2}$
-
-    (b) $L = \frac{d}{dx} \big{(} A(x) \frac{df}{dx} \big{)}$
-    
diff --git a/docs/index.md b/index.md
similarity index 70%
rename from docs/index.md
rename to index.md
index 76af9f579e66bdb0576ae220c46a46237a389cc5..caa61103dfc58183dcc8dcf11febd1153acab4fd 100644
--- a/docs/index.md
+++ b/index.md
@@ -1,6 +1,12 @@
-# Math for Quantum
+# Mathematics for Quantum Physics
 
-!!! summary "Learning goals"
+*Mathematics for Quantum Mechanics* gives you a compact introduction and review
+of the basic mathematical tools commonly used in quantum mechanics. Throughout
+the course, we keep quantum mechanics applications in mind, but at the
+core, this is still a mathematics course. For this reason, applying what you learned
+to examples and exercises is **crucial**!
+
+!!! tip "Learning goals"
 
     After following this course you will be able to:
 
@@ -8,20 +14,15 @@
     - solve mathematical problems encountered in the follow-up courses of the minor.
     - explain Hilbert spaces of (in)finite dimension. 
 
-*Mathematics for Quantum Mechanics* gives you a compact introduction and review
-of the basic mathematical tools commonly used in quantum mechanics. Throughout
-the course, we keep quantum mechanics applications in mind, but at the
-core, this is still a math course. For this reason, applying what you learned
-to examples and exercises is **crucial**!
 
-Each lecture note comes with an extensive set of exercises, and each exercise is labeled
-according to its difficulty:
+!!! note "Exercises"
+    Each lecture note comes with an extensive set of exercises, and each exercise is labeled according to its difficulty:
 
-- [:grinning:] easy
-- [:smirk:] intermediate
-- [:sweat:] difficult
+    - [:grinning:] easy
+    - [:smirk:] intermediate
+    - [:sweat:] difficult
 
-In these notes, our aim is to provide learning materials which are:
+With these notes, our aim is to provide learning materials which are:
 
 - self-contained
 - easy to modify and remix, so we provide the full source, including the code