diff --git a/mkdocs.yml b/mkdocs.yml
index a53ecd5f085cb468523618ff75be3b1f3b41339f..b7ef564f623b35ccb656d0ac2d8d81c4b819e797 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -6,12 +6,15 @@ site_description: |
   Lecture notes for TN3105 - Mathematics for Quantum Physics
 nav:
   - Intro: 'index.md'
   - Complex Numbers: '1_complex_numbers.md'
   - Coordinates: '2_coordinates.md'
   - Vector spaces: '3_vector_spaces.md'
   - Vector spaces in quantum mechanics: '4_vector_spaces_QM.md'
   - Operators in quantum mechanics: '5_operators_QM.md'
   - Eigenvectors and eigenvalues: '6_eigenvectors_QM.md'
+  - Differential equations 1: '7_differential_equations_1.md'
+
 theme:
   name: material
   custom_dir: theme
diff --git a/src/7_differential_equations_1.md b/src/7_differential_equations_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..78aa0fd0b911644c2bf99c35b07a74ac3aaf9dd2
--- /dev/null
+++ b/src/7_differential_equations_1.md
@@ -0,0 +1,897 @@
+---
+title: Differential Equations
+---
+
+# Differential equations
+
+A differential equation, or DE, is any equation which involves both a function and some 
+derivative of that function. In this lecture we will be focusing on 
+*Ordinary Differential Equations* (ODEs), meaning that our equations will involve 
+functions of one independent variable and hence any derivatives will be full 
+derivatives. Equations which involve a function of several independent variables
+and their partial derivatives are called *Partial Differential Equations* (PDEs) and will 
+be introduced in the follow-up lecture. 
+
+We consider functions $x(t)$ and define $\dot{x}(t)=\frac{dx}{dt}$, 
+$x^{(n)}(t)=\frac{d^{n}x}{dt^{n}}$. An $n$-th order differential equation is 
+an equation of the form
+
+$$x^{(n)}(t) = f(x^{(n-1)}(t), \cdots, x(t), t).$$ 
+
+Typically, $n \leq 2$. Such an equation will usually be presented with a set of 
+initial conditions,
+
+$$x^{(n-1)}(t_{0}) = x^{(n-1)}_{0}, \cdots, x(t_0)=x_0. $$
+
+This is because to fully specify the solution to an $n$-th order differential 
+equation, $n$ initial conditions are necessary. To understand why we need 
+initial conditions, look at the following example.
+
+!!! check "Example: Initial conditions"
+
+    Consider the following calculus problem,
+    
+    $$\frac{df}{dx}=x. $$ 
+
+    By integrating, one finds that the solution to this equation is 
+
+    $$f(x) = \frac{1}{2}x^2 + c,$$
+
+    where $c$ is an integration constant. In order to specify the integration 
+    constant, an initial condition is needed. For instance, if we know that when 
+    $x=2$ then $f(2)=4$, we can plug this into the equation to get 
+
+    $$\frac{1}{2} \cdot 4 + c = 4, $$
+
+    which implies that $c=2$. 
+    
+Essentially, initial conditions are needed when solving differential equations so
+that the unknown constants resulting from integration may be determined.
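+
+As a quick check (a minimal sketch assuming `sympy` is available; it is not part
+of the course material), the worked example can be reproduced symbolically:
+
+```python
+import sympy as sp
+
+x = sp.symbols('x')
+f = sp.Function('f')
+
+# Solve df/dx = x subject to f(2) = 4; the result should be
+# f(x) = x**2/2 + 2, i.e. the integration constant c = 2 found above.
+solution = sp.dsolve(sp.Eq(f(x).diff(x), x), f(x), ics={f(2): 4})
+print(solution)
+```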
+
+!!! info "Terminology for Differential Equations"
+
+    1. If a differential equation does not explicitly contain the 
+        independent variable $t$, it is called an *autonomous equation*.
+    2. If the largest derivative in a differential equation is of first order, 
+        i.e. $n=1$, then the equation is called a first order differential 
+        equation.
+    3. Often you will see differential equations presented using $y(x)$ 
+        instead of $x(t)$. This is just a different nomenclature. 
+            
+In this course we will be focusing on *Linear Differential Equations*, meaning 
+that we consider differential equations $x^{(n)}(t) = f(x^{(n-1)}(t), \cdots, x(t), t)$
+where the function $f$ is a linear function of the unknown function
+$x(t)$ and its derivatives. A simple way to spot a non-linear differential equation is to look for 
+non-linear terms, such as $x(t) \cdot \dot{x}(t)$ or $x^{(n)}(t) \cdot x^{(2)}(t)$. 
+
+Often, we will be dealing with several coupled differential equations. In this 
+situation we can write the entire system of differential equations as a vector 
+equation, involving a linear operator. For a system of $m$ equations, denote 
+
+$$\vec{x}(t) = \begin{bmatrix}
+x_1(t) \\
+\vdots \\
+x_{m}(t) \\
+\end{bmatrix}.$$
+
+A system of first order linear equations is then written as 
+
+$$\dot{\vec{x}}(t) = \vec{f}(\vec{x}(t),t) $$
+
+with initial condition $\vec{x}(t_0) = \vec{x}_0$.
+
+# Basic examples and strategies
+
+The simplest type of differential equation is the type learned about in the 
+integration portion of a calculus course. Such equations have the form,
+
+$$\dot{x}(t) = f(t). $$
+
+When $F(t)$ is an anti-derivative of $f(t)$, i.e. $\dot{F}=f$, then the solutions
+to this type of equation are 
+
+$$x(t) = F(t) + c. $$
+
+!!! check "Example: First order linear differential equation with constant coefficients"
+
+    Given the equation
+    
+    $$\dot{x}(t)=t, $$
+    
+    one finds by integrating that the solution is $x(t) = \frac{1}{2}t^2 + c$. 
+    
+For first order autonomous differential equations, it is possible to use the 
+concept of an anti-derivative from calculus to write a general solution in 
+terms of the independent variable. Consider
+    
+$$\dot{x}(t)=f(x(t)).$$ 
+    
+This implies that $\frac{\dot{x}(t)}{f(x)} = 1$. Let $F(x)$ be the 
+anti-derivative of $\frac{1}{f(x)}$. Then, making use of the chain rule
+    
+$$\frac{\dot{x}(t)}{f(x(t))} = \frac{dx}{dt}\,\frac{dF}{dx} = \frac{d}{dt} F(x(t)) = 1$$
+    
+$$\Leftrightarrow F(x(t)) = t + c.$$
+    
+From this we notice that if we can solve for $x(t)$, then we have the 
+solution! Having a specific form for the function $f(x)$ often makes it 
+possible to solve either implicitly or explicitly for the function $x(t)$.
+
+!!! check "Example: Autonomous first order linear differential equation with constant coefficients"
+
+    Given the equation
+    
+    $$\dot{x} = \lambda x, $$
+    
+    we re-write the equation to be in the form 
+    
+    $$\frac{\dot{x}}{\lambda x} = 1.$$
+    
+    Now, applying the same process worked through above, let $f(x)=\lambda x$ 
+    and let $F(x)$ be the anti-derivative of $\frac{1}{f(x)}$. Integrating 
+    allows us to find the form of this anti-derivative,
+    
+    $$F(x):= \int \frac{dx}{\lambda x} = \frac{1}{\lambda}\log(\lambda x). $$
+    
+    Now, making use of the general solution, we also have that $F(x(t)) = t+c$. 
+    These two equations can be combined to form an equation for $x(t)$
+    (absorbing a factor of $\lambda$ into the redefined constant $c$),
+    
+    $$\log(\lambda x)  = \lambda t + c$$
+    $$x(t) = \frac{1}{\lambda} e^c e^{\lambda t} $$
+    $$x(t) = c_0 e^{\lambda t},$$
+    
+    where in the last line we defined a new constant $c_0 =\frac{1}{\lambda}e^c$.
+    Given an initial condition, we could immediately determine this constant $c_0$.
+    
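+
+The same solution can be cross-checked symbolically; below is a minimal sketch
+(assuming `sympy`; the symbol `lam` stands in for $\lambda$):
+
+```python
+import sympy as sp
+
+t = sp.symbols('t')
+lam = sp.symbols('lambda')
+x = sp.Function('x')
+
+# dsolve returns x(t) = C1*exp(lambda*t), matching c_0 e^{lambda t} above.
+solution = sp.dsolve(sp.Eq(x(t).diff(t), lam * x(t)), x(t))
+print(solution)
+```
+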
+So far we have considered only DE's with constant coefficients, but it is very 
+common to encounter equations such as the following,
+
+$$\dot{x}(t)=g(t)f(x(t)).$$
+
+This type of differential equation is called a first order differential equation 
+with non-constant coefficients. If $f(x(t))$ is linear in $x$ then it is also 
+said to be a linear equation.  
+
+This equation can be re-written to isolate the coefficient function $g(t)$,
+
+$$\frac{\dot{x}(t)}{f(x(t))} = g(t). $$
+
+Now, define $F(x)$ to be the anti-derivative of $\frac{1}{f(x)}$, and $G(t)$ to
+be the anti-derivative of $g(t)$. Without showing again the use of chain rule on
+the left side of the equation, we have
+
+$$\frac{d}{dt} F(x(t)) = g(t) $$
+$$\Rightarrow F(x(t)) = G(t) + c $$
+
+Given this form of general solution, knowledge of specific functions $f, g$ would
+make it possible to solve for $x(t)$. 
+
+!!! check "Example: First order linear differential equation with coefficient t"
+
+    Let us apply the above strategy to the following equation,
+    
+    $$\dot{x}= t x^2 .$$
+    
+    Comparison to the strategy indicates that we should define $f(x)=x^2$ and 
+    $g(t)=t$. As before, we can re-arrange the equation
+    
+    $$\frac{\dot{x}}{x^2} = t. $$
+    
+    It is then necessary to find $F(x)$, the anti-derivative of $\frac{1}{f(x)}$,
+    or the left hand side of the above equation, as well as $G(t)$, the 
+    anti-derivative of $g(t)$, or the right hand side of the previous equation.
+    
+    Integrating, one finds
+    
+    $$F(x) = - \frac{1}{x} $$
+    $$G(t)=\frac{1}{2}t^2 + c. $$
+    
+    Accordingly then, the equation we have is
+    
+    $$- \frac{1}{x} = \frac{1}{2} t^2 + c. $$
+    
+    At this point, it is possible to solve for $x(t)$ by re-arrangement
+    
+    $$x(t)= \frac{-2}{t^2 + c_0}, $$
+    
+    where in the last line we have defined $c_0 = 2c$. Once again, specification
+    of an initial condition would allow determination of $c_0$ directly. To see 
+    this, suppose $x(0) = 2$. Inserting this into the equation for $x(t)$ we have
+    
+    $$2 = \frac{-2}{c_0} $$
+    $$ \Rightarrow c_0 = -1.$$
+    
+    Having solved for $c_0$, with the choice of initial condition $x(0)=2$, the 
+    full equation for $x(t)$ is 
+    
+    $$x(t)=\frac{-2}{t^2 -1}. $$
+
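+The solution can be verified symbolically; a minimal sketch (assuming `sympy`)
+checks both the differential equation and the initial condition:
+
+```python
+import sympy as sp
+
+t = sp.symbols('t')
+x = -2 / (t**2 - 1)
+
+# x(t) = -2/(t^2 - 1) should satisfy dx/dt = t*x^2 and x(0) = 2.
+print(sp.simplify(x.diff(t) - t * x**2))  # expect 0
+print(x.subs(t, 0))                       # expect 2
+```
+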
+!!! check "Example: First order linear differential equation with general non-constant coefficient function"
+
+    Let us apply the strategy for dealing with non-constant coefficient functions
+    to the more general equation
+
+    $$\dot{x}= g(t) \cdot x. $$
+
+    This equation suggests that we first define $f(x)=x$ and then find $F(x)$ and 
+    $G(t)$, the anti-derivatives of $\frac{1}{f(x)}$ and $g(t)$, respectively. Doing
+    so, we determine 
+
+    $$F(x) = \log(x). $$
+
+    Continuing to follow the protocol, we arrive at the equation
+
+    $$\log(x) = G(t) + c.$$
+
+    Exponentiating and defining $c_0:=e^c$, we obtain an equation for $x(t)$,
+
+    $$x(t)= c_0 e^{G(t)} .$$
+    
+So far we have only considered first order differential equations. If we consider
+extending the strategies we have developed to higher order equations such as
+
+$$x^{(2)}(t)=f(x), $$
+
+with $f(x)$ a linear function, then our work will swiftly become tedious. Later on
+we will develop the general theory for linear equations which will allow us to 
+tackle such higher order equations. For now, we move on to considering systems 
+of coupled first order linear DE's. 
+
+# Systems of first order differential equations
+
+An intuitive presentation of a system of coupled first order differential 
+equations can be given by a phase portrait. Before demonstrating such a portrait,
+let us introduce notation that is useful for working with systems of DE's. Several
+coupled DE's can be written down concisely as a single vector equation
+
+$$\dot{\vec{x}}=\vec{f}(\vec{x}). $$
+
+In such an equation the vector $\dot{\vec{x}}$ is the rate of change of a vector 
+quantity, for example the velocity which is the rate of change of the position 
+vector. The term $\vec{f}(\vec{x})$ describes a vector field, which has one vector 
+per point $\vec{x}$. This type of equation can also be extended to include a time 
+varying vector field, $\vec{f}(\vec{x},t)$. 
+
+In the phase portrait below, the velocity of the little cars is determined by 
+the vector field $\vec{f}(\vec{x})$, where the velocity corresponds to the slope of 
+each arrow. The position of each of the little cars is determined by an initial 
+condition. Since the field lines do not cross, and the cars begin on different 
+field lines, they will remain on different field lines. 
+
+![image](figures/Phase_portrait_with_cars.png)
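+
+A portrait like the one above can be drawn in a few lines of code. The sketch
+below is a minimal version (assuming `numpy` and `matplotlib`; the linear field
+$\vec{f}(\vec{x}) = A\vec{x}$ is an arbitrary illustrative choice):
+
+```python
+import numpy as np
+import matplotlib.pyplot as plt
+
+# Illustrative linear vector field f(x) = A x.
+A = np.array([[0.0, -1.0], [1.0, 0.0]])
+
+x1, x2 = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))
+u = A[0, 0] * x1 + A[0, 1] * x2  # first component of f
+v = A[1, 0] * x1 + A[1, 1] * x2  # second component of f
+
+plt.streamplot(x1, x2, u, v)     # field lines of the phase portrait
+plt.xlabel('$x_1$')
+plt.ylabel('$x_2$')
+plt.show()
+```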
+
+If $\vec{f}(\vec{x})$ is not crazy, for example if it is continuous and 
+differentiable, then it is possible to prove the following two properties for 
+a system of first order linear DE's (illustrated numerically below):
+
+1. *Existence of solution*: For any specified initial condition, there is a solution
+2. *Uniqueness of solution*: Any point $\vec{x}(t)$ is uniquely determined by the
+    initial condition and the equation i.e. we know where each point "came from"
+    $\vec{x}(t'<t)$. 
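+
+As an illustration of the second property, a numerical integrator started from a
+fixed initial condition always traces out one and the same trajectory. A minimal
+sketch (assuming `scipy`; the field and initial condition are arbitrary choices):
+
+```python
+import numpy as np
+from scipy.integrate import solve_ivp
+
+A = np.array([[0.0, -1.0], [1.0, 0.0]])
+
+# Each initial condition x(0) picks out exactly one trajectory of x' = A x.
+sol = solve_ivp(lambda t, x: A @ x, t_span=(0.0, 5.0), y0=[1.0, 0.0])
+print(sol.y[:, -1])  # the state at t = 5, fixed uniquely by x(0)
+```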
+
+# Systems of linear first order differential equations
+
+## Homogeneous systems ##
+Any homogeneous system of first order linear DE's can be written in the form 
+
+$$\dot{\vec{x}} = A(t) \vec{x}, $$
+
+where $A$ is a linear operator. The system is called homogeneous because it 
+does not contain any additional term which is independent of $\vec{x}$ (for 
+example an additive constant or an additional function depending only on $t$). 
+
+An important property of such a system is *linearity*, which has the following 
+implications:
+
+1. If $\vec{x}(t)$ is a solution, then $c \vec{x}(t)$ is as well, for any constant $c$.
+2. If $\vec{x}(t)$ and $\vec{y}(t)$ are both solutions, then so is $a \vec{x}(t)+ b \vec{y}(t)$,
+   where $a$ and $b$ are both constants. 
+
+These properties have special importance for modelling physical systems, due to
+the principle of superposition, which is especially important in quantum physics,
+as well as electromagnetism and fluid dynamics. For example in electromagnetism 
+when there are four charges arranged in a square acting on a test charge 
+located within the square, it is sufficient to sum the individual forces in 
+order to find the total force. Physically, this is the principle of superposition, 
+and mathematically superposition is linearity and applies to linear models. 
+
+### General Solution ###
+
+For a system of $n$ linear first order DE's with $n \times n$ linear operator 
+$A(t)$, the general solution can be written as 
+
+$$\vec{x}(t) = c_1 \vec{\phi}_1 (t) + c_2 \vec{\phi}_2 (t) + \cdots + c_n \vec{\phi}_n (t),$$
+
+where $\{\vec{\phi}_1 (t), \vec{\phi}_2(t), \cdots, \vec{\phi}_n (t) \}$ are $n$ 
+independent solutions which form a basis for the solution space, and 
+$c_1, c_2, \cdots c_n$ are constants. 
+
+$\{\vec{\phi}_1 (t), \vec{\phi}_2(t), \cdots, \vec{\phi}_n (t) \}$ are a basis if and 
+only if they are linearly independent for fixed $t$:
+
+$$\det \big( \vec{\phi}_1 (t) | \vec{\phi}_2 (t) | \cdots | \vec{\phi}_n (t) \big) \neq 0.$$
+
+If this condition holds for one $t$, it holds for all $t$.
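+
+In practice this is a one-line check. The sketch below (assuming `numpy`, and
+borrowing the solutions $\vec{\phi}_1(0) = (1, -i)$ and $\vec{\phi}_2(0) = (1, i)$
+from the diagonalizable example later in this lecture) evaluates the determinant
+at the single time $t=0$:
+
+```python
+import numpy as np
+
+# Columns are phi_1(0) = (1, -i) and phi_2(0) = (1, i).
+Phi0 = np.array([[1, 1], [-1j, 1j]])
+
+# A nonzero determinant at one t certifies a basis of solutions for all t.
+print(np.linalg.det(Phi0))  # 2j, nonzero
+```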
+
+## Inhomogeneous systems ##
+
+In addition to the homogeneous equation, an inhomogeneous equation has an 
+additional term, which may be a function of the independent variable,
+
+$$ \dot{\vec{x}}(t) = A(t) \vec{x}(t) + \vec{b}(t).$$
+
+There is a simple connection between the general solution of an inhomogeneous 
+equation and the corresponding homogeneous equation. If $\vec{\psi}_1$ and $\vec{\psi}_2$
+are two solutions of the inhomogeneous equation, then their difference is a 
+solution of the homogeneous equation 
+
+$$\dot{\vec{\psi}}_1-\dot{\vec{\psi}}_2 = A(t) (\vec{\psi}_1 - \vec{\psi}_2). $$
+
+The general solution of the inhomogeneous equation can be written in terms of 
+the basis of solutions for the homogeneous equation, plus one particular solution
+to the inhomogeneous equation,
+
+$$\vec{x}(t) = \vec{\psi}(t) + c_1 \vec{\phi}_1 (t) + c_2 \vec{\phi}_2 (t) + \cdots + c_n \vec{\phi}_n (t). $$
+
+In the above equation, $\{\vec{\phi}_1 (t), \vec{\phi}_2(t), \cdots, \vec{\phi}_n (t) \}$
+form a basis for the solution space of the homogeneous equation and $\vec{\psi}(t)$
+is a particular solution of the inhomogeneous system. 
+
+Now we need a strategy for finding the solution of the inhomogeneous equation. 
+Begin by making an ansatz that $\vec{x}(t)$ can be written as a linear combination 
+of the basis functions for the homogeneous system, with coefficients that are 
+functions of the independent variable. Ansatz:
+
+$$\vec{x}(t) = c_1(t) \vec{\phi}_1 (t)+ c_2(t) \vec{\phi}_2(t) + \cdots + c_n(t) \vec{\phi}_n (t) $$
+
+Define the vector $\vec{c}(t)$ and matrix $\vec{\Phi}(t)$ as
+
+$$\vec{c}(t) = \begin{bmatrix}
+c_1(t) \\
+\vdots \\
+c_n(t) \\
+\end{bmatrix} $$
+$$\vec{\Phi}(t) = \big( \vec{\phi}_1 (t) | \cdots | \vec{\phi}_n (t) \big) $$
+
+With these definitions, it is possible to re-write the ansatz for $\vec{x}(t)$,
+
+$$ \vec{x}(t) = \vec{\Phi}(t) \vec{c}(t).$$
+
+Using the Leibniz rule, we then have the following expanded equation,
+
+$$\dot{\vec{x}}(t) = \dot{\vec{\Phi}}(t) \vec{c}(t) + \vec{\Phi}(t) \dot{\vec{c}}(t).$$
+
+Substituting the new expression into the differential equation gives,
+
+$$\dot{\vec{\Phi}}(t) \vec{c}(t) + \vec{\Phi}(t) \dot{\vec{c}}(t) = A(t) \vec{\Phi}(t) \vec{c}(t) + \vec{b}(t) $$
+$$\vec{\Phi}(t) \dot{\vec{c}}(t) = \vec{b}(t). $$
+
+In order to cancel terms in the previous line, we made use of the fact that 
+$\vec{\Phi}(t)$ solves the homogeneous equation $\dot{\vec{\Phi}} = A \vec{\Phi}$.
+By inverting and integrating, we can write an equation for the coefficient
+vector $\vec{c}(t)$,
+
+$$\vec{c}(t) = \int \vec{\Phi}^{-1}(t) \vec{b}(t) dt.$$
+
+With access to a concrete form of the coefficient vector, we can then write down
+the particular solution,
+
+$$\vec{\psi}(t)= \vec{\Phi}(t) \cdot \int \vec{\Phi}^{-1}(t) \vec{b}(t) dt .$$
+
+!!! check "Example: Inhomogeneous first order linear differential equation"
+
+    The technique for solving a system of inhomogeneous equations also works for a 
+    single inhomogeneous equation. Let us apply the technique to the equation
+
+    $$ \dot{x} = \lambda x + a. $$
+
+    In this particular inhomogeneous equation, the inhomogeneous term is the constant $b(t)=a$. As discussed in
+    an earlier example, the solution to the homogeneous equation is
+    $c e^{\lambda t}$. Hence we define $\phi(t)=e^{\lambda t}$ and make the ansatz
+
+    $$\psi(t) = c(t) e^{\lambda t}. $$
+
+    Solving for $c(t)$ results in 
+
+    $$c(t) = \int e^{- \lambda t} a  dt$$
+    $$c(t) = \frac{- a }{\lambda} e^{- \lambda t} $$
+
+    Overall then, the solution (which can be easily verified by substitution) is 
+
+    $$\psi(t) = - \frac{a}{\lambda}.  $$ 
+    
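+The full general solution, the homogeneous part plus the particular solution just
+found, can be cross-checked symbolically (a sketch assuming `sympy`; `lam` and `a`
+stand in for $\lambda$ and $a$):
+
+```python
+import sympy as sp
+
+t = sp.symbols('t')
+lam, a = sp.symbols('lambda a', nonzero=True)
+x = sp.Function('x')
+
+# Expect x(t) = C1*exp(lambda*t) - a/lambda, i.e. the homogeneous solution
+# plus psi(t) = -a/lambda (sympy may arrange the constant differently).
+solution = sp.dsolve(sp.Eq(x(t).diff(t), lam * x(t) + a), x(t))
+print(sp.simplify(solution.rhs))
+```
+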
+## Homogeneous linear system with constant coefficients ##
+
+The type of equation under consideration in this section looks like 
+
+$$ \dot{\vec{x}}(t) = A \vec{x}(t),$$
+
+where throughout the section $A$ will be a constant matrix. It is possible 
+to define a formal solution using the *matrix exponential*, 
+$\vec{x}(t) = e^{A t} \vec{x}_0$.
+
+!!! info "Definition: Matrix Exponential"
+
+    Before defining the matrix exponential, recall the definition of the regular 
+    exponential function in terms of Taylor series,
+    
+    $$e^{x} = \sum_{n=0}^{\infty} \frac{x^n}{n!},$$
+    
+    in which it is agreed that $0!=1$. The matrix exponential is defined in 
+    exactly the same way, only now instead of taking powers of a number or 
+    function, powers of a matrix are calculated 
+    
+     $$e^{A} = \sum_{n=0}^{\infty} \frac{{A}^n}{n!}.$$
+     
+     It is important to use caution when translating the properties of the normal
+     exponential function over to the matrix exponential, because not all of the
+     regular properties hold generally. In particular, 
+     
+     $$e^{X + Y} \neq e^{X} e^{Y},$$
+     
+     unless it happens that 
+     
+     $$[X, Y] = 0.$$
+     
+     This sufficient condition, stated on the previous 
+     line, is called *commutativity*. Recall that in general, matrices are not
+     commutative, so such a condition is only met for particular choices of 
+     matrices. The property of *non-commutativity* (what happens when the 
+     condition is not met) is of central importance in the mathematical 
+     structure of quantum mechanics. For example, mathematically, 
+     non-commutativity is responsible for the Heisenberg uncertainty relations.
+     
+     On the other hand, one property that does hold is that $e^{- A t}$ is 
+     the inverse of the matrix exponential of $A$, $e^{A t}$. 
+     
+     Furthermore, it is possible to derive the derivative of the matrix 
+     exponential by making use of the Taylor series formulation,
+     
+     $$\frac{d}{dt} e^{A t} = \frac{d}{dt} \sum_{n=0}^{\infty} \frac{(A t)^n}{n!} $$
+     $$\frac{d}{dt} e^{A t} = \sum_{n=0}^{\infty} \frac{1}{n!} \frac{d}{dt} (A t)^n$$
+     $$\frac{d}{dt} e^{A t} = \sum_{n=0}^{\infty} \frac{n A}{n!}(A t)^{n-1}$$
+     $$\frac{d}{dt} e^{A t} = \sum_{n=1}^{\infty} \frac{A}{(n-1)!}(A t)^{n-1}$$
+     $$\frac{d}{dt} e^{A t} = \sum_{n=0}^{\infty} \frac{A}{n!}(A t)^n$$
+     $$\frac{d}{dt} e^{A t} = A e^{A t}.$$
+
+Armed with the matrix exponential and its derivative, 
+$\frac{d}{dt} e^{A t} = A e^{A t}$, it is simple to verify that 
+the matrix exponential solves the differential equation. Properties of this 
+solution are:
+
+1. The columns of $e^{A t}$ form a basis for the solution space.
+2. Accounting for initial conditions, the full solution of the equation is 
+   $\vec{x}(t) = e^{A t} {\vec{x}}_{0}$, which satisfies the initial condition 
+   $\vec{x}(0) = e^{A \cdot 0}{\vec{x}}_0 = I {\vec{x}}_{0} = {\vec{x}}_{0}$ (here $I$ is the $n\times n$ identity matrix).
+
+Next we will discuss how to determine a solution in practice, beyond the 
+formal solution just presented. 
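+
+Numerically, the formal solution is directly usable. Here is a minimal sketch
+(assuming `scipy.linalg.expm`; the matrix and initial condition are arbitrary
+choices, reused in the example of the next section):
+
+```python
+import numpy as np
+from scipy.linalg import expm
+
+A = np.array([[0.0, -1.0], [1.0, 0.0]])
+x0 = np.array([1.0, 0.0])
+
+# Evaluate the formal solution x(t) = e^{A t} x_0 at t = 1.
+t = 1.0
+print(expm(A * t) @ x0)       # approximately [cos(1), sin(1)]
+print(np.cos(t), np.sin(t))   # the closed form derived in the next section
+```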
+
+### Case 1: **A** diagonalizable ###
+
+For an $n \times n$ matrix $A$, denote the $n$ linearly independent eigenvectors as 
+$\{\vec{v}_1, \cdots, \vec{v}_n \}$. By definition, the eigenvectors satisfy the 
+equation
+
+$$A \vec{v}_i = \lambda_i \vec{v}_i, \quad \forall i \in \{1, \cdots, n \}. $$
+
+Here we consider the case where $A$ has $n$ linearly independent eigenvectors 
+(i.e. $A$ is diagonalizable), in which case they form a basis for $\mathbb{R}^{n}$. 
+
+To solve the equation $\dot{\vec{x}}(t) = A \vec{x}(t)$, define a set of scalar 
+functions $\{u_{1}(t), \cdots u_{n}(t) \}$ and make the following ansatz:
+
+$$\vec{\phi}_{i} = u_{i}(t) \vec{v}_{i}.$$
+
+Then, by differentiating,
+
+$$\dot{\vec{\phi}}_i(t) = \dot{u}_i(t) \vec{v}_{i}.$$
+
+The above equation can be combined with the differential equation for 
+$\vec{\phi}_{i}(t)$, $\dot{\vec{\phi}_{i}}(t)=A \vec{\phi}_{i}(t)$, to derive the 
+following equations,
+
+$$\dot{u}_i(t) \vec{v}_{i} = A u_{i}(t) \vec{v}_{i}$$
+$$\dot{u}_i(t) \vec{v}_{i} = u_{i}(t) \lambda_{i} \vec{v}_{i} $$
+$$\vec{v}_{i} (\dot{u}_i(t) - \lambda_i u_{i}(t)) = 0, $$
+
+where in the second line we made use of the fact that $\vec{v}_i$ is an eigenvector
+of $A$. The obtained relation implies that 
+
+$$\dot{u}_i(t) = \lambda_i u_{i}(t).$$
+
+This is a simple differential equation, of the type dealt with in the earlier 
+example on autonomous equations. The solution is found to be
+
+$$u_{i}(t) = c_i e^{\lambda_i t},$$
+
+with $c_i$ some constant. The general solution is found by adding all $n$ of the
+solutions $\vec{\phi}_{i}(t)$,
+
+$$\vec{x}(t) = c_{1} e^{\lambda_1 t} \vec{v}_{1} + c_{2} e^{\lambda_2 t} \vec{v}_{2} + \cdots + c_{n} e^{\lambda_n t} \vec{v}_{n},$$
+
+and the vectors $\{e^{\lambda_1 t} \vec{v}_{1}, \cdots, e^{\lambda_n t} \vec{v}_{n} \}$
+form a basis for the solution space since $\det(\vec{v}_1 | \cdots | \vec{v}_n) \neq 0$
+(the $n$ eigenvectors are linearly independent). 
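+
+This recipe translates directly to code. The sketch below (assuming `numpy`; the
+diagonalizable matrix $A$ with eigenvalues $\pm 1$ is an arbitrary illustrative
+choice) assembles the general solution from the eigenpairs:
+
+```python
+import numpy as np
+
+# Arbitrary diagonalizable example: eigenvalues +1 and -1.
+A = np.array([[0.0, 1.0], [1.0, 0.0]])
+evals, evecs = np.linalg.eig(A)  # columns of evecs are the v_i
+
+# Fix the constants c_i from the initial condition x(0) = sum_i c_i v_i.
+x0 = np.array([1.0, 0.0])
+c = np.linalg.solve(evecs, x0)
+
+# Assemble x(t) = sum_i c_i e^{lambda_i t} v_i at t = 1.
+t = 1.0
+print((evecs * np.exp(evals * t)) @ c)  # approximately [cosh(1), sinh(1)]
+```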
+
+!!! check "Example: Homogeneous first order linear system with diagonalizable constant coefficient matrix"
+
+    Define the matrix
+    $$A = \begin{bmatrix}
+    0 & -1 \\
+    1 & 0 
+    \end{bmatrix},$$
+    and consider the DE 
+
+    $$\dot{\vec{x}}(t) = A \vec{x}(t), \vec{x}_0 = \begin{bmatrix}
+    1 \\
+    0 
+    \end{bmatrix}. $$
+ 
+    To proceed following the solution technique, we determine the eigenvalues of 
+    $A$, 
+ 
+    $$\det \begin{bmatrix} 
+    -\lambda & -1 \\
+    1 & - \lambda \\
+    \end{bmatrix} = \lambda^2 + 1 = 0. $$
+    
+    By solving the characteristic polynomial, one finds the two eigenvalues 
+    $\lambda_{\pm} = \pm i$. 
+    
+    Focusing first on the positive eigenvalue, we can determine the first 
+    eigenvector,
+    
+    $$\begin{bmatrix} 
+    0 & -1 \\
+    1 & 0 \\
+    \end{bmatrix} \begin{bmatrix}
+    a \\
+    b\\
+    \end{bmatrix} = i \begin{bmatrix}
+    a \\
+    b \\
+    \end{bmatrix}.$$
+
+    A solution to this eigenvector equation is given by $a=1$, $b=-i$, altogether
+    implying that
+    $$\lambda_1=i, \vec{v}_{1} = \begin{bmatrix} 
+    1 \\
+    -i \\
+    \end{bmatrix}.$$
+    
+    As for the second eigenvalue, $\lambda_{2} = -i$, we can solve the analogous
+    eigenvector equation to determine
+    $$\vec{v}_{2} = \begin{bmatrix}
+    1 \\
+    i \\
+    \end{bmatrix}.$$
+    
+    Hence two independent solutions of the differential equation are:
+    
+    $$\vec{\phi}_{1} = e^{i t}\begin{bmatrix}
+    1 \\
+    -i \\
+    \end{bmatrix}, \vec{\phi}_{2}  = e^{-i t} \begin{bmatrix}
+    1 \\
+    i \\
+    \end{bmatrix}.$$
+ 
+    To obtain the general solution of the equation, all that remains is to 
+    determine the linear combination coefficients of the two 
+    solutions such that the initial condition is satisfied. To this end,
+    we must solve 
+    
+    $$c_1 \vec{\phi}_{1}(t) + c_2 \vec{\phi}_{2}(t) = \begin{bmatrix}
+    1 \\
+    0 \\
+    \end{bmatrix}$$
+    $$\begin{bmatrix}
+    c_1 + c_2 \\
+    -i c_1 + i c_2 \\
+    \end{bmatrix} = \begin{bmatrix} 
+    1 \\
+    0 \\
+    \end{bmatrix}.$$
+    
+    The second row of the vector equation for $c_1, c_2$ implies that $c_1=c_2$. 
+    The first row then implies that $c_1=c_2=\frac{1}{2}$. 
+    
+    Overall then, the general solution of the DE can be summarized as
+    
+    $$\vec{x}(t) = \begin{bmatrix}
+    \frac{1}{2}(e^{i t} + e^{-i t}) \\
+    \frac{1}{2 i}(e^{i t} - e^{-i t}) \\
+    \end{bmatrix} = \begin{bmatrix}
+    \cos(t) \\
+    \sin(t) \\
+    \end{bmatrix}. $$
+
+### Case 2: **A** $2\times 2$, defective ###
+
+In this case we consider the situation where $\det(A- \lambda I)$ 
+has a root $\lambda$ with multiplicity 2, but only one eigenvector $\vec{v}_1$. 
+
+!!! check "Example: Matrix with eigenvalue of multiplicity 2 and only a single eigenvector."
+    
+    Consider the matrix
+    
+    $$A = \begin{bmatrix}
+    1 & 1 \\
+    0 & 1 \\
+    \end{bmatrix}$$
+    
+    The characteristic polynomial can be found by evaluating
+    
+    $$\det \begin{bmatrix}
+    1-\lambda & 1 \\
+    0 & 1-\lambda \\
+    \end{bmatrix} = 0$$
+    $$(1-\lambda)^2 = 0.$$
+    
+    Hence the matrix $A$ has the single eigenvalue $\lambda=1$ with 
+    multiplicity 2. As for finding an eigenvector, we solve
+    
+    $$\begin{bmatrix} 
+    1 & 1 \\
+    0 & 1 \\
+    \end{bmatrix} \begin{bmatrix} 
+    a \\
+    b \\
+    \end{bmatrix} = \begin{bmatrix} 
+    a \\
+    b \\
+    \end{bmatrix}$$
+    $$\begin{bmatrix} 
+    a+b \\
+    b \\
+    \end{bmatrix} = \begin{bmatrix} 
+    a \\
+    b \\
+    \end{bmatrix}.$$
+    
+    These equations, $a+b=a$ and $b=b$, imply that $b=0$ while $a$ can be chosen 
+    arbitrarily, for example as $a=1$. Then, the only eigenvector is
+    
+    $$\vec{v}_1 = \begin{bmatrix} 
+    1 \\
+    0 \\
+    \end{bmatrix}.$$
+    
+What is the problem in this case? Since there are $n$ equations to be solved and
+an $n \times n$ linear operator $A$, the solution space for the equation 
+requires a basis of $n$ solutions. In this case however there are fewer than $n$ 
+linearly independent eigenvectors, so the eigenvectors alone cannot form a basis for 
+the solution space. 
+
+Suppose that we have a system of $2$ coupled equations, so that $A$ is a 
+$2 \times 2$ matrix, which has eigenvalue $\lambda_1$ with multiplicity $2$. As 
+in the previous section, we can form one solution using the single eigenvector 
+$\vec{v}_1$,
+
+$$\vec{\phi}_1(t) = e^{\lambda_1 t} \vec{v}_1.$$
+
+To determine a second, linearly independent solution, make the following ansatz:
+
+$$\vec{\phi}_2(t) = t e^{\lambda_1 t} \vec{v}_1 + e^{\lambda_1 t} \vec{v}_2.$$
+
+With this ansatz it is then necessary to determine an appropriate vector $\vec{v}_2$
+such that $\vec{\phi}_2(t)$ is really a solution of the problem. To this end, take
+the derivative of $\vec{\phi}_2(t)$,
+
+$$\dot{\vec{\phi}_2}(t) = e^{\lambda_1 t} \vec{v}_1 + \lambda_1 t e^{\lambda_1 t} \vec{v}_1 + \lambda_1 e^{\lambda_1 t} \vec{v}_2 $$
+
+Also, write the matrix equation for $\vec{\phi}_2(t)$,
+
+$$A \vec{\phi}_2(t) = A t e^{\lambda_1 t} \vec{v}_1 + A e^{\lambda_1 t} \vec{v}_2 $$
+$$A \vec{\phi}_2(t) = \lambda_1 t e^{\lambda_1 t} \vec{v}_1 + e^{\lambda_1 t}A \vec{v}_2$$
+
+Since $\vec{\phi}_2(t)$ must solve the equation 
+$\dot{\vec{\phi}}_2(t) = A \vec{\phi}_2(t)$, we can combine and simplify the 
+previous equations to write 
+
+$$A \vec{v}_2 - \lambda_1  \vec{v}_2 = \vec{v}_1$$
+$$(A- \lambda_1 I) \vec{v}_2 = \vec{v}_1 $$
+
+With this condition it is possible to write the general solution,
+
+$$\vec{x}(t) = c_1  e^{\lambda_1 t} \vec{v}_1 + c_2(t e^{\lambda_1 t} \vec{v}_1 + e^{\lambda_1 t} \vec{v}_2).$$
+
+!!! check "Example: Continuation of example with $A$ defective (previous example)"
+    
+    Now it is our task to apply the condition in order to solve for $\vec{v}_2$,
+    
+    $$\begin{bmatrix}
+    1-1 & 1 \\
+    0 & 1-1 \\
+    \end{bmatrix} \begin{bmatrix}
+    a \\
+    b \\
+    \end{bmatrix} = \begin{bmatrix}
+    1 \\
+    0 \\
+    \end{bmatrix}$$
+    $$\begin{bmatrix} 
+    b \\
+    0 \\
+    \end{bmatrix} = \begin{bmatrix} 
+    1 \\
+    0 \\
+    \end{bmatrix}$$
+    
+    Hence, $b=1$ and $a$ is undetermined, so may be taken as $a=0$. Then, 
+    $$\vec{v}_{2} = \begin{bmatrix} 0 \\1 \end{bmatrix}.$$ 
+    
+    Overall then, the general solution is 
+    
+    $$\vec{x}(t) = c_1 e^t \begin{bmatrix} 
+    1 \\
+    0 \\
+    \end{bmatrix} + c_2 e^t \big( t \begin{bmatrix} 
+    1 \\
+    0 \\ 
+    \end{bmatrix}  + \begin{bmatrix} 
+    0 \\
+    1 \\
+    \end{bmatrix}\big).$$
+
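+To double-check the construction, one can verify symbolically that
+$\vec{\phi}_2(t)$ solves the system (a sketch assuming `sympy`, using the matrix
+and vectors from the example above):
+
+```python
+import sympy as sp
+
+t = sp.symbols('t')
+A = sp.Matrix([[1, 1], [0, 1]])
+v1 = sp.Matrix([1, 0])
+v2 = sp.Matrix([0, 1])
+
+# phi_2(t) = t e^t v1 + e^t v2 should satisfy phi_2' = A phi_2.
+phi2 = t * sp.exp(t) * v1 + sp.exp(t) * v2
+print(sp.simplify(phi2.diff(t) - A * phi2))  # expect the zero vector
+```
+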
+### Bonus case 3: Higher multiplicity eigenvalues ###
+In this case we consider the situation where the matrix $A$ has an 
+eigenvalue $\lambda$ with multiplicity $m>2$, and only one eigenvector $\vec{v}$ 
+corresponding to $\lambda$, $(A - \lambda I)\vec{v}=0$. In this case notice 
+that $A$ must be at least an $m \times m$ matrix. 
+
+To solve such a situation, we will expand upon the result of the previous 
+section and define the vectors $\vec{v}_2$ through $\vec{v}_{m}$ by
+
+$$(A- \lambda I) \vec{v}_2 = \vec{v}_1$$
+$$\vdots$$
+$$(A- \lambda I) \vec{v}_m = \vec{v}_{m-1}.$$
+
+Then, the subset of the basis of solutions corresponding to eigenvalue $\lambda$
+is formed by the vectors
+
+$$\vec{\phi}_{k}(t) = e^{\lambda t} \big( \frac{t^{k-1}}{(k-1)!}\vec{v}_1 + \cdots + t \vec{v}_{k-1} + \vec{v}_{k} \big) \quad \forall k \in \{1, \cdots, m \}.$$
+
+To prove this, first take the derivative of $\vec{\phi}_{k}(t)$,
+
+$$\dot{\vec{\phi}}_{k}(t) = \lambda \vec{\phi}_{k}(t) + e^{\lambda t} \big( \frac{t^{k-2}}{(k-2)!}\vec{v}_1 + \cdots + \vec{v}_{k-1} \big).$$
+
+Then, for comparison, multiply $\vec{\phi}_k(t)$ by $A$,
+
+$$A \vec{\phi}_k (t) = e^{\lambda t} \big( \frac{t^{k-1}}{(k-1)!}\lambda \vec{v}_1 + \frac{t^{k-2}}{(k-2)!} A \vec{v}_2 + \cdots + t A \vec{v}_{k-1} + A \vec{v}_k \big)$$
+$$A \vec{\phi}_k (t) = \lambda \vec{\phi}_k (t) + e^{\lambda t} \big( \frac{t^{k-2}}{(k-2)!}(A- \lambda I)\vec{v}_2 + \cdots + t (A- \lambda I)\vec{v}_{k-1} + (A- \lambda I)\vec{v}_{k}  \big)$$
+$$A \vec{\phi}_k (t) =  \lambda \vec{\phi}_k (t) + e^{\lambda t} \big( \frac{t^{k-2}}{(k-2)!} \vec{v}_1 + \cdots + t \vec{v}_{k-2} + \vec{v}_{k-1} \big)$$
+$$A \vec{\phi}_k (t) = \dot{\vec{\phi}}_{k}(t).$$
+
+Notice that in the second last line we made use of the relations 
+$(A- \lambda I)\vec{v}_{i} = \vec{v}_{i-1}$. This completes the proof 
+since we have demonstrated that $\vec{\phi}_{k}(t)$ is a solution of the DE.
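+
+The same check extends to higher multiplicity. Below is a sketch (assuming
+`sympy`; the $3 \times 3$ Jordan-block matrix with eigenvalue $2$ is an arbitrary
+illustrative choice) verifying $\vec{\phi}_3(t)$ for $m=3$:
+
+```python
+import sympy as sp
+
+t = sp.symbols('t')
+# One eigenvalue (2) with multiplicity 3 and a single eigenvector v1;
+# the chain vectors satisfy (A - 2I) v2 = v1 and (A - 2I) v3 = v2.
+A = sp.Matrix([[2, 1, 0], [0, 2, 1], [0, 0, 2]])
+v1 = sp.Matrix([1, 0, 0])
+v2 = sp.Matrix([0, 1, 0])
+v3 = sp.Matrix([0, 0, 1])
+
+# phi_3(t) = e^{2t} (t^2/2 v1 + t v2 + v3) should solve x' = A x.
+phi3 = sp.exp(2 * t) * (t**2 / 2 * v1 + t * v2 + v3)
+print(sp.simplify(phi3.diff(t) - A * phi3))  # expect the zero vector
+```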
+
+
+# Problems
+
+1. [:grinning:] Solve:
+
+    (a)  $\dot{x}(t) = t^4$
+
+    (b)  $\dot{x}(t) = \sin(t)$
+
+2. [:grinning:] Solve, subject to the initial condition $x(0)=\frac{1}{2}$
+
+    (a) $\dot{x}(t) = x^2$
+
+    (b) $\dot{x}(t) = t x$
+
+    (c) $\dot{x}(t) = t x^{4}$
+
+3. [:smirk:] Solve, subject to the given initial condition
+
+    (a) $\dot{x}(t)=-\tan(x)\sin(x)$, subject to $x(0)=1$. 
+
+    (b) $\dot{x}(t)=\frac{1}{3} x^2+9$, subject to $x(0)=3$.
+
+4. [:smirk:] Solve the following equation and list all possible solutions
+
+    $$\dot{x}=\cos^2(x)$$
+
+    Hint: $\int \frac{1}{\cos^2(x)} dx = \tan(x) $
+      
+5. [:grinning:] Identify which of the following systems of equations is linear.
+   *Note that you do not need to solve them!*  
+
+    (a) $$\dot{x}_1= t x_1 -t x_2$$
+        $$\dot{x}_2 = x_1 x_2 - x_2$$
+
+    (b) $$\dot{x}_1 = e^{-t}x_1$$
+        $$\dot{x}_2 = \sqrt{t + \cos(t)-1}\,x_1 + \frac{\sin(t)}{t^2+t-1}x_2$$
+
+    (c) $$x^{(2)}_1 x_1 + \dot{x}_1 = 8 x_2$$
+        $$\dot{x}_2=5tx_2 + x_1$$
+
+6. [:grinning:] Take the system of equations
+
+    $$\dot{x}_1 = \frac{1}{2} (t-1)x_1 + \frac{1}{2} (t+1)x_2$$
+
+    $$\dot{x}_2 = \frac{1}{2}(t+1)x_1 + \frac{1}{2}(t-1)x_2.$$
+
+    Show that 
+
+    $$\vec{\Phi}_1(t) = \begin{bmatrix} 
+    e^{- t} \\
+    -e^{- t} \\
+    \end{bmatrix}$$
+    and
+    $$\vec{\Phi}_2(t)=\begin{bmatrix}
+    e^{\frac{1}{2}(t^2)} \\
+    e^{\frac{1}{2}(t^2)} \\
+    \end{bmatrix}$$
+
+    constitute a basis for the solution space of this system of equations. 
+    To this end, first verify that they are indeed solutions and then that 
+    they form a basis. 
+
+7. [:grinning:] Take the system of equations 
+
+    $$\dot{x}_1=x_1$$
+
+    $$\dot{x}_2=x_1.$$
+
+    Re-write this system of equations into the general form
+
+    $$\dot{\vec{x}} = A \vec{x}$$
+
+    and then find the general solution. Specify the general solution for the 
+    following initial conditions
+
+    (a) $$\vec{x}(0) = \begin{bmatrix} 
+        1 \\
+        0 \\
+        \end{bmatrix}$$
+
+    (b) $$\vec{x}(0) = \begin{bmatrix}
+        0 \\
+        1 \\ 
+        \end{bmatrix}$$
+
+8. [:smirk:] Find the general solution of 
+
+    $$\begin{bmatrix}
+    \dot{x}_1 \\
+    \dot{x}_2 \\
+    \dot{x}_3 \\
+    \end{bmatrix} = \begin{bmatrix} 
+    1 & 1 & 0 \\
+    1 & 1 & 0 \\
+    0 & 0 & 3 \\
+    \end{bmatrix} \begin{bmatrix} 
+    x_1 \\
+    x_2 \\
+    x_3 \\
+    \end{bmatrix}.$$
+
+    Then, specify the solution for the initial conditions 
+
+    (a) $$\begin{bmatrix} 
+        0 \\
+        0 \\
+        1 \\
+        \end{bmatrix}$$
+
+    (b) $$\begin{bmatrix}
+        1 \\
+        0 \\
+        0 \\
+        \end{bmatrix}$$
+
+9. [:sweat:] *Bonus question - A question of this kind will not be on the exam*
+
+    Show that $e^{A t} e^{-At} = I$, where $A$ is an 
+    $n \times n$ matrix and $I$ is the $n \times n$ identity matrix,
+    by using the definition of the matrix exponential in terms of the Taylor series!
+
diff --git a/src/figures/Phase_portrait_with_cars.png b/src/figures/Phase_portrait_with_cars.png
new file mode 100644
index 0000000000000000000000000000000000000000..e8059f25620732400ba13a1d644d0ebdfee584e4
Binary files /dev/null and b/src/figures/Phase_portrait_with_cars.png differ