diff --git a/src/3_vector_spaces.md b/src/3_vector_spaces.md
index 3ba8511135ccc16b9e8876904e097f1127fbcde2..b85d03f70c8237a8e32709548b7e3d8319649b0a 100644
--- a/src/3_vector_spaces.md
+++ b/src/3_vector_spaces.md
@@ -15,442 +15,136 @@ The lecture on vector spaces consists of two parts, each with their own video:
 
 <iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/fLMdaMuEp8s" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
 
-  A vector $\vec{v}$ is essentially a mathematical object characterised by both
-  a **magnitude** and a **direction**, that is, an orientation in a given space.
+A vector $\vec{v}$ is essentially a mathematical object characterised by both
+a **magnitude** and a **direction**, that is, an orientation in a given space.
   
-  We can express a vector in terms of its individual **components**.
+We can express a vector in terms of its individual **components**.
   
-  Let's assume we have an $n$-dimensional space, meaning that the vector $\vec{v}$ can be oriented
-  in different ways along each of $n$ dimensions.
+Let's assume we have an $n$-dimensional space, meaning that the vector $\vec{v}$ can be oriented
+in different ways along each of $n$ dimensions.
   
-  The expression of $\vec{v}$ in terms of its components is
-  $$
-  \vec{v} = (v_1, v_2,..., v_n) \, ,
-  $$
+The expression of $\vec{v}$ in terms of its components is
+
+$$\vec{v} = (v_1, v_2,\ldots, v_n) \, ,$$
  
-  We will denote by ${\mathcal V}^n$ the **vector space** composed
-  by all possible vectors of the above form.
+We will denote by ${\mathcal V}^n$ the **vector space** composed
+by all possible vectors of the above form.
 
-  The components of a vector, $\{ v_i\}$ can be **real numbers** or **complex numbers**,
-  depending on whether we have a real or a complex vector space.
+The components of a vector, $\{ v_i \}$, can be **real numbers** or **complex numbers**,
+depending on whether we have a real or a complex vector space.
 
-  The expression above of $\vec{v}$ in terms of its components assume that we are
-  using some specific **basis**.
+The expression above of $\vec{v}$ in terms of its components assumes that we are
+using some specific **basis**.
 
-  It is important to recall that the same vector can be expressed in terms of different bases.
+It is important to recall that the same vector can be expressed in terms of different bases.
 
-  A **vector basis** is a set of $n$ vectors that can be used to generate all the elements
-  of a vector space.
-  
-  For example, a possible basis of  ${\mathcal V}^n$ could be denoted by $\vec{a}_1,\vec{a}_2,\ldots,\vec{a_n}$,
-  and we can write a generic vector  $\vec{v}$  as
-  $$
-  \vec{v} = \lp v_1, v_2, \ldots, v_n\rp = v_1 \vec{a}_1 + v_2 \vec{a}_2 + \ldots v_n \vec{a}_n \, .
-  $$
-  However, one could choose another different basis, denoted by $\vec{b}_1,\vec{b}_2,\ldots,\vec{b_n}$,
-  where the same vector would be expressed in terms of a different set of components
-  $$
-  \vec{v} = \lp v'_1, v'_2, \ldots, v'_n\rp = v'_1 \vec{b}_1 + v'_2 \vec{b}_2 + \ldots v'_n \vec{b}_n \, .
-  $$
-  so while the vector remains the same, the values of its components depends on the specific choice
-  of basis.
-
-  The most common basis is the **Cartesian basis**, where for example for $n=3$ one has
-  $$
-  \vec{a}_1 = \lp 1, 0, 0 \rp \, ,\qquad \vec{a}_2 = \lp 0, 1, 0 \rp
-  \, ,\qquad \vec{a}_3 = \lp 0, 0, 1 \rp \, ,
-  $$
-  but other choices of basis are possible.
+A **vector basis** is a set of $n$ vectors that can be used to generate all the elements
+of a vector space.
   
-  The elements of a vector basis must be **linearly independent** from each other, meaning
-  that none of them can be expressed as linear combination of the rest of basis vectors.
+For example, a possible basis of ${\mathcal V}^n$ could be denoted by $\vec{a}_1,\vec{a}_2,\ldots,\vec{a}_n$,
+and we can write a generic vector $\vec{v}$ as
 
-   All this is a bit abstract so let's consider some examples in the two-dimensional real
-   vector space $\mathbb{R}$, namely the $(x,y)$ coordinate plane, shown below.
+$$\vec{v} = (v_1, v_2, \ldots, v_n) = v_1 \vec{a}_1 + v_2 \vec{a}_2 + \ldots + v_n \vec{a}_n \, .$$
 
+However, one could choose a different basis, denoted by $\vec{b}_1,\vec{b}_2,\ldots,\vec{b}_n$, where the same vector would be expressed in terms of a different set of components
 
-   ![image](figures/3_vector_spaces_1.jpg)
-  
-   We see how the same vector $\vec{v}$ can be expressed in two different basis.
-    
-   In the first one, the Cartesian basis, its components are $\vec{v}=\lp 2,2\rp$.
+$$\vec{v} = (v'_1, v'_2, \ldots, v'_n) = v'_1 \vec{b}_1 + v'_2 \vec{b}_2 + \ldots + v'_n \vec{b}_n \, ,$$
+
+so while the vector remains the same, the values of its components depend on the specific choice of basis.
+
+The most common basis is the **Cartesian basis**, where for example for $n=3$ one has
+
+$$\vec{a}_1 = (1, 0, 0) \, ,\qquad \vec{a}_2 = (0, 1, 0)\, ,\qquad \vec{a}_3 = (0, 0, 1) \, ,$$
+
+but other choices of basis are possible.
   
+The elements of a vector basis must be **linearly independent** from each other, meaning
+that none of them can be expressed as a linear combination of the remaining basis vectors.
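+
+For example, in two dimensions the vectors $(1,0)$ and $(2,0)$ are **not** linearly independent, since
+
+$$(2, 0) = 2\, (1, 0) \, ,$$
+
+so they cannot form a basis, whereas $(1,0)$ and $(0,1)$ are linearly independent and do form one.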
 
-   But in the second basis, the components are different, being instead $\vec{v}=\lp 2.4 ,0.8\rp$,
-    though the magnitude and direction of the vector itself remain unchanged.
+Let's consider an example in the two-dimensional real vector space $\mathbb{R}^2$, namely the $(x,y)$ coordinate plane, shown below.
+
+![image](figures/3_vector_spaces_1.jpg)
+  
+We see how the same vector $\vec{v}$ can be expressed in two different bases.
     
+In the first one, the Cartesian basis, its components are $\vec{v}=(2,2)$.
   
 
-  %%%%%%%%%%%%%%%%%%%%%%%%
-\begin{figure}[h]
-  \centering
-  \includegraphics[scale=0.50]{plots/L1-R2.pdf}
-  \caption{\small 
-   The components of the vector $\vec{v}$ depend on the specific basis chosen.
-}  
-\label{fig:L1-R2}
-\end{figure}
-%%%%%%%%%%%%%%%%%%%%%
-
-
-
-
-\item You might be familiar with the concept that one can perform a number of {\bf operations} between
-  vectors. Some important operations that are relevant in this course are are:
-  \begin{itemize}
-
-  \item {\bf Addition}: I can add two vectors to produce a third vector, $\vec{a} + \vec{b}= \vec{c}$.
-    %
-    As with scalar addition, also vectors satisfy the commutative property, $\vec{a} + \vec{b} = \vec{b} + \vec{a}$.
-    %
-    Vector addition can be carried out in terms of their components,
-    \be
-    \vec{c} = \vec{a} + \vec{b} = \lp a_1 + b_1, a_2 + b_2, \ldots, a_n + b_n  \rp = \lp c_1, c_2, \ldots, c_n\rp
-    \ee
-
-  \item  {\bf Scalar multiplication}: I can multiply a vector by a scalar number (either real
-    or complex) to produce another vector, $\vec{c} = \lambda \vec{a}$.
-%
-    Addition and scalar
-    multiplication of vectors are both {\bf associative} and {\bf distributive}, so the following
-    relations hold
-    \be
-    \lp \lambda \mu\rp \vec{a} = \lambda (\mu \vec{a}) = \mu (\lambda \vec{a})
-    \ee
-     \be
-    \lambda \lp \vec{a} + \vec{b}\rp = \lambda \vec{a} + \lambda \vec{b}
-    \ee
-
-\be
-    \lp \lambda + \mu\rp\vec{a} = \lambda \vec{a} +\mu \vec{a}
-    \ee
-
-  \item {\bf Vector product}: in addition to multiplying a vector by a scalar, as mentioned
-    above, one can also multiply two vectors among them.
-    %
-    There are two types of vector productions, one where the end result is a scalar (so just a number)
-    and the other where the end result is another vector.
-
-    The {\bf scalar production of vectors} is given by
-    \be
-    \vec{a}\cdot \vec{b} = a_1b_1 + a_2b_2 + \ldots + a_nb_n \, .
-    \ee
-    Note that since the scalar product is just a number, its value will not depend on the specific
-    basis in which we express the vectors: the scalar product is said to be {\bf basis-independent}.
+But in the second basis, the components are different, being instead $\vec{v}=(2.4, 0.8)$,
+though the magnitude and direction of the vector itself remain unchanged.
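+
+As a concrete check, suppose for illustration that the second basis has Cartesian components $\vec{b}_1 = (1, 0.5)$ and $\vec{b}_2 = (-0.5, 1)$ (the figure does not list its basis vectors explicitly, so these are an assumed example). Then
+
+$$v'_1 \vec{b}_1 + v'_2 \vec{b}_2 = 2.4\,(1, 0.5) + 0.8\,(-0.5, 1) = (2.4 - 0.4,\; 1.2 + 0.8) = (2, 2) \, ,$$
+
+which indeed reproduces the Cartesian components of $\vec{v}$.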
     
-  \end{itemize}  
+## Properties of a vector space
+
+You might be familiar with the concept that one can perform a number of **operations** between vectors. Some important operations that are relevant in this course are:
 
+- **Addition**: I can add two vectors to produce a third vector, $\vec{a} + \vec{b}= \vec{c}$.
+  As with scalar addition, vector addition satisfies the commutative property, $\vec{a} + \vec{b} = \vec{b} + \vec{a}$.
+  Vector addition can be carried out in terms of their components,
+  $$ \vec{c} = \vec{a} + \vec{b} = (a_1 + b_1, a_2 + b_2, \ldots, a_n + b_n) =  (c_1, c_2, \ldots, c_n)$$
 
+- **Scalar multiplication**: I can multiply a vector by a scalar number (either real
+  or complex) to produce another vector, $\vec{c} = \lambda \vec{a}$.
+  Addition and scalar multiplication of vectors are both **associative** and **distributive**, so the following
+  relations hold:
+  
+  1. $$(\lambda \mu) \vec{a} = \lambda (\mu \vec{a}) = \mu (\lambda \vec{a})$$
+  
+  2. $$\lambda (\vec{a} + \vec{b}) = \lambda \vec{a} + \lambda \vec{b}$$
 
+  3. $$(\lambda + \mu)\vec{a} = \lambda \vec{a} +\mu \vec{a}$$
 
- \item Now we are ready to define in a more formal way what are vector spaces,
-   an essential concept for the description of quantum mechanics.
+- **Vector product**: in addition to multiplying a vector by a scalar, as mentioned above, one can also multiply two vectors with each other.
+  There are two types of vector products, one where the end result is a scalar (so just a number) and
+  the other where the end result is another vector.
 
-   The main properties of {\bf vector spaces} are the following:
+  The **scalar product of vectors** is given by
+  $$ \vec{a}\cdot \vec{b} = a_1b_1 + a_2b_2 + \ldots + a_nb_n \, .$$
+  Note that since the scalar product is just a number, its value will not depend on the specific
+  basis in which we express the vectors: the scalar product is said to be **basis-independent**.
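+
+As a quick numerical illustration of these operations, take $n=3$ with $\vec{a} = (1, 2, 3)$ and $\vec{b} = (4, 5, 6)$. Then
+
+$$\vec{a} + \vec{b} = (5, 7, 9) \, , \qquad 2\,\vec{a} = (2, 4, 6) \, , \qquad \vec{a}\cdot\vec{b} = 1\cdot 4 + 2\cdot 5 + 3\cdot 6 = 32 \, .$$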
 
-\begin{enumerate}
+Now we are ready to define in a more formal way what vector spaces are,
+an essential concept for the description of quantum mechanics.
 
-\item A vector space is {\bf complete upon vector addition}.
+The main properties of **vector spaces** are the following:
 
-  This property means that if
-  two arbitrary vectors  $\vec{a}$ and $\vec{b}$
+- A vector space is **complete upon vector addition**.
+  This property means that if two arbitrary vectors $\vec{a}$ and $\vec{b}$
   are elements of a given vector space ${\mathcal V}^n$,
   then their addition should also be an element of the same vector space
-  \be
-  \vec{a}, \vec{b} \in {\mathcal V}, \qquad \vec{c} = \lp \vec{a} + \vec{b}\rp
-  \in {\mathcal V}^n \qquad \forall\,\, \vec{a}, \vec{b}
-  \ee
-
-  \item A vector space is {\bf complete upon scalar multiplication}.
-
-    This property means that when I multiply one arbitrary vector  $\vec{a}$,
-    element of the vector space ${\mathcal V}^n$,
-    by a general scalar $\lambda$, the result is another vector which also belongs
-    to the same vector space
-  \be
-  \vec{a} \in {\mathcal V}, \qquad \vec{c} = \lambda \vec{a}
-  \in {\mathcal V}^n \qquad \forall\,\, \vec{a},\lambda
-  \ee
+  
+  $$\vec{a}, \vec{b} \in {\mathcal V}^n, \qquad \vec{c} = \vec{a} + \vec{b}
+  \in {\mathcal V}^n \qquad \forall\,\, \vec{a}, \vec{b}$$
+
+- A vector space is **complete upon scalar multiplication**.
+  This property means that when I multiply an arbitrary vector $\vec{a}$,
+  element of the vector space ${\mathcal V}^n$,
+  by a general scalar $\lambda$, the result is another vector which also belongs
+  to the same vector space
+
+  $$\vec{a} \in {\mathcal V}^n, \qquad \vec{c} = \lambda \vec{a}
+  \in {\mathcal V}^n \qquad \forall\,\, \vec{a},\lambda$$
 
   The property that a vector space is complete upon scalar multiplication and vector addition is
-  also known as the {\bf closure condition}.
-\item There exists a {\bf null element} $\vec{0}$ such that $\vec{a}+\vec{0} =\vec{0}+\vec{a}=\vec{a} $.
+  also known as the **closure condition**.
+
+- There exists a **null element** $\vec{0}$ such that $\vec{a}+\vec{0} = \vec{0}+\vec{a} = \vec{a}$.
 
-\item {\bf Inverse element}: for each vector $\vec{a} \in \mathcal{V}^n$ there exists another
+- **Inverse element**: for each vector $\vec{a} \in \mathcal{V}^n$ there exists another
   element of the same vector space, $-\vec{a}$, such that their addition results
-  in the null element, $\vec{a} + \lp -\vec{a}\rp = \vec{0}$.
+  in the null element, $\vec{a} + (-\vec{a}) = \vec{0}$.
-  %
-  This element it called the inverse element.
-
-  \item A vector space comes often equipped with various multiplication operations between vectors, such as the scalar product mentioned above, but also other operations such as the vector product or the tensor product.
-
-\item There are other properties, both for what we are interested in these are sufficient.
-
-\end{enumerate}
-
-\item You will find in Brightspace additional material and examples that you can use to
-  extend your knowledge of linear vector spaces.
-
-\item In the next video we will discuss how to apply these ideas to the case of quantum mechanics.
-
-\end{itemize}
-
-
-
-
-
-
-
-
-
-Some definitions:
-
--   For a complex number $z = a + b {{\rm i}}$, $a$ is called the *real
-    part*, and $b$ the *imaginary part*.
-
--   The *complex conjugate* $z^*$ of $z = a + b {{\rm i}}$ is defined as
-    $$z^* = a - b{{\rm i}},$$ i.e., taking the complex conjugate means
-    flipping the sign of the imaginary part.
-
-### Addition
-
-
-For two complex numbers, $z_1 = a_1 + b_1 {{\rm i}}$ and
-$z_2 = a_2 + b_2 {{\rm i}}$, the sum $w = z_1 + z_2$ is given as
-$$w = w_1 + w_2 {{\rm i}}= (a_1 + a_2) + (b_1 + b_2) {{\rm i}}$$ where
-the parentheses in the rightmost expression have been added to group the
-real and the imaginary part. A consequence of this definition is that
-the sum of a complex number and its complex conjugate is real:
-$$z + z^* = a + b {{\rm i}}+ a - b {{\rm i}}= 2a,$$ i.e., this results
-in twice the real part of $z$. Similarly, subtracting $z^*$ from $z$
-yields $$z - z^* = a + b {{\rm i}} - a + b {{\rm i}}= 2b{\rm i},$$ i.e.,
-twice the imaginary part of $z$ (times $\rm i$).
-
-### Multiplication
-
-
-For the same two complex numbers $z_1$ and $z_2$ as above, their product
-is calculated as
-$$w = z_1 z_2 = (a_1 + b_1 {{\rm i}}) (a_2 + b_2 {{\rm i}}) = (a_1 a_2 - b_1 b_2) + (a_1 b_2 + a_2 b_1) {{\rm i}},$$
-where the parentheses have again be used to indicate the real and
-imaginary parts.
-
-A consequence of this definition is that the product of a complex number
-$z = a + b {{\rm i}}$ with its conjugate is real:
-$$z z^* = (a+b{{\rm i}})(a-b{{\rm i}}) = a^2 + b^2.$$ The square root of
-this number is the *norm* $|z|$ of $z$:
-$$|z| = \sqrt{z z^*} = \sqrt{a^2 + b^2}.$$
-
-### Division
-
-The quotient $z_1/z_2$ of two complex numbers $z_1$ and $z_2$ as above,
-can be evaluated by multiplying the numerator and denominator by the
-complex conjugate of $z_2$:
-$$\frac{z_1}{z_2} = \frac{z_1 z_2^*}{z_2 z_2^*} = \frac{(a_1 a_2 + b_1 b_2) + (-a_1 b_2 + a_2 b_1) {{\rm i}}}{a_2^2 + b_2^2}.$$
-Check this!
-
-**Example** 
-$$\begin{align} 
-\frac{1 + 2{\rm i}}{1 - 2{\rm i}} &= \frac{(1 + 2{\rm i})(1 + 2{\rm i})}{1^2 + 2^2} = \frac{1+8{\rm i} -4}{5}\\
-&= -\frac{3}{5} + {\rm i} \frac{8}{5}
-\end{align}$$
-
-### Visualization: the complex plane
-
-Complex numbers can be rendered on a two-dimensional (2D) plane, the
-*complex plane*. This plane is spanned by two unit vectors, one
-horizontal, which represents the real number 1, whereas the vertical
-unit vector represents ${\rm i}$.
-
-![image](figures/complex_numbers_5_0.svg)
-
-Note that the norm of $z$ is the length of this vector.
-
-#### Addition in the complex plane
-
-Adding two numbers in the complex plane corresponds to adding the
-horizontal and vertical components:
-
-![image](figures/complex_numbers_8_0.svg)
+  
+  This element is called the **inverse element**.
 
-We see that the sum is found as the diagonal of a parallelogram spanned
-by the two numbers.
+- A vector space often comes equipped with various multiplication operations between vectors, such as the scalar product mentioned above, but also other operations such as the vector product or the tensor product.
 
-## Complex functions
+- There are other properties, but for our purposes these are sufficient.
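+
+As a simple illustration of these properties in the real vector space $\mathbb{R}^2$: for $\vec{a} = (1, 2)$ the inverse element is $-\vec{a} = (-1, -2)$, since
+
+$$\vec{a} + (-\vec{a}) = (1 - 1,\; 2 - 2) = (0, 0) = \vec{0} \, ,$$
+
+and the sum of any two elements, such as $(1, 2) + (3, -1) = (4, 1)$, again belongs to $\mathbb{R}^2$, as required by the closure condition.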
 
-<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/7XtR_wDSqRc" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
 
 
-Real functions can (most of the times) be written in terms of a Taylor series:
-$$f(x) = \sum \limits_{n=0}^{\infty} \frac{f^{(n)}(x_{0})}{n!} (x-x_{0})^{n}$$
-We can write something similar for complex functions, 
-when replacing the *real* variable $x$ with its *complex* counterpart $z$:
-$$f(z) = \sum \limits_{n=0}^{\infty} \frac{f^{(n)}(x_{0})}{n!} (z-x_{0})^{n}$$
 
-For this course, the most important function is the *complex exponential function*, at which we will have a look below.
 
-### The complex exponential function
-The complex exponential is used *extremely often*. 
-It occurs in Fourier transforms and it is very convenient for doing calculations 
-involving cosines and sines. 
-It also makes doing many common operations on complex number a lot easier.
 
-The exponential function $f(z) = \exp(z) = e^z$ is defined as:
-$$\exp(z) = e^{x + {\rm i}y} = e^{x} + e^{{\rm i} y} = e^{x} \left( \cos y + {\rm i} \sin y\right).$$
-The last expression is called the *Euler identity*.
 
-**Exercise** Check that this function obeys
-$$\exp(z_1) \exp(z_2) = \exp(z_1 + z_2).$$ You need sum- and difference
-formulas of cosine and sine.
 
-### The polar form
 
-A complex number can be represented by two real numbers, $a$ and $b$
-which represent the real and imaginary part of the complex number. An
-alternative representation is a *vector* in the complex plane, whose
-horizontal component is the real, and vertical component the imaginary
-part. However, it is also possible to characterize that vector by its
-*length* and *direction*, where the latter can be represented by the
-angle the vector makes with the horizontal axis:
-
-![image](figures/complex_numbers_10_0.svg)
-
-The angle with the horizontal axis is denoted by $\varphi$, just as in
-the case of polar coordinates. In the context of complex numbers, this
-angle is denoted as the *argument*. We have:
-
-> A complex number can be represented either by its real and imaginary
-> part, corresponding to the Cartesian coordinates in the complex plane,
-> or by its *norm* and its *argument*, corresponding to polar
-> coordinates. The norm is the length of the vector, and the argument is
-> the angle it makes with the horizontal axis.
-
-From our previous discussion on polar coordinates we can conclude that
-for a complex number $z = a + b {\rm i}$, its real and imaginary parts
-can be expressed as $$a = |z| \cos\varphi$$ $$b = |z| \sin\varphi$$ The
-inverse equations are $$|z| = \sqrt{a^2 + b^2}$$
-$$\varphi = \arctan(b/a)$$ for $a>0$. In general:
-$$\varphi = \begin{cases} \arctan(b/a) &{\rm for ~} a>0; \\
- \pi + \arctan(b/a) & {\rm for ~} a<0 {\rm ~ and ~} b>0;\\
- -\pi + \arctan(b/a) &{\rm for ~} a<0 {\rm ~ and ~} b<0.
- \end{cases}$$
-
- It turns out that using this magnitude $|z|$ and phase $\varphi$, we can write any complex number as
- $$z = |z| e^{{\rm i} \varphi}$$
-When increasing $\varphi$ with $2 \pi$, we make a full circle and reach the same point on the complex plane. In other words, when adding $2 \pi$ to our argument, we get the same complex number!
-As a result, the argument $\varphi$ is defined up to $2 \pi$, and we are free to make any choice we like, such as
-$$\begin{align}
--\pi < \varphi < \pi  \textrm{ (left)} \\
--\frac{\pi}{2} < \varphi < \frac{3 \pi}{2} \textrm{ (right)} \end{align} $$
-
-![image](figures/complex_numbers_11_0.svg)
-
-
-Some useful values of the complex exponential to know by heart are $e^{2{\rm i } \pi} = 1 $, $e^{{\rm i} \pi} = -1 $ and $e^{{\rm i} \pi/2} = {\rm i}$. 
-From the first expression, it also follows that 
-$$e^{{\rm i} (y + 2\pi n)} = e^{{\rm i}\pi} {\rm ~ for ~} n \in \mathbb{Z}$$
-As a result, $y$ is only defined up to $2\pi$.
-
-Furthermore, we can define the sine and cosine in terms of complex exponentials:
-$$\cos(x) = \frac{e^{{\rm i} x} + e^{-{\rm i} x}}{2}$$
-$$\sin(x) = \frac{e^{{\rm i} x} - e^{-{\rm i} x}}{2}$$
-
-Most operations on complex numbers are easiest when converting the complex number to its *polar form*, using the exponential.
-Some operations which are common in real analysis are then easily derived for their complex counterparts:
-$$z^{n} = \left(r e^{{\rm i} \varphi}\right)^{n} = r^{n} e^{{\rm i} n \varphi}$$
-$$\sqrt[n]{z} = \sqrt[n]{r e^{{\rm i} \varphi} } = \sqrt[n]{r} e^{{\rm i}\varphi/n} $$
-$$\log(z) = log \left(r e^{{\rm i} \varphi}\right) = log(r) + {\rm i} \varphi$$
-$$z_{1}z_{2} = r_{1} e^{{\rm i} \varphi_{1}} r_{2} e^{{\rm i} \varphi_{2}} = r_{1} r_{2} e^{{\rm i} (\varphi_{1} + \varphi_{2})}$$
-We see that during multiplication, the norm of the new number is the *product* of the norms of the multiplied numbers, and its argument is the *sum* of the arguments of the multiplied numbers. In the complex plane, this looks as follows:
-
-![image](figures/complex_numbers_12_0.svg)
-
-**Example** Find all solutions solving $z^4 = 1$. 
-
-Of course, we know that $z = \pm 1$ are two solutions, but which other solutions are possible? We take a systematic approach:
-$$\begin{align} z = e^{{\rm i} \varphi} & \Rightarrow z^4 = e^{4{\rm i} \varphi} = 1 \\
-& \Leftrightarrow 4 \varphi = n 2 \pi \\
-& \Leftrightarrow \varphi = 0, \varphi = \frac{\pi}{2}, \varphi = -\frac{\pi}{2}, \varphi = \pi \\
-& \Leftrightarrow z = 1, z = i, z = -i, z = -1 \end{align}$$
-
-## Differentiation and integration
-
-<iframe width="100%" height=315 src="https://www.youtube-nocookie.com/embed/JyftSqmmVdU" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
-
-
-We only consider differentiation and integration over *real* variables. We can then regard the complex ${\rm i}$ as another constant, and use our usual differentiation and integration rules:
-$$\frac{d}{d\varphi} e^{{\rm i} \varphi} = e^{{\rm i} \varphi} \frac{d}{d\varphi} ({\rm i} \varphi) ={\rm i} e^{{\rm i} \varphi} .$$
-$$\int_{0}^{\pi} e^{{\rm i} \varphi} = \frac{1}{{\rm i}} \left[ e^{{\rm i} \varphi} \right]_{0}^{\pi} = -{\rm i}(-1 -1) = 2 {\rm i}$$
-
-## Bonus: the complex exponential function and trigonometry
-
-Let us show some tricks where the simple properties of the exponential
-function helps in re-deriving trigonometric identities.
-
-1.  Take $|z_1| = |z_2| = 1$, and $\arg{(z_1)} = \varphi_1$ and
-    $\arg{(z_2)} = \varphi_2$. Then it is easy to see that
-    $z_i = \exp({\rm i} \varphi_i)$, $i=1, 2$. Then:
-    $$z_1 z_2 = \exp[{\rm i} (\varphi_1 + \varphi_2)].$$ The left hand
-    side can be written as
-    $$\begin{align}
-    z_1 z_2 & = \left[ \cos(\varphi_1) + {\rm i} \sin(\varphi_1) \right] \left[ \cos(\varphi_2) + {\rm i} \sin(\varphi_2) \right] \\
-    & = \cos\varphi_1 \cos\varphi_2 - \sin\varphi_1 \sin\varphi_2 + {\rm i} \left( \cos\varphi_1 \sin\varphi_2 + 
-    \sin\varphi_1 \cos\varphi_2 \right).
-    \end{align}$$
-    On the other hand, the right
-    hand side can be written as
-    $$\exp[{\rm i} (\varphi_1 + \varphi_2)] = \cos(\varphi_1 + \varphi_2) + {\rm i} \sin(\varphi_1 + \varphi_2).$$
-    Comparing the two expressions, equating their real and imaginary
-    parts, we find
-    $$\cos(\varphi_1 + \varphi_2) = \cos\varphi_1 \cos\varphi_2 - \sin\varphi_1 \sin\varphi_2;$$
-    $$\sin(\varphi_1 + \varphi_2) = \cos\varphi_1 \sin\varphi_2 + 
-    \sin\varphi_1 \cos\varphi_2.$$ Note that we used the resulting
-    formulas already in order to derive the properties of the
-    exponential function. The point is that you can use the properties
-    of the complex exponential to quickly find the form of gonometric
-    formulas which you easily forget.
-
-2.  As a final example, consider what we can learn from the derivative
-    of the exponential function:
-    $$\frac{d}{d\varphi} \exp({\rm i} \varphi) = {\rm i} \exp({\rm i} \varphi) .$$
-    Writing out the exponential in terms of cosine and sine, we see that
-    $$\cos'\varphi + {\rm i} \sin'\varphi = {\rm i} \cos\varphi - \sin\varphi.$$
-    where the prime $'$ denotes the derivative as usual. Equating real
-    and imaginary parts leads to $$\cos'\varphi = - \sin\varphi;$$
-    $$\sin'\varphi = \cos\varphi.$$
-
-## Summary
-
--   A complex number $z$ has the form $$z = a + b \rm i$$ where $a$ and
-    $b$ are both real, and $\rm i^2 = 1$. The real number $a$ is called
-    the *real part* of $z$ and $b$ is the *imaginary part*. Two complex
-    numbers can be added, subtracted and multiplied straightforwardly.
-    The quotient of two complex numbers $z_1=a_1 + \rm i b_1$ and
-    $z_2=a_2 + \rm i b_2$ is
-    $$\frac{z_1}{z_2} = \frac{z_1 z_2^*}{z_2 z_2^*} = \frac{(a_1 a_2 + b_1 b_2) + (-a_1 b_2 + a_2 b_1) {{\rm i}}}{a_2^2 + b_2^2}.$$
-
--   Complex numbers can also be characterised by their *norm*
-    $|z|=\sqrt{a^2+b^2}$ and *argument* $\varphi$. These coordinates
-    correspond to polar coordinates in the complex plane. For a complex
-    number $z = a + b {\rm i}$, its real and imaginary parts can be
-    expressed as $$a = |z| \cos\varphi$$ $$b = |z| \sin\varphi$$ The
-    inverse equations are $$|z| = \sqrt{a^2 + b^2}$$
-    $$\varphi = \begin{cases} \arctan(b/a) &{\rm for ~} a>0; \\
-     \pi + \arctan(b/a) & {\rm for ~} a<0 {\rm ~ and ~} b>0;\\
-     -\pi + \arctan(b/a) &{\rm ~ for ~} a<0 {\rm ~ and ~} b<0.
-     \end{cases}$$
-    The complex number itself then becomes
-    $$z = |z| e^{{\rm i} \varphi}$$
-
--   The most important complex function for us is the complex exponential function, which simplifies many operations on complex numbers
-    $$\exp(z) = e^{x + {\rm i}y} = e^{x} \left( \cos y + {\rm i} \sin y\right).$$
-    where $y$ is defined up to $2 \pi$.
-    The $\sin$ and $\cos$ can be rewritten in terms of this complex exponential as
-    $$\cos(x) = \frac{e^{{\rm i} x} + e^{-{\rm i} x}}{2}$$
-    $$\sin(x) = \frac{e^{{\rm i} x} - e^{-{\rm i} x}}{2}$$
-    Because we only consider *differentiation* and *integration* over *real variables*, the usual rules apply:
-    $$\frac{d}{d\varphi} e^{{\rm i} \varphi} = e^{{\rm i} \varphi} \frac{d}{d\varphi} ({\rm i} \varphi) ={\rm i} e^{{\rm i} \varphi} .$$
-    $$\int_{0}^{\pi} e^{{\rm i} \varphi} = \frac{1}{{\rm i}} \left[ e^{{\rm i} \varphi} \right]_{0}^{\pi} = -{\rm i}(-1 -1) = 2 {\rm i}$$
 
 ## Problems