diff --git a/src/1_complex_numbers.md b/src/1_complex_numbers.md
index bb163865b3e4241be15e04edbfe609c98f5991b4..ea01b4f70ae8fb969601c38f458ced421ba45ed7 100644
--- a/src/1_complex_numbers.md
+++ b/src/1_complex_numbers.md
@@ -182,7 +182,7 @@ Furthermore, we can define the sine and cosine in terms of complex exponentials:
     $$\sin(x) = \frac{e^{{\rm i} x} - e^{-{\rm i} x}}{2i}$$
 
 Most operations on complex numbers become easier when complex numbers are converted to their *polar form* using the complex exponential.
-Some functions and operations, which are common in real analysis, can be easily derived for their complex counterparts by sustituting the real variable $x$ with the complex variable $z$ in its polar form:
+Some functions and operations, which are common in real analysis, can be easily derived for their complex counterparts by substituting the real variable $x$ with the complex variable $z$ in its polar form:
 !!! info "Examples of some complex functions stated using polar form"
     $$z^{n} = \left(r e^{{\rm i} \varphi}\right)^{n} = r^{n} e^{{\rm i} n \varphi}$$
     $$\sqrt[n]{z} = \sqrt[n]{r e^{{\rm i} \varphi} } = \sqrt[n]{r} e^{{\rm i}\varphi/n} $$
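+
+These identities are also easy to check numerically. Below is a minimal sketch in Python with NumPy (our choice of tooling here; any language with complex arithmetic would do), where `r`, `phi`, `n`, and `x` are arbitrary sample values:
+
+```python
+import numpy as np
+
+# A sample complex number in polar form: z = r * exp(i*phi)
+r, phi = 2.0, 0.7
+z = r * np.exp(1j * phi)
+
+n = 3
+# z**n == r**n * exp(i*n*phi)
+print(np.isclose(z**n, r**n * np.exp(1j * n * phi)))              # True
+# Principal n-th root: z**(1/n) == r**(1/n) * exp(i*phi/n)
+print(np.isclose(z**(1 / n), r**(1 / n) * np.exp(1j * phi / n)))  # True
+
+# sin(x) == (exp(i*x) - exp(-i*x)) / (2i) for real x
+x = 0.3
+print(np.isclose(np.sin(x), (np.exp(1j * x) - np.exp(-1j * x)) / 2j))  # True
+```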
@@ -219,7 +219,7 @@ We can then regard the complex ${\rm i}$ as another constant, and use our usual
 
 ## 1.4. Bonus: the complex exponential function and trigonometry
 
-Let us show some tricks in the folloiwing examples where the simple properties of the exponential
+Let us show some tricks in the following examples where the simple properties of the exponential
 function help in re-deriving trigonometric identities.
 
 !!! example "Properties of the complex exponential function I"
@@ -300,7 +300,7 @@ function help in re-deriving trigonometric identities.
     (b) Evaluate $$\left| \frac{a+b\rm i}{a-b\rm i} \right|$$
     for real $a$ and $b$.
 
-5.  [:sweat:] For any given complex number $z$, we can take the inverse $\frac{1}{z}$. Visualize taking the inverse in the complex plane. What geomtric operation does taking the inverse correspond to? (Hint: first consider what geometric operation $\frac{1}{z^*}$ corresponds to.)
+5.  [:sweat:] For any given complex number $z$, we can take the inverse $\frac{1}{z}$. Visualize taking the inverse in the complex plane. What geometric operation does taking the inverse correspond to? (Hint: first consider what geometric operation $\frac{1}{z^*}$ corresponds to.)
 
 6.  [:grinning:] Compute (a) 
     $$\frac{d}{dt} e^{{\rm i} (kx-\omega t)},$$
diff --git a/src/2_coordinates.md b/src/2_coordinates.md
index 2abb3ae5c7607e0bd8ef3de0273eb43a0a73f0ad..1d7e964d703426f1beed86f499fe8ba85efe3e33 100644
--- a/src/2_coordinates.md
+++ b/src/2_coordinates.md
@@ -1,7 +1,7 @@
 ---
 title: Coordinates
 ---
-# Coordinate systems
+# 2. Coordinate systems
 
 The lecture on coordinate systems consists of 3 parts, each with their own video:
 
@@ -252,7 +252,7 @@ chosen in physical space, we have two coordinates which have the
 dimension of a distance: $r$ and $z$. The other coordinate,
 $\varphi$ is of course dimensionless.
 
-What is the distance travelled along a path when we express this in
+What is the distance traveled along a path when we express this in
 cylindrical coordinates? Let’s consider an example shown in the figure below.
 
 <figure markdown>
@@ -284,11 +284,11 @@ sphere which is centered at the origin:
 <figure markdown>
   ![image](figures/Coordinates_15_0.svg)
   <figcaption>The position of a point on the sphere is specified using the radius $r$ and two angles
-$\theta$ and $\phi</figcaption>
+$\theta$ and $\phi$</figcaption>
 </figure>
 
 !!! warning
-    In mathematics, the angles are often labelled the other way
+    In mathematics, the angles are often labeled the other way
     around: there, $\phi$ is used for the angle between a line running from
     the origin to the point of interest and the $z$-axis, and $\theta$ for
     the angle of the projection of that line with the $x$-axis. The
@@ -300,7 +300,7 @@ The relation between Cartesian and spherical coordinates is defined by:
     $$y = r \sin\varphi \sin \vartheta$$ $$z = r \cos\vartheta$$ 
 
 The inverse transformation is easy to find: 
-!!! info "The inverse relatuion between Cartesian and spherical coordinates"
+!!! info "The inverse relation between Cartesian and spherical coordinates"
     $$r = \sqrt{x^2+y^2+z^2}$$
     $$\theta = \arccos(z/\sqrt{x^2+y^2+z^2})$$
     $$\phi = \begin{cases} \arctan(y/x) &{\rm for ~} x>0; \\
@@ -341,7 +341,7 @@ From these arguments we can again also find the volume element, it is
 here given as
 
 !!! info "Infinitesimal volume element in spherical coordinates"
-$$dV = r^2 \sin\theta dr d\theta d\varphi.$$
+    $$dV = r^2 \sin\theta dr d\theta d\varphi.$$
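+
+As a quick sanity check, integrating this volume element over a full ball of radius $R$ should give $\frac{4}{3}\pi R^3$. Here is a minimal numerical sketch in Python (assuming NumPy and SciPy are available; `R` is an arbitrary sample value):
+
+```python
+import numpy as np
+from scipy import integrate
+
+R = 2.0  # sample sphere radius
+
+# Integrate dV = r^2 sin(theta) dr dtheta dphi over the full ball.
+# tplquad integrates the innermost variable (phi) first.
+volume, _ = integrate.tplquad(
+    lambda phi, theta, r: r**2 * np.sin(theta),
+    0, R,          # r from 0 to R
+    0, np.pi,      # theta from 0 to pi
+    0, 2 * np.pi,  # phi from 0 to 2*pi
+)
+print(np.isclose(volume, 4 / 3 * np.pi * R**3))  # True
+```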
 
 ## 2.4. Summary
 
@@ -370,7 +370,7 @@ We have discussed four different coordinate systems:
     Infinitesimal volume: $$dV = r dr d\varphi dz.$$
 
 4.  !!! tip "Spherical coordinates" 
-    $${\bf r} = (r, \theta, \phi).$$ This sysytem can be
+    $${\bf r} = (r, \theta, \phi).$$ This system can be
     used in three dimensions. It is particularly suitable for systems with spherical
     symmetry or functions given in terms of these coordinates. <br/>
     Infinitesimal distance: 
diff --git a/src/3_vector_spaces.md b/src/3_vector_spaces.md
index f75528cd7e3d30e1a7fd8e46e44608a4d6eeb166..a53e6a6b6385d081e11aec95a350e248468c9cb5 100644
--- a/src/3_vector_spaces.md
+++ b/src/3_vector_spaces.md
@@ -2,28 +2,33 @@
 title: Vector Spaces
 ---
 
-# Vector spaces
+# 3. Vector spaces
 
 The lecture on vector spaces consists of **three parts**:
 
-- [Definition and basis dependence](#definition-and-basis-dependence)
+- [3.1. Definition and basis dependence](#31-definition-and-basis-dependence)
 
-- [Properties of a vector space](#properties-vector-space)
+- [3.2. Properties of a vector space](#32-properties-of-a-vector-space)
 
-- [Matrix representation of vectors](#matrix-representation-vectors)
+- [3.3. Matrix representation of vectors](#33-matrix-representation-of-vectors)
 
-and at the end of the lecture one can find the corresponding [Problems](#problems)
+and at the end of this lecture note, there is a set of corresponding exercises
+
+- [3.4. Problems](#34-problems)
+
+---
 
 The contents of this lecture are summarised in the following **videos**:
 
-- [3_vector_spaces_video1](https://www.dropbox.com/s/evytrbb55fgrcze/linear_algebra_01.mov?dl=0)
+1. [Vector spaces: Introduction](https://www.dropbox.com/s/evytrbb55fgrcze/linear_algebra_01.mov?dl=0)
 
-- [3_vector_spaces_video2](https://www.dropbox.com/s/1530xb7zbuhwu6u/linear_algebra_02.mov?dl=0)
+2. [Operations in vector spaces](https://www.dropbox.com/s/1530xb7zbuhwu6u/linear_algebra_02.mov?dl=0)
 
-- [3_vector_spaces_video3](https://www.dropbox.com/s/5lwkxd8lw5uwri9/linear_algebra_03.mov?dl=0)
+3. [Properties of vector spaces](https://www.dropbox.com/s/5lwkxd8lw5uwri9/linear_algebra_03.mov?dl=0)
 
+**Total video length: ~16 minutes**
 
-## Definition and basis dependence
+## 3.1. Definition and basis dependence
 
 A vector $\vec{v}$ is a mathematical object characterised by both a **magnitude** and a **direction**, that is, an orientation in a given space.
   
@@ -34,172 +39,162 @@ $$\vec{v} = (v_1, v_2,\ldots, v_n) \, ,$$
 We will denote by ${\mathcal V}^n$ the **vector space** composed by all possible vectors of the above form.
 
 The components of a vector, $\{ v_i\}$ can be **real numbers** or **complex numbers**,
-depending on whether we have a real or a complex vector space. Note that the expression above of $\vec{v}$ in terms of its components assume that we are
-using some specific **basis**. It is important to recall that the same vector can be expressed in terms of different bases. A **vector basis** is a set of $n$ vectors that can be used to generate all the elements
-of a vector space.
-  
+depending on whether we have a real or a complex vector space. 
+
+!!! info "Vector basis" 
+    Note that the above expression of $\vec{v}$ in terms of its components assumes that we are using a specific **basis**. 
+    It is important to recall that the same vector can be expressed in terms of different bases. 
+    A **vector basis** is a set of $n$ vectors that can be used to generate all the elements of a vector space.
+
-For example, a possible basis of  ${\mathcal V}^n$ could be denoted by $\vec{a}_1,\vec{a}_2,\ldots,\vec{a_n}$,
+For example, a possible basis of ${\mathcal V}^n$ could be denoted by $\vec{a}_1,\vec{a}_2,\ldots,\vec{a}_n$,
 and we can write a generic vector  $\vec{v}$  as
 
-$$\vec{v} = (v_1, v_2, \ldots, v_n) = v_1 \vec{a}_1 + v_2 \vec{a}_2 + \ldots v_n \vec{a}_n \, .$$
+$$\vec{v} = (v_1, v_2, \ldots, v_n) = v_1 \vec{a}_1 + v_2 \vec{a}_2 + \ldots + v_n \vec{a}_n \, .$$
 
-However, one could choose another different basis, denoted by $\vec{b}_1,\vec{b}_2,\ldots,\vec{b_n}$, where the same vector would be expressed in terms of a different set of components
+However, one could choose a different basis, denoted by $\vec{b}_1,\vec{b}_2,\ldots,\vec{b}_n$, where the same vector would be expressed in terms of a different set of components
 
-$$ \vec{v} = (v'_1, v'_2, \ldots, v'_n) = v'_1 \vec{b}_1 + v'_2 \vec{b}_2 + \ldots v'_n \vec{b}_n \, .$$
+$$ \vec{v} = (v'_1, v'_2, \ldots, v'_n) = v'_1 \vec{b}_1 + v'_2 \vec{b}_2 + \ldots + v'_n \vec{b}_n \, .$$
 
-so while the vector remains the same, the values of its components depends on the specific choice of basis.
+Thus, while the vector remains the same, the values of its components depend on the specific choice of basis.
 
-The most common basis is the **Cartesian basis**, where for example for $n=3$ one has
+The most common basis is the **Cartesian basis**, where, for example, for $n=3$:
 
 $$\vec{a}_1 = (1, 0, 0) \, ,\qquad \vec{a}_2 = (0, 1, 0)\, ,\qquad \vec{a}_3 = (0, 0, 1) \, .$$
   
-The elements of a vector basis must be **linearly independent** from each other, meaning
-that none of them can be expressed as linear combination of the rest of basis vectors.
+!!! warning ""
+    The elements of a vector basis must be **linearly independent** from one another, meaning
+    that none of them can be expressed as a linear combination of the other basis vectors.
 
-We can consider one example in the two-dimensional real vector space $\mathbb{R}$, namely the $(x,y)$ coordinate plane, shown below.
+We can consider an example in the two-dimensional real vector space $\mathbb{R}^2$, namely the $(x,y)$ coordinate plane, shown below.
 
-![image](figures/3_vector_spaces_1.jpg)
+<figure markdown>
+  ![image](figures/3_vector_spaces_1.jpg)
+  <figcaption>The same vector $\vec{v}$ expressed in two different bases</figcaption>
+</figure>
   
-We see how the same vector $\vec{v}$ can be expressed in two different bases. In the first one (left panel), the Cartesian basis, its components are $\vec{v}=(2,2)$. But in the second basis (right panel), the components are different, being instead $\vec{v}=(2.4 ,0.8)$,
-though the magnitude and direction of the vector itself remain unchanged.
+In this figure, you can see the same vector $\vec{v}$ expressed in two different bases. In the left panel, the Cartesian basis is used, and the components are $\vec{v}=(2,2)$. In the right panel, a different basis gives the components $\vec{v}=(2.4, 0.8)$, while the magnitude and direction of the vector remain unchanged.
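+
+To make this concrete, here is a minimal sketch in Python with NumPy that recomputes the components of $\vec{v}$ in a second basis by solving $c_1 \vec{b}_1 + c_2 \vec{b}_2 = \vec{v}$. The basis vectors below are our own illustrative choice (the text does not specify those of the figure), but this particular choice happens to reproduce the components $(2.4, 0.8)$ of the right panel:
+
+```python
+import numpy as np
+
+v = np.array([2.0, 2.0])  # components of v in the Cartesian basis
+
+# A second basis, expressed in Cartesian components; the two vectors
+# must be linearly independent.
+b1 = np.array([1.0, 0.5])
+b2 = np.array([-0.5, 1.0])
+
+# Solve c1*b1 + c2*b2 = v for the new components (c1, c2)
+B = np.column_stack([b1, b2])
+c = np.linalg.solve(B, v)
+print(c)  # [2.4 0.8] -- same vector v, different components
+```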
 
-For many problems, both in mathematics and in physics, the appropiate choice of the vector space basis will significantly facilitate
-its solution.
+For many problems, both in mathematics and in physics, the appropriate choice of the vector space basis may significantly simplify the
+solution process.
     
-## Properties of a vector space
+## 3.2. Properties of a vector space
+
+You might already be familiar with performing various **operations** on vectors, so let us review some essential operations that are needed to start working with quantum mechanics:
+
+!!! info "Addition" 
+    I can add two vectors to produce a third vector, $$\vec{a} + \vec{b}= \vec{c}.$$
+    As with scalar addition, vector addition satisfies the commutative property, $$\vec{a} + \vec{b} = \vec{b} + \vec{a}.$$
+    Vector addition can be carried out in terms of their components,
+    $$ \vec{c} = \vec{a} + \vec{b} = (a_1 + b_1, a_2 + b_2, \ldots, a_n + b_n) =  (c_1, c_2, \ldots, c_n).$$
+
+!!! info "Scalar multiplication" 
+    I can multiply a vector by a scalar number (either real or complex) to produce another vector, $$\vec{c} = \lambda \vec{a}.$$ 
+    Addition and scalar multiplication of vectors are both *associative* and *distributive*, so the following relations hold
+    $$\begin{align} &1. \qquad (\lambda \mu) \vec{a} = \lambda (\mu \vec{a}) = \mu (\lambda \vec{a})\\
+    &2. \qquad \lambda (\vec{a} + \vec{b}) = \lambda \vec{a} + \lambda \vec{b}\\
+    &3. \qquad (\lambda + \mu)\vec{a} = \lambda \vec{a} +\mu \vec{a} \end{align}$$
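+
+These rules translate directly into componentwise arithmetic. A minimal numerical sketch in Python with NumPy (the vectors and scalars are arbitrary samples):
+
+```python
+import numpy as np
+
+a = np.array([1.0, 2.0, 3.0])
+b = np.array([-1.0, 0.5, 2.0])
+lam, mu = 2.0, -3.0  # sample scalars
+
+# Commutativity of addition, associativity and distributivity
+# of scalar multiplication, checked componentwise:
+print(np.allclose(a + b, b + a))                      # True
+print(np.allclose((lam * mu) * a, lam * (mu * a)))    # True
+print(np.allclose(lam * (a + b), lam * a + lam * b))  # True
+print(np.allclose((lam + mu) * a, lam * a + mu * a))  # True
+```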
+ 
+### Vector products
 
-You might be familiar with the concept that one can perform a number of **operations** between vectors. Some important operations that are relevant in  this course are are:
+In addition to multiplying a vector by a scalar, as mentioned above, one can also multiply two vectors with each other. 
+There are two types of vector products: one where the end result is a scalar (so just a number) and one where the end result is another vector. 
 
-- **Addition**: I can add two vectors to produce a third vector, $\vec{a} + \vec{b}= \vec{c}$.
-  As with scalar addition, also vectors satisfy the commutative property, $\vec{a} + \vec{b} = \vec{b} + \vec{a}$.
-  Vector addition can be carried out in terms of their components,
-  $$ \vec{c} = \vec{a} + \vec{b} = (a_1 + b_1, a_2 + b_2, \ldots, a_n + b_n) =  (c_1, c_2, \ldots, c_n) \, .$$
+!!! info "Scalar product of vectors" 
+    The scalar product of vectors is given by $$ \vec{a}\cdot \vec{b} = a_1b_1 + a_2b_2 + \ldots + a_nb_n \, .$$
+    Note that since the scalar product is just a number, its value will not depend on the specific
+    basis in which we express the vectors: the scalar product is said to be *basis-independent*. The scalar product is also found via 
+    $$\vec{a} \cdot \vec{b} = |\vec{a}||\vec{b}| \cos \theta$$ with $\theta$ the angle between the vectors.
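+
+Both expressions for the scalar product are easy to compare numerically. A minimal sketch in Python with NumPy (the two vectors are arbitrary samples, chosen so the angle comes out to 45 degrees):
+
+```python
+import numpy as np
+
+a = np.array([3.0, 0.0, 0.0])
+b = np.array([1.0, 1.0, 0.0])
+
+dot = np.dot(a, b)  # componentwise definition: a1*b1 + a2*b2 + a3*b3
+print(dot)          # 3.0
+
+# Recover the angle from a . b = |a||b|cos(theta)
+theta = np.arccos(dot / (np.linalg.norm(a) * np.linalg.norm(b)))
+print(np.degrees(theta))  # 45.0
+```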
 
--  **Scalar multiplication**: I can multiply a vector by a scalar number (either real
-or complex) to produce another vector, $\vec{c} = \lambda \vec{a}$.
-Addition and scalar multiplication of vectors are both *associative* and *distributive*, so the following
-relations hold
-  
-1. $(\lambda \mu) \vec{a} = \lambda (\mu \vec{a}) = \mu (\lambda \vec{a})$
-2. $\lambda (\vec{a} + \vec{b}) = \lambda \vec{a} + \lambda \vec{b}$
-3. $(\lambda + \mu)\vec{a} = \lambda \vec{a} +\mu \vec{a}$
-
-- **Vector product**: in addition to multiplying a vector by a scalar, as mentioned above, one can also multiply two vectors among them.
-There are two types of vector productions, one where the end result is a scalar (so just a number) and
-the other where the end result is another vectors. 
-
-- The **scalar production of vectors** is given by $$ \vec{a}\cdot \vec{b} = a_1b_1 + a_2b_2 + \ldots + a_nb_n \, .$$
-Note that since the scalar product is just a number, its value will not depend on the specific
-basis in which we express the vectors: the scalar product is said to be *basis-independent*. The scalar product is also found via
-$$\vec{a} \cdot \vec{b} = |\vec{a}||\vec{b}| \cos \theta$$
-with $\theta$ the angle between the vectors.
-
-- The **vector product** (or cross product) between two vectors $\vec{a}$ and $\vec{b}$ is given by
-$$
-\vec{a}\times \vec{b} = |\vec{a}||\vec{b}|\sin\theta \hat{n} \, ,
-$$
-where $|\vec{a}|=\sqrt{ \vec{a}\cdot\vec{a} }$ (and likewise for $|\vec{b}|$) is the norm of the vector $\vec{a}$, $\theta$ is the angle
-between the two vectors, and $\hat{n}$ is a unit vector which is *perpendicular* to the plane that contains $\vec{a}$ and $\vec{b}$. 
-Note that this cross-product can only be defined in *three-dimensional vector spaces*. The resulting vector $\vec{c}=\vec{a}\times \vec{b} $ will have as components $c_1 = a_2b_3-a_3b_2$, $c_2= a_3b_1 - a_1b_3$, and $c_3= a_1b_2 - a_2b_1$.
-
-- A special vector is the **unit vector**, which has a norm of 1 *by definition*. A unit vector is often denoted with a hat, rather than an arrow ($\hat{i}$ instead of $\vec{i}$). To find the unit vector in the direction of an arbitrary vector $\vec{v}$, we divide by the norm: 
-$$\hat{v} = \frac{\vec{v}}{|\vec{v}|}$$
-
-- Two vectors are said to be **orthonormal** of they are perpendicular (orthogonal) *and* both are unit vectors.
-
-Now we are ready to define in a more formal way what are vector spaces,
+!!! info "Cross product"
+    The vector product (or cross product) between two vectors $\vec{a}$ and $\vec{b}$ is given by 
+    $$ \vec{a}\times \vec{b} = |\vec{a}||\vec{b}|\sin\theta \hat{n}$$
+    where $|\vec{a}|=\sqrt{ \vec{a}\cdot\vec{a} }$ (and likewise for $|\vec{b}|$) is the norm of the vector $\vec{a}$, $\theta$ is the angle between the two vectors, and $\hat{n}$ is a unit vector which is *perpendicular* to the plane that contains $\vec{a}$ and $\vec{b}$. 
+    Note that this cross-product can only be defined in *three-dimensional vector spaces*. The resulting vector 
+    $\vec{c}=\vec{a}\times \vec{b} $ will have as components $c_1 = a_2b_3-a_3b_2$, $c_2= a_3b_1 - a_1b_3$, and $c_3= a_1b_2 - a_2b_1$.
+
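+A short numerical sketch of the component formula, again in Python with NumPy (sample vectors only):
+
+```python
+import numpy as np
+
+a = np.array([1.0, 2.0, 3.0])
+b = np.array([4.0, 5.0, 6.0])
+
+c = np.cross(a, b)  # (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)
+print(c)            # [-3.  6. -3.]
+
+# c is perpendicular to the plane containing a and b
+print(np.isclose(np.dot(c, a), 0.0), np.isclose(np.dot(c, b), 0.0))  # True True
+```
+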
+### Unit vector and orthonormality
+
+!!! info "Unit vector"
+    A special vector is the **unit vector**, which has a norm of 1 *by definition*. A unit vector is often denoted with a hat, rather than an arrow ($\hat{i}$ instead of $\vec{i}$). To find the unit vector in the direction of an arbitrary vector $\vec{v}$, we divide by the norm: $$\hat{v} = \frac{\vec{v}}{|\vec{v}|}$$
+
+!!! info "Orthonormality"    
+    Two vectors are said to be **orthonormal** if they are perpendicular (orthogonal) *and* both are unit vectors.
+
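+As a small illustration in Python with NumPy (sample vectors only):
+
+```python
+import numpy as np
+
+v = np.array([3.0, 4.0])
+v_hat = v / np.linalg.norm(v)        # unit vector in the direction of v
+print(v_hat, np.linalg.norm(v_hat))  # [0.6 0.8] 1.0
+
+# v_hat and w_hat are orthonormal: both unit vectors, zero dot product
+w_hat = np.array([-0.8, 0.6])
+print(np.isclose(np.dot(v_hat, w_hat), 0.0))  # True
+```
+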
+Now we are ready to define in a more formal way what vector spaces are,
 an essential concept for the description of quantum mechanics.
 
-The main properties of **vector spaces** are the following:
+### The main properties
 
-- A vector space is **complete upon vector addition**.
-This property means that if two arbitrary vectors  $\vec{a}$ and $\vec{b}$
-are elements of a given vector space ${\mathcal V}^n$,
-then their addition should also be an element of the same vector space
-  
-$$\vec{a}, \vec{b} \in {\mathcal V}^n, \qquad \vec{c} = (\vec{a} + \vec{b})
-\in {\mathcal V}^n  \, ,\qquad \forall\,\, \vec{a}, \vec{b} \,.$$
+The main properties of **vector spaces** are the following:
 
-- A vector space is **complete upon scalar multiplication**.
-This property means that when I multiply one arbitrary vector  $\vec{a}$,
-element of the vector space ${\mathcal V}^n$,
-by a general scalar $\lambda$, the result is another vector which also belongs
-to the same vector space
-$$\vec{a} \in {\mathcal V}^n, \qquad \vec{c} = \lambda \vec{a}
-\in {\mathcal V}^n \qquad \forall\,\, \vec{a},\lambda \, .$$
-The property that a vector space is complete upon scalar multiplication and vector addition is
-also known as the **closure condition**.
+!!! info ""
+    A vector space is **complete upon vector addition**.
+    This property means that if two arbitrary vectors  $\vec{a}$ and $\vec{b}$
+    are elements of a given vector space ${\mathcal V}^n$,
+    then their addition should also be an element of the same vector space 
+    $$\vec{a}, \vec{b} \in {\mathcal V}^n, \qquad \vec{c} = (\vec{a} + \vec{b}) \in {\mathcal V}^n  \, ,\qquad \forall\,\, \vec{a}, \vec{b} \,.$$
+
+!!! info "" 
+    A vector space is **complete upon scalar multiplication**.
+    This property means that when I multiply an arbitrary vector $\vec{a}$,
+    an element of the vector space ${\mathcal V}^n$, by a general scalar $\lambda$, the result is another vector which also belongs to the same vector space $$\vec{a} \in {\mathcal V}^n, \qquad \vec{c} = \lambda \vec{a}
+    \in {\mathcal V}^n \qquad \forall\,\, \vec{a},\lambda \, .$$
+    
+The property that a vector space is complete upon scalar multiplication and vector addition is also known as the **closure condition**.
 
-- There exists a **null element** $\vec{0}$ such that $\vec{a}+\vec{0} =\vec{0}+\vec{a}=\vec{a} $.
+!!! info ""
+    There exists a **null element** $\vec{0}$ such that $\vec{a}+\vec{0} =\vec{0}+\vec{a}=\vec{a} $.
 
-- **Inverse element**: for each vector $\vec{a} \in \mathcal{V}^n$ there exists another
-element of the same vector space, $-\vec{a}$, such that their addition results
-in the null element, $\vec{a} + ( -\vec{a}) = \vec{0}$. This element it called the **inverse element**.
+!!! info ""
+    **Inverse element**: for each vector $\vec{a} \in \mathcal{V}^n$ there exists another
+    element of the same vector space, $-\vec{a}$, such that their addition results
+    in the null element, $\vec{a} + ( -\vec{a}) = \vec{0}$. This element is called the **inverse element**.
 
-A vector space comes often equipped with various multiplication operations between vectors, such as the scalar product mentioned above
-(also known as *inner product*), but also  other operations such as the vector product or the tensor product. There are other properties, both for what we are interested in these are sufficient.
+A vector space often comes equipped with various multiplication operations between vectors, such as the scalar product mentioned above
+(also known as the *inner product*), but also other operations such as the *vector product* or the *tensor product*. There are other properties as well, but for our present purposes these are sufficient.
 
 
-## Matrix representation of vectors
+## 3.3. Matrix representation of vectors
 
 It is advantageous to represent vectors with a notation suitable for matrix manipulation and operations. As we will show in the next lectures, the operations involving states in quantum systems can be expressed in the language of linear algebra.
 
 First of all, let us remind ourselves how we express vectors in the standard Euclidean space. In two dimensions, the position of a point $\vec{r}$ when making explicit the Cartesian basis vectors reads
-$$
-\vec{r}=x \hat{i}+y\hat{j} \, .
-$$
-As mentioned above, the  unit vectors $\hat{i}$ and $\hat{j}$ form an *orthonormal basis* of this vector space, and we call $x$ and $y$ the *components* of $\vec{r}$ with respect to the directions spanned by the basis vectors.
+$$ \vec{r}=x \hat{i}+y\hat{j} \, .$$
+As mentioned above, the unit vectors $\hat{i}$ and $\hat{j}$ form an *orthonormal basis* of this vector space, and we call $x$ and $y$ the *components* of $\vec{r}$ with respect to the directions spanned by the basis vectors.
 
-Recall also that the  choice of basis vectors is not unique, we can use any other pair of orthonormal unit vectors $\hat{i}$ and $\hat{j}$, and express the vector $\vec{r}$ in terms of these new basis vectors as
-$$
-\vec{r}=x'\hat{i}'+y'\hat{j}'=x\hat{i}+y\hat{i} \, ,
-$$
-with $x'\neq x$ and $y'\neq y$. So while the vector itself does not depend on the basis, the values of its components are basis dependent.
+Recall also that the choice of basis vectors is not unique: we can use any other pair of orthonormal unit vectors $\hat{i}'$ and $\hat{j}'$, and express the vector $\vec{r}$ in terms of these new basis vectors as 
+$$ \vec{r}=x'\hat{i}'+y'\hat{j}'=x\hat{i}+y\hat{j} \, ,$$
+with $x'\neq x$ and $y'\neq y$. So, while the vector itself does not depend on the basis, the values of its components are basis dependent.
 
 We can also express the vector $\vec{r}$ in the following form
-$$
-\vec{r} =  \begin{pmatrix}x\\y\end{pmatrix} \, .
-$$
+$$ \vec{r} =  \begin{pmatrix}x\\y\end{pmatrix} \, ,$$
 which is known as a *column vector*. Note that this notation assumes a specific choice of basis vectors, which is left
-implicit, and displays only the information on its components along this specific basis.
+implicit and displays only the information on its components along this specific basis.
 
-For instance, if we had chosen another set of basis vectors  $\hat{i}'$ and $\hat{j}'$, the components would be $x'$ and $y'$, and the corresponding column vector representing the same vector $\vec{r}$ in such case would be given by
-$$
-\vec{r}= \begin{pmatrix}x'\\y'\end{pmatrix}.
-$$
+For instance, if we had chosen another set of basis vectors $\hat{i}'$ and $\hat{j}'$, the components would be $x'$ and $y'$, and the corresponding column vector representing the same vector $\vec{r}$ in such case would be given by
+$$ \vec{r}= \begin{pmatrix}x'\\y'\end{pmatrix}.$$
 
 We also know that Euclidean space is equipped with a scalar vector product. 
-The scalar product $\vec{r_1}\cdot\vec{r_2}$ of two vectors in 2d Euclidean space is given by
-$$
-\vec{r_1}\cdot\vec{r_2}=r_1\,r_2\,\cos\theta \, ,
-$$
-where $r_1$ and $r_2$ indicate the *magnitude* (length) of the vectors
-and $\theta$ indicates its relative angle. Note that the scalar product of two vectors is just a number, and thus
-it must be *independent of the choice of basis*.
-
-The same scalar product can also be expressed in terms of components of $\vec{r_1}$ and $\vec{r_2}$.  When using the $\{ \hat{i}, \hat{j} \}$ basis, the scalar product will be given by
-$$
-\vec{r_1}\cdot\vec{r_2}=x_1\,x_2\,+\,y_1\,y_2 \, .
-$$
-Note that the same result would be obtained if the $\{ \hat{i}', \hat{j}' \}$basis
-had been chosen instead
-$$
-\vec{r_1}\cdot\vec{r_2}=x_1'\,x_2'\,+\,y_1'\,y_2' \, .
-$$
+The scalar product $\vec{r_1}\cdot\vec{r_2}$ of two vectors in 2D Euclidean space is given by
+$$ \vec{r_1}\cdot\vec{r_2}=r_1\,r_2\,\cos\theta \, ,$$
+where $r_1$ and $r_2$ indicate the *magnitude* (length) of the vectors and $\theta$ indicates their relative angle. Note that the scalar product of two vectors is just a number, and thus it must be *independent of the choice of basis*.
+
+The same scalar product can also be expressed in terms of components of $\vec{r_1}$ and $\vec{r_2}$. When using the $\{ \hat{i}, \hat{j} \}$ basis, the scalar product will be given by
+$$ \vec{r_1}\cdot\vec{r_2}=x_1\,x_2\,+\,y_1\,y_2 \, .$$
+Note that the same result would be obtained if the basis $\{ \hat{i}', \hat{j}' \}$ 
+had been chosen instead 
+$$ \vec{r_1}\cdot\vec{r_2}=x_1'\,x_2'\,+\,y_1'\,y_2' \, .$$
 
 The scalar product of two vectors can also be expressed, taking into
-account the properties of matrix multiplication, in the following
-form
-$$
-\vec{r_1}\cdot\vec{r_2} = \begin{pmatrix}x_1, y_1\end{pmatrix}\begin{pmatrix}x_2\\y_2\end{pmatrix} = x_1x_2+y_1y_2.
-$$
+account the properties of matrix multiplication, in the following form
+$$ \vec{r_1}\cdot\vec{r_2} = \begin{pmatrix}x_1, y_1\end{pmatrix}\begin{pmatrix}x_2\\y_2\end{pmatrix} = x_1x_2+y_1y_2 \, ,$$
 where here we say that the vector $\vec{r_1}$ is represented by a *row vector*.
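+
+In matrix terms this is a $(1\times 2)$ times $(2\times 1)$ product. A minimal sketch in Python with NumPy (the components are arbitrary samples):
+
+```python
+import numpy as np
+
+r1 = np.array([[1.0, 2.0]])  # row vector: a 1 x 2 matrix
+r2 = np.array([[3.0],        # column vector: a 2 x 1 matrix
+               [4.0]])
+
+print(r1 @ r2)  # [[11.]] = x1*x2 + y1*y2
+```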
 
-Therefore, we see that the scalar product of vectors in Euclidean space can be expressed as the matrix multiplication of row and column vectors. The same formalism, as we will see, can be applied for the case of Hilbert spaces in quantum mechanics.
+Therefore, we see that the scalar product of vectors in Euclidean space can be expressed as the matrix multiplication of row and column vectors. The same formalism, as we will see in the next class, can be applied to Hilbert spaces in quantum mechanics.
 
 ***
 
-## Problems
+## 3.4. Problems
 
 **1)** [:grinning:] Find a unit vector parallel to the sum of $\vec{r}_1$ and $\vec{r}_2$, where we have defined
-$$\vec{r}_1=2\vec{i}+4\vec{j}-5\vec{k} \, , $$ and $$\vec{r}_2=\vec{i}+2\vec{j}+3\vec{k} \, .$$.
+$$\vec{r}_1=2\vec{i}+4\vec{j}-5\vec{k} \, , $$ and $$\vec{r}_2=\vec{i}+2\vec{j}+3\vec{k} \, .$$