diff --git a/src/3_vector_spaces.md b/src/3_vector_spaces.md
index b1fb080d82b41a4b66083c311ac26a9de05024fb..6ff33734368865e4ad29981ebc3828577f0fc226 100644
--- a/src/3_vector_spaces.md
+++ b/src/3_vector_spaces.md
@@ -112,27 +112,29 @@ There are two types of vector products; where the end result is a scalar (so jus
     Note that this cross-product can only be defined in *three-dimensional vector spaces*. The resulting vector 
     $\vec{c}=\vec{a}\times \vec{b}$ will have as components $c_1 = a_2b_3-a_3b_2$, $c_2= a_3b_1 - a_1b_3$, and $c_3= a_1b_2 - a_2b_1$.
 
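+As a quick numerical check (a minimal sketch using `numpy`, our own choice of tool rather than part of the notes), the component formula above reproduces the built-in cross product:
+
+```python
+import numpy as np
+
+# Two arbitrary vectors in a three-dimensional vector space.
+a = np.array([1.0, 2.0, 3.0])
+b = np.array([4.0, 5.0, 6.0])
+
+# Components of c = a x b from the formula above
+# (indices shifted by one, since numpy arrays start at 0).
+c = np.array([
+    a[1] * b[2] - a[2] * b[1],  # c_1 = a_2 b_3 - a_3 b_2
+    a[2] * b[0] - a[0] * b[2],  # c_2 = a_3 b_1 - a_1 b_3
+    a[0] * b[1] - a[1] * b[0],  # c_3 = a_1 b_2 - a_2 b_1
+])
+
+assert np.allclose(c, np.cross(a, b))  # both give [-3., 6., -3.]
+```
+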
-### Unit vector and orthogonality
+### Unit vector and orthonormality
 
 !!! info "Unit vector"
     A special vector is the **unit vector**, which has a norm of 1 *by definition*. A unit vector is often denoted with a hat, rather than an arrow ($\hat{i}$ instead of $\vec{i}$). To find the unit vector in the direction of an arbitrary vector $\vec{v}$, we divide by the norm: $$\hat{v} = \frac{\vec{v}}{|\vec{v}|}$$
 
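+For example (a minimal `numpy` sketch, our own illustration rather than part of the notes), dividing a vector by its norm produces a unit vector:
+
+```python
+import numpy as np
+
+v = np.array([3.0, 4.0])
+v_hat = v / np.linalg.norm(v)  # divide by the norm |v| = 5
+
+print(v_hat)                  # [0.6 0.8]
+print(np.linalg.norm(v_hat))  # 1.0: a unit vector by construction
+```
+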
-!!! info "Orthogonality"    
+!!! info "Orthonormality"    
     Two vectors are said to be **orthonormal** if they are perpendicular (orthogonal) *and* both are unit vectors.
 
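+As a sanity check (again a `numpy` sketch of our own), the Cartesian basis vectors are orthonormal: mutually orthogonal and each of unit norm:
+
+```python
+import numpy as np
+
+# The Cartesian basis vectors of the two-dimensional plane.
+i_hat = np.array([1.0, 0.0])
+j_hat = np.array([0.0, 1.0])
+
+assert np.isclose(np.dot(i_hat, j_hat), 0.0)   # orthogonal
+assert np.isclose(np.linalg.norm(i_hat), 1.0)  # unit norm
+assert np.isclose(np.linalg.norm(j_hat), 1.0)  # unit norm
+```
+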
 Now we are ready to define in a more formal way what vector spaces are,
 an essential concept for the description of quantum mechanics.
 
+### The main properties
+
 The main properties of **vector spaces** are the following:
 
-!!! ""
+!!! info ""
     A vector space is **complete upon vector addition**.
     This property means that if two arbitrary vectors  $\vec{a}$ and $\vec{b}$
     are elements of a given vector space ${\mathcal V}^n$,
     then their addition should also be an element of the same vector space 
     $$\vec{a}, \vec{b} \in {\mathcal V}^n, \qquad \vec{c} = (\vec{a} + \vec{b}) \in {\mathcal V}^n  \, ,\qquad \forall\,\, \vec{a}, \vec{b} \,.$$
 
-!!! "" 
+!!! info "" 
     A vector space is **complete upon scalar multiplication**.
     This property means that when I multiply one arbitrary vector  $\vec{a}$,
     element of the vector space ${\mathcal V}^n$, by a general scalar $\lambda$, the result is another vector which also belongs to the same vector space $$\vec{a} \in {\mathcal V}^n, \qquad \vec{c} = \lambda \vec{a}
@@ -140,16 +142,16 @@ The main properties of **vector spaces** are the following:
     The property that a vector space is complete upon scalar multiplication and vector addition is
     also known as the **closure condition**.
 
-!!! ""
+!!! info ""
     There exists a **null element** $\vec{0}$ such that $\vec{a}+\vec{0} =\vec{0}+\vec{a}=\vec{a} $.
 
-!!! ""
+!!! info ""
     **Inverse element**: for each vector $\vec{a} \in \mathcal{V}^n$ there exists another
     element of the same vector space, $-\vec{a}$, such that their addition results
     in the null element, $\vec{a} + ( -\vec{a}) = \vec{0}$. This element is called the **inverse element**.
 
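+To make these axioms concrete, here is a minimal sketch that exercises each of them on $\mathbb{R}^3$, modelled with `numpy` arrays (our choice of tool, not part of the notes):
+
+```python
+import numpy as np
+
+# R^3 modelled with numpy arrays.
+a = np.array([1.0, -2.0, 0.5])
+b = np.array([0.0, 3.0, -1.0])
+lam = -2.5
+
+c1 = a + b    # closure under vector addition: again a vector in R^3
+c2 = lam * a  # closure under scalar multiplication: again a vector in R^3
+
+zero = np.zeros(3)               # the null element
+assert np.allclose(a + zero, a)  # a + 0 = a
+
+assert np.allclose(a + (-a), zero)  # -a is the inverse element of a
+```
+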
 A vector space often comes equipped with various multiplication operations between vectors, such as the scalar product mentioned above
-(also known as *inner product*), but also  other operations such as the vector product or the tensor product. There are other properties, both for what we are interested in these are sufficient.
+(also known as the *inner product*), but also other operations such as the *vector product* or the *tensor product*. Vector spaces have further properties, but for our present purposes these are sufficient.
 
 
 ## 3.3. Matrix representation of vectors
@@ -157,57 +159,38 @@ A vector space comes often equipped with various multiplication operations betwe
 It is advantageous to represent vectors with a notation suitable for matrix manipulation and operations. As we will show in the next lectures, the operations involving states in quantum systems can be expressed in the language of linear algebra.
 
 First of all, let us remind ourselves how we express vectors in standard Euclidean space. In two dimensions, the position of a point $\vec{r}$, when the Cartesian basis vectors are made explicit, reads
-$$
-\vec{r}=x \hat{i}+y\hat{j} \, .
-$$
-As mentioned above, the  unit vectors $\hat{i}$ and $\hat{j}$ form an *orthonormal basis* of this vector space, and we call $x$ and $y$ the *components* of $\vec{r}$ with respect to the directions spanned by the basis vectors.
+$$ \vec{r}=x \hat{i}+y\hat{j} \, .$$
+As mentioned above, the unit vectors $\hat{i}$ and $\hat{j}$ form an *orthonormal basis* of this vector space, and we call $x$ and $y$ the *components* of $\vec{r}$ with respect to the directions spanned by the basis vectors.
 
-Recall also that the  choice of basis vectors is not unique, we can use any other pair of orthonormal unit vectors $\hat{i}$ and $\hat{j}$, and express the vector $\vec{r}$ in terms of these new basis vectors as
-$$
-\vec{r}=x'\hat{i}'+y'\hat{j}'=x\hat{i}+y\hat{i} \, ,
-$$
-with $x'\neq x$ and $y'\neq y$. So while the vector itself does not depend on the basis, the values of its components are basis dependent.
+Recall also that the choice of basis vectors is not unique: we can use any other pair of orthonormal unit vectors $\hat{i}'$ and $\hat{j}'$, and express the vector $\vec{r}$ in terms of these new basis vectors as
+$$ \vec{r}=x'\hat{i}'+y'\hat{j}'=x\hat{i}+y\hat{j} \, ,$$
+with, in general, $x'\neq x$ and $y'\neq y$. So, while the vector itself does not depend on the basis, the values of its components are basis dependent.
 
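+To see this concretely (a `numpy` sketch with an arbitrarily chosen rotation angle, not part of the notes), rotating the basis changes the components but not the vector itself:
+
+```python
+import numpy as np
+
+# A basis {i', j'} obtained by rotating the Cartesian basis by 30 degrees.
+theta = np.deg2rad(30.0)
+i_p = np.array([np.cos(theta), np.sin(theta)])   # i'
+j_p = np.array([-np.sin(theta), np.cos(theta)])  # j'
+
+r = np.array([2.0, 1.0])  # components (x, y) in the {i, j} basis
+
+# Components in the primed basis are the projections onto i' and j'.
+x_p, y_p = np.dot(r, i_p), np.dot(r, j_p)
+
+# The vector itself is unchanged: x' i' + y' j' reconstructs r.
+assert np.allclose(x_p * i_p + y_p * j_p, r)
+```
+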
 We can also express the vector $\vec{r}$ in the following form
-$$
-\vec{r} =  \begin{pmatrix}x\\y\end{pmatrix} \, .
-$$
+$$ \vec{r} =  \begin{pmatrix}x\\y\end{pmatrix} \, ,$$
 which is known as a *column vector*. Note that this notation assumes a specific choice of basis vectors, which is left
-implicit, and displays only the information on its components along this specific basis.
+implicit and displays only the information on its components along this specific basis.
 
-For instance, if we had chosen another set of basis vectors  $\hat{i}'$ and $\hat{j}'$, the components would be $x'$ and $y'$, and the corresponding column vector representing the same vector $\vec{r}$ in such case would be given by
-$$
-\vec{r}= \begin{pmatrix}x'\\y'\end{pmatrix}.
-$$
+For instance, if we had chosen another set of basis vectors $\hat{i}'$ and $\hat{j}'$, the components would be $x'$ and $y'$, and the corresponding column vector representing the same vector $\vec{r}$ in such case would be given by
+$$ \vec{r}= \begin{pmatrix}x'\\y'\end{pmatrix}.$$
 
 We also know that Euclidean space is equipped with a scalar product between vectors.
 The scalar product $\vec{r_1}\cdot\vec{r_2}$ of two vectors in two-dimensional Euclidean space is given by
-$$
-\vec{r_1}\cdot\vec{r_2}=r_1\,r_2\,\cos\theta \, ,
-$$
-where $r_1$ and $r_2$ indicate the *magnitude* (length) of the vectors
-and $\theta$ indicates its relative angle. Note that the scalar product of two vectors is just a number, and thus
-it must be *independent of the choice of basis*.
-
-The same scalar product can also be expressed in terms of components of $\vec{r_1}$ and $\vec{r_2}$.  When using the $\{ \hat{i}, \hat{j} \}$ basis, the scalar product will be given by
-$$
-\vec{r_1}\cdot\vec{r_2}=x_1\,x_2\,+\,y_1\,y_2 \, .
-$$
-Note that the same result would be obtained if the $\{ \hat{i}', \hat{j}' \}$basis
-had been chosen instead
-$$
-\vec{r_1}\cdot\vec{r_2}=x_1'\,x_2'\,+\,y_1'\,y_2' \, .
-$$
+$$ \vec{r_1}\cdot\vec{r_2}=r_1\,r_2\,\cos\theta \, ,$$
+where $r_1$ and $r_2$ indicate the *magnitudes* (lengths) of the vectors and $\theta$ indicates their relative angle. Note that the scalar product of two vectors is just a number, and thus it must be *independent of the choice of basis*.
+
+The same scalar product can also be expressed in terms of components of $\vec{r_1}$ and $\vec{r_2}$. When using the $\{ \hat{i}, \hat{j} \}$ basis, the scalar product will be given by
+$$ \vec{r_1}\cdot\vec{r_2}=x_1\,x_2\,+\,y_1\,y_2 \, .$$
+Note that the same result would be obtained if the basis $\{ \hat{i}', \hat{j}' \}$ had been chosen instead:
+$$ \vec{r_1}\cdot\vec{r_2}=x_1'\,x_2'\,+\,y_1'\,y_2' \, .$$
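+
+As a numerical cross-check (a `numpy` sketch of ours, with arbitrarily chosen lengths and angles), the geometric and component forms of the scalar product agree:
+
+```python
+import numpy as np
+
+# Two vectors defined by their lengths and direction angles.
+r1_len, alpha1 = 2.0, np.deg2rad(30.0)
+r2_len, alpha2 = 3.0, np.deg2rad(75.0)
+r1 = r1_len * np.array([np.cos(alpha1), np.sin(alpha1)])
+r2 = r2_len * np.array([np.cos(alpha2), np.sin(alpha2)])
+
+# Geometric form: |r1| |r2| cos(theta), with theta the relative angle.
+dot_geometric = r1_len * r2_len * np.cos(alpha2 - alpha1)
+
+# Component form in the {i, j} basis: x1 x2 + y1 y2.
+dot_components = r1[0] * r2[0] + r1[1] * r2[1]
+
+assert np.isclose(dot_geometric, dot_components)
+```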
 
 The scalar product of two vectors can also be expressed, taking into
-account the properties of matrix multiplication, in the following
-form
-$$
-\vec{r_1}\cdot\vec{r_2} = \begin{pmatrix}x_1, y_1\end{pmatrix}\begin{pmatrix}x_2\\y_2\end{pmatrix} = x_1x_2+y_1y_2.
-$$
+account the properties of matrix multiplication, in the following form
+$$ \vec{r_1}\cdot\vec{r_2} = \begin{pmatrix}x_1 & y_1\end{pmatrix}\begin{pmatrix}x_2\\y_2\end{pmatrix} = x_1x_2+y_1y_2 \, ,$$
 where we say that the vector $\vec{r_1}$ is represented by a *row vector*.
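+
+In `numpy` (once more a sketch of our own), this row-times-column structure is literal matrix multiplication:
+
+```python
+import numpy as np
+
+r1_row = np.array([[1.0, 2.0]])     # r1 as a 1x2 row vector
+r2_col = np.array([[3.0], [-1.0]])  # r2 as a 2x1 column vector
+
+# The matrix product of a row and a column vector is a 1x1 matrix
+# whose single entry is the scalar product x1*x2 + y1*y2.
+print(r1_row @ r2_col)  # [[1.]]
+```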
 
-Therefore, we see that the scalar product of vectors in Euclidean space can be expressed as the matrix multiplication of row and column vectors. The same formalism, as we will see, can be applied for the case of Hilbert spaces in quantum mechanics.
+Therefore, we see that the scalar product of vectors in Euclidean space can be expressed as the matrix multiplication of row and column vectors. The same formalism, as we will see in the next class, can be applied to Hilbert spaces in quantum mechanics.
 
 ***