Some formatting updates

Merged Maciej Topyla requested to merge maciejedits into master
@@ -44,24 +44,25 @@ initial conditions,
$$x^{(n-1)}(t_{0}) = x^{(n-1)}_{0}, \cdots, x(t_0)=x_0. $$
This is because to fully specify the solution of an $n$*-th* order differential
equation, $n$ initial conditions are necessary (we need to specify the values of $n-1$ derivatives
of $x(t)$, as well as the value of the function $x(t)$ itself, at some $t_0$). To understand why we need
initial conditions, look at the following example.
!!! check "Example: Initial conditions"
Consider the following calculus problem,
$$\dot{x}(t)=t. $$
By integrating, one finds that the solution to this equation is
$$\frac{1}{2}t^2 + c,$$
where $c$ is an integration constant. In order to specify the integration
constant, an initial condition is needed. For instance, if we know that when
$t=2$ then $x(2)=4$, we can plug this into the equation to get
$$\frac{1}{2}\cdot 4 + c = 4, $$
which implies that $c=2$.
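The worked example above can be checked symbolically. Below is a minimal sketch using SymPy (an illustration, not part of the course material): `dsolve` solves $\dot{x}(t)=t$ with the initial condition $x(2)=4$ and fixes the integration constant to $c=2$.

```python
import sympy as sp

t = sp.symbols("t")
x = sp.Function("x")

# Solve x'(t) = t with the initial condition x(2) = 4.
sol = sp.dsolve(sp.Eq(x(t).diff(t), t), x(t), ics={x(2): 4})

# The solution should come out as x(t) = t**2/2 + 2, i.e. c = 2.
print(sol.rhs)
```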
@@ -82,7 +83,7 @@ In this course, we will be focusing on *Linear Differential Equations*, meaning
that we consider differential equations $x^{(n)}(t) = f(x^{(n-1)}(t), \cdots, x(t), t)$
where the function $f$ is a linear polynomial function of the unknown function
$x(t)$. A simple way to spot a non-linear differential equation is to look for
non-linear terms, such as $x(t) \cdot \dot{x}(t)$ or $x^{(n)}(t) \cdot x^{(2)}(t)$.
Often, we will be dealing with several coupled differential equations. In this
situation, we can write the entire system of differential equations as a vector
@@ -96,11 +97,17 @@ x_{m}(t) \\
A system of first order linear equations is then written as
$$\dot{\vec{x}}(t) = \vec{f}(\vec{x}(t),t) $$
with the initial condition $\vec{x}(t_0) = \vec{x}_0$.
###7.1.2. Basic examples and strategies for a (single) first-order differential equation
Before focusing on systems of first-order equations, we will first consider
exemplary cases of single first-order equations with only one unknown function $x(t)$.
Here, we can distinguish three important types.
#### Type 1: $\dot{x}(t) = f(t)$
The simplest type of differential equation is the type usually learned about in the
integration portion of a calculus course. Such equations have the form,
@@ -112,7 +119,22 @@ to this type of equation are
$$x(t) = F(t) + c. $$
!!! info "What is the antiderivative?"
You may know the antiderivative $F(t)$ of a function $f(t)$ under a different name -
it is the same as the indefinite integral: $F(t) = \int f(t) dt$. Remember that integration
is essentially the opposite of differentiation: taking an integral means finding a function
$F(t)$ such that $\dot{F}(t) = \frac{dF}{dt} = \frac{d}{dt}\int f(t) dt = f(t)$. In the
context of differential equations, we prefer the name antiderivative, since solving the
differential equation essentially means undoing the derivative.
Note that the antiderivative is only defined up to a constant (as is the indefinite integral).
In practice, you will thus find some particular expression for $F(t)$ through integration. To
capture all possible solutions, don't forget the integration constant $c$ in the expression
above!
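To experiment with antiderivatives, here is a short SymPy sketch (an illustration with an arbitrarily chosen $f$; note that `integrate` returns one particular antiderivative, without the integration constant $c$):

```python
import sympy as sp

t = sp.symbols("t")
f = sp.cos(t)            # an arbitrary example function f(t)
F = sp.integrate(f, t)   # one antiderivative F(t); SymPy omits the constant c

# Differentiating the antiderivative recovers f(t).
assert sp.diff(F, t) == f
```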
!!! check "Example"
Given the equation
@@ -120,16 +142,19 @@ $$x(t) = F(t) + c. $$
one finds by integrating that the solution is $\frac{1}{2}t^2 + c$.
#### Type 2: $\dot{x}(t) = f(x(t))$
The previous example was easy, as the function $x(t)$ did not enter in the right-hand side.
A second important case that we can solve explicitly is when the right-hand side is some
function of $x(t)$:
$$\dot{x}(t)=f(x(t)).$$
This implies that $\frac{\dot{x}(t)}{f(x(t))} = 1$. Let $F(x)$ be the
anti-derivative of $\frac{1}{f(x)}$. Then, by making use of the chain rule:
$$\frac{d}{dt} F(x(t)) = \frac{dx}{dt}\,\frac{dF}{dx} = \frac{\dot{x}(t)}{f(x(t))} = 1$$
$$\Leftrightarrow F(x(t)) = t + c.$$
@@ -137,7 +162,7 @@ From this, we notice that if we can solve for $x(t)$, then we have the
solution! Having a specific form for the function $f(x)$ can often make it
possible to solve either implicitly or explicitly for the function $x(t)$.
!!! check "Example"
Given the equation
@@ -151,20 +176,24 @@ possible to solve either implicitly or explicitly for the function $x(t)$.
and $F(x)$ be the anti-derivative of $\frac{1}{f(x)}$. Integrating
allows us to find the form of this anti-derivative,
$$F(x):= \int \frac{dx}{\lambda x} = \frac{1}{\lambda}\log(\lambda x). $$
Now, making use of the general solution we also have that $F(x(t)) =t+c$.
These two equations can be combined to form an equation for $x(t)$,
$$\log(\lambda x) = \lambda t + c$$
$$x(t) = \frac{1}{\lambda} e^c e^{\lambda t} $$
$$x(t) = c_0 e^{\lambda t}$$
where in the last line we defined a new constant $c_0 =\frac{1}{\lambda}e^c$.
Given an initial condition, we could immediately determine this constant $c_0$.
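As a quick check of this result (a SymPy sketch, not part of the notes): the general solution of $\dot{x}(t)=\lambda x(t)$ indeed comes out proportional to $e^{\lambda t}$, with SymPy's constant `C1` playing the role of $c_0$.

```python
import sympy as sp

t, lam = sp.symbols("t lambda")
x = sp.Function("x")

# Solve x'(t) = lambda * x(t).
sol = sp.dsolve(sp.Eq(x(t).diff(t), lam * x(t)), x(t))

# The returned solution is proportional to exp(lambda*t);
# verify that it satisfies the differential equation.
assert sp.simplify(sol.rhs.diff(t) - lam * sol.rhs) == 0
```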
#### Type 3: $\dot{x}(t) = g(t) f(x(t))$
So far we have considered only DE's where the right-hand side is either a function of $t$
*or* of $x(t)$. We can still solve a more general case, if we can separate the two dependencies
as:
$$\dot{x}(t)=g(t)f(x(t)).$$
@@ -186,7 +215,7 @@ $$\Rightarrow F(x(t)) = G(t) + c $$
Given this form of a general solution, the knowledge of specific functions $f, g$ would
make it possible to solve for $x(t)$.
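The separation strategy can be illustrated concretely (a SymPy sketch with an arbitrarily chosen separable right-hand side, $g(t)=t$ and $f(x)=x$, so that $G(t)=t^2/2$ and the solution is proportional to $e^{t^2/2}$):

```python
import sympy as sp

t = sp.symbols("t")
x = sp.Function("x")

# Separable example: g(t) = t, f(x) = x, i.e. x'(t) = t * x(t).
sol = sp.dsolve(sp.Eq(x(t).diff(t), t * x(t)), x(t))

# The solution is proportional to exp(t**2/2); verify the ODE holds.
assert sp.simplify(sol.rhs.diff(t) - t * sol.rhs) == 0
```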
!!! check "Example"
Let us apply the above strategy to the following equation,
@@ -367,43 +396,42 @@ $$ \dot{\vec{x}}(t) = A(t) \vec{x}(t) + \vec{b}(t).$$
Now we need a strategy for finding the solution of the inhomogeneous equation.
Begin by making an ansatz that $\vec{x}(t)$ can be written as a linear combination
of the basis functions for the homogeneous system, with coefficients that are
functions of the independent variable.
1. Ansatz:
$$\vec{x}(t) = c_1(t) \vec{\phi}_1 (t)+ c_2(t) \vec{\phi}_2(t) + \cdots + c_n(t) \vec{\phi}_n (t) $$
2. Define the vector $\vec{c}(t)$ and matrix $\vec{\Phi}(t)$ as
$$\vec{c}(t) = \begin{bmatrix}
c_1(t) \\
\vdots \\
c_n(t) \\
\end{bmatrix} $$
$$\vec{\Phi}(t) = \big( \vec{\phi}_1 (t) | \cdots | \vec{\phi}_n (t) \big) $$
3. With these definitions, it is possible to re-write the ansatz for $\vec{x}(t)$,
$$ \vec{x}(t) = \vec{\Phi}(t) \vec{c}(t).$$
4. Using the Leibniz rule, we then have the following expanded equation,
$$\dot{\vec{x}}(t) = \dot{\vec{\Phi}}(t) \vec{c}(t) + \vec{\Phi}(t) \dot{\vec{c}}(t).$$
5. Substituting the new expression into the differential equation gives,
$$\dot{\vec{\Phi}}(t) \vec{c}(t) + \vec{\Phi}(t) \dot{\vec{c}}(t) = A(t) \vec{\Phi}(t) \vec{c}(t) + \vec{b}(t) $$
$$\vec{\Phi}(t) \dot{\vec{c}}(t) = \vec{b}(t). $$
In order to cancel terms in the previous line, we made use of the fact that
$\vec{\Phi}(t)$ solves the homogeneous equation $\dot{\vec{\Phi}} = A \vec{\Phi}$.
6. By way of inverting and integrating, we can write the equation for the coefficient vector $\vec{c}(t)$,
$$\vec{c}(t) = \int \vec{\Phi}^{-1}(t) \vec{b}(t) dt.$$
7. With access to a concrete form of the coefficient vector, we can then write down the particular solution,
$$\vec{\psi}(t)= \vec{\Phi}(t) \cdot \int \vec{\Phi}^{-1}(t) \vec{b}(t) dt .$$
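The seven steps above can be traced on a concrete system. Below is a SymPy sketch under simplifying assumptions (a constant diagonal $A$ and a constant $\vec{b}$, both chosen arbitrarily, so that the fundamental matrix $\vec{\Phi}$ is known in closed form):

```python
import sympy as sp

t = sp.symbols("t")

# Arbitrary example system: constant diagonal A and constant b,
# so the fundamental matrix Phi is diag(e^t, e^{2t}).
A = sp.Matrix([[1, 0], [0, 2]])
b = sp.Matrix([1, 1])
Phi = sp.diag(sp.exp(t), sp.exp(2 * t))

# Step 6: c(t) = integral of Phi^{-1} b dt (entrywise).
c = (Phi.inv() * b).applyfunc(lambda e: sp.integrate(e, t))

# Step 7: particular solution psi = Phi * c.
psi = sp.simplify(Phi * c)

# psi must satisfy the inhomogeneous equation psi' = A psi + b.
assert sp.simplify(psi.diff(t) - (A * psi + b)) == sp.zeros(2, 1)
```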
!!! check "Example: Inhomogeneous first order linear differential equation"
@@ -493,39 +521,41 @@ $$A \vec{v}_i = \lambda_i \vec{v}_i, \qquad \forall i \epsilon \{1, \cdots, n \}
Here, we give consideration to the case of distinct eigenvectors, in which case
the $n$ eigenvectors form a basis for $\mathbb{R}^{n}$.
!!! info "Strategy for finding the solution when $A$ is diagonalizable"
1. To solve the equation $\dot{\vec{x}}(t) = A \vec{x}(t)$, define a set of scalar functions $\{u_{1}(t), \cdots, u_{n}(t) \}$ and make the following ansatz:
$$\vec{\phi}_{i}(t) = u_{i}(t) \vec{v}_{i}.$$
2. Then, by differentiating,
$$\dot{\vec{\phi}_i}(t) = \dot{u_i}(t) \vec{v}_{i}.$$
3. The above equation can be combined with the differential equation for
$\vec{\phi}_{i}(t)$,
$$\dot{\vec{\phi}_{i}}(t)=A \vec{\phi}_{i}(t) \, ,$$
to derive the following equations,
$$\dot{u_i}(t) \vec{v}_{i} = A u_{i}(t) \vec{v}_{i}$$
$$\dot{u_i}(t) \vec{v}_{i} = u_{i}(t) \lambda_{i} \vec{v}_{i} $$
$$\vec{v}_{i} (\dot{u_i}(t) - \lambda_i u_{i}(t)) = 0, $$
where in the second last line, we make use of the fact that $\vec{v}_i$ is an eigenvector of $A$.
4. The obtained relation implies that
$$\dot{u_i}(t) = \lambda_i u_{i}(t).$$
This is a simple differential equation, of the type dealt with in the third example.
5. The solution is found to be
$$u_{i}(t) = c_i e^{\lambda_i t},$$
with $c_i$ being a constant.
6. The general solution is found by adding all $n$ of the
solutions $\vec{\phi}_{i}(t)$,
$$\vec{x}(t) = c_{1} e^{\lambda_1 t} \vec{v}_{1} + c_{2} e^{\lambda_2 t} \vec{v}_{2} + \cdots + c_{n} e^{\lambda_n t} \vec{v}_{n}.$$
and the vectors $\{e^{\lambda_1 t} \vec{v}_{1}, \cdots, e^{\lambda_n t} \vec{v}_{n} \}$
form a basis for the solution space since $\det(\vec{v}_1 | \cdots | \vec{v}_n) \neq 0$
(the $n$ eigenvectors are linearly independent).
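The eigenvalue strategy can also be sketched numerically (a NumPy illustration with an arbitrarily chosen diagonalizable matrix; `numpy.linalg.eig` returns the eigenvalues and the eigenvectors as columns):

```python
import numpy as np

# Arbitrary 2x2 matrix with distinct real eigenvalues (2 and -1).
A = np.array([[0.0, 1.0],
              [2.0, 1.0]])
lam, V = np.linalg.eig(A)     # eigenvalues lam_i, eigenvectors as columns of V

x0 = np.array([1.0, 0.0])     # initial condition x(0)
c = np.linalg.solve(V, x0)    # coefficients c_i of x0 in the eigenbasis

def x(t):
    """x(t) = sum_i c_i * exp(lam_i * t) * v_i."""
    return V @ (c * np.exp(lam * t))

# Check x' = A x with a centered finite difference.
h, t0 = 1e-6, 0.7
deriv = (x(t0 + h) - x(t0 - h)) / (2 * h)
assert np.allclose(deriv, A @ x(t0), atol=1e-4)
```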
!!! check "Example: Homogeneous first order linear system with diagonalizable constant coefficient matrix"
@@ -671,35 +701,33 @@ has a root $\lambda$ with multiplicity 2, but only one eigenvector $\vec{v}_1$.
What is the problem in this case? Since there are $n$ equations to be solved and an $n \times n$ linear operator $A$, the solution space for the equation requires a basis of $n$ solutions. In this case however, there are $n-1$ eigenvectors, so we cannot use only these eigenvectors in forming a basis for
the solution space.
!!! info "Strategy for finding a solution when $A$ ($2 \times 2$) is defective"
1. Suppose that we have a system of $2$ coupled equations, so that $A$ is a $2 \times 2$ matrix, which has eigenvalue $\lambda_1$ with multiplicity $2$. As in the previous section, we can form one solution using the single eigenvector $\vec{v}_1$,
$$\vec{\phi}_1(t) = e^{\lambda_1 t} \vec{v}_1.$$
2. To determine a second, linearly independent solution, make the following ansatz:
$$\vec{\phi}_2(t) = t e^{\lambda_1 t} \vec{v}_1 + e^{\lambda_1 t} \vec{v}_2.$$
3. With this ansatz, it is then necessary to determine an appropriate vector $\vec{v}_2$ such that $\vec{\phi}_2(t)$ is really a solution of this problem. To achieve that, take the derivative of $\vec{\phi}_2(t)$,
$$\dot{\vec{\phi}_2}(t) = e^{\lambda_1 t} \vec{v}_1 + \lambda_1 t e^{\lambda_1 t} \vec{v}_1 + \lambda_1 e^{\lambda_1 t} \vec{v}_2 $$
4. Also, write the matrix equation for $\vec{\phi}_2(t)$,
$$A \vec{\phi}_2(t) = A t e^{\lambda_1 t} \vec{v}_1 + A e^{\lambda_1 t} \vec{v}_2 $$
$$A \vec{\phi}_2(t) = \lambda_1 t e^{\lambda_1 t} \vec{v}_1 + A e^{\lambda_1 t}\vec{v}_2$$
5. Since $\vec{\phi}_2(t)$ must solve the equation $\dot{\vec{\phi}}_2(t) = A \vec{\phi}_2(t)$, we can combine and simplify the previous equations to write
$$A \vec{v}_2 - \lambda_1 \vec{v}_2 = \vec{v}_1$$
$$(A- \lambda_1 I) \vec{v}_2 = \vec{v}_1 $$
6. With this condition, it is possible to write the general solution as
$$\vec{x}(t) = c_1 e^{\lambda_1 t} \vec{v}_1 + c_2(t e^{\lambda_1 t} \vec{v}_1 + e^{\lambda_1 t} \vec{v}_2).$$
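The six steps can be traced on the simplest defective example. Below is a SymPy sketch; the Jordan-block matrix is an arbitrary choice with eigenvalue $\lambda$ of multiplicity 2 and a single eigenvector:

```python
import sympy as sp

t, lam = sp.symbols("t lambda")

# Arbitrary defective example: a 2x2 Jordan block.
A = sp.Matrix([[lam, 1], [0, lam]])
v1 = sp.Matrix([1, 0])   # the only eigenvector
v2 = sp.Matrix([0, 1])   # generalized eigenvector: (A - lam*I) v2 = v1
assert (A - lam * sp.eye(2)) * v2 == v1

# The second solution from the ansatz.
phi2 = t * sp.exp(lam * t) * v1 + sp.exp(lam * t) * v2

# phi2 must solve the homogeneous system phi2' = A phi2.
assert sp.simplify(phi2.diff(t) - A * phi2) == sp.zeros(2, 1)
```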
!!! check "Example: Continuation of the example with $A$ defective (Part 2)"
@@ -766,14 +794,15 @@ Then, for comparison, multiply $\vec{\phi}_k(t)$ by $A$
$$\begin{align}
A \vec{\phi}_k (t) &= e^{\lambda t} \big( \frac{t^{k-1}}{(k-1)!}\lambda \vec{v}_1 + \frac{t^{k-2}}{(k-2)!} A \vec{v}_2 + \cdots + t A \vec{v}_{k-1} + A \vec{v}_k \big)\\
&= \lambda \vec{\phi}_k (t) + e^{\lambda t} \big( \frac{t^{k-2}}{(k-2)!}(A- \lambda I)\vec{v}_2 + \cdots + t (A- \lambda I)\vec{v}_{k-1} + (A- \lambda I)\vec{v}_{k} \big)\\
&= \lambda \vec{\phi}_k (t) + e^{\lambda t} \big( \frac{t^{k-2}}{(k-2)!} \vec{v}_1 + \cdots + t \vec{v}_{k-2} + \vec{v}_{k-1} \big)\\
&= \dot{\vec{\phi}}_{k}(t).
\end{align}$$
Notice that in the second last line we made use of the relations
$(A- \lambda I)\vec{v}_{i} = \vec{v}_{i-1}$.
This completes the proof since we have demonstrated that $\vec{\phi}_{k}(t)$ is a solution of the DE.
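The claim can be verified for a small case (a SymPy sketch with $k=3$ and an arbitrary $3 \times 3$ Jordan block, where the chain relations $(A-\lambda I)\vec{v}_i = \vec{v}_{i-1}$ hold by construction):

```python
import sympy as sp

t, lam = sp.symbols("t lambda")

# Arbitrary 3x3 Jordan block: one eigenvector v1 and chain vectors v2, v3
# with (A - lam*I) v2 = v1 and (A - lam*I) v3 = v2.
A = sp.Matrix([[lam, 1, 0],
               [0, lam, 1],
               [0, 0, lam]])
v1 = sp.Matrix([1, 0, 0])
v2 = sp.Matrix([0, 1, 0])
v3 = sp.Matrix([0, 0, 1])

# phi_k for k = 3: e^{lam t} (t^2/2 v1 + t v2 + v3).
phi3 = sp.exp(lam * t) * (t**2 / 2 * v1 + t * v2 + v3)

# phi3 must solve phi3' = A phi3.
assert sp.simplify(phi3.diff(t) - A * phi3) == sp.zeros(3, 1)
```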
***
##7.4. Problems