Commit 3c978a03 authored by Michael Wimmer

replace remaining X with R

parent e83a5d18
Pipeline #58157 passed with stages in 1 minute and 37 seconds
@@ -138,7 +138,7 @@ This leaves us a practical question: Can we sample exactly from an arbitrary $p(
So what is a Markov chain? Consider a sequence of random states $\{R_i\} = R_1, R_2, ..., R_N$, where the probability of choosing a new random state $R_{i+1}$ depends only on $R_i$, not on any earlier states. One can transition from one state to another with a certain probability, which we denote as $T(R \rightarrow R^\prime)$. This transition probability is normalised: $\sum_{R^\prime}T(R \rightarrow R^\prime)=1$. This simply means: you have to go to some state.
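As an illustration (this example is not part of the original notes, and the transition matrix below is made up), here is a minimal sketch of such a Markov chain for a three-state system: every row of $T$ sums to one, and each new state is drawn using only the current state.

```python
import numpy as np

# Hypothetical 3-state system; the numbers are made up for illustration.
# Row R contains T(R -> R'), so every row must sum to 1 ("you have to go to some state").
T = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
assert np.allclose(T.sum(axis=1), 1.0)

rng = np.random.default_rng(0)

# Generate the chain R_1, R_2, ..., R_N: the next state depends only on the current one.
N = 10_000
R = np.empty(N, dtype=int)
R[0] = 0
for i in range(N - 1):
    R[i + 1] = rng.choice(3, p=T[R[i]])

# The histogram of visited states approximates the stationary distribution of T.
print(np.bincount(R, minlength=3) / N)
```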
We can then write a rate equation for the probability $p(R, i+1)$ of being in state $R$ at step $i+1$:
$$p(R, i+1) = p(R, i) - \sum_{R^\prime}p(R, i)T(R \rightarrow R^\prime) + \sum_{R^\prime}p(R^\prime, i)T(R^\prime \rightarrow R)$$
But we want this Markov chain to eventually sample our probability distribution $p(R)$. This means that we want the probability to be stationary, such that we always sample the same distribution:
@@ -155,14 +155,14 @@ probability distribution? The Metropolis algorithm solves this problem!
The Metropolis algorithm uses the following ansatz: $T(R \rightarrow R^\prime) = \omega_{RR^\prime} \cdot A_{RR^\prime}$. Here, the generation of the
new state in the Markov chain is split into two phases. First, starting from the previous state $R = R_i$, we generate a candidate state $R^\prime$
with probability $\omega_{RR^\prime}$. This is the so-called "trial move". We then accept this trial move with probability $A_{RR^\prime}$,
i.e. set $R_{i+1} = R'$. If we don't accept it, we take the old state again, $R_{i+1} = R$. Altogether, the probability of going to a new state ($T(R \rightarrow R^\prime)$)
is the product of proposing it ($\omega_{RR^\prime}$) and accepting it ($A_{RR^\prime}$).
The problem can be further simplified by demanding that $\omega_{RR^\prime}=\omega_{R^\prime R}$ - the trial move should have a symmetric probability of going from $R$ to $R^\prime$
and vice versa. Substituting the ansatz into the detailed balance condition $p(R)\,T(R \rightarrow R^\prime) = p(R^\prime)\,T(R^\prime \rightarrow R)$ and cancelling the symmetric $\omega$'s, the detailed balance equation reduces to:
$$\frac{A_{R^\prime R}}{A_{RR^\prime}} = \frac{p(R)}{p(R^\prime)} \tag{4}$$
Metropolis \emph{et al.} solved this as:
$$A_{RR^\prime}=\begin{cases}1 &\text{if} \ p(R^\prime) > p(R)\\ \\ \frac{p(R^\prime)}{p(R)} &\text{if} \ p(R^\prime) < p(R)\end{cases}.\tag{5}$$
The key to understanding why Eq. (5) solves (4) is by observing that $A_{RR'}$ and $A_{R'R}$ will necessarily be given by different cases in Eq. (5): if, say, $p(R') < p(R)$, then $A_{RR'} = \frac{p(R')}{p(R)}$ while the reverse move falls in the first case, $A_{R'R} = 1$, so their ratio is $\frac{p(R)}{p(R')}$, exactly as Eq. (4) demands.
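As a quick numerical sanity check (not part of the original notes; the target distribution and the uniform trial move below are arbitrary choices), the following sketch builds $T(R \rightarrow R^\prime) = \omega_{RR^\prime} A_{RR^\prime}$ for a small discrete system and verifies detailed balance directly:

```python
import numpy as np

# Made-up target distribution over 4 discrete states (any positive weights work).
p = np.array([0.1, 0.4, 0.2, 0.3])
n = len(p)

# Symmetric trial move: propose any *other* state with equal probability,
# so omega_{RR'} = omega_{R'R} = 1/(n-1) for R != R'.
omega = np.full((n, n), 1.0 / (n - 1))
np.fill_diagonal(omega, 0.0)

# Metropolis acceptance, Eq. (5): A_{RR'} = min(1, p(R')/p(R)).
A = np.minimum(1.0, p[np.newaxis, :] / p[:, np.newaxis])

# T(R -> R') = omega_{RR'} * A_{RR'} for R != R'; the probability of staying put
# is 1 minus the row sum and plays no role in detailed balance.
T = omega * A

# Detailed balance: p(R) T(R -> R') must equal p(R') T(R' -> R) for every pair.
flow = p[:, np.newaxis] * T
assert np.allclose(flow, flow.T)
print("detailed balance holds")
```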
### Summary of Metropolis algorithm
@@ -170,7 +170,7 @@ The key to understanding why Eq. (5) solves (4) is by observing that $A_{RR'}$ a
1. Start with a state $R_i$
2. Generate a state $R'$ from $R_i$ (such that $\omega_{R_i,R'}=\omega_{R',R_i}$)
3. If $p(R') > p(R_i)$, set $R_{i+1}=R'$ <br>
If $p(R') < p(R_i)$, set $R_{i+1}=R'$ with probability $\frac{p(R')}{p(R_i)}$<br>
else set $R_{i+1} = R_i$.
4. Continue with 2.
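To make these steps concrete, here is a minimal sketch of the algorithm in Python (not part of the original notes): the target $p(R) \propto e^{-R^2/2}$, the Gaussian trial move, and the step size are made-up choices, the only requirement being that the trial move is symmetric.

```python
import numpy as np

rng = np.random.default_rng(1)

def p(R):
    # Unnormalised target distribution p(R) ~ exp(-R^2/2);
    # the normalisation cancels in the ratio p(R')/p(R_i).
    return np.exp(-0.5 * R**2)

def metropolis(p, R0, n_steps, step_size=0.5):
    samples = np.empty(n_steps)
    R = R0                                  # 1. start with a state R_i
    for i in range(n_steps):
        Rp = R + step_size * rng.normal()   # 2. symmetric trial move: R' from R_i
        # 3. accept if p(R') > p(R_i), otherwise with probability p(R')/p(R_i)
        if p(Rp) > p(R) or rng.random() < p(Rp) / p(R):
            R = Rp                          #    accepted: R_{i+1} = R'
        # else: rejected, keep R_{i+1} = R_i
        samples[i] = R
    return samples                          # 4. continue with 2. until n_steps

samples = metropolis(p, R0=0.0, n_steps=50_000)
print(samples.mean(), samples.var())        # should be close to 0 and 1
```

In practice one would usually also discard the first part of the chain (equilibration) and tune the step size so that a reasonable fraction of trial moves is accepted; the sketch above omits both for brevity.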