This is the Monte Carlo integral, where the $R_i$ are now sampled from $P(R)$.
This leaves us a practical question: how do we sample from $P(R)$? The answer lies in using Markov chains.
## Metropolis Monte Carlo
So what is a Markov chain? Consider a sequence of random states $\{X_i\} = X_1, X_2, \ldots, X_N$. The defining property is that the probability of choosing the next state $X_{i+1}$ depends only on the current state $X_i$, not on any earlier states. One transitions from one state to another with a certain probability, which we denote $T(X \rightarrow X^\prime)$. These probabilities are normalised, $\sum_{X^\prime}T(X \rightarrow X^\prime)=1$: from any state $X$, the chain must move to *some* state (possibly $X$ itself).
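As a concrete illustration, here is a minimal sketch of a two-state Markov chain in Python (the matrix `T` and its entries are arbitrary choices for this example). Note that each row of `T` sums to 1, as the normalisation condition requires:
```python
import numpy as np

# Transition probabilities T(X -> X') for a two-state chain (states 0 and 1).
# Each row sums to 1: from any state, the chain must move somewhere.
T = np.array([[0.9, 0.1],
              [0.5, 0.5]])

rng = np.random.default_rng(seed=0)

def step(x, T, rng):
    """Draw the next state X_{i+1}; it depends only on the current state x."""
    return rng.choice(len(T), p=T[x])

# Generate a chain: each new state looks only at the previous one.
chain = [0]
for _ in range(10_000):
    chain.append(step(chain[-1], T, rng))
```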
We can then write a rate equation for the probability $P(X, i+1)$ of finding the chain in state $X$ at step $i+1$:
$$P(X, i+1) = P(X, i) - \sum_{X^\prime}P(X, i)T(X \rightarrow X^\prime) + \sum_{X^\prime}P(X^\prime, i)T(X^\prime \rightarrow X)$$
Put in words: the probability of being in state $X$ at step $i+1$ is the probability of already being there at step $i$, minus the probability of leaving $X$ for some other state $X^\prime$, plus the probability of arriving in $X$ from some other state $X^\prime$.
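To see what this rate equation does in practice, we can iterate it numerically for the illustrative two-state chain above. Because $\sum_{X^\prime} T(X \rightarrow X^\prime) = 1$ (with the sum including $X^\prime = X$), the loss term cancels against $P(X, i)$ and the update reduces to a matrix-vector product:
```python
import numpy as np

T = np.array([[0.9, 0.1],   # same illustrative transition matrix as above
              [0.5, 0.5]])

# Since sum_X' T(X -> X') = 1, the rate equation simplifies to
# P(X, i+1) = sum_X' P(X', i) T(X' -> X), i.e. P <- P @ T.
P = np.array([1.0, 0.0])    # start with all probability in state 0

for i in range(50):
    P = P @ T
print(P)                    # approaches the stationary distribution (5/6, 1/6)
```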
But we want this Markov chain to eventually sample a probability distribution, like the Boltzmann distribution. This means that we want the probability to be stationary:
$$P(X, i+1)=P(X, i)=P(X)$$
This simplifies the rate equation to:
$$\sum_{X^\prime}P(X)T(X \rightarrow X^\prime) = \sum_{X^\prime}P(X^\prime)T(X^\prime \rightarrow X)$$
In turn, this is then solved by the condition:
$$P(X)T(X \rightarrow X^\prime) = P(X^\prime)T(X^\prime \rightarrow X)$$
This condition is very important: it is called **detailed balance**.
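As a quick numerical check (again using the arbitrary two-state chain from before), we can verify that a distribution satisfying detailed balance is indeed left unchanged by the rate equation:
```python
import numpy as np

T = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# For this T, P = (5/6, 1/6) satisfies detailed balance:
# P(0) T(0 -> 1) = (5/6)(0.1) = 1/12 = (1/6)(0.5) = P(1) T(1 -> 0).
P = np.array([5/6, 1/6])

# Detailed balance, entry by entry: P(X) T(X -> X') = P(X') T(X' -> X).
flow = P[:, None] * T
assert np.allclose(flow, flow.T)

# Detailed balance implies stationarity: one step of the rate equation
# returns P unchanged.
assert np.allclose(P @ T, P)
```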
We can generate any desired probability distribution $P(X)$, including the Boltzmann distribution, by choosing the right $T(X \rightarrow X^\prime)$. This raises the question: how do we construct the $T$ terms to get the Boltzmann distribution?
The Metropolis algorithm uses the following ansatz: $T(X \rightarrow X^\prime) = \omega_{XX^\prime} \cdot A_{XX^\prime}$. Here, $\omega_{XX^\prime}$ is the probability of generating the new state $X^\prime$ from $X$; this is called 'proposing a trial move'. Then, $A_{XX^\prime}$ is the probability of accepting this proposed trial move.
Put into words: the probability of moving to a new state, $T(X \rightarrow X^\prime)$, is the product of the probability of proposing it ($\omega_{XX^\prime}$) and the probability of accepting it ($A_{XX^\prime}$).
We usually demand that the trial moves are symmetric, $\omega_{XX^\prime}=\omega_{X^\prime X}$, which reduces the detailed balance equation to:
$$\frac{A_{X^\prime X}}{A_{XX^\prime}} = \frac{P(X)}{P(X^\prime)}$$
There are multiple solutions that fulfil this condition. The solution given by Metropolis is:
$$A_{XX^\prime}=\begin{cases}1 &\text{if} \ P(X^\prime) \geq P(X)\\ \frac{P(X^\prime)}{P(X)} &\text{if} \ P(X^\prime) < P(X)\end{cases}$$
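In code, this acceptance rule is essentially a one-liner. A minimal sketch, where `p_old` and `p_new` stand for $P(X)$ and $P(X^\prime)$ (the function name is our own choice):
```python
import numpy as np

def metropolis_accept(p_old, p_new, rng):
    """Accept the trial move with probability min(1, P(X')/P(X)):
    moves to more probable states are always taken, moves to less
    probable states are taken with probability P(X')/P(X)."""
    return p_new >= p_old or rng.random() < p_new / p_old

rng = np.random.default_rng(seed=1)
print(metropolis_accept(0.2, 0.8, rng))  # uphill move: always True
```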
Below, we present pseudocode for performing the Metropolis algorithm.
## Summary of Metropolis algorithm
1. Start with a state $X_i$
2. Generate a state $X'$ from $X_i$ (such that $\omega_{X_i,X'}=\omega_{X',X_i}$)
3. Accept the trial state with probability $A_{X_i X'} = \min\left(1, \frac{P(X')}{P(X_i)}\right)$: if accepted, set $X_{i+1} = X'$; if rejected, set $X_{i+1} = X_i$
4. Repeat from step 2 until enough samples have been collected
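Putting these steps together, here is a minimal, self-contained Python sketch that samples the Boltzmann distribution $P(x) \propto e^{-\beta E(x)}$ for a simple test energy (the function names, the harmonic potential, and all parameter values are our own illustrative choices):
```python
import numpy as np

def metropolis(energy, x0, n_steps, beta=1.0, step_size=0.5, seed=42):
    """Sample P(x) ~ exp(-beta * E(x)) with the Metropolis algorithm."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        # Step 2: symmetric trial move, so omega_{X,X'} = omega_{X',X}.
        x_new = x + rng.uniform(-step_size, step_size)
        # Step 3: accept with probability min(1, P(X')/P(X)) = min(1, e^{-beta*dE}).
        if rng.random() < np.exp(-beta * (energy(x_new) - energy(x))):
            x = x_new
        # Step 4: the current state (old or new) is the next chain element.
        samples[i] = x
    return samples

# Illustrative target: Boltzmann weights of a harmonic potential E(x) = x^2 / 2.
samples = metropolis(lambda x: 0.5 * x**2, x0=0.0, n_steps=100_000)
print(samples.mean(), samples.var())  # should approach 0 and 1/beta = 1
```
Note that rejected moves still append the *old* state to the chain; silently dropping them would bias the sampled distribution.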