This is the Monte Carlo integral, where the $R_i$ are now sampled from $P(R)$.
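
As a toy illustration (my own sketch, not from the original text), the snippet below estimates $\int f(R) P(R)\, dR$ by averaging $f$ over samples drawn directly from a simple $P(R)$; the point of the following sections is how to sample when $P(R)$ is *not* directly sampleable:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy observable: <R^2> under a standard normal P(R) is exactly 1.
def f(R):
    return R**2

# Draw N samples directly from P(R); possible here because P is simple.
N = 100_000
samples = rng.standard_normal(N)

# Monte Carlo estimate of the integral  int f(R) P(R) dR  =  <f>_P
estimate = f(samples).mean()
print(estimate)  # close to 1.0 for large N
```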

This leaves us a practical question: how do we sample from $P(R)$? The answer lies in using Markov chains.

## Metropolis Monte Carlo

So what is a Markov chain? Consider a sequence of random states $\{X_i\} = X_1, X_2, ..., X_N$, in which the probability of choosing a new random state $X_{i+1}$ depends only on $X_i$, not on any earlier state. One transitions from one state to another with a certain probability, which we denote $T(X \rightarrow X^\prime)$. These transition probabilities are normalised: $\sum_{X^\prime}T(X \rightarrow X^\prime)=1$. This simply means that the chain always moves to *some* state: one of the available options (possibly the current state itself) is chosen with certainty. A small numerical example is sketched below.
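
Here is a minimal sketch (using a generic three-state chain of my own choosing, not one from the text) of such a normalised transition matrix and the chain it generates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Transition matrix of a 3-state Markov chain: T[i, j] = T(i -> j).
# Each row is normalised: sum_{X'} T(X -> X') = 1.
T = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])
assert np.allclose(T.sum(axis=1), 1.0)

# Generate the chain: X_{i+1} depends only on X_i (the Markov property).
X = 0
chain = [X]
for _ in range(10):
    X = rng.choice(3, p=T[X])
    chain.append(X)
print(chain)
```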

**Summary of Metropolis algorithm**

We can then write a rate equation for the probability $P_{i+1}(X)$ of finding the chain in state $X$ at step $i+1$:

$$P_{i+1}(X) = P_i(X) + \sum_{X^\prime} \left[ P_i(X^\prime)\, T(X^\prime \rightarrow X) - P_i(X)\, T(X \rightarrow X^\prime) \right]$$

The first term in the sum counts transitions into $X$, the second counts transitions out of it.

But we want this Markov chain to eventually sample a probability distribution, like the Boltzmann distribution. This means that we want the probability to be stationary:

$$P_{i+1}(X) = P_i(X)$$

The sum in the rate equation must then vanish, and a sufficient condition for this is that each pair of terms cancels individually:

$$P(X)\, T(X \rightarrow X^\prime) = P(X^\prime)\, T(X^\prime \rightarrow X)$$

This condition is very important: it is called **detailed balance**.
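
To make detailed balance concrete, here is a small numerical check (my own illustration, not part of the original notes): we build a transition matrix for a three-state chain using a symmetric proposal and the Metropolis acceptance rule introduced below, then verify that detailed balance holds and that the target distribution is indeed stationary:

```python
import numpy as np

# Target distribution over 3 states.
P = np.array([0.2, 0.3, 0.5])

# Build T satisfying detailed balance via the Metropolis choice
# T(X -> X') = w * min(1, P(X')/P(X)) for X != X',
# with a symmetric proposal probability w = 1/3 to each other state.
n, w = 3, 1.0 / 3.0
T = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            T[i, j] = w * min(1.0, P[j] / P[i])
    T[i, i] = 1.0 - T[i].sum()  # rejected moves stay put

# Detailed balance: P(X) T(X -> X') == P(X') T(X' -> X) for all pairs,
# i.e. the matrix P(X) T(X -> X') is symmetric.
assert np.allclose(P[:, None] * T, (P[:, None] * T).T)

# Stationarity follows: applying T leaves P unchanged.
assert np.allclose(P @ T, P)
print("detailed balance and stationarity hold")
```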

By choosing the right transition probabilities $T(X^\prime \rightarrow X)$, we can make the Markov chain sample any desired probability distribution $P(X)$, including the Boltzmann distribution. This leads to the question: how do we construct the $T$ terms to get the Boltzmann distribution?

The Metropolis algorithm uses the following ansatz: $T(X^\prime \rightarrow X) = \omega_{XX^\prime} \cdot A_{XX^\prime}$. Here, $\omega_{XX^\prime}$ is the probability of generating the new state; this is called 'generating a trial move'. Then, $A_{XX^\prime}$ is the probability of accepting this proposed trial move.

Put into words: the probability of going to a new state, $T(X^\prime \rightarrow X)$, is the product of the probability of proposing it, $\omega_{XX^\prime}$, and the probability of accepting it, $A_{XX^\prime}$.
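
As a concrete illustration, here is a minimal, self-contained sketch (my own, not from the original notes) of a Metropolis sampler for the 1D Boltzmann distribution $P(x) \propto e^{-\beta E(x)}$ with a harmonic potential $E(x) = x^2/2$. It uses a symmetric trial move for $\omega$ and the standard acceptance probability $A = \min(1, e^{-\beta \Delta E})$, which is the choice that the derivation below leads to:

```python
import numpy as np

rng = np.random.default_rng(1)

# Boltzmann sampling for a 1D harmonic potential E(x) = x^2 / 2.
# P(x) ~ exp(-beta * E(x)); the normalisation is never needed.
beta = 1.0

def energy(x):
    return 0.5 * x**2

x = 0.0
step = 1.0  # trial-move width; symmetric, so w_{XX'} = w_{X'X}
samples = []
for _ in range(50_000):
    x_new = x + rng.uniform(-step, step)  # symmetric trial move
    dE = energy(x_new) - energy(x)
    # Metropolis acceptance: A = min(1, exp(-beta * dE))
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        x = x_new
    samples.append(x)

print(np.var(samples))  # should approach 1/beta = 1 for this potential
```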

We usually demand that the trial moves be symmetric, $\omega_{XX^\prime}=\omega_{X^\prime X}$, which means that our detailed balance equation reduces to: