where $x_i = a + \left(i-\frac{1}{2}\right) \frac{b-a}{N}$. This is the simplest approximation.

A different approach to evaluating the integral can be taken if we use concepts from the theory of random variables. To this end, consider a probability distribution

$p(x)$ of a random variable $x$ (then $\int dx p(x) = 1$). We can compute the expectation value of a function $f(x)$ of this random variable as

$$\int p(x)f(x)dx \underbrace{\approx}_{\text{$x_i$ sampled from $p(x)$}}\frac{1}{N}\sum_i f(x_i) + \mathcal{O}\!\left(\frac{1}{\sqrt{N}}\right)\,. \tag{2}$$

Here we are approximating the expectation value by taking a finite sample of the random variable $x$: we "sample" $x_i$ from the probability

distribution $p(x)$.
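As an illustration of Eq. (2), here is a minimal NumPy sketch (not part of the original notes); the choice of $p(x)$ as a standard normal and $f(x) = x^2$, whose exact expectation value is 1, is ours:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_expectation(f, sample, n):
    """Estimate ∫ p(x) f(x) dx by averaging f over n samples x_i drawn from p."""
    x = sample(n)
    return np.mean(f(x))

# Example: p(x) = standard normal, f(x) = x^2, so the exact expectation value is 1.
for n in [100, 10_000, 1_000_000]:
    est = mc_expectation(lambda x: x**2, lambda m: rng.normal(size=m), n)
    print(f"N = {n:>9}: estimate = {est:.4f}, error = {abs(est - 1.0):.1e}")
```

Note that halving the error requires roughly four times as many samples, reflecting the $\mathcal{O}(1/\sqrt{N})$ scaling in Eq. (2).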

...

...

probability distribution that is concentrated in the physically relevant space.

### Sampling *almost* the right probability distribution

Let us consider the general case of computing the expectation value of a function $A(R)$ for a random variable distributed according to $p_\text{real}(R)$.

(In the previous example, $p_\text{real}(R) = e^{-\beta H(R)}/Z$. We consider the general case here.)

We then have to calculate the integral

$$\int p_\text{real}(R) A(R) dR\,.$$

Ideally, we would now generate random variables $R_i$ that are distributed according to $p_\text{real}(R)$. However, this may be impractical. In this

...

...

In this way we can make sure to focus approximately on the physically relevant configurations.

We will now compute how the weights $p_\text{real}/p_\text{sampling}$ are distributed for different dimensionalities

$d$. In the example below we have chosen $\sigma_\text{real} = 1$ and $\sigma_\text{sampling} = 0.9$ and sample $N=10000$

configurations:

...

...
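A sketch of this experiment, assuming both distributions are isotropic Gaussians; the dimensions $d \in \{1, 50, 200\}$ and the use of the effective sample size $1/\sum_i w_i^2$ as a summary of the weight distribution are our choices:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_real, sigma_sampling, N = 1.0, 0.9, 10_000

ess_by_d = {}
for d in [1, 50, 200]:
    # Draw N configurations R from the d-dimensional sampling Gaussian.
    R = rng.normal(scale=sigma_sampling, size=(N, d))
    r2 = np.sum(R**2, axis=1)
    # log of the weight p_real(R) / p_sampling(R) for isotropic Gaussians
    log_w = d * np.log(sigma_sampling / sigma_real) \
            + 0.5 * r2 * (1.0 / sigma_sampling**2 - 1.0 / sigma_real**2)
    w = np.exp(log_w - log_w.max())   # subtract the max for numerical stability
    w /= w.sum()                      # normalized weights
    ess_by_d[d] = 1.0 / np.sum(w**2)  # effective sample size
    print(f"d = {d:3d}: effective sample size = {ess_by_d[d]:.0f} of {N}")
```

As $d$ grows, the weights concentrate on ever fewer configurations and the effective sample size collapses, even though the two distributions differ only slightly in each dimension.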

We can thus generate a desired probability distribution $p(R)$ by choosing the right transition probabilities $T(R \rightarrow R^\prime)$. But how do we choose them for a given

probability distribution? The Metropolis algorithm solves this problem!

The Metropolis algorithm uses the following ansatz: $T(R \rightarrow R^\prime) = \omega_{RR^\prime} \cdot A_{RR^\prime}$. Here, the generation of the

new state in the Markov chain is split into two phases. First, starting from the previous state $R = R_i$, we generate a candidate state $R^\prime$

with probability $\omega_{RR^\prime}$. This is the so-called "trial move". We then accept this trial move with probability $A_{RR^\prime}$,

i.e. set $R_{i+1} = R^\prime$. If we don't accept it, we take the old state again, $R_{i+1} = R$. Altogether, the probability of going to a new state ($T(R \rightarrow R^\prime)$)

is the product of proposing it ($\omega_{RR^\prime}$) and accepting it ($A_{RR^\prime}$).
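A minimal sketch of this two-phase update (the function names are ours), assuming a symmetric Gaussian trial move and the standard Metropolis acceptance $A_{RR^\prime} = \min(1, p(R^\prime)/p(R))$:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis(log_p, x0, n_steps, step_size=0.5):
    """Sample from p(R) ∝ exp(log_p(R)) with symmetric Gaussian trial moves
    and acceptance probability A = min(1, p(R')/p(R))."""
    x = x0
    samples = []
    for _ in range(n_steps):
        x_new = x + step_size * rng.normal()   # phase 1: propose trial move
        # phase 2: accept with probability min(1, p(x_new)/p(x))
        if np.log(rng.uniform()) < log_p(x_new) - log_p(x):
            x = x_new                          # accepted: R_{i+1} = R'
        samples.append(x)                      # rejected: keep R_{i+1} = R
    return np.array(samples)

# Example: sample a standard normal, p(R) ∝ exp(-R^2/2)
chain = metropolis(lambda x: -0.5 * x**2, x0=0.0, n_steps=50_000)
print(chain.mean(), chain.std())   # both should approach 0 and 1
```

Working with $\log p$ rather than $p$ avoids overflow and, since only the ratio $p(R^\prime)/p(R)$ enters, the normalization of $p$ is never needed.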

The problem can be further simplified by demanding that $\omega_{RR^\prime}=\omega_{R^\prime R}$ - the trial move should have a symmetric probability of going from $R$ to $R^\prime$