Commit 6972f322 authored by Michael Wimmer's avatar Michael Wimmer

fix typos

parent e4a3da65
@@ -111,17 +111,17 @@ probability distributions vanishes, and we keep on mostly sampling uninteresting
This effect directly shows in the weights. Let us demonstrate this using a simple example. Consider the case
where
$$p_\text{real}(x_1, \dots, x_d) = (2 \pi \sigma_\text{real}^2)^{-d/2} e^{-\frac{\sum_{k=1}^d x_k^2}{2\sigma_\text{real}^2}}$$
is a normal distribution with standard deviation $\sigma_\text{real}$. For the sampling distribution we also use
a normal distribution, but with a slightly different standard deviation $\sigma_\text{sampling}$:
$$p_\text{sampling}(x_1, \dots, x_d) = (2 \pi \sigma_\text{sampling}^2)^{-d/2} e^{-\frac{\sum_{k=1}^d x_k^2}{2\sigma_\text{sampling}^2}}\,.$$
We will now compute how the weights $p_\text{real}/p_\text{sampling}$ are distributed for different dimensionality
$d$. In the example below we have chosen $\sigma_\text{real} = 1$ and $\sigma_\text{sampling} = 0.9$, and we sample $N=10000$
configurations:
![Vanishing weights in high-dimensional space](figures/weights.svg)
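This experiment can be sketched in a few lines of Python. The code below is a minimal illustration, not the script that produced the figure; it writes the Gaussians with variance $\sigma^2$ and computes the weights in log space to avoid overflow:

```python
# Sketch: sample N configurations from the sampling Gaussian and compute the
# importance-sampling weights p_real/p_sampling in d dimensions.
import numpy as np

def weights(d, sigma_real=1.0, sigma_sampling=0.9, N=10000, rng=None):
    """Return the N weights p_real/p_sampling for d-dimensional Gaussians."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(scale=sigma_sampling, size=(N, d))
    r2 = np.sum(x**2, axis=1)
    # log of the ratio of the two Gaussian densities, incl. normalizations
    log_w = (d * np.log(sigma_sampling / sigma_real)
             + r2 / (2 * sigma_sampling**2) - r2 / (2 * sigma_real**2))
    return np.exp(log_w)

for d in (2, 20, 200):
    w = weights(d)
    # for large d the median drops far below the mean: most weights are tiny,
    # and the average is carried by a few rare configurations
    print(d, np.median(w), np.mean(w))
```

Since the weights are a ratio of normalized densities, their mean stays close to 1, but as $d$ grows the median collapses towards 0, which is exactly the skewness visible in the figure.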
We see that as the dimensionality increases, the distribution of weights gets more and more skewed towards 0. For a large dimensionality,
almost all weights are very close to 0! This means that most configurations contribute very little to the weighted average
in Eq. (3). In fact, this weighted average will be determined by the very few configurations that were by chance close to $p_\text{real}(R)$,
and we end up with few physically relevant configurations again.
So in high-dimensional configuration space approximate importance sampling will eventually break down. The only way out
@@ -189,7 +189,7 @@ $$x' = x + d \times \text{random number between -1 and 1}$$
This trial move has a symmetric probability for every distance $d$. Still, the choice of $d$ strongly determines the
correlation of the Markov chain, as demonstrated below (for these simulations we used $\sigma=0$ and 200 Markov steps):
![Dependence of Markov chains on acceptance ratio](figures/metropolis_acceptance_ratio.svg)
In these pictures the probability distribution $p(x)$ is indicated by the shades of grey. The values in the Markov chain are plotted in blue.
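A Metropolis walk with this trial move can be sketched as follows. This is only an illustration, not the code behind the figure: the target distribution (here a standard normal) and the values of $d$ are assumptions.

```python
# Sketch of Metropolis sampling with the symmetric trial move
# x' = x + d * (uniform random number in [-1, 1]).
import numpy as np

def metropolis_chain(d, n_steps=200, x0=0.0, rng=None):
    """Run n_steps Metropolis steps; return the chain and the acceptance ratio."""
    rng = np.random.default_rng() if rng is None else rng
    log_p = lambda x: -0.5 * x**2            # unnormalized log of a standard normal
    chain, x, accepted = [x0], x0, 0
    for _ in range(n_steps):
        x_new = x + d * rng.uniform(-1, 1)   # symmetric trial move
        # symmetric move => accept with probability min(1, p(x')/p(x))
        if rng.uniform() < np.exp(min(0.0, log_p(x_new) - log_p(x))):
            x, accepted = x_new, accepted + 1
        chain.append(x)                      # a rejected move repeats the old value
    return np.array(chain), accepted / n_steps

for d in (0.1, 1.0, 50.0):
    _, ratio = metropolis_chain(d)
    print(f"d = {d}: acceptance ratio = {ratio:.2f}")
```

Running this shows the trade-off discussed here: a tiny $d$ accepts almost every move but creeps through configuration space, while a very large $d$ rejects most moves, so the chain repeats the same value many times.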
When we choose too large a value of $d$ (left picture), we easily "overshoot" and land in a region with low probability. Hence, the resulting acceptance ratio
is small. But this leads to the Markov chain containing many repeated values, as we keep on using the old value if the move is rejected! This leads to