computational_physics / lectures — commit e83a5d18, "shorter text in math"
Authored Mar 22, 2021 by Michael Wimmer (parent fbf2fa89). Pipeline #58156 passed with stages in 1 minute and 41 seconds.

src/proj2-monte-carlo.md
...
...
@@ -81,17 +81,17 @@ In this way we can make sure to approximately focus on the physically relevant c
Doing this requires us to rewrite the integral as
$$
\begin{eqnarray}
\int p_\text{real}(R) A(R) dR = & \int p_\text{sample}(R) \underbrace{\frac{p_\text{real}(R)}{p_\text{sample}(R)}}_{=w(R)} A(R) dR \\
= & \int p_\text{sample}(R) w(R) A(R) dR \\
\approx & \frac{1}{N} \sum_{i=1}^N w(R_i) A(R_i) \tag{3}
\end{eqnarray}
$$
where the configurations $R_i$ are now sampled from $p_\text{sample}(R)$. When using this approximate probability distribution we thus have to introduce *weights* $w(R)$ into the average.
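Equation (3) translates directly into code. The sketch below is a hypothetical 1D toy, not part of the lecture: we take $p_\text{real}$ to be a standard normal, $p_\text{sample}$ a normal with standard deviation 0.9, and the observable $A(x) = x^2$, whose exact expectation under $p_\text{real}$ is 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D toy: p_real = N(0, 1), p_sample = N(0, 0.9^2),
# observable A(x) = x^2 with exact expectation value 1.
sigma_real, sigma_sample = 1.0, 0.9
N = 100_000

def normal_pdf(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

x = rng.normal(0.0, sigma_sample, N)                         # R_i ~ p_sample
w = normal_pdf(x, sigma_real) / normal_pdf(x, sigma_sample)  # weights w(R_i)
estimate = np.mean(w * x**2)                                 # Eq. (3)
print(estimate)                                              # close to 1
```

Without the weights, `np.mean(x**2)` would estimate the variance of $p_\text{sample}$ (0.81) instead of the intended expectation under $p_\text{real}$.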
### Why approximate importance sampling eventually fails
Approximate importance sampling is an attractive way of sampling: if we have a convenient and computationally efficient
$p_\text{sample}$ we can apply the Monte Carlo integration and seemingly focus on the relevant part of the configuration space.
Unfortunately, this approach becomes increasingly worse as the dimension of the configuration space increases. This is related to
the very counter-intuitive fact that in high-dimensional space "all the volume is near the surface". This defies our intuition that
...
...
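The "volume near the surface" claim above is easy to verify numerically: since the volume of a $d$-dimensional ball scales as $r^d$, the fraction of a unit ball's volume lying within a thin shell of thickness $\epsilon$ below the surface is $1 - (1-\epsilon)^d$, which approaches 1 as $d$ grows. A quick check (not from the lecture):

```python
# Fraction of a d-dimensional unit ball's volume within a shell of
# thickness eps below the surface: 1 - (1 - eps)**d (volume scales as r**d).
eps = 0.01
fractions = {d: 1 - (1 - eps)**d for d in (1, 10, 100, 1000)}
for d, frac in fractions.items():
    print(f"d={d:5d}: {frac:.4f}")
```

Already at $d=1000$, essentially all of the volume sits in the outermost 1% of the radius.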
@@ -115,10 +115,10 @@ This effect directly shows in the weights. Let us demonstrate this using a simpl
where
$$p_\text{real}(x_1, \dots, x_d) = (2\pi\sigma_\text{real}^2)^{-d/2} e^{-\frac{\sum_{k=1}^d x_k^2}{2\sigma_\text{real}^2}}$$
is a normal distribution with standard deviation $\sigma_\text{real}$. For the sampling distribution we use
also a normal distribution, but with a slightly different standard deviation $\sigma_\text{sample}$:
$$p_\text{sample}(x_1, \dots, x_d) = (2\pi\sigma_\text{sample}^2)^{-d/2} e^{-\frac{\sum_{k=1}^d x_k^2}{2\sigma_\text{sample}^2}}\,.$$
We will now compute how the weights $p_\text{real}/p_\text{sample}$ are distributed for different dimensionality
$d$. In the example below we have chosen $\sigma_\text{real} = 1$ and $\sigma_\text{sample} = 0.9$ and sampling over $N=10000$
configurations:
![Vanishing weights in high-dimensional space](figures/weights.svg)
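This experiment can be reproduced with a few lines of NumPy (a minimal sketch using the parameters from the text; the density ratio is evaluated in log space to avoid under- and overflow, and the variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_real, sigma_sample = 1.0, 0.9   # parameters from the text
N = 10_000                            # number of sampled configurations

def weights(d):
    """Weights w = p_real/p_sample for N points sampled from p_sample in d dims."""
    x = rng.normal(0.0, sigma_sample, size=(N, d))
    s = np.sum(x**2, axis=1)
    # log(p_real/p_sample) for a product of d one-dimensional Gaussians
    log_w = (d * np.log(sigma_sample / sigma_real)
             + s / (2 * sigma_sample**2) - s / (2 * sigma_real**2))
    return np.exp(log_w)

results = {d: weights(d) for d in (1, 10, 100)}
for d, w in results.items():
    print(f"d={d:4d}  median weight={np.median(w):.3f}  max weight={np.max(w):.1f}")
```

As $d$ grows, the typical (median) weight shrinks while a handful of samples carry very large weights, so the effective number of samples collapses.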
We see that as dimensionality increases, the distribution of weights gets more and more skewed towards 0. For a large dimensionality,
...
...