@@ -36,7 +36,12 @@ If the goal of the simulation is to approximate a continuous function with the l
Such a sampling strategy would trivially speed up many simulations.
One of the most significant complications here is parallelizing this algorithm, as it requires a great deal of bookkeeping and planning ahead.
](figures/algo.pdf){#fig:algo}
#### We describe a class of algorithms relying on local criteria for sampling, which allow for easy parallelization and have a low overhead.
@@ -54,12 +59,6 @@ Here we associate a *local loss* to each of the *candidate points* within an int
In the case of the integration algorithm, the loss is the error estimate.
The most significant advantage of these *local* algorithms is that they allow for easy parallelization and have a low computational overhead.
](figures/loss_1D.pdf){#fig:loss_1D}
When the features are homogeneously spaced, such as with the wave packet, adaptive sampling is not as effective as in the other cases.](figures/adaptive_vs_grid.pdf){#fig:adaptive_vs_grid}
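
To make the local-loss idea concrete, below is a minimal sketch of such an algorithm for a one-dimensional function. The distance-based loss, the midpoint candidate points, and the test function are illustrative assumptions, not the reference implementation described in the paper; any per-interval criterion, such as the integration error estimate mentioned above, could be substituted for `local_loss`.

```python
import math


def local_loss(x_left, x_right, y_left, y_right):
    # Local criterion for one interval: the length of the line segment
    # between its two end points.  Because it depends only on the interval
    # itself, adding a point changes the losses of at most two intervals.
    return math.hypot(x_right - x_left, y_right - y_left)


def sample(f, a, b, n_points):
    """Sample f on [a, b] by repeatedly bisecting the worst interval."""
    xs = [a, b]
    ys = [f(a), f(b)]
    while len(xs) < n_points:
        # Loss of every interval; a real implementation would update only
        # the intervals touching the most recently added point.
        losses = [local_loss(xs[i], xs[i + 1], ys[i], ys[i + 1])
                  for i in range(len(xs) - 1)]
        # The candidate point is the midpoint of the interval with the
        # largest loss.
        i = max(range(len(losses)), key=losses.__getitem__)
        x_new = (xs[i] + xs[i + 1]) / 2
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    return xs, ys


if __name__ == "__main__":
    # A function with a sharp feature near x = 0; the sampler concentrates
    # points there instead of spacing them uniformly.
    xs, ys = sample(lambda x: x + 0.1 * math.atan(x / 1e-3), -1.0, 1.0, 50)
    print("smallest interval:", min(b - a for a, b in zip(xs, xs[1:])))
```

Because each loss is local to one interval, a batch of candidate points for parallel workers can be requested by, for example, repeatedly picking the worst interval and provisionally splitting it before the function values arrive; only the affected losses then need to be recomputed, which keeps the overhead low.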