#### Simulations are costly and often require sampling a region in parameter space.
In the computational sciences, one often performs costly simulations, represented by a function $f \colon X \to Y$, where a certain region $X$ in parameter space is sampled.
<!-- examples here -->
Frequently, the different points in $X$ can be independently calculated.
Even though it is suboptimal, one usually resorts to sampling $X$ on a homogeneous grid because of its simple implementation.
<!-- This is useful for intermediate-cost simulations. -->
#### Choosing new points based on existing data improves the simulation efficiency.
A better alternative is to choose new, potentially interesting points in $X$ based on existing data, which improves the simulation efficiency.
For example, a simple strategy for a one-dimensional function is to (1) construct intervals from neighboring data points, (2) calculate the Euclidean distance between the end points of each interval, and (3) pick the new point in the middle of the interval with the largest Euclidean distance.
Such a sampling strategy would readily speed up many simulations.
One of the most significant complications is parallelizing this algorithm, because it requires a lot of bookkeeping and planning ahead.
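As an illustration, a deliberately naive, serial version of this strategy fits in a few lines of Python; the sketch below uses names of our own choosing and is not the implementation we present later.

```python
from math import hypot

def sample_by_largest_distance(f, xs, n):
    """Naive sketch: repeatedly bisect the interval whose end points
    lie furthest apart in the (x, y) plane."""
    data = {x: f(x) for x in xs}  # a few seed points
    for _ in range(n):
        xs_sorted = sorted(data)
        # (1) intervals of neighboring points, (2) their Euclidean distances
        distances = [
            (hypot(b - a, data[b] - data[a]), a, b)
            for a, b in zip(xs_sorted[:-1], xs_sorted[1:])
        ]
        # (3) place the new point in the middle of the largest interval
        _, a, b = max(distances)
        x_new = (a + b) / 2
        data[x_new] = f(x_new)
    return data
```

Because each new point depends on all previously computed values, evaluating $f$ at several points concurrently requires choosing those points before the earlier results are available.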
#### We describe a class of algorithms relying on local criteria for sampling, which allow for easy parallelization and have a low overhead.
In this paper, we describe a class of algorithms that rely on local criteria for sampling, such as in the previous simple example.
Here we associate a local *loss* with each of the *intervals* (containing neighboring points), and choose new points inside the interval with the largest loss.
We can then easily quantify how well the data describes the underlying function by summing all the losses, which allows us to define stopping criteria.
The advantage of these algorithms is that they allow for easy parallelization and have a low computational overhead.
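Leaving parallelization and incremental loss updates aside, such a loss-based loop can be sketched as follows (again with illustrative names, recomputing all losses at every step for brevity):

```python
def sample_until(f, xs, loss, loss_goal):
    """Bisect the interval with the largest local loss until the
    summed loss drops below `loss_goal` (illustrative sketch)."""
    data = {x: f(x) for x in xs}
    while True:
        xs_sorted = sorted(data)
        intervals = list(zip(xs_sorted[:-1], xs_sorted[1:]))
        losses = [loss(a, b, data[a], data[b]) for a, b in intervals]
        if sum(losses) < loss_goal:  # stopping criterion
            return data
        a, b = intervals[losses.index(max(losses))]
        x_new = (a + b) / 2
        data[x_new] = f(x_new)
```

Because the loss is local, adding a point only changes the losses of the adjacent intervals, so an efficient implementation needs to update only those, and several pending points can be requested at once.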
#### We provide a reference implementation, the Adaptive package, and demonstrate its performance.
We provide a reference implementation, the open-source Python package called Adaptive [@Nijholt2019a].
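For instance, adaptively sampling a one-dimensional function with Adaptive takes only a few lines; the snippet below uses the package's basic `Learner1D` and runner interface, with an arbitrary test function and loss goal of our own choosing.

```python
import adaptive

def f(x):
    return x + 0.01 / (0.01 + x**2)  # arbitrary test function with a sharp feature

learner = adaptive.Learner1D(f, bounds=(-1, 1))
# Run sequentially until the learner's loss estimate is small enough;
# adaptive.Runner can instead evaluate the requested points in parallel.
adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.01)
# learner.data now maps the sampled x values to f(x).
```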
# Review of adaptive sampling
...
# Design constraints and the general algorithm
#### We aim to sample low-dimensional functions of low to intermediate cost in parallel.
<!-- because of the curse of dimensionality -->
<!-- fast functions do not require adaptive -->
<!-- When your function evaluation is very expensive, full-scale Bayesian sampling will perform better; however, there is a broad class of simulations that are in the right regime for Adaptive to be beneficial. -->
#### We propose to use a local loss function as a criterion for choosing the next point.
#### As an example, interpoint distance is a good loss function in one dimension.
<!-- Plot here -->
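Written as a function of a single interval, this loss is simply the Euclidean distance between the interval's end points (an illustrative definition; the argument names are ours):

```python
from math import hypot

def distance_loss(x_left, x_right, y_left, y_right):
    """Euclidean distance between the end points of an interval in the (x, y) plane."""
    return hypot(x_right - x_left, y_right - y_left)
```

An interval over a flat part of the function then has a loss equal to its width, while an interval over a steep feature has a much larger loss and is therefore refined first.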
...
#### A failure mode of such algorithms is sampling only a small neighborhood of one point.
<!-- example of distance loss on singularities -->
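For example, with the distance loss alone, a function that diverges (or jumps) at a point keeps producing a large loss in the intervals next to that point, no matter how small they become; a hypothetical illustration, reusing the naive sketch from the introduction:

```python
def f_divergent(x):
    return 1 / abs(x - 1 / 3)  # diverges at x = 1/3 (illustrative)

# Nearly all of the requested points end up crowded around the singularity,
# while the rest of the domain remains poorly sampled.
data = sample_by_largest_distance(f_divergent, xs=[-1, 1], n=100)
```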
#### A solution is to regularize the loss so that this is avoided.
<!-- like resolution loss which limits the size of an interval -->
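A hand-written analogue of such a resolution loss could look like the sketch below (the cutoff values are arbitrary and the package's built-in variant may differ):

```python
from math import hypot

def resolution_loss(x_left, x_right, y_left, y_right,
                    min_size=1e-3, max_size=0.2):
    """Distance loss regularized by the interval size: intervals smaller
    than `min_size` are never refined further, while intervals larger
    than `max_size` are always refined first (illustrative sketch)."""
    size = x_right - x_left
    if size < min_size:
        return 0.0
    if size > max_size:
        return float("inf")
    return hypot(size, y_right - y_left)
```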
#### Adding loss functions allows for balancing between multiple priorities.
<!-- i.e. area + line simplification -->
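For instance, two local criteria can be combined into a single loss with weights that set their relative importance; the one-dimensional sketch below (with arbitrary weights) combines the distance criterion with a term that favors uniform coverage, and a two-dimensional combination, such as triangle area plus a line-simplification criterion, follows the same pattern.

```python
from math import hypot

def combined_loss(x_left, x_right, y_left, y_right,
                  w_distance=1.0, w_uniform=0.5):
    """Weighted sum of two local criteria: the Euclidean distance of the
    interval and its plain width, which favors uniform coverage."""
    distance = hypot(x_right - x_left, y_right - y_left)
    uniform = x_right - x_left
    return w_distance * distance + w_uniform * uniform
```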
#### A desirable property is that all points should eventually be sampled.
<!-- exploration vs. exploitation -->
# Examples
...
#### Anisotropic triangulation would improve the algorithm.
#### Learning stochastic functions is a promising direction.
#### Experimental control needs to deal with noise, hysteresis, and the cost for changing parameters.