#### Simulations are costly and often require sampling a region in parameter space.
#### Choosing new points based on existing data improves the simulation efficiency.
<!-- examples here -->
#### We describe a class of algorithms relying on local criteria for sampling, which allow for easy parallelization and have low overhead.
<!-- This is useful for intermediate-cost simulations. -->
#### We provide a reference implementation, the Adaptive package, and demonstrate its performance.
# Review of adaptive sampling
#### Experiment design uses Bayesian sampling because the computational costs are not a limitation.
<!-- high dimensional functions -->
#### Plotting and low dimensional integration use local sampling.
<!-- can refer to Mathematica's implementation -->
#### PDE solvers and computer graphics use adaptive meshing.
<!-- hydrodynamics anisotropic meshing paper ref -->
# Design constraints and the general algorithm
#### We aim to sample low dimensional, low to intermediate cost functions in parallel.
<!-- because of curse of dimensionality -->
<!-- fast functions don't require adaptive sampling -->
<!-- When your function evaluation is very expensive, full-scale Bayesian sampling will perform better; however, there is a broad class of simulations that are in the right regime for Adaptive to be beneficial. -->
#### We propose to use a local loss function as a criterion for choosing the next point.
#### As an example, interpoint distance is a good loss function in one dimension.
<!-- Plot here -->
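To make the idea concrete, here is a minimal sketch of distance-based refinement in one dimension. The function `f` and all names are illustrative assumptions, not part of the Adaptive API: the loss of an interval is the Euclidean distance between its endpoints in the (x, f(x)) plane, and the worst interval is bisected.

```python
import math

def f(x):
    # assumed example function; any scalar 1D function works here
    return math.sin(10 * x)

def distance_loss(x_left, x_right):
    # Euclidean distance between neighboring points in the (x, f(x)) plane:
    # large where the function changes fast, so those regions get refined
    return math.hypot(x_right - x_left, f(x_right) - f(x_left))

def refine(a, b, n_points):
    xs = [a, b]
    while len(xs) < n_points:
        # bisect the interval with the largest loss
        i = max(range(len(xs) - 1),
                key=lambda i: distance_loss(xs[i], xs[i + 1]))
        xs.insert(i + 1, 0.5 * (xs[i] + xs[i + 1]))
    return xs

points = refine(0.0, 1.0, 20)
```

The linear scan over intervals is for clarity only; a priority queue removes it, as discussed below.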
#### In general local loss functions only have a logarithmic overhead.
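The logarithmic overhead can be sketched as follows (an illustrative assumption about the data structure, not the package's actual internals): because the loss is local, adding a point invalidates only the interval it splits, so a heap keyed on loss needs one pop and two pushes per point, each O(log n).

```python
import heapq

def refine_heap(a, b, loss, n_points):
    # Intervals live in a min-heap keyed on negated loss.  A new point
    # pops one interval and pushes its two halves -- O(log n) -- since a
    # local loss never changes the losses of the other intervals.
    xs = {a, b}
    heap = [(-loss(a, b), a, b)]
    while len(xs) < n_points:
        _, lo, hi = heapq.heappop(heap)      # interval with the largest loss
        mid = 0.5 * (lo + hi)
        xs.add(mid)
        heapq.heappush(heap, (-loss(lo, mid), lo, mid))
        heapq.heappush(heap, (-loss(mid, hi), mid, hi))
    return sorted(xs)
```

With the plain interval width as loss, this reduces to uniform bisection.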
#### With many points, due to the loss being local, parallel sampling incurs no additional cost.
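A hedged sketch of why parallelization is cheap (names and the batching strategy are assumptions for illustration): since interval losses are independent, a batch of workers can each be handed the midpoint of one of the k worst intervals.

```python
def choose_batch(intervals, losses, k):
    # Because the loss is local, the k worst intervals can be bisected
    # independently: each worker evaluates one midpoint concurrently.
    order = sorted(range(len(intervals)), key=lambda i: -losses[i])
    return [0.5 * (intervals[i][0] + intervals[i][1]) for i in order[:k]]
```

In practice pending points would be given a provisional loss so the queue is not drained twice from the same region, but the core point stands: no global recomputation is needed.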
# Loss function design
#### A failure mode of such algorithms is sampling only a small neighborhood of one point.
<!-- example of distance loss on singularities -->
#### A solution is to regularize the loss such that this is avoided.
<!-- like resolution loss which limits the size of an interval -->
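One possible shape of such a regularized loss, sketched under the assumption of a 1D distance loss (the thresholds and names are illustrative, not the package's): clamp the loss at both ends of the interval-size range so a singularity cannot absorb every sample.

```python
import math

def resolution_loss(x_left, x_right, y_left, y_right,
                    min_size=1e-3, max_size=0.2):
    dx = x_right - x_left
    if dx > max_size:
        return math.inf   # always subdivide intervals that are too coarse
    if dx < min_size:
        return 0.0        # never subdivide beyond the target resolution,
                          # so a singularity cannot trap all the points
    return math.hypot(dx, y_right - y_left)   # plain distance loss otherwise
```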
#### Adding loss functions allows for balancing between multiple priorities.
<!-- e.g. area + line simplification -->
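As a sketch of how a weighted sum of losses could look (the component losses and the weight are assumptions for illustration): a feature-seeking term is combined with a uniform term whose nonzero floor guarantees that every interval is eventually subdivided.

```python
import math

def distance_loss(xl, xr, yl, yr):
    # feature-seeking: large where the function changes fast
    return math.hypot(xr - xl, yr - yl)

def uniform_loss(xl, xr, yl, yr):
    # exploration term: depends only on the interval width
    return xr - xl

def combined_loss(xl, xr, yl, yr, w=0.8):
    # weighted sum of two local criteria: mostly feature-seeking, with a
    # uniform floor so no interval is starved forever
    return (w * distance_loss(xl, xr, yl, yr)
            + (1 - w) * uniform_loss(xl, xr, yl, yr))
```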
#### A desirable property is that eventually all points should be sampled.
<!-- exploration vs. exploitation -->
# Examples
## Line simplification loss
#### The line simplification loss is based on an inverse Visvalingam’s algorithm.
<!-- https://bost.ocks.org/mike/simplify/ -->
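Visvalingam's simplification repeatedly removes the point whose triangle with its two neighbors has the smallest area; running the idea in reverse, a sketch of the loss (function names are illustrative assumptions) refines where that triangle area is largest:

```python
def triangle_area(p1, p2, p3):
    # half the absolute cross product of the two edge vectors
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))

def next_refinement_index(xs, ys):
    # inverse Visvalingam: instead of removing the point whose triangle
    # with its neighbours is smallest, refine where the triangle is largest
    areas = [triangle_area((xs[i - 1], ys[i - 1]), (xs[i], ys[i]),
                           (xs[i + 1], ys[i + 1]))
             for i in range(1, len(xs) - 1)]
    return 1 + max(range(len(areas)), key=areas.__getitem__)
```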
## A parallelizable adaptive integration algorithm based on cquad
#### The `cquad` algorithm belongs to a class that is parallelizable.
## Isosurface sampling
# Implementation and benchmarks
<!-- API description -->
#### The learner abstracts a loss-based priority queue.
#### The runner orchestrates the function evaluation.
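A minimal sketch of the learner/runner split, assuming a simple width-based loss; the class, method names, and signatures are illustrative assumptions, not the actual Adaptive API:

```python
class SimpleLearner1D:
    """Sketch of a learner: ask() proposes the points with the highest
    loss, tell() records results.  Illustrative only."""

    def __init__(self, function, bounds):
        self.function = function
        self.bounds = bounds
        self.data = {}                      # x -> y

    def ask(self, n):
        # first the boundary points, then midpoints of the widest intervals
        missing = [x for x in self.bounds if x not in self.data]
        if missing:
            return missing[:n]
        xs = sorted(self.data)
        gaps = sorted(((xs[i + 1] - xs[i], 0.5 * (xs[i] + xs[i + 1]))
                       for i in range(len(xs) - 1)), reverse=True)
        return [mid for _, mid in gaps[:n]]

    def tell(self, x, y):
        self.data[x] = y

def run(learner, n_points):
    # the runner only orchestrates: ask for points, evaluate, tell back
    while len(learner.data) < n_points:
        for x in learner.ask(1):
            learner.tell(x, learner.function(x))
```

The same `ask`/`tell` loop works unchanged when the runner dispatches evaluations to a pool of workers instead of calling the function inline.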
# Possible extensions
#### Anisotropic triangulation would improve the algorithm.
#### Learning stochastic functions is a promising direction.
#### Experimental control needs to deal with noise, hysteresis, and the cost of changing parameters.