From b2d707b651a549d4c10358f3f735f46a0136a350 Mon Sep 17 00:00:00 2001
From: Bas Nijholt <basnijholt@gmail.com>
Date: Wed, 21 Aug 2019 18:30:27 +0200
Subject: [PATCH] add periods

---
 paper.md | 46 +++++++++++++++++++++++-----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/paper.md b/paper.md
index 1142fb8..3686431 100755
--- a/paper.md
+++ b/paper.md
@@ -16,84 +16,84 @@ contribution: |
 
 # Introduction
 
-#### Simulations are costly and often require sampling a region in parameter space
+#### Simulations are costly and often require sampling a region in parameter space.
 
-#### Chosing new points based on existing data improves the simulation efficiency
+#### Choosing new points based on existing data improves the simulation efficiency.
 <!-- examples here -->
 
-#### We describe a class of algorithms replying on local criteria for sampling which allow for easy parallelization and have a low overhead
+#### We describe a class of algorithms relying on local criteria for sampling, which allow for easy parallelization and have a low overhead.
 <!-- This is useful for intermediary cost simulations. -->
 
-#### We provide a reference implementation, the Adaptive package, and demonstrate its performance
+#### We provide a reference implementation, the Adaptive package, and demonstrate its performance.
 
 # Review of adaptive sampling
 
-#### Experiment design uses Bayesian sampling because the computational costs are not a limitation
+#### Experiment design uses Bayesian sampling because the computational costs are not a limitation.
 <!-- high dimensional functions -->
 
-#### Plotting and low dimensional integration uses local sampling
+#### Plotting and low dimensional integration use local sampling.
 <!-- can refer to Mathematica's implementation -->
 
-#### PDE solvers and computer graphics use adaptive meshing
+#### PDE solvers and computer graphics use adaptive meshing.
 <!-- hydrodynamics anisotropic meshing paper ref -->
 
 # Design constraints and the general algorithm
 
-#### We aim to sample low dimensional low to intermediate cost functions in parallel
+#### We aim to sample low dimensional, low to intermediate cost functions in parallel.
 <!-- because of curse of dimensionality -->
 <!-- fast functions don't require adaptive -->
 <!-- When your function evaluation is very expensive, full-scale Bayesian sampling will perform better, however, there is a broad class of simulations that are in the right regime for Adaptive to be beneficial. -->
 
-#### We propose to use a local loss function as a criterion for chosing the next point
+#### We propose to use a local loss function as a criterion for choosing the next point.
 
-#### As an example interpoint distance is a good loss function in one dimension
+#### As an example, interpoint distance is a good loss function in one dimension.
 <!-- Plot here -->
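+
+A minimal sketch of this idea (the helper names below are illustrative, not the package's internals): every pair of neighboring samples gets a loss equal to their distance in the (x, f(x)) plane, and the next point bisects the pair with the largest loss.
+
+```python
+import math
+
+
+def interpoint_loss(p_left, p_right):
+    """Distance between two neighboring samples in the (x, f(x)) plane."""
+    (x0, y0), (x1, y1) = p_left, p_right
+    return math.hypot(x1 - x0, y1 - y0)
+
+
+def choose_next(points):
+    """Bisect the interval whose endpoints are furthest apart."""
+    points = sorted(points)  # list of (x, y) samples
+    worst = max(zip(points, points[1:]), key=lambda pair: interpoint_loss(*pair))
+    (x0, _), (x1, _) = worst
+    return (x0 + x1) / 2
+```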
 
-#### In general local loss functions only have a logarithmic overhead
+#### In general, local loss functions only have a logarithmic overhead.
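+
+A sketch of where the logarithm comes from (illustrative only; the loss here is simply the interval width): keeping the candidate intervals in a heap keyed on loss makes selecting the worst interval and reinserting its two halves O(log N) per evaluated point.
+
+```python
+import heapq
+
+
+def refine(f, a, b, n_points):
+    """Greedily bisect the interval with the largest loss (here: its width)."""
+    data = {a: f(a), b: f(b)}
+    heap = [(-(b - a), a, b)]  # negate the loss: heapq is a min-heap
+    for _ in range(n_points):
+        _, lo, hi = heapq.heappop(heap)  # worst interval, O(log N)
+        mid = (lo + hi) / 2
+        data[mid] = f(mid)
+        heapq.heappush(heap, (-(mid - lo), lo, mid))  # O(log N)
+        heapq.heappush(heap, (-(hi - mid), mid, hi))  # O(log N)
+    return data
+```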
 
-#### With many points, due to the loss being local, parallel sampling incurs no additional cost
+#### With many points, due to the loss being local, parallel sampling incurs no additional cost.
 
 # Loss function design
 
-#### A failure mode of such algorithms is sampling only a small neighborhood of one point
+#### A failure mode of such algorithms is sampling only a small neighborhood of one point.
 <!-- example of distance loss on singularities -->
 
-#### A solution is to regularize the loss such that this would avoided
+#### A solution is to regularize the loss such that this is avoided.
 <!-- like resolution loss which limits the size of an interval -->
 
-#### Adding loss functions allows for balancing between multiple priorities
+#### Adding loss functions allows for balancing between multiple priorities.
 <!-- i.e. area + line simplification -->
 
-#### A desireble property is that eventually all points should be sampled
+#### A desirable property is that eventually all points should be sampled.
 <!-- exploration vs. explotation -->
 
 # Examples
 
 ## Line simplification loss
 
-#### The line simplification loss is based on an inverse Visvalingam’s algorithm
+#### The line simplification loss is based on inverting Visvalingam’s algorithm.
 <!-- https://bost.ocks.org/mike/simplify/ -->
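+
+A sketch of the underlying idea (illustrative; not necessarily the exact loss used in the package): Visvalingam’s simplification repeatedly removes the point that forms the smallest-area triangle with its neighbors, so the inverted criterion assigns the largest loss where that triangle area is largest, i.e. where the curve bends most.
+
+```python
+def triangle_area(p0, p1, p2):
+    """Area of the triangle spanned by three consecutive (x, y) samples."""
+    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
+    return 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))
+```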
 
 ## A parallelizable adaptive integration algorithm based on cquad
 
-#### The `cquad` algorithm belongs to a class that is parallelizable
+#### The `cquad` algorithm belongs to a class that is parallelizable.
 
 ## isosurface sampling
 
 # Implementation and benchmarks
 <!-- API description -->
 
-#### The learner abstracts a loss based priority queue
+#### The learner abstracts a loss-based priority queue.
 
-#### The runner orchestrates the function evaluation
+#### The runner orchestrates the function evaluation.
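+
+A minimal usage sketch of these two pieces (the function `f` is an arbitrary stand-in for an expensive simulation): the learner decides where to sample next based on its loss, and the runner keeps requesting points, evaluating `f`, and feeding the results back until the goal is met.
+
+```python
+import adaptive
+
+
+def f(x):
+    """Stand-in for an expensive simulation."""
+    return x**3 - x
+
+
+if __name__ == "__main__":
+    # The learner keeps track of the sampled points and their losses.
+    learner = adaptive.Learner1D(f, bounds=(-1, 1))
+
+    # The runner evaluates `f` (in parallel by default) until the goal is met.
+    adaptive.BlockingRunner(learner, goal=lambda lrn: lrn.loss() < 0.01)
+```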
 
 # Possible extensions
 
-#### Anisotropic triangulation would improve the algorithm
+#### Anisotropic triangulation would improve the algorithm.
 
-#### Learning stochastic functions is promising direction
+#### Learning stochastic functions is a promising direction.
 
-#### Experimental control needs to deal with noise, hysteresis, and the cost for changing parameters
+#### Experimental control needs to deal with noise, hysteresis, and the cost for changing parameters.
 
 
 <!-- We can include things like:
-- 
GitLab