From ffe43ab4c1a2d5ac4207823aac6b08dc7146b7ac Mon Sep 17 00:00:00 2001
From: Bas Nijholt <basnijholt@gmail.com>
Date: Tue, 10 Sep 2019 16:24:44 +0200
Subject: [PATCH] add figure to paper

---
 paper.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/paper.md b/paper.md
index 4e677d5..a2b9525 100755
--- a/paper.md
+++ b/paper.md
@@ -15,7 +15,7 @@ abstract: |
   These methods can suggest a new point to calculate based on \textit{all} existing data at that time; however, this is an expensive operation.
   An alternative is to use local algorithms---in contrast to the previously mentioned global algorithms---which can suggest a new point based only on the data in the immediate vicinity of that new point.
   This approach works well, even when using hundreds of computers simultaneously, because the point suggestion algorithm is cheap (fast) to evaluate.
-  We provide a reference implementation and show its performance.
+  We provide a reference implementation in Python and show its performance.
 acknowledgements: |
   We'd like to thank ...
 contribution: |
@@ -41,12 +41,14 @@
 One of the most significant complications here is to parallelize this algorithm.
 Due to parallelization, the algorithm should be local, meaning that the information updates are only in a region around the newly calculated point.
 Additionally, the algorithm should also be fast in order to handle many parallel workers that calculate the function and request new points.
 A simple example is greedily optimizing continuity of the sampling by selecting points according to the distance to the largest gaps in the function values.
-For a one-dimensional function this means (1) constructing intervals containing neighboring data points, (2) calculating the Euclidean distance of each interval and assigning it to the candidate point inside that interval, and finally (3) picking the candidate point with the largest Euclidean distance.
+For a one-dimensional function (Fig. @fig:loss_1D) this means (1) constructing intervals containing neighboring data points, (2) calculating the Euclidean distance of each interval and assigning it to the candidate point inside that interval, and finally (3) picking the candidate point with the largest Euclidean distance.
 In this paper, we describe a class of algorithms that rely on local criteria for sampling, such as in the previously mentioned example.
 Here we associate a *local loss* with each of the *candidate points* within an interval, and choose the points with the largest loss.
 In the case of the integration algorithm, the loss could simply be an error estimate.
 The most significant advantage of these *local* algorithms is that they allow for easy parallelization and have a low computational overhead.
+{#fig:loss_1D}
+
 #### We provide a reference implementation, the Adaptive package, and demonstrate its performance.
 We provide a reference implementation, the open-source Python package called Adaptive[@Nijholt2019a], which has previously been used in several scientific publications[@vuik2018reproducing; @laeven2019enhanced; @bommer2019spin; @melo2019supercurrent].
 It has algorithms for $f \colon \R^N \to \R^M$, where $N, M \in \mathbb{Z}^+$, but which work best when $N$ is small; integration in $\R$; and the averaging of stochastic functions.
--
GitLab
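The three-step procedure described in the second hunk is compact enough to sketch in code. The snippet below is a minimal illustration of that one-dimensional gap-distance loss, not the Adaptive package's actual API; the function name `suggest_point` and the choice of the interval midpoint as the candidate point are assumptions made for this example.

```python
import math


def suggest_point(xs, ys):
    """Suggest the next x to sample, given sorted xs and their values ys."""
    # (1) construct intervals from neighboring data points
    intervals = list(zip(zip(xs, ys), zip(xs[1:], ys[1:])))

    # (2) the local loss of an interval is the Euclidean distance
    # between its two endpoints in the (x, y) plane
    def loss(interval):
        (x0, y0), (x1, y1) = interval
        return math.hypot(x1 - x0, y1 - y0)

    # (3) pick the candidate point (here: the midpoint) inside the
    # interval with the largest loss
    (x0, _), (x1, _) = max(intervals, key=loss)
    return (x0 + x1) / 2


# Example: sample f(x) = sin(10 x) on [0, 1], starting from only the endpoints.
f = lambda x: math.sin(10 * x)
xs, ys = [0.0, 1.0], [f(0.0), f(1.0)]
for _ in range(20):
    x_new = suggest_point(xs, ys)
    i = next(i for i, x in enumerate(xs) if x > x_new)  # keep xs sorted
    xs.insert(i, x_new)
    ys.insert(i, f(x_new))
```

Note that inserting `x_new` only changes the two intervals adjacent to it, which illustrates the locality property the paper relies on: an implementation would only need to recompute those two losses rather than rescan all the data, keeping the per-point overhead low no matter how many points have accumulated.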