diff --git a/paper.md b/paper.md
index e6eaad63ca06eebeb7df371e9191a6c4539b22b0..31c4af94e4f9c62b93b8715d6ef5d809cc444b2a 100755
--- a/paper.md
+++ b/paper.md
@@ -320,6 +320,49 @@ learner = adaptive.LearnerND(ring, bounds=[(-1, 1), (-1, 1)])
 runner = Runner(learner, goal)
 ```
 
+As with the `Learner1D`, it is possible to specify a custom loss function.
+For example, the loss function used to find the iso-line in Fig. @fig:isoline (b) is:
+```python
+import numpy as np
+from adaptive.learner.learnerND import default_loss
+
+def isoline_loss_function(level, sigma, priority):
+    def gaussian(x, mu, sigma):
+        return np.exp(-(x - mu) ** 2 / sigma ** 2 / 2)
+
+    def loss(simplex, values, value_scale):
+        values = np.array(values)
+        # mean distance of the simplex's (scaled) values to the iso-level
+        dist = abs(level * value_scale - values).mean()
+        L_default = default_loss(simplex, values, value_scale)
+        # add a Gaussian bump that peaks for simplices close to the iso-level
+        L_dist = priority * gaussian(dist, 0, sigma)
+        return L_dist + L_default
+
+    return loss
+
+loss_per_simplex = isoline_loss_function(level=0.1, sigma=0.4, priority=0.5)
+```
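+
+The resulting loss can then be passed to the learner via the `loss_per_simplex` argument; for example, reusing the `ring` function from above:
+```python
+learner = adaptive.LearnerND(
+    ring, bounds=[(-1, 1), (-1, 1)], loss_per_simplex=loss_per_simplex
+)
+```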
+
 #### The BalancingLearner can run many learners simultaneously.
-Frequently, more than one function (learner) needs to run at once, to do this we have implemented the `BalancingLearner`, which does not take a function, but a list of learners.
-This learner internally asks all child learners for points and will choose the point of the learner that maximizes the loss improvement; thereby, it balances the resources over the different learners.
+Frequently, more than one function (learner) needs to run at once; to do this, we have implemented the `BalancingLearner`, which takes a list of learners instead of a function.
+This learner internally asks all of its child learners for points and chooses the point of the learner that maximizes the loss improvement, thereby balancing the resources over the different learners.
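+
+As a minimal sketch, assuming a hypothetical function `f(x, offset)` and the `goal` defined earlier, balancing the same function for several parameter values looks like:
+```python
+from functools import partial
+
+# hypothetical: learn f for three different values of its `offset` parameter
+learners = [
+    adaptive.Learner1D(partial(f, offset=o), bounds=(-1, 1)) for o in [0, 1, 2]
+]
+bal_learner = adaptive.BalancingLearner(learners)
+runner = Runner(bal_learner, goal)
+```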