{
 "cells": [
  {
   "cell_type": "markdown",
Bas Nijholt's avatar
Bas Nijholt committed
   "metadata": {},
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[`adaptive`](https://gitlab.kwant-project.org/qt/adaptive-evaluation) is a package for adaptively sampling functions with support for parallel evaluation.\n",
    "\n",
    "This is an introductory notebook that shows some basic use cases.\n",
    "\n",
    "`adaptive` needs at least Python 3.6, and the following packages:\n",
    "\n",
    "+ `scipy`\n",
    "+ `sortedcontainers`\n",
    "\n",
    "Additionally `adaptive` has lots of extra functionality that makes it simple to use from Jupyter notebooks.\n",
    "This extra functionality depends on the following packages\n",
    "\n",
    "+ `ipykernel>=4.8.0`\n",
    "+ `jupyter_client>=5.2.2`\n",
    "+ `holoviews`\n",
    "+ `bokeh`\n",
    "+ `ipywidgets`"
   ]
  },
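  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of installing everything in one go (assuming `adaptive` and the optional dependencies are all available on PyPI under these names):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical install one-liner; adjust package names and versions to your environment.\n",
    "# !pip install adaptive scipy sortedcontainers 'ipykernel>=4.8.0' 'jupyter_client>=5.2.2' holoviews bokeh ipywidgets"
   ]
  },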
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import adaptive\n",
    "adaptive.notebook_extension()\n",
    "\n",
    "# Import modules that are used in multiple cells\n",
    "import holoviews as hv\n",
    "import numpy as np\n",
    "from functools import partial\n",
    "import random"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 1D function learner"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We start with the most common use-case: sampling a 1D function $\\ f: ℝ → ℝ$.\n",
    "\n",
    "We will use the following function, which is a smooth (linear) background with a sharp peak at a random location:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "offset = random.uniform(-0.5, 0.5)\n",
    "def f(x, offset=offset, wait=True):\n",
    "    from time import sleep\n",
    "    from random import random\n",
    "    a = 0.01\n",
    "    return x + a**2 / (a**2 + (x - offset)**2)"
   "metadata": {},
   "source": [
    "We start by initializing a 1D \"learner\", which will suggest points to evaluate, and adapt its suggestions as more and more points are evaluated."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
    "learner = adaptive.Learner1D(f, bounds=(-1, 1))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next we create a \"runner\" that will request points from the learner and evaluate 'f' on them.\n",
    "\n",
    "By default on Unix-like systems the runner will evaluate the points in parallel using local processes ([`concurrent.futures.ProcessPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor)).\n",
    "\n",
    "On Windows systems the runner will try to use a [`distributed.Client`](https://distributed.readthedocs.io/en/latest/client.html) if [`distributed`](https://distributed.readthedocs.io/en/latest/index.html) is installed. A `ProcessPoolExecutor` cannot be used on Windows for reasons."
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The end condition is when the \"loss\" is less than 0.1. In the context of the\n",
    "# 1D learner this means that we will resolve features in 'func' with width 0.1 or wider.\n",
Joseph Weston's avatar
Joseph Weston committed
    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.05)\n",
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When instantiated in a Jupyter notebook the runner does its job in the background and does not block the IPython kernel.\n",
    "We can use this to create a plot that updates as new data arrives:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "runner.live_plot(update_interval=0.1)"
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can now compare the adaptive sampling to a homogeneous sampling with the same number of points:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "if not runner.task.done():\n",
    "    raise RuntimeError('Wait for the runner to finish before executing the cells below!')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "learner2 = adaptive.Learner1D(f, bounds=learner.bounds)\n",
    "\n",
    "xs = np.linspace(*learner.bounds, len(learner.data))\n",
    "learner2.tell_many(xs, map(partial(f, wait=False), xs))\n",
    "\n",
    "learner.plot() + learner2.plot()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 2D function learner"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Besides 1D functions, we can also learn 2D functions: $\\ f: ℝ^2 → ℝ$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def ring(xy, wait=True):\n",
    "    import numpy as np\n",
    "    from time import sleep\n",
    "    from random import random\n",
    "    x, y = xy\n",
    "    a = 0.2\n",
    "    return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
    "\n",
    "learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
    "runner.live_info()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def plot(learner):\n",
    "    plot = learner.plot(tri_alpha=0.2)\n",
    "    title = f'loss={learner._loss:.3f}, n_points={learner.npoints}'\n",
    "    return (plot.Image\n",
    "            + plot.EdgePaths.I.opts(plot=dict(title_format=title))\n",
    "            + plot)\n",
    "runner.live_plot(plotter=plot, update_interval=0.1)"
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%opts EdgePaths (color='w')\n",
    "\n",
    "import itertools\n",
    "\n",
    "# Create a learner and add data on homogeneous grid, so that we can plot it\n",
    "learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)\n",
    "n = int(learner.npoints**0.5)\n",
    "xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds]\n",
    "xys = list(itertools.product(xs, ys))\n",
    "learner2.tell_many(xys, map(partial(ring, wait=False), xys))\n",
    "\n",
    "(learner2.plot(n).relabel('Homogeneous grid') + learner.plot().relabel('With adaptive') + \n",
    " learner2.plot(n, tri_alpha=0.4) + learner.plot(tri_alpha=0.4)).cols(2)"
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# N-dimensional function learner\n",
    "Besides 1 and 2 dimensional functions, we can also learn N-D functions: $\\ f: ℝ^N → ℝ, N \\ge 2$\n",
    "Do keep in mind the speed and [effectiveness](https://en.wikipedia.org/wiki/Curse_of_dimensionality) of the learner drops quickly with increasing number of dimensions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# this step takes a lot of time, it will finish at about 3300 points, which can take up to 6 minutes\n",
    "def sphere(xyz):\n",
    "    x, y, z = xyz\n",
    "    a = 0.4\n",
    "    return x + z**2 + np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4)\n",
    "\n",
    "learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)])\n",
    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02, log=True)\n",
    "runner.live_info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's plot 2D slices of the 3D function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def plot_cut(x, direction, learner=learner):\n",
    "    cut_mapping = {'xyz'.index(direction): x}\n",
    "    return learner.plot_slice(cut_mapping, n=100)\n",
    "\n",
    "dm = hv.DynamicMap(plot_cut, kdims=['value', 'direction'])\n",
    "dm.redim.values(value=np.linspace(-1, 1), direction=list('xyz'))"
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Or we can plot 1D slices"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%opts Path {+framewise}\n",
    "def plot_cut(x1, x2, directions, learner=learner):\n",
    "    cut_mapping = {'xyz'.index(d): x for d, x in zip(directions, [x1, x2])}\n",
    "    return learner.plot_slice(cut_mapping)\n",
    "\n",
    "dm = hv.DynamicMap(plot_cut, kdims=['v1', 'v2', 'directions'])\n",
    "dm.redim.values(v1=np.linspace(-1, 1),\n",
    "                v2=np.linspace(-1, 1),\n",
    "                directions=['xy', 'xz', 'yz'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The plots show some wobbles while the original function was smooth, this is a result of the fact that the learner chooses points in 3 dimensions and the simplices are not in the same face as we try to interpolate our lines. However, as always, when you sample more points the graph will become gradually smoother."
   ]
  },
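  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of \"sampling more points\" (assuming the runner above has finished): attaching a new runner with a lower loss goal to the same learner continues where the previous run left off. Note that this can take considerably longer than the first run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Continue sampling the same learner to a lower loss to smooth the slices above\n",
    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
    "runner.live_info()"
   ]
  },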
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Averaging learner"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next type of learner averages a function until the uncertainty in the average meets some condition.\n",
    "This is useful for sampling a random variable. The function passed to the learner must formally take a single parameter,\n",
    "which should be used like a \"seed\" for the (pseudo-) random variable (although in the current implementation the seed parameter can be ignored by the function)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def g(n):\n",
    "    import random\n",
    "    from time import sleep\n",
    "    sleep(random.random() / 1000)\n",
    "    # Properly save and restore the RNG state\n",
    "    state = random.getstate()\n",
    "    random.seed(n)\n",
    "    val = random.gauss(0.5, 1)\n",
    "    random.setstate(state)\n",
    "    return val"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "learner = adaptive.AverageLearner(g, atol=None, rtol=0.01)\n",
    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 2)\n",
    "runner.live_info()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "runner.live_plot(update_interval=0.1)"
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 1D integration learner with `cquad`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This learner learns a 1D function and calculates the integral and error of the integral with it. It is based on Pedro Gonnet's [implementation](https://www.academia.edu/1976055/Adaptive_quadrature_re-revisited).\n",
    "\n",
    "Let's try the following function with cusps (that is difficult to integrate):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def f24(x):\n",
    "    return np.floor(np.exp(x))\n",
    "\n",
    "xs = np.linspace(0, 3, 200)\n",
    "hv.Scatter((xs, [f24(x) for x in xs]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Just to prove that this really is a difficult to integrate function, let's try a familiar function integrator `scipy.integrate.quad`, which will give us warnings that it encounters difficulties."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import scipy.integrate\n",
    "scipy.integrate.quad(f24, 0, 3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We initialize a learner again and pass the bounds and relative tolerance we want to reach. Then in the `Runner` we pass `goal=lambda l: l.done()` where `learner.done()` is `True` when the relative tolerance has been reached."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from adaptive.runner import SequentialExecutor\n",
    "\n",
    "learner = adaptive.IntegratorLearner(f24, bounds=(0, 3), tol=1e-10)\n",
    "\n",
    "# We use a SequentialExecutor, which runs the function to be learned in *this* process only. This means we don't pay\n",
    "# the overhead of evaluating the function in another process.\n",
    "runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.done())\n",
    "runner.live_info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we could do the live plotting again, but lets just wait untill the runner is done."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "if not runner.task.done():\n",
    "    raise RuntimeError('Wait for the runner to finish before executing the cells below!')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('The integral value is {} with the corresponding error of {}'.format(learner.igral, learner.err))\n",
    "learner.plot()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 1D learner with vector output: `f:ℝ → ℝ^N`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Sometimes you may want to learn a function with vector output:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "random.seed(0)\n",
    "offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]\n",
    "\n",
    "# sharp peaks at random locations in the domain\n",
    "def f_levels(x, offsets=offsets):\n",
    "    a = 0.01\n",
    "    return np.array([offset + x + a**2 / (a**2 + (x - offset)**2)\n",
    "                     for offset in offsets])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`adaptive` has you covered! The `Learner1D` can be used for such functions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))\n",
    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
    "runner.live_info()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "runner.live_plot(update_interval=0.1)"
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Custom adaptive logic for 1D and 2D"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`Learner1D` and `Learner2D` both work on the principle of subdividing their domain into subdomains, and assigning a property to each subdomain, which we call the *loss*. The algorithm for choosing the best place to evaluate our function is then simply *take the subdomain with the largest loss and add a point in the center, creating new subdomains around this point*. \n",
    "The *loss function* that defines the loss per subdomain is the canonical place to define what regions of the domain are \"interesting\".\n",
    "The default loss function for `Learner1D` and `Learner2D` is sufficient for a wide range of common cases, but it is by no means a panacea. For example, the default loss function will tend to get stuck on divergences.\n",
    "Both the `Learner1D` and `Learner2D` allow you to specify a *custom loss function*. Below we illustrate how you would go about writing your own loss function. The documentation for `Learner1D` and `Learner2D` specifies the signature that your loss function needs to have in order for it to work with `adaptive`.\n",
    "Say we want to properly sample a function that contains divergences. A simple (but naive) strategy is to *uniformly* sample the domain:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def uniform_sampling_1d(interval, scale, function_values):\n",
    "    # Note that we never use 'function_values'; the loss is just the size of the subdomain\n",
    "    x_left, x_right = interval\n",
    "    x_scale, _ = scale\n",
    "    dx = (x_right - x_left) / x_scale\n",
    "    return dx\n",
    "\n",
    "def f_divergent_1d(x):\n",
    "    return 1 / x**2\n",
    "\n",
    "learner = adaptive.Learner1D(f_divergent_1d, (-1, 1), loss_per_interval=uniform_sampling_1d)\n",
    "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)\n",
    "learner.plot().select(y=(0, 10000))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%opts EdgePaths (color='w') Image [logz=True]\n",
    "\n",
    "from adaptive.runner import SequentialExecutor\n",
    "\n",
    "def uniform_sampling_2d(ip):\n",
    "    from adaptive.learner.learner2D import areas\n",
    "    A = areas(ip)\n",
    "    return np.sqrt(A)\n",
    "\n",
    "def f_divergent_2d(xy):\n",
    "    x, y = xy\n",
    "    return 1 / (x**2 + y**2)\n",
    "\n",
    "learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=uniform_sampling_2d)\n",
    "\n",
    "# this takes a while, so use the async Runner so we know *something* is happening\n",
    "runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02)\n",
    "runner.live_info()\n",
    "runner.live_plot(update_interval=0.2,\n",
    "                 plotter=lambda l: l.plot(tri_alpha=0.3).relabel('1 / (x^2 + y^2) in log scale'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The uniform sampling strategy is a common case to benchmark against, so the 1D and 2D versions are included in `adaptive` as `adaptive.learner.learner1D.uniform_sampling` and `adaptive.learner.learner2D.uniform_sampling`."
   ]
  },
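  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch (assuming the module paths stated above), the hand-written `uniform_sampling_1d` can be swapped for the built-in version:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Same strategy as 'uniform_sampling_1d' above, using the loss shipped with adaptive\n",
    "from adaptive.learner.learner1D import uniform_sampling\n",
    "\n",
    "learner = adaptive.Learner1D(f_divergent_1d, (-1, 1), loss_per_interval=uniform_sampling)"
   ]
  },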
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Doing better\n",
    "Of course, using `adaptive` for uniform sampling is a bit of a waste!\n",
    "\n",
    "Let's see if we can do a bit better. Below we define a loss per subdomain that scales with the degree of nonlinearity of the function (this is very similar to the default loss function for `Learner2D`), but which is 0 for subdomains smaller than a certain area, and infinite for subdomains larger than a certain area.\n",
    "\n",
    "A loss defined in this way means that the adaptive algorithm will first prioritise subdomains that are too large (infinite loss). After all subdomains are appropriately small it will prioritise places where the function is very nonlinear, but will ignore subdomains that are too small (0 loss)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%opts EdgePaths (color='w') Image [logz=True]\n",
    "\n",
    "def resolution_loss(ip, min_distance=0, max_distance=1):\n",
    "    \"\"\"min_distance and max_distance should be in between 0 and 1\n",
    "    because the total area is normalized to 1.\"\"\"\n",
    "    from adaptive.learner.learner2D import areas, deviations\n",
    "    # 'deviations' returns an array of shape '(n, len(ip))', where\n",
    "    # 'n' is the  is the dimension of the output of the learned function\n",
    "    # In this case we know that the learned function returns a scalar,\n",
    "    # so 'deviations' returns an array of shape '(1, len(ip))'.\n",
    "    # It represents the deviation of the function value from a linear estimate\n",
    "    # over each triangular subdomain.\n",
    "    dev = deviations(ip)[0]\n",
    "    \n",
    "    # we add terms of the same dimension: dev == [distance], A == [distance**2]\n",
    "    loss = np.sqrt(A) * dev + A\n",
    "    \n",
    "    # Setting areas with a small area to zero such that they won't be chosen again\n",
    "    loss[A < min_distance**2] = 0 \n",
    "    \n",
    "    # Setting triangles that have a size larger than max_distance to infinite loss\n",
    "    loss[A > max_distance**2] = np.inf\n",
    "\n",
    "loss = partial(resolution_loss, min_distance=0.01)\n",
    "\n",
    "learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=loss)\n",
    "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.02)\n",
    "learner.plot(tri_alpha=0.3).relabel('1 / (x^2 + y^2) in log scale')"
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Awesome! We zoom in on the singularity, but not at the expense of sampling the rest of the domain a reasonable amount.\n",
    "\n",
    "The above strategy is available as `adaptive.learner.learner2D.resolution_loss`."
   ]
  },
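  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of using that built-in version (assuming the module path stated above) instead of our own definition:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Same strategy as above, using the loss shipped with adaptive\n",
    "from adaptive.learner.learner2D import resolution_loss\n",
    "\n",
    "loss = partial(resolution_loss, min_distance=0.01)\n",
    "learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=loss)"
   ]
  },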
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Balancing learner"
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The balancing learner is a \"meta-learner\" that takes a list of learners. When you request a point from the balancing learner, it will query all of its \"children\" to figure out which one will give the most improvement.\n",
    "The balancing learner can for example be used to implement a poor-man's 2D learner by using the `Learner1D`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def h(x, offset=0):\n",
    "    a = 0.01\n",
    "    return x + a**2 / (a**2 + (x - offset)**2)\n",
    "\n",
    "learners = [adaptive.Learner1D(partial(h, offset=random.uniform(-1, 1)),\n",
    "            bounds=(-1, 1)) for i in range(10)]\n",
    "bal_learner = adaptive.BalancingLearner(learners)\n",
    "runner = adaptive.Runner(bal_learner, goal=lambda l: l.loss() < 0.01)\n",
    "runner.live_info()"
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plotter = lambda learner: hv.Overlay([L.plot() for L in learner.learners])\n",
    "runner.live_plot(plotter=plotter, update_interval=0.1)"
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Often one wants to create a set of `learner`s for a cartesian product of parameters. For that particular case we've added a `classmethod` called `from_product`. See how it works below"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from scipy.special import eval_jacobi\n",
    "\n",
    "def jacobi(x, n, alpha, beta): return eval_jacobi(n, alpha, beta, x)\n",
    "\n",
    "combos = {\n",
    "    'n': [1, 2, 4, 8],\n",
    "    'alpha': np.linspace(0, 2, 3),\n",
    "    'beta': np.linspace(0, 1, 5),\n",
    "}\n",
    "\n",
    "learner = adaptive.BalancingLearner.from_product(\n",
    "    jacobi, adaptive.Learner1D, dict(bounds=(0, 1)), combos)\n",
    "\n",
    "runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)\n",
    "\n",
    "# The `cdims` will automatically be set when using `from_product`, so\n",
    "# `plot()` will return a HoloMap with correctly labeled sliders.\n",
    "learner.plot().overlay('beta').grid()"
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# DataSaver"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the function that you want to learn returns a value along with some metadata, you can wrap your learner in an `adaptive.DataSaver`.\n",
    "In the following example the function to be learned returns its result and the execution time in a dictionary:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from operator import itemgetter\n",
    "\n",
    "    \"\"\"The function evaluation takes roughly the time we `sleep`.\"\"\"\n",
    "    import random\n",
    "    from time import sleep\n",
    "\n",
    "    waiting_time = random.random()\n",
    "    sleep(waiting_time)\n",
    "    a = 0.01\n",
    "    y = x + a**2 / (a**2 + x**2)\n",
    "    return {'y': y, 'waiting_time': waiting_time}\n",
    "\n",
    "# Create the learner with the function that returns a 'dict'\n",
    "# This learner cannot be run directly, as Learner1D does not know what to do with the 'dict'\n",
    "_learner = adaptive.Learner1D(f_dict, bounds=(-1, 1))\n",
    "# Wrapping the learner with 'adaptive.DataSaver' and tell it which key it needs to learn\n",
    "learner = adaptive.DataSaver(_learner, arg_picker=itemgetter('y'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`learner.learner` is the original learner, so `learner.learner.loss()` will call the correct loss method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "runner = adaptive.Runner(learner, goal=lambda l: l.learner.loss() < 0.05)\n",
    "runner.live_info()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "runner.live_plot(plotter=lambda l: l.learner.plot(), update_interval=0.1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now the `DataSavingLearner` will have an dictionary attribute `extra_data` that has `x` as key and the data that was returned by `learner.function` as values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "learner.extra_data"
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# `Scikit-Optimize`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We have wrapped the `Optimizer` class from [`scikit-optimize`](https://github.com/scikit-optimize/scikit-optimize), to show how existing libraries can be integrated with `adaptive`.\n",
    "\n",
    "The `SKOptLearner` attempts to \"optimize\" the given function `g` (i.e. find the global minimum of `g` in the window of interest).\n",
    "\n",
    "Here we use the same example as in the `scikit-optimize` [tutorial](https://github.com/scikit-optimize/scikit-optimize/blob/master/examples/ask-and-tell.ipynb). Although `SKOptLearner` can optimize functions of arbitrary dimensionality, we can only plot the learner if a 1D function is being learned."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "def g(x, noise_level=0.1):\n",
    "    return (np.sin(5 * x) * (1 - np.tanh(x ** 2))\n",
    "            + np.random.randn() * noise_level)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "learner = adaptive.SKOptLearner(g, dimensions=[(-2., 2.)],\n",
    "                                base_estimator=\"GP\",\n",
    "                                acq_func=\"gp_hedge\",\n",
    "                                acq_optimizer=\"lbfgs\",\n",
    "                               )\n",
    "runner = adaptive.Runner(learner, ntasks=1, goal=lambda l: l.npoints > 40)\n",
    "runner.live_info()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%opts Overlay [legend_position='top']\n",
    "xs = np.linspace(*learner.space.bounds[0])\n",
    "to_learn = hv.Curve((xs, [g(x, 0) for x in xs]), label='to learn')\n",
    "\n",
    "runner.live_plot().relabel('prediction', depth=2) * to_learn"
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "# Using multiple cores"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Often you will want to evaluate the function on some remote computing resources. `adaptive` works out of the box with any framework that implements a [PEP 3148](https://www.python.org/dev/peps/pep-3148/) compliant executor that returns `concurrent.futures.Future` objects."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "On Unix-like systems by default `adaptive.Runner` creates a `ProcessPoolExecutor`, but you can also pass one explicitly e.g. to limit the number of workers:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from concurrent.futures import ProcessPoolExecutor\n",
    "\n",
    "executor = ProcessPoolExecutor(max_workers=4)\n",
    "\n",
    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
    "runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.05)\n",
    "runner.live_info()\n",
    "runner.live_plot(update_interval=0.1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### [`ipyparallel`](https://ipyparallel.readthedocs.io/en/latest/intro.html)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import ipyparallel\n",
    "\n",
    "client = ipyparallel.Client()  # You will need to start an `ipcluster` to make this work\n",
    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
    "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)\n",
    "runner.live_info()\n",
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### [`distributed`](https://distributed.readthedocs.io/en/latest/)\n",
    "\n",
    "On Windows by default `adaptive.Runner` uses a `distributed.Client`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import distributed\n",
    "\n",
    "client = distributed.Client()\n",
    "\n",
    "learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
    "runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)\n",
    "runner.live_info()\n",
    "runner.live_plot(update_interval=0.1)"
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "cell_type": "markdown",