{
"cells": [
{
"cell_type": "markdown",
"source": [
"# Adaptive"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[`adaptive`](https://gitlab.kwant-project.org/qt/adaptive-evaluation) is a package for adaptively sampling functions with support for parallel evaluation.\n",
"\n",
"This is an introductory notebook that shows some basic use cases.\n",
"\n",
"`adaptive` needs at least Python 3.6, and the following packages:\n",
"+ `scipy`\n",
"+ `sortedcontainers`\n",
"\n",
"Additionally `adaptive` has lots of extra functionality that makes it simple to use from Jupyter notebooks.\n",
"This extra functionality depends on the following packages\n",
"\n",
"+ `ipykernel>=4.8.0`\n",
"+ `jupyter_client>=5.2.2`\n",
]
},
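{
"cell_type": "markdown",
"metadata": {},
"source": [
"If these are not installed yet, a `pip` command along the following lines should cover the requirements above (a suggestion, not part of the original instructions; adjust to your own environment):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Install `adaptive` plus the dependencies listed above (assumes pip is available;\n",
"# `holoviews` is used below for the live plots).\n",
"!pip install adaptive scipy sortedcontainers holoviews 'ipykernel>=4.8.0' 'jupyter_client>=5.2.2'"
]
},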
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import adaptive\n",
"adaptive.notebook_extension()\n",
"\n",
"# Import modules that are used in multiple cells\n",
"import holoviews as hv\n",
"import numpy as np\n",
"from functools import partial\n",
"import random"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 1D function learner"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We start with the most common use-case: sampling a 1D function $\\ f: ℝ → ℝ$.\n",
"\n",
"We will use the following function, which is a smooth (linear) background with a sharp peak at a random location:"
]
},
{
"cell_type": "code",
"execution_count": null,
"offset = random.uniform(-0.5, 0.5)\n",
"def f(x, offset=offset, wait=True):\n",
" sleep(random())\n",
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We start by initializing a 1D \"learner\", which will suggest points to evaluate, and adapt its suggestions as more and more points are evaluated."
]
},
{
"cell_type": "code",
"execution_count": null,
"learner = adaptive.Learner1D(f, bounds=(-1, 1))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we create a \"runner\" that will request points from the learner and evaluate 'f' on them.\n",
"\n",
"By default the runner will evaluate the points in parallel using local processes ([`concurrent.futures.ProcessPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor))."
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"# The end condition is when the \"loss\" is less than 0.1. In the context of the\n",
"# 1D learner this means that we will resolve features in 'func' with width 0.1 or wider.\n",
"runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.05)\n",
"runner.live_info()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When instantiated in a Jupyter notebook the runner does its job in the background and does not block the IPython kernel.\n",
"We can use this to create a plot that updates as new data arrives:"
]
},
{
"cell_type": "code",
"execution_count": null,
"runner.live_plot(update_interval=0.1)"
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now compare the adaptive sampling to a homogeneous sampling with the same number of points:"
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"if not runner.task.done():\n",
" raise RuntimeError('Wait for the runner to finish before executing the cells below!')"
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"learner2 = adaptive.Learner1D(f, bounds=learner.bounds)\n",
"xs = np.linspace(*learner.bounds, len(learner.data))\n",
"learner2.add_data(xs, map(partial(f, wait=False), xs))\n",
]
},
{
"cell_type": "markdown",
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Besides 1D functions, we can also learn 2D functions: $\\ f: ℝ^2 → ℝ$"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def ring(xy, wait=True):\n",
" from time import sleep\n",
" from random import random\n",
" sleep(random()/10)\n",
" x, y = xy\n",
" a = 0.2\n",
" return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4)\n",
"learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)])"
]
},
{
"cell_type": "code",
"execution_count": null,
"runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)\n",
"runner.live_info()"
]
},
{
"cell_type": "code",
"execution_count": null,
" plot = learner.plot(tri_alpha=0.2)\n",
" title = f'loss={learner._loss:.3f}, n_points={learner.npoints}'\n",
" return (plot.Image\n",
" + plot.EdgePaths.I.opts(plot=dict(title_format=title))\n",
" + plot)\n",
"runner.live_plot(plotter=plot, update_interval=0.1)"
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%opts EdgePaths (color='w')\n",
"import itertools\n",
"\n",
"# Create a learner and add data on homogeneous grid, so that we can plot it\n",
"learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)\n",
"n = int(learner.npoints**0.5)\n",
"xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds]\n",
"xys = list(itertools.product(xs, ys))\n",
"learner2.add_data(xys, map(partial(ring, wait=False), xys))\n",
"(learner2.plot(n).relabel('Homogeneous grid') + learner.plot().relabel('With adaptive') + \n",
" learner2.plot(n, tri_alpha=0.4) + learner.plot(tri_alpha=0.4)).cols(2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Averaging learner"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next type of learner averages a function until the uncertainty in the average meets some condition.\n",
"This is useful for sampling a random variable. The function passed to the learner must formally take a single parameter,\n",
"which should be used like a \"seed\" for the (pseudo-) random variable (although in the current implementation the seed parameter can be ignored by the function)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def g(n):\n",
" import random\n",
" from time import sleep\n",
" # Properly save and restore the RNG state\n",
" state = random.getstate()\n",
" random.seed(n)\n",
" val = random.gauss(0.5, 1)\n",
" random.setstate(state)\n",
" return val"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learner = adaptive.AverageLearner(g, atol=None, rtol=0.01)\n",
"runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 2)\n",
"runner.live_info()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"runner.live_plot(update_interval=0.1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 1D integration learner with `cquad`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This learner learns a 1D function and calculates the integral and error of the integral with it. It is based on Pedro Gonnet's [implementation](https://www.academia.edu/1976055/Adaptive_quadrature_re-revisited).\n",
"\n",
"Let's try the following function with cusps (that is difficult to integrate):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def f24(x):\n",
" return np.floor(np.exp(x))\n",
"\n",
"xs = np.linspace(0, 3, 200)\n",
"hv.Scatter((xs, [f24(x) for x in xs]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Just to prove that this really is a difficult to integrate function, let's try a familiar function integrator `scipy.integrate.quad`, which will give us warnings that it encounters difficulties."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import scipy.integrate\n",
"scipy.integrate.quad(f24, 0, 3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We initialize a learner again and pass the bounds and relative tolerance we want to reach. Then in the `Runner` we pass `goal=lambda l: l.done()` where `learner.done()` is `True` when the relative tolerance has been reached."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from adaptive.runner import SequentialExecutor\n",
"learner = adaptive.IntegratorLearner(f24, bounds=(0, 3), tol=1e-10)\n",
"\n",
"# We use a SequentialExecutor, which runs the function to be learned in *this* process only. This means we don't pay\n",
"# the overhead of evaluating the function in another process.\n",
"runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.done())\n",
"runner.live_info()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we could do the live plotting again, but lets just wait untill the runner is done."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not runner.task.done():\n",
" raise RuntimeError('Wait for the runner to finish before executing the cells below!')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print('The integral value is {} with the corresponding error of {}'.format(learner.igral, learner.err))\n",
"learner.plot()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 1D learner with vector output: `f:ℝ → ℝ^N`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Sometimes you may want to learn a function with vector output:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"random.seed(0)\n",
"offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]\n",
"\n",
"# sharp peaks at random locations in the domain\n",
"def f_levels(x, offsets=offsets):\n",
" a = 0.01\n",
" return np.array([offset + x + a**2 / (a**2 + (x - offset)**2)\n",
" for offset in offsets])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`adaptive` has you covered! The `Learner1D` can be used for such functions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))\n",
"runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.05)\n",
"runner.live_plot()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Custom point choosing logic for 1D and 2D"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `Learner1D` and `Learner2D` implement a certain logic for chosing points based on the existing data.\n",
"\n",
"For some functions this default stratagy might not work, for example you'll run into trouble when you learn functions that contain divergencies.\n",
"\n",
"Both the `Learner1D` and `Learner2D` allow you to use a custom loss function, which you specify as an argument in the learner. See the doc-string of `Learner1D` and `Learner2D` to see what `loss_per_interval` and `loss_per_triangle` need to return and take as input.\n",
"\n",
"As an example we implement a homogeneous sampling strategy (which of course is not the best way of handling divergencies).\n",
"\n",
"Note that both these loss functions are also available from `adaptive.learner.learner1d.uniform_sampling` and `adaptive.learner.learner2d.uniform_sampling`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def uniform_sampling_1d(interval, scale, function_values):\n",
" x_left, x_right = interval\n",
" x_scale, _ = scale\n",
" dx = (x_right - x_left) / x_scale\n",
" return dx\n",
"\n",
"def f_divergent_1d(x):\n",
" return 1 / x**2\n",
"\n",
"learner = adaptive.Learner1D(f_divergent_1d, (-1, 1), loss_per_interval=uniform_sampling_1d)\n",
"runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)\n",
"learner.plot().select(y=(0, 10000))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%opts EdgePaths (color='w')\n",
"\n",
"def uniform_sampling_2d(ip):\n",
" from adaptive.learner.learner2D import areas\n",
" A = areas(ip)\n",
" return np.sqrt(A)\n",
"\n",
"def f_divergent_2d(xy):\n",
" x, y = xy\n",
" return 1 / (x**2 + y**2)\n",
"\n",
"learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=uniform_sampling_2d)\n",
"runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.02)\n",
"learner.plot(tri_alpha=0.3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Doing better\n",
"Of course we can improve on the the above result, since just homogeneous sampling is usually the dumbest way to sample."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%opts EdgePaths (color='w') Image [logz=True]\n",
"\n",
"def max_resolution_loss(ip, smallest_distance=0.01):\n",
" from adaptive.learner.learner2D import areas, deviations\n",
" A = areas(ip)\n",
"\n",
" # `deviations` returns an array of the same length as the\n",
" # vector your function to be learned returns, so 1 in this case.\n",
" # Its value represents the deviation from the linear estimate based\n",
" # on the gradients inside each triangle.\n",
" dev = deviations(ip)[0]\n",
" \n",
" # we add terms of the same dimension: dev == [distance], A == [distance**2]\n",
" loss = np.sqrt(A) * dev + A\n",
" \n",
" # Setting areas with a small area to zero such that they won't be chosen again\n",
" loss[A < smallest_distance**2] = 0 \n",
" return loss\n",
"\n",
"learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=max_resolution_loss)\n",
"runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.02)\n",
"learner.plot(tri_alpha=0.3).relabel('Plotted in log scale')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Balancing learner"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The balancing learner is a \"meta-learner\" that takes a list of learners. When you request a point from the balancing learner, it will query all of its \"children\" to figure out which one will give the most improvement.\n",
"The balancing learner can for example be used to implement a poor-man's 2D learner by using the `Learner1D`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def f(x, offset=0):\n",
"    a = 0.01\n",
"    return x + a**2 / (a**2 + (x - offset)**2)\n",
"\n",
"learners = [adaptive.Learner1D(partial(f, offset=random.uniform(-1, 1)),\n",
"                               bounds=(-1, 1)) for i in range(10)]\n",
"bal_learner = adaptive.BalancingLearner(learners)\n",
"runner = adaptive.Runner(bal_learner, goal=lambda l: l.loss() < 0.01)\n",
"runner.live_info()"
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plotter = lambda learner: hv.Overlay([L.plot() for L in learner.learners])\n",
"runner.live_plot(plotter=plotter, update_interval=0.1)"
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# DataSaver"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the function that you want to learn returns a value along with some metadata, you can wrap your learner in an `adaptive.DataSaver`.\n",
"In the following example the function to be learned returns its result and the execution time in a dictionary:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
" \"\"\"The function evaluation takes roughly the time we `sleep`.\"\"\"\n",
" import random\n",
" from time import sleep\n",
"\n",
" waiting_time = random.random()\n",
" sleep(waiting_time)\n",
" a = 0.01\n",
" y = x + a**2 / (a**2 + x**2)\n",
" return {'y': y, 'waiting_time': waiting_time}\n",
"\n",
"# Create the learner with the function that returns a 'dict'\n",
"# This learner cannot be run directly, as Learner1D does not know what to do with the 'dict'\n",
"_learner = adaptive.Learner1D(f_dict, bounds=(-1, 1))\n",
"# Wrapping the learner with 'adaptive.DataSaver' and tell it which key it needs to learn\n",
"learner = adaptive.DataSaver(_learner, arg_picker=itemgetter('y'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`learner.learner` is the original learner, so `learner.learner.loss()` will call the correct loss method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"runner = adaptive.Runner(learner, goal=lambda l: l.learner.loss() < 0.05)\n",
"runner.live_info()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"runner.live_plot(plotter=lambda l: l.learner.plot(), update_interval=0.1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now the `DataSavingLearner` will have an dictionary attribute `extra_data` that has `x` as key and the data that was returned by `learner.function` as values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"# Parallelism - using multiple cores"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Often you will want to evaluate the function on some remote computing resources. `adaptive` works out of the box with any framework that implements a [PEP 3148](https://www.python.org/dev/peps/pep-3148/) compliant executor that returns `concurrent.futures.Future` objects."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By default `adaptive.Runner` creates a `ProcessPoolExecutor`, but you can also pass one explicitly e.g. to limit the number of workers:"
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"from concurrent.futures import ProcessPoolExecutor\n",
"\n",
"executor = ProcessPoolExecutor(max_workers=4)\n",
"\n",
"learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
"runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.05)\n",
"runner.live_info()\n",
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [`ipyparallel`](https://ipyparallel.readthedocs.io/en/latest/intro.html)"
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"client = ipyparallel.Client() # You will need to start an `ipcluster` to make this work\n",
"learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
"runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)\n",
"runner.live_info()\n",
"runner.live_plot()"
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [`distributed`](https://distributed.readthedocs.io/en/latest/)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import distributed\n",
"\n",
"client = distributed.Client()\n",
"\n",
"learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
"runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)\n",
"runner.live_info()\n",
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Advanced Topics"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## A watched pot never boils!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`adaptive.Runner` does its work in an `asyncio` task that runs concurrently with the IPython kernel, when using `adaptive` from a Jupyter notebook. This is advantageous because it allows us to do things like live-updating plots, however it can trip you up if you're not careful.\n",
"\n",
"Notably: **if you block the IPython kernel, the runner will not do any work**.\n",
"\n",
"For example if you wanted to wait for a runner to complete, **do not wait in a busy loop**:\n",
"```python\n",
"while not runner.task.done():\n",
" pass\n",
"```\n",
"\n",
"If you do this then **the runner will never finish**."
]
},
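{
"cell_type": "markdown",
"metadata": {},
"source": [
"(A sketch, not part of the original notebook: since the runner exposes its `asyncio` task as `runner.task`, you can `await` it instead of busy-waiting. Awaiting yields control back to the event loop, so the runner keeps working; this assumes an IPython front-end that supports `await` in cells.)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Awaiting the runner's task suspends this cell without blocking the kernel's\n",
"# event loop, so the runner can keep evaluating points until the goal is met.\n",
"await runner.task"
]
},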
{
"cell_type": "markdown",
"metadata": {},
"source": [
"What to do if you don't care about live plotting, and just want to run something until its done?\n",
"\n",
"The simplest way to accomplish this is to use `adaptive.BlockingRunner`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
"adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.005)\n",
"# This will only get run after the runner has finished\n",
"learner.plot()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reproducibility"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By default `adaptive` runners evaluate the learned function in parallel across several cores. The runners are also opportunistic, in that as soon as a result is available they will feed it to the learner and request another point to replace the one that just finished.\n",
"\n",
"Because the order in which computations complete is non-deterministic, this means that the runner behaves in a non-deterministic way. Adaptive makes this choice because in many cases the speedup from parallel execution is worth sacrificing the \"purity\" of exactly reproducible computations.\n",
"\n",
"Nevertheless it is still possible to run a learner in a deterministic way with adaptive.\n",
"\n",
"The simplest way is to use `adaptive.runner.simple` to run your learner:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
"\n",
"# blocks until completion\n",
"adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.002)\n",
"\n",
"learner.plot()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that unlike `adaptive.Runner`, `adaptive.runner.simple` *blocks* until it is finished.\n",
"\n",
"If you want to enable determinism, want to continue using the non-blocking `adaptive.Runner`, you can use the `adaptive.runner.SequentialExecutor`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from adaptive.runner import SequentialExecutor\n",
"\n",
"learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
"\n",
"# blocks until completion\n",
"runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.loss() < 0.002)\n",
"runner.live_info()\n",
"runner.live_plot(update_interval=0.1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cancelling a runner"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Sometimes you want to interactively explore a parameter space, and want the function to be evaluated at finer and finer resolution and manually control when the calculation stops.\n",
"\n",
"If no `goal` is provided to a runner then the runner will run until cancelled.\n",
"\n",
"`runner.live_info()` will provide a button that can be clicked to stop the runner. You can also stop the runner programatically using `runner.cancel()`."
]
},
{
"cell_type": "code",
"execution_count": null,
"learner = adaptive.Learner1D(f, bounds=(-1, 1))\n",
"runner.live_info()\n",
"runner.live_plot(update_interval=0.1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"runner.cancel()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(runner.status())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Debugging Problems "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Runners work in the background with respect to the IPython kernel, which makes it convenient, but also means that inspecting errors is more difficult because exceptions will not be raised directly in the notebook. Often the only indication you will have that something has gone wrong is that nothing will be happening.\n",
"\n",
"Let's look at the following example, where the function to be learned will raise an exception 10% of the time."
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"def will_raise(x):\n",
" from random import random\n",
" from time import sleep\n",
" \n",
" sleep(random())\n",
" if random() < 0.1:\n",
" raise RuntimeError('something went wrong!')\n",
" return x**2\n",
" \n",
"learner = adaptive.Learner1D(will_raise, (-1, 1))\n",
"runner = adaptive.Runner(learner) # without 'goal' the runner will run forever unless cancelled\n",
"runner.live_plot()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above runner should continue forever, but we notice that it stops after a few points are evaluated.\n",
"\n",
"First we should check that the runner has really finished:"
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"runner.task.done()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If it has indeed finished then we should check the `result` of the runner. This should be `None` if the runner stopped successfully. If the runner stopped due to an exception then asking for the result will raise the exception with the stack trace:"
]
},
{
"cell_type": "code",
"execution_count": null,