Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (20), showing 730 additions and 301 deletions.
......@@ -6,9 +6,3 @@ Below is a list of the contributors to Adaptive:
+ [Christoph Groth](<http://inac.cea.fr/Pisp/christoph.groth/>)
+ Jorn Hoofwijk
+ [Joseph Weston](<https://joseph.weston.cloud>)
For a full list of contributors, run
```
git log --pretty=format:"%an" | sort | uniq
```
# ![][logo] adaptive
[![PyPI](https://img.shields.io/pypi/v/adaptive.svg)](https://pypi.python.org/pypi/adaptive)
[![Conda](https://anaconda.org/conda-forge/adaptive/badges/installer/conda.svg)](https://anaconda.org/conda-forge/adaptive)
[![Downloads](https://anaconda.org/conda-forge/adaptive/badges/downloads.svg)](https://anaconda.org/conda-forge/adaptive)
[![pipeline status](https://gitlab.kwant-project.org/qt/adaptive/badges/master/pipeline.svg)](https://gitlab.kwant-project.org/qt/adaptive/pipelines)
[![DOI](https://zenodo.org/badge/113714660.svg)](https://zenodo.org/badge/latestdoi/113714660)
[![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb)
[![Join the chat at https://gitter.im/python-adaptive/adaptive](https://img.shields.io/gitter/room/nwjs/nw.js.svg)](https://gitter.im/python-adaptive/adaptive)
**Tools for adaptive parallel sampling of mathematical functions.**
`adaptive` is an [open-source](LICENSE) Python library designed to make adaptive parallel function evaluation simple.
With `adaptive` you just supply a function with its bounds, and it will be evaluated at the "best" points in parameter space.
With just a few lines of code you can evaluate functions on a computing cluster, live-plot the data as it returns, and fine-tune the adaptive sampling algorithm.
Check out the `adaptive` [example notebook `learner.ipynb`](learner.ipynb) (or run it [live on Binder](https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb)) to see examples of how to use `adaptive`.
**WARNING: `adaptive` is still in a beta development stage**
## Implemented algorithms
The core concept in `adaptive` is that of a *learner*. A *learner* samples
a function at the best places in its parameter space to get maximum
"information" about the function. As it evaluates the function
at more and more points in the parameter space, it gets a better idea of where
the best places are to sample next.
Of course, what qualifies as the "best places" will depend on your application domain!
`adaptive` makes some reasonable default choices, but the details of the adaptive
sampling are completely customizable.
The following learners are implemented:
* `Learner1D`, for 1D functions `f: ℝ → ℝ^N`,
* `Learner2D`, for 2D functions `f: ℝ^2 → ℝ^N`,
* `LearnerND`, for ND functions `f: ℝ^N → ℝ^M`,
* `AverageLearner`, for stochastic functions where you want to average the result over many evaluations,
* `IntegratorLearner`, for when you want to integrate a 1D function `f: ℝ → ℝ`,
* `BalancingLearner`, for when you want to run several learners at once, selecting the "best" one each time you get more points.
In addition to the learners, `adaptive` also provides primitives for running
the sampling across several cores and even several machines, with built-in support
for [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html),
[`ipyparallel`](https://ipyparallel.readthedocs.io/en/latest/)
and [`distributed`](https://distributed.readthedocs.io/en/latest/).
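A minimal sketch of this workflow (the function and the loss goal here are only illustrative):

```python
import adaptive

def peak(x, offset=0.123):
    # a sharp peak on a linear background
    a = 0.01
    return x + a**2 / (a**2 + (x - offset)**2)

learner = adaptive.Learner1D(peak, bounds=(-1, 1))
# block until the loss goal is reached; inside a Jupyter notebook you would
# typically use adaptive.Runner instead, which runs in the background
adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)
```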
## Examples
<img src="https://user-images.githubusercontent.com/6897215/38739170-6ac7c014-3f34-11e8-9e8f-93b3a3a3d61b.gif" width='20%'> </img>
<img src="https://user-images.githubusercontent.com/6897215/35219611-ac8b2122-ff73-11e7-9332-adffab64a8ce.gif" width='40%'> </img>
## Installation
`adaptive` works with Python 3.6 and higher on Linux, Windows, or Mac, and provides optional extensions for working with the Jupyter/IPython Notebook.
The recommended way to install adaptive is using `conda`:
```bash
conda install -c conda-forge adaptive
```
`adaptive` is also available on PyPI:
```bash
pip install adaptive[notebook]
```
The `[notebook]` above will also install the optional dependencies for running `adaptive` inside
a Jupyter notebook.
## Development
Clone the repository and run `setup.py develop` to add a link to the cloned repo into your
Python path:
```
git clone git@github.com:python-adaptive/adaptive.git
cd adaptive
python3 setup.py develop
```
We highly recommend using a Conda environment or a virtualenv to manage the versions of your installed
packages while working on `adaptive`.
In order to not pollute the history with the output of the notebooks, please set up the git filter by executing
```bash
python ipynb_filter.py
```
in the repository.
## Credits
We would like to give credit to the following people:
- Pedro Gonnet for his implementation of [`CQUAD`](https://www.gnu.org/software/gsl/manual/html_node/CQUAD-doubly_002dadaptive-integration.html), "Algorithm 4" as described in "Increasing the Reliability of Adaptive Quadrature Using Explicit Interpolants", P. Gonnet, ACM Transactions on Mathematical Software, 37 (3), art. no. 26, 2010.
- Pauli Virtanen for his `AdaptiveTriSampling` script (no longer available online since SciPy Central went down) which served as inspiration for the [`Learner2D`](adaptive/learner/learner2D.py).
For general discussion, we have a [Gitter chat channel](https://gitter.im/python-adaptive/adaptive). If you find any bugs or have any feature suggestions please file a GitLab [issue](https://gitlab.kwant-project.org/qt/adaptive/issues/new?issue) or submit a [merge request](https://gitlab.kwant-project.org/qt/adaptive/merge_requests).
[logo]: https://gitlab.kwant-project.org/qt/adaptive/uploads/d20444093920a4a0499e165b5061d952/logo.png "adaptive logo"
.. summary-start
.. _logo-adaptive:
|image0| adaptive
=================
|PyPI| |Conda| |Downloads| |pipeline status| |DOI| |Binder| |Join the
chat at https://gitter.im/python-adaptive/adaptive| |Documentation Status|
**Tools for adaptive parallel sampling of mathematical functions.**
``adaptive`` is an open-source Python library designed to
make adaptive parallel function evaluation simple. With ``adaptive`` you
just supply a function with its bounds, and it will be evaluated at the
“best” points in parameter space. With just a few lines of code you can
evaluate functions on a computing cluster, live-plot the data as it
returns, and fine-tune the adaptive sampling algorithm.
Run the ``adaptive`` example notebook `live on
Binder <https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb>`_
to see examples of how to use ``adaptive`` or visit the
`tutorial on Read the Docs <https://adaptive.readthedocs.io/en/latest/tutorial/tutorial.html>`__.
.. summary-end
**WARNING: adaptive is still in a beta development stage**
.. not-in-documentation-start
Implemented algorithms
----------------------
The core concept in ``adaptive`` is that of a *learner*. A *learner*
samples a function at the best places in its parameter space to get
maximum “information” about the function. As it evaluates the function
at more and more points in the parameter space, it gets a better idea of
where the best places are to sample next.
Of course, what qualifies as the “best places” will depend on your
application domain! ``adaptive`` makes some reasonable default choices,
but the details of the adaptive sampling are completely customizable.
The following learners are implemented:
- ``Learner1D``, for 1D functions ``f: ℝ → ℝ^N``,
- ``Learner2D``, for 2D functions ``f: ℝ^2 → ℝ^N``,
- ``LearnerND``, for ND functions ``f: ℝ^N → ℝ^M``,
- ``AverageLearner``, for stochastic functions where you want to
  average the result over many evaluations,
- ``IntegratorLearner``, for
  when you want to integrate a 1D function ``f: ℝ → ℝ``,
- ``BalancingLearner``, for when you want to run several learners at once,
selecting the “best” one each time you get more points.
In addition to the learners, ``adaptive`` also provides primitives for
running the sampling across several cores and even several machines,
with built-in support for
`concurrent.futures <https://docs.python.org/3/library/concurrent.futures.html>`_,
`ipyparallel <https://ipyparallel.readthedocs.io/en/latest/>`_ and
`distributed <https://distributed.readthedocs.io/en/latest/>`_.
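As an illustrative sketch (the function, goal, and worker count here are
arbitrary), the evaluations can be spread over a process pool like this:

.. code:: python

    from concurrent.futures import ProcessPoolExecutor

    import adaptive

    def f(x):
        return x ** 2

    learner = adaptive.Learner1D(f, bounds=(-1, 1))
    # evaluate 'f' in four worker processes until 1000 points are sampled
    adaptive.BlockingRunner(learner, goal=lambda l: l.npoints > 1000,
                            executor=ProcessPoolExecutor(max_workers=4))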
Examples
--------
.. raw:: html
<img src="https://user-images.githubusercontent.com/6897215/38739170-6ac7c014-3f34-11e8-9e8f-93b3a3a3d61b.gif" width='20%'> </img> <img src="https://user-images.githubusercontent.com/6897215/35219611-ac8b2122-ff73-11e7-9332-adffab64a8ce.gif" width='40%'> </img>
.. not-in-documentation-end
Installation
------------
``adaptive`` works with Python 3.6 and higher on Linux, Windows, or Mac,
and provides optional extensions for working with the Jupyter/IPython
Notebook.
The recommended way to install adaptive is using ``conda``:
.. code:: bash
conda install -c conda-forge adaptive
``adaptive`` is also available on PyPI:
.. code:: bash
pip install adaptive[notebook]
The ``[notebook]`` above will also install the optional dependencies for
running ``adaptive`` inside a Jupyter notebook.
Development
-----------
Clone the repository and run ``setup.py develop`` to add a link to the
cloned repo into your Python path:
.. code:: bash
git clone git@github.com:python-adaptive/adaptive.git
cd adaptive
python3 setup.py develop
We highly recommend using a Conda environment or a virtualenv to manage
the versions of your installed packages while working on ``adaptive``.
In order to not pollute the history with the output of the notebooks,
please set up the git filter by executing
.. code:: bash
python ipynb_filter.py
in the repository.
Credits
-------
We would like to give credit to the following people:
- Pedro Gonnet for his implementation of `CQUAD <https://www.gnu.org/software/gsl/manual/html_node/CQUAD-doubly_002dadaptive-integration.html>`_,
“Algorithm 4” as described in “Increasing the Reliability of Adaptive
Quadrature Using Explicit Interpolants”, P. Gonnet, ACM Transactions on
Mathematical Software, 37 (3), art. no. 26, 2010.
- Pauli Virtanen for his ``AdaptiveTriSampling`` script (no longer
available online since SciPy Central went down) which served as
inspiration for the `~adaptive.Learner2D`.
For general discussion, we have a `Gitter chat
channel <https://gitter.im/python-adaptive/adaptive>`_. If you find any
bugs or have any feature suggestions please file a GitLab
`issue <https://gitlab.kwant-project.org/qt/adaptive/issues/new?issue>`_
or submit a `merge
request <https://gitlab.kwant-project.org/qt/adaptive/merge_requests>`_.
.. references-start
.. |image0| image:: https://gitlab.kwant-project.org/qt/adaptive/uploads/d20444093920a4a0499e165b5061d952/logo.png
.. |PyPI| image:: https://img.shields.io/pypi/v/adaptive.svg
:target: https://pypi.python.org/pypi/adaptive
.. |Conda| image:: https://anaconda.org/conda-forge/adaptive/badges/installer/conda.svg
:target: https://anaconda.org/conda-forge/adaptive
.. |Downloads| image:: https://anaconda.org/conda-forge/adaptive/badges/downloads.svg
:target: https://anaconda.org/conda-forge/adaptive
.. |pipeline status| image:: https://gitlab.kwant-project.org/qt/adaptive/badges/master/pipeline.svg
:target: https://gitlab.kwant-project.org/qt/adaptive/pipelines
.. |DOI| image:: https://zenodo.org/badge/113714660.svg
:target: https://zenodo.org/badge/latestdoi/113714660
.. |Binder| image:: https://mybinder.org/badge.svg
:target: https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb
.. |Join the chat at https://gitter.im/python-adaptive/adaptive| image:: https://img.shields.io/gitter/room/nwjs/nw.js.svg
:target: https://gitter.im/python-adaptive/adaptive
.. |Documentation Status| image:: https://readthedocs.org/projects/adaptive/badge/?version=latest
:target: https://adaptive.readthedocs.io/en/latest/?badge=latest
.. references-end
......@@ -8,15 +8,15 @@ from . import learner
from . import runner
from . import utils
from .learner import (Learner1D, Learner2D, LearnerND, AverageLearner,
BalancingLearner, make_datasaver, DataSaver,
IntegratorLearner)
from .learner import (BaseLearner, Learner1D, Learner2D, LearnerND,
AverageLearner, BalancingLearner, make_datasaver,
DataSaver, IntegratorLearner)
with suppress(ImportError):
# Only available if 'scikit-optimize' is installed
from .learner import SKOptLearner
from .runner import Runner, BlockingRunner
from .runner import Runner, AsyncRunner, BlockingRunner
from ._version import __version__
del _version
......
......@@ -17,9 +17,9 @@ class AverageLearner(BaseLearner):
Parameters
----------
atol : float
Desired absolute tolerance
Desired absolute tolerance.
rtol : float
Desired relative tolerance
Desired relative tolerance.
Attributes
----------
......@@ -125,3 +125,9 @@ class AverageLearner(BaseLearner):
num_bins = int(max(5, sqrt(self.npoints)))
vals = hv.Points(vals)
return hv.operation.histogram(vals, num_bins=num_bins, dimension=1)
def _get_data(self):
return (self.data, self.npoints, self.sum_f, self.sum_f_sq)
def _set_data(self, data):
self.data, self.npoints, self.sum_f, self.sum_f_sq = data
......@@ -3,6 +3,7 @@ from collections import defaultdict
from contextlib import suppress
from functools import partial
from operator import itemgetter
import os.path
import numpy as np
......@@ -21,27 +22,32 @@ class BalancingLearner(BaseLearner):
Parameters
----------
learners : sequence of BaseLearner
learners : sequence of `BaseLearner`
The learners from which to choose. These must all have the same type.
cdims : sequence of dicts, or (keys, iterable of values), optional
Constant dimensions; the parameters that label the learners. Used
in `plot`.
Example inputs that all give identical results:
- sequence of dicts:
>>> cdims = [{'A': True, 'B': 0},
... {'A': True, 'B': 1},
... {'A': False, 'B': 0},
... {'A': False, 'B': 1}]
- tuple with (keys, iterable of values):
>>> cdims = (['A', 'B'], itertools.product([True, False], [0, 1]))
>>> cdims = (['A', 'B'], [(True, 0), (True, 1),
... (False, 0), (False, 1)])
strategy : 'loss_improvements' (default), 'loss', or 'npoints'
The points that the 'BalancingLearner' chooses can be based on:
The points that the `BalancingLearner` chooses can be based on:
the best 'loss_improvements', the smallest total 'loss' of the
child learners, or the number of points per learner, using 'npoints'.
One can dynamically change the strategy while the simulation is
running by changing the 'learner.strategy' attribute.
running by changing the ``learner.strategy`` attribute.
Notes
-----
......@@ -50,7 +56,7 @@ class BalancingLearner(BaseLearner):
compared*. For the moment we enforce this restriction by requiring that
all learners are the same type but (depending on the internals of the
learner) it may be that the loss cannot be compared *even between learners
of the same type*. In this case the BalancingLearner will behave in an
of the same type*. In this case the `BalancingLearner` will behave in an
undefined way.
"""
......@@ -183,28 +189,34 @@ class BalancingLearner(BaseLearner):
cdims : sequence of dicts, or (keys, iterable of values), optional
Constant dimensions; the parameters that label the learners.
Example inputs that all give identical results:
- sequence of dicts:
>>> cdims = [{'A': True, 'B': 0},
... {'A': True, 'B': 1},
... {'A': False, 'B': 0},
... {'A': False, 'B': 1}]
- tuple with (keys, iterable of values):
>>> cdims = (['A', 'B'], itertools.product([True, False], [0, 1]))
>>> cdims = (['A', 'B'], [(True, 0), (True, 1),
... (False, 0), (False, 1)])
plotter : callable, optional
A function that takes the learner as an argument and returns a
holoviews object. By default learner.plot() will be called.
holoviews object. By default ``learner.plot()`` will be called.
dynamic : bool, default True
Return a holoviews.DynamicMap if True, else a holoviews.HoloMap.
The DynamicMap is rendered as the sliders change and can therefore
not be exported to html. The HoloMap does not have this problem.
Return a `holoviews.core.DynamicMap` if True, else a
`holoviews.core.HoloMap`. The `~holoviews.core.DynamicMap` is
rendered as the sliders change and can therefore not be exported
to html. The `~holoviews.core.HoloMap` does not have this problem.
Returns
-------
dm : holoviews.DynamicMap object (default) or holoviews.HoloMap object
A DynamicMap (dynamic=True) or HoloMap (dynamic=False) with
sliders that are defined by 'cdims'.
dm : `holoviews.core.DynamicMap` (default) or `holoviews.core.HoloMap`
A `DynamicMap` (dynamic=True) or `HoloMap` (dynamic=False) with
sliders that are defined by `cdims`.
"""
hv = ensure_holoviews()
cdims = cdims or self._cdims_default
......@@ -248,13 +260,13 @@ class BalancingLearner(BaseLearner):
def from_product(cls, f, learner_type, learner_kwargs, combos):
"""Create a `BalancingLearner` with learners of all combinations of
named variables’ values. The `cdims` will be set correctly, so calling
`learner.plot` will be a `holoviews.HoloMap` with the correct labels.
`learner.plot` will be a `holoviews.core.HoloMap` with the correct labels.
Parameters
----------
f : callable
Function to learn, must take the arguments provided in `combos`.
learner_type : BaseLearner
learner_type : `BaseLearner`
The learner that should wrap the function. For example `Learner1D`.
learner_kwargs : dict
Keyword arguments for the `learner_type`. For example `dict(bounds=[0, 1])`.
......@@ -291,3 +303,75 @@ class BalancingLearner(BaseLearner):
learner = learner_type(function=partial(f, **combo), **learner_kwargs)
learners.append(learner)
return cls(learners, cdims=arguments)
def save(self, folder, compress=True):
"""Save the data of the child learners into pickle files
in a directory.
Parameters
----------
folder : str
Directory in which the learners' data will be saved.
compress : bool, default True
Compress the data upon saving using 'gzip'. When saving
using compression, one must load it with compression too.
Notes
-----
The child learners need to have a 'fname' attribute in order to use
this method.
Example
-------
>>> def combo_fname(val):
...     return '__'.join([f'{k}_{v}' for k, v in val.items()]) + '.p'
...
... def f(x, a, b): return a * x**2 + b
...
>>> learners = []
>>> for combo in adaptive.utils.named_product(a=[1, 2], b=[1]):
...     l = Learner1D(functools.partial(f, **combo), bounds=(-1, 1))
...     l.fname = combo_fname(combo)  # 'a_1__b_1.p', 'a_2__b_1.p' etc.
...     learners.append(l)
>>> learner = BalancingLearner(learners)
>>> # Run the learner
>>> runner = adaptive.Runner(learner)
>>> # Then save
>>> learner.save('data_folder')  # use 'load' in the same way
"""
if len(self.learners) != len(set(l.fname for l in self.learners)):
raise RuntimeError("The 'learner.fname's are not all unique.")
for l in self.learners:
l.save(os.path.join(folder, l.fname), compress=compress)
def load(self, folder, compress=True):
"""Load the data of the child learners from pickle files
in a directory.
Parameters
----------
folder : str
Directory from which the learners' data will be loaded.
compress : bool, default True
If the data is compressed when saved, one must load it
with compression too.
Notes
-----
The child learners need to have a 'fname' attribute in order to use
this method.
Example
-------
See the example in the 'BalancingLearner.save' docstring.
"""
for l in self.learners:
l.load(os.path.join(folder, l.fname), compress=compress)
def _get_data(self):
return [l._get_data() for l in self.learners]
def _set_data(self, data):
for l, _data in zip(self.learners, data):
l._set_data(_data)
# -*- coding: utf-8 -*-
import abc
from contextlib import suppress
from copy import deepcopy
from ..utils import save, load
class BaseLearner(metaclass=abc.ABCMeta):
"""Base class for algorithms for learning a function 'f: X → Y'.
......@@ -11,14 +14,16 @@ class BaseLearner(metaclass=abc.ABCMeta):
function : callable: X → Y
The function to learn.
data : dict: X → Y
'function' evaluated at certain points.
`function` evaluated at certain points.
The values can be 'None', which indicates that the point
will be evaluated, but that we do not have the result yet.
npoints : int, optional
The number of evaluated points that have been added to the learner.
Subclasses do not *have* to implement this attribute.
Subclasses may define a 'plot' method that takes no parameters
Notes
-----
Subclasses may define a ``plot`` method that takes no parameters
and returns a holoviews plot.
"""
......@@ -75,15 +80,91 @@ class BaseLearner(metaclass=abc.ABCMeta):
n : int
The number of points to choose.
tell_pending : bool, default: True
If True, add the chosen points to this
learner's 'data' with 'None' for the 'y'
values. Set this to False if you do not
If True, add the chosen points to this learner's
`pending_points`. Set this to False if you do not
want to modify the state of the learner.
"""
pass
@abc.abstractmethod
def _get_data(self):
pass
@abc.abstractmethod
def _set_data(self, data):
pass
def copy_from(self, other):
"""Copy over the data from another learner.
Parameters
----------
other : BaseLearner object
The learner from which the data is copied.
"""
self._set_data(other._get_data())
def save(self, fname=None, compress=True):
"""Save the data of the learner into a pickle file.
Parameters
----------
fname : str, optional
The filename of the learner's pickle data file. If None use
the 'fname' attribute, like 'learner.fname = "example.p"'.
compress : bool, default True
Compress the data upon saving using 'gzip'. When saving
using compression, one must load it with compression too.
Notes
-----
There are **two ways** of naming the files:
1. Using the ``fname`` argument in ``learner.save(fname='example.p')``
2. Setting the ``fname`` attribute, like
``learner.fname = "data/example.p"`` and then ``learner.save()``.
"""
fname = fname or self.fname
data = self._get_data()
save(fname, data, compress)
def load(self, fname=None, compress=True):
"""Load the data of a learner from a pickle file.
Parameters
----------
fname : str, optional
The filename of the saved learner's pickled data file.
If None use the 'fname' attribute, like
'learner.fname = "example.p"'.
compress : bool, default True
If the data is compressed when saved, one must load it
with compression too.
Notes
-----
See the notes in the `save` docstring.
"""
fname = fname or self.fname
with suppress(FileNotFoundError, EOFError):
data = load(fname, compress)
self._set_data(data)
def __getstate__(self):
return deepcopy(self.__dict__)
def __setstate__(self, state):
self.__dict__ = state
@property
def fname(self):
# This is a property so that it will also be available in the DataSaver
try:
return self._fname
except AttributeError:
raise AttributeError("Set 'learner.fname' or use the 'fname'"
" argument when using 'learner.save' or 'learner.load'.")
@fname.setter
def fname(self, fname):
self._fname = fname
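# A sketch of the save/load round trip these methods enable
# (the learner, function, and file name are illustrative):
#
#     learner = Learner1D(f, bounds=(-1, 1))
#     # ... run the learner ...
#     learner.save('data/learner.p')     # gzip-compressed pickle by default
#
#     fresh = Learner1D(f, bounds=(-1, 1))
#     fresh.load('data/learner.p')       # restores the data via _set_data
#     fresh.copy_from(learner)           # or transfer the data in memory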
......@@ -2,13 +2,16 @@
from collections import OrderedDict
import functools
from .base_learner import BaseLearner
from ..utils import copy_docstring_from
class DataSaver:
"""Save extra data associated with the values that need to be learned.
Parameters
----------
learner : Learner object
learner : `~adaptive.BaseLearner` instance
The learner that needs to be wrapped.
arg_picker : function
Function that returns the argument that needs to be learned.
......@@ -16,10 +19,11 @@ class DataSaver:
Example
-------
Imagine we have a function that returns a dictionary
of the form: `{'y': y, 'err_est': err_est}`.
of the form: ``{'y': y, 'err_est': err_est}``.
>>> from operator import itemgetter
>>> _learner = Learner1D(f, bounds=(-1.0, 1.0))
>>> learner = DataSaver(_learner, arg_picker=operator.itemgetter('y'))
>>> learner = DataSaver(_learner, arg_picker=itemgetter('y'))
"""
def __init__(self, learner, arg_picker):
......@@ -39,6 +43,25 @@ class DataSaver:
def tell_pending(self, x):
self.learner.tell_pending(x)
def _get_data(self):
return self.learner._get_data(), self.extra_data
def _set_data(self, data):
learner_data, self.extra_data = data
self.learner._set_data(learner_data)
@copy_docstring_from(BaseLearner.save)
def save(self, fname=None, compress=True):
# We copy this method because the 'DataSaver' is not a
# subclass of the 'BaseLearner'.
BaseLearner.save(self, fname, compress)
@copy_docstring_from(BaseLearner.load)
def load(self, fname=None, compress=True):
# We copy this method because the 'DataSaver' is not a
# subclass of the 'BaseLearner'.
BaseLearner.load(self, fname, compress)
def _ds(learner_type, arg_picker, *args, **kwargs):
args = args[2:] # functools.partial passes the first 2 arguments in 'args'!
......@@ -46,12 +69,12 @@ def _ds(learner_type, arg_picker, *args, **kwargs):
def make_datasaver(learner_type, arg_picker):
"""Create a DataSaver of a `learner_type` that can be instantiated
"""Create a `DataSaver` of a `learner_type` that can be instantiated
with the `learner_type`'s keyword arguments.
Parameters
----------
learner_type : BaseLearner
learner_type : `~adaptive.BaseLearner` type
The learner type that needs to be wrapped.
arg_picker : function
Function that returns the argument that needs to be learned.
......@@ -59,15 +82,16 @@ def make_datasaver(learner_type, arg_picker):
Example
-------
Imagine we have a function that returns a dictionary
of the form: `{'y': y, 'err_est': err_est}`.
of the form: ``{'y': y, 'err_est': err_est}``.
>>> DataSaver = make_datasaver(Learner1D,
... arg_picker=operator.itemgetter('y'))
>>> from operator import itemgetter
>>> DataSaver = make_datasaver(Learner1D, arg_picker=itemgetter('y'))
>>> learner = DataSaver(function=f, bounds=(-1.0, 1.0))
Or when using `BalacingLearner.from_product`:
Or when using `adaptive.BalancingLearner.from_product`:
>>> learner_type = make_datasaver(adaptive.Learner1D,
... arg_picker=operator.itemgetter('y'))
... arg_picker=itemgetter('y'))
>>> learner = adaptive.BalancingLearner.from_product(
... jacobi, learner_type, dict(bounds=(0, 1)), combos)
"""
......
......@@ -330,7 +330,7 @@ class IntegratorLearner(BaseLearner):
The integral value in `self.bounds`.
err : float
The absolute error associated with `self.igral`.
max_ivals : int, default 1000
max_ivals : int, default: 1000
Maximum number of intervals that can be present in the calculation
of the integral. If this amount exceeds max_ivals, the interval
with the smallest error will be discarded.
......@@ -525,3 +525,30 @@ class IntegratorLearner(BaseLearner):
xs, ys = zip(*[(x, y) for ival in ivals
for x, y in sorted(ival.done_points.items())])
return hv.Path((xs, ys))
def _get_data(self):
# Change the defaultdict of SortedSets to a normal dict of sets.
x_mapping = {k: set(v) for k, v in self.x_mapping.items()}
return (self.priority_split,
self.done_points,
self.pending_points,
self._stack,
x_mapping,
self.ivals,
self.first_ival)
def _set_data(self, data):
self.priority_split, self.done_points, self.pending_points, \
self._stack, x_mapping, self.ivals, self.first_ival = data
# Add the pending_points to the _stack such that they are evaluated again
for x in self.pending_points:
if x not in self._stack:
self._stack.append(x)
# x_mapping is a data structure that can't easily be saved
# so we recreate it here
self.x_mapping = defaultdict(lambda: SortedSet([], key=attrgetter('rdepth')))
for k, _set in x_mapping.items():
self.x_mapping[k].update(_set)
......@@ -94,17 +94,24 @@ class Learner1D(BaseLearner):
If not provided, then a default is used, which uses the scaled distance
in the x-y plane as the loss. See the notes for more details.
Attributes
----------
data : dict
Sampled points and values.
pending_points : set
Points that still have to be evaluated.
Notes
-----
'loss_per_interval' takes 3 parameters: interval, scale, and function_values,
and returns a scalar; the loss over the interval.
`loss_per_interval` takes 3 parameters: ``interval``, ``scale``, and
``function_values``, and returns a scalar; the loss over the interval.
interval : (float, float)
The bounds of the interval.
scale : (float, float)
The x and y scale over all the intervals, useful for rescaling the
interval loss.
function_values : dict(float -> float)
function_values : dict(float float)
A map containing evaluated function values. It is guaranteed
to have values for both of the points in 'interval'.
"""
......@@ -363,7 +370,7 @@ class Learner1D(BaseLearner):
x_left, x_right = ival
a, b = to_interpolate[-1] if to_interpolate else (None, None)
if b == x_left and (a, b) not in self.losses:
# join (a, b) and (x_left, x_right) --> (a, x_right)
# join (a, b) and (x_left, x_right) (a, x_right)
to_interpolate[-1] = (a, x_right)
else:
to_interpolate.append((x_left, x_right))
......@@ -478,3 +485,9 @@ class Learner1D(BaseLearner):
self.pending_points = set()
self.losses_combined = deepcopy(self.losses)
self.neighbors_combined = deepcopy(self.neighbors)
def _get_data(self):
return self.data
def _set_data(self, data):
self.tell_many(*zip(*data.items()))
# -*- coding: utf-8 -*-
from collections import OrderedDict
from copy import copy
import itertools
from math import sqrt
......@@ -62,8 +63,8 @@ def uniform_loss(ip):
def resolution_loss(ip, min_distance=0, max_distance=1):
"""Loss function that is similar to the default loss function, but you can
set the maximum and minimum size of a triangle.
"""Loss function that is similar to the `default_loss` function, but you
can set the maximum and minimum size of a triangle.
Works with `~adaptive.Learner2D` only.
......@@ -101,8 +102,8 @@ def resolution_loss(ip, min_distance=0, max_distance=1):
def minimize_triangle_surface_loss(ip):
"""Loss function that is similar to the default loss function in the
`Learner1D`. The loss is the area spanned by the 3D vectors of the
vertices.
`~adaptive.Learner1D`. The loss is the area spanned by the 3D
vectors of the vertices.
Works with `~adaptive.Learner2D` only.
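# A sketch of plugging one of these losses into a Learner2D;
# functools.partial fixes the extra keyword arguments:
#
#     from functools import partial
#     loss = partial(resolution_loss, min_distance=0.01)
#     learner = Learner2D(f, bounds=[(-1, 1), (-1, 1)],
#                         loss_per_triangle=loss)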
......@@ -206,15 +207,15 @@ class Learner2D(BaseLearner):
pending_points : set
Points that still have to be evaluated and are currently
interpolated, see `data_combined`.
stack_size : int, default 10
stack_size : int, default: 10
The size of the new candidate points stack. Set it to 1
to recalculate the best points at each call to `ask`.
aspect_ratio : float, int, default 1
Average ratio of `x` span over `y` span of a triangle. If
there is more detail in either `x` or `y` the `aspect_ratio`
needs to be adjusted. When `aspect_ratio > 1` the
triangles will be stretched along `x`, otherwise
along `y`.
aspect_ratio : float, int, default: 1
Average ratio of ``x`` span over ``y`` span of a triangle. If
there is more detail in either ``x`` or ``y`` the ``aspect_ratio``
needs to be adjusted. When ``aspect_ratio > 1`` the
triangles will be stretched along ``x``, otherwise
along ``y``.
Methods
-------
......@@ -239,13 +240,13 @@ class Learner2D(BaseLearner):
This sampling procedure is not extremely fast, so to benefit from
it, your function needs to be slow enough to compute.
'loss_per_triangle' takes a single parameter, 'ip', which is a
`loss_per_triangle` takes a single parameter, `ip`, which is a
`scipy.interpolate.LinearNDInterpolator`. You can use the
*undocumented* attributes 'tri' and 'values' of 'ip' to get a
*undocumented* attributes ``tri`` and ``values`` of `ip` to get a
`scipy.spatial.Delaunay` and a vector of function values.
These can be used to compute the loss. The functions
`adaptive.learner.learner2D.areas` and
`adaptive.learner.learner2D.deviations` can be used to calculate the
`~adaptive.learner.learner2D.areas` and
`~adaptive.learner.learner2D.deviations` can be used to calculate the
areas and deviations from a linear interpolation
over each triangle.
"""
......@@ -464,19 +465,21 @@ class Learner2D(BaseLearner):
Number of points in x and y. If None (default) this number is
evaluated by looking at the size of the smallest triangle.
tri_alpha : float
The opacity (0 <= tri_alpha <= 1) of the triangles overlayed on
top of the image. By default the triangulation is not visible.
The opacity ``(0 <= tri_alpha <= 1)`` of the triangles overlayed
on top of the image. By default the triangulation is not visible.
Returns
-------
plot : holoviews.Overlay or holoviews.HoloMap
A `holoviews.Overlay` of `holoviews.Image * holoviews.EdgePaths`.
If the `learner.function` returns a vector output, a
`holoviews.HoloMap` of the `holoviews.Overlay`s will be returned.
plot : `holoviews.core.Overlay` or `holoviews.core.HoloMap`
A `holoviews.core.Overlay` of
``holoviews.Image * holoviews.EdgePaths``. If the
`learner.function` returns a vector output, a
`holoviews.core.HoloMap` of the
`holoviews.core.Overlay`\s will be returned.
Notes
-----
The plot object that is returned if `learner.function` returns a
The plot object that is returned if ``learner.function`` returns a
vector *cannot* be used with the live_plotting functionality.
"""
hv = ensure_holoviews()
......@@ -520,3 +523,13 @@ class Learner2D(BaseLearner):
no_hover = dict(plot=dict(inspection_policy=None, tools=[]))
return im.opts(style=im_opts) * tris.opts(style=tri_opts, **no_hover)
def _get_data(self):
return self.data
def _set_data(self, data):
self.data = data
# Remove points from stack if they already exist
for point in copy(self._stack):
if point in self.data:
self._stack.pop(point)
......@@ -124,15 +124,8 @@ class LearnerND(BaseLearner):
Coordinates of the currently known points
values : numpy array
The values of each of the known points
Methods
-------
plot(n)
If dim == 2, this method will plot the function being learned.
plot_slice(cut_mapping, n)
plot a slice of the function using interpolation of the current data.
the cut_mapping contains the fixed parameters, the other parameters are
used as axes for plotting.
pending_points : set
Points that still have to be evaluated.
Notes
-----
......@@ -169,10 +162,10 @@ class LearnerND(BaseLearner):
self._tri = None
self._losses = dict()
self._pending_to_simplex = dict() # vertex -> simplex
self._pending_to_simplex = dict() # vertex simplex
# triangulation of the pending points inside a specific simplex
self._subtriangulations = dict() # simplex -> triangulation
self._subtriangulations = dict() # simplex triangulation
# scale to unit
self._transform = np.linalg.inv(np.diag(np.diff(bounds).flat))
......@@ -217,6 +210,8 @@ class LearnerND(BaseLearner):
@property
def tri(self):
"""A `adaptive.learner.Triangulation` instance with all the points
of the learner."""
if self._tri is not None:
return self._tri
......@@ -517,13 +512,14 @@ class LearnerND(BaseLearner):
return im.opts(style=im_opts) * tris.opts(style=tri_opts, **no_hover)
def plot_slice(self, cut_mapping, n=None):
"""Plot a 1d or 2d interpolated slice of a N-dimensional function.
"""Plot a 1D or 2D interpolated slice of a N-dimensional function.
Parameters
----------
cut_mapping : dict (int -> float)
cut_mapping : dict (int float)
for each fixed dimension the value, the other dimensions
are interpolated
are interpolated, e.g. ``cut_mapping = {0: 1}`` fixes
dimension 0 ('x') at the value 1.
n : int
the number of boxes in the interpolation grid along each axis
"""
......@@ -576,3 +572,9 @@ class LearnerND(BaseLearner):
return im.opts(style=dict(cmap='viridis'))
else:
raise ValueError("Only 1 or 2-dimensional plots can be generated.")
def _get_data(self):
return self.data
def _set_data(self, data):
self.tell_many(*zip(*data.items()))
......@@ -8,18 +8,18 @@ from ..utils import cache_latest
class SKOptLearner(Optimizer, BaseLearner):
"""Learn a function minimum using 'skopt.Optimizer'.
"""Learn a function minimum using ``skopt.Optimizer``.
This is an 'Optimizer' from 'scikit-optimize',
This is an ``Optimizer`` from ``scikit-optimize``,
with the necessary methods added to make it conform
to the 'adaptive' learner interface.
to the ``adaptive`` learner interface.
Parameters
----------
function : callable
The function to learn.
**kwargs :
Arguments to pass to 'skopt.Optimizer'.
Arguments to pass to ``skopt.Optimizer``.
"""
def __init__(self, function, **kwargs):
......
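# A sketch of constructing this learner (assuming scikit-optimize is
# installed); all keyword arguments are passed on to skopt.Optimizer:
#
#     learner = adaptive.SKOptLearner(f, dimensions=[(-2.0, 2.0)],
#                                     base_estimator='GP',
#                                     acq_func='gp_hedge')
#     runner = adaptive.Runner(learner, goal=lambda l: l.npoints > 40)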
......@@ -612,7 +612,7 @@ class Triangulation:
Parameters
----------
check : bool, default True
check : bool, default: True
Whether to raise an error if the computed hull is different from
stored.
......
......@@ -20,7 +20,7 @@ def notebook_extension():
try:
import ipywidgets
import holoviews
holoviews.notebook_extension('bokeh')
holoviews.notebook_extension('bokeh', logo=False)
_plotting_enabled = True
except ModuleNotFoundError:
warnings.warn("holoviews and (or) ipywidgets are not installed; plotting "
......@@ -56,21 +56,21 @@ def live_plot(runner, *, plotter=None, update_interval=2, name=None):
Parameters
----------
runner : Runner
runner : `Runner`
plotter : function
A function that takes the learner as an argument and returns a
holoviews object. By default learner.plot() will be called.
holoviews object. By default ``learner.plot()`` will be called.
update_interval : int
Number of seconds between the updates of the plot.
name : hashable
Name for the `live_plot` task in `adaptive.active_plotting_tasks`.
By default the name is `None` and if another task with the same name
already exists that other live_plot is canceled.
By default the name is None and if another task with the same name
already exists that other `live_plot` is canceled.
Returns
-------
dm : holoviews.DynamicMap
The plot that automatically updates every update_interval.
dm : `holoviews.core.DynamicMap`
The plot that automatically updates every `update_interval`.
"""
if not _plotting_enabled:
raise RuntimeError("Live plotting is not enabled; did you run "
......@@ -176,7 +176,7 @@ def _info_html(runner):
with suppress(Exception):
info.append(('latest loss', f'{runner.learner._cache["loss"]:.3f}'))
template = '<dt>{}</dt><dd>{}</dd>'
template = '<dt class="ignore-css">{}</dt><dd>{}</dd>'
table = '\n'.join(template.format(k, v) for k, v in info)
return f'''
......
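# A sketch of live plotting from a notebook (learner and goal are
# illustrative):
#
#     adaptive.notebook_extension()
#     runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
#     runner.live_plot(update_interval=1)  # returns a holoviews DynamicMap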
......@@ -54,64 +54,60 @@ else:
class BaseRunner:
"""Base class for runners that use concurrent.futures.Executors.
"""Base class for runners that use `concurrent.futures.Executors`.
Parameters
----------
learner : adaptive.learner.BaseLearner
learner : `~adaptive.BaseLearner` instance
goal : callable
The end condition for the calculation. This function must take
the learner as its sole argument, and return True when we should
stop requesting more points.
executor : concurrent.futures.Executor, distributed.Client,
or ipyparallel.Client, optional
executor : `concurrent.futures.Executor`, `distributed.Client`,\
or `ipyparallel.Client`, optional
The executor in which to evaluate the function to be learned.
If not provided, a new `ProcessPoolExecutor` is used on Unix systems
while on Windows a `distributed.Client` is used if `distributed` is
installed.
If not provided, a new `~concurrent.futures.ProcessPoolExecutor`
is used on Unix systems while on Windows a `distributed.Client`
is used if `distributed` is installed.
ntasks : int, optional
The number of concurrent function evaluations. Defaults to the number
of cores available in 'executor'.
of cores available in `executor`.
log : bool, default: False
If True, record the method calls made to the learner by this runner.
shutdown_executor : Bool, default: False
shutdown_executor : bool, default: False
If True, shutdown the executor when the runner has completed. If
'executor' is not provided then the executor created internally
`executor` is not provided then the executor created internally
by the runner is shut down, regardless of this parameter.
retries : int, default: 0
Maximum number of retries of a certain point 'x' in
'learner.function(x)'. After 'retries' is reached for 'x' the
point is present in 'runner.failed'.
Maximum number of retries of a certain point ``x`` in
``learner.function(x)``. After `retries` is reached for ``x``
the point is present in ``runner.failed``.
raise_if_retries_exceeded : bool, default: True
Raise the error after a point 'x' failed 'retries'.
Raise the error after a point ``x`` failed `retries`.
Attributes
----------
learner : Learner
learner : `~adaptive.BaseLearner` instance
The underlying learner. May be queried for its state.
log : list or None
Record of the method calls made to the learner, in the format
'(method_name, *args)'.
``(method_name, *args)``.
to_retry : dict
Mapping of {point: n_fails, ...}. When a point has failed
'runner.retries' times it is removed but will be present
in 'runner.tracebacks'.
Mapping of ``{point: n_fails, ...}``. When a point has failed
``runner.retries`` times it is removed but will be present
in ``runner.tracebacks``.
tracebacks : dict
A mapping of point to the traceback if that point failed.
pending_points : dict
A mapping of 'concurrent.Future's to points, {Future: point, ...}.
A mapping of `~concurrent.futures.Future`\s to points.
Methods
-------
overhead : callable
The overhead in percent of using Adaptive. This includes the
overhead of the executor. Essentially, this is
100 * (1 - total_elapsed_function_time / self.elapsed_time()).
``100 * (1 - total_elapsed_function_time / self.elapsed_time())``.
Properties
----------
failed : set
Set of points that failed 'retries' times.
"""
def __init__(self, learner, goal, *,
......@@ -145,7 +141,7 @@ class BaseRunner:
self.to_retry = {}
self.tracebacks = {}
def max_tasks(self):
def _get_max_tasks(self):
return self._max_tasks or _get_ncores(self.executor)
def _do_raise(self, e, x):
......@@ -173,7 +169,7 @@ class BaseRunner:
def overhead(self):
"""Overhead of using Adaptive and the executor in percent.
This is measured as 100 * (1 - t_function / t_elapsed).
This is measured as ``100 * (1 - t_function / t_elapsed)``.
Notes
-----
......@@ -213,7 +209,7 @@ class BaseRunner:
# Launch tasks to replace the ones that completed
# on the last iteration, making sure to fill workers
# that have started since the last iteration.
n_new_tasks = max(0, self.max_tasks() - len(self.pending_points))
n_new_tasks = max(0, self._get_max_tasks() - len(self.pending_points))
if self.do_log:
self.log.append(('ask', n_new_tasks))
......@@ -243,7 +239,7 @@ class BaseRunner:
@property
def failed(self):
"""Set of points that failed 'self.retries' times."""
"""Set of points that failed ``runner.retries`` times."""
return set(self.tracebacks) - set(self.to_retry)
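# A sketch of how the retry bookkeeping above plays out for a flaky
# function (the function and numbers are illustrative):
#
#     import random
#
#     def flaky(x):
#         if random.random() < 0.1:
#             raise RuntimeError('simulated failure')
#         return x ** 2
#
#     learner = adaptive.Learner1D(flaky, bounds=(-1, 1))
#     runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01,
#                              retries=5, raise_if_retries_exceeded=False)
#     # points that failed more than 5 times end up in runner.failed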
......@@ -252,48 +248,48 @@ class BlockingRunner(BaseRunner):
Parameters
----------
learner : adaptive.learner.BaseLearner
learner : `~adaptive.BaseLearner` instance
goal : callable
The end condition for the calculation. This function must take
the learner as its sole argument, and return True when we should
stop requesting more points.
executor : concurrent.futures.Executor, distributed.Client,
or ipyparallel.Client, optional
executor : `concurrent.futures.Executor`, `distributed.Client`,\
or `ipyparallel.Client`, optional
The executor in which to evaluate the function to be learned.
If not provided, a new `ProcessPoolExecutor` is used on Unix systems
while on Windows a `distributed.Client` is used if `distributed` is
installed.
If not provided, a new `~concurrent.futures.ProcessPoolExecutor`
is used on Unix systems while on Windows a `distributed.Client`
is used if `distributed` is installed.
ntasks : int, optional
The number of concurrent function evaluations. Defaults to the number
of cores available in 'executor'.
of cores available in `executor`.
log : bool, default: False
If True, record the method calls made to the learner by this runner.
shutdown_executor : Bool, default: False
shutdown_executor : bool, default: False
If True, shutdown the executor when the runner has completed. If
'executor' is not provided then the executor created internally
`executor` is not provided then the executor created internally
by the runner is shut down, regardless of this parameter.
retries : int, default: 0
Maximum number of retries of a certain point 'x' in
'learner.function(x)'. After 'retries' is reached for 'x' the
point is present in 'runner.failed'.
Maximum number of retries of a certain point ``x`` in
``learner.function(x)``. After `retries` is reached for ``x``
the point is present in ``runner.failed``.
raise_if_retries_exceeded : bool, default: True
Raise the error after a point 'x' failed 'retries'.
Raise the error after a point ``x`` failed `retries`.
Attributes
----------
learner : Learner
learner : `~adaptive.BaseLearner` instance
The underlying learner. May be queried for its state.
log : list or None
Record of the method calls made to the learner, in the format
'(method_name, *args)'.
``(method_name, *args)``.
to_retry : dict
Mapping of {point: n_fails, ...}. When a point has failed
'runner.retries' times it is removed but will be present
in 'runner.tracebacks'.
Mapping of ``{point: n_fails, ...}``. When a point has failed
``runner.retries`` times it is removed but will be present
in ``runner.tracebacks``.
tracebacks : dict
A mapping of point to the traceback if that point failed.
pending_points : dict
A mapping of 'concurrent.Future's to points, {Future: point, ...}.
A mapping of `~concurrent.futures.Future`\s to points.
Methods
-------
......@@ -303,12 +299,8 @@ class BlockingRunner(BaseRunner):
overhead : callable
The overhead in percent of using Adaptive. This includes the
overhead of the executor. Essentially, this is
100 * (1 - total_elapsed_function_time / self.elapsed_time()).
``100 * (1 - total_elapsed_function_time / self.elapsed_time())``.
Properties
----------
failed : set
Set of points that failed 'retries' times.
"""
def __init__(self, learner, goal, *,
......@@ -330,7 +322,7 @@ class BlockingRunner(BaseRunner):
def _run(self):
first_completed = concurrent.FIRST_COMPLETED
if self.max_tasks() < 1:
if self._get_max_tasks() < 1:
raise RuntimeError('Executor has no workers')
try:
......@@ -354,58 +346,58 @@ class BlockingRunner(BaseRunner):
class AsyncRunner(BaseRunner):
"""Run a learner asynchronously in an executor using asyncio.
"""Run a learner asynchronously in an executor using `asyncio`.
Parameters
----------
learner : adaptive.learner.BaseLearner
learner : `~adaptive.BaseLearner` instance
goal : callable, optional
The end condition for the calculation. This function must take
the learner as its sole argument, and return True when we should
stop requesting more points. If not provided, the runner will run
forever, or until 'self.task.cancel()' is called.
executor : concurrent.futures.Executor, distributed.Client,
or ipyparallel.Client, optional
forever, or until ``self.task.cancel()`` is called.
executor : `concurrent.futures.Executor`, `distributed.Client`,\
or `ipyparallel.Client`, optional
The executor in which to evaluate the function to be learned.
If not provided, a new `ProcessPoolExecutor` is used on Unix systems
while on Windows a `distributed.Client` is used if `distributed` is
installed.
If not provided, a new `~concurrent.futures.ProcessPoolExecutor`
is used on Unix systems while on Windows a `distributed.Client`
is used if `distributed` is installed.
ntasks : int, optional
The number of concurrent function evaluations. Defaults to the number
of cores available in 'executor'.
of cores available in `executor`.
log : bool, default: False
If True, record the method calls made to the learner by this runner.
shutdown_executor : Bool, default: False
shutdown_executor : bool, default: False
If True, shutdown the executor when the runner has completed. If
'executor' is not provided then the executor created internally
`executor` is not provided then the executor created internally
by the runner is shut down, regardless of this parameter.
ioloop : asyncio.AbstractEventLoop, optional
ioloop : ``asyncio.AbstractEventLoop``, optional
The ioloop in which to run the learning algorithm. If not provided,
the default event loop is used.
retries : int, default: 0
Maximum number of retries of a certain point 'x' in
'learner.function(x)'. After 'retries' is reached for 'x' the
point is present in 'runner.failed'.
Maximum number of retries of a certain point ``x`` in
``learner.function(x)``. After `retries` is reached for ``x``
the point is present in ``runner.failed``.
raise_if_retries_exceeded : bool, default: True
Raise the error after a point 'x' failed 'retries'.
Raise the error after a point ``x`` failed `retries`.
Attributes
----------
task : asyncio.Task
task : `asyncio.Task`
The underlying task. May be cancelled in order to stop the runner.
learner : Learner
learner : `~adaptive.BaseLearner` instance
The underlying learner. May be queried for its state.
log : list or None
Record of the method calls made to the learner, in the format
'(method_name, *args)'.
``(method_name, *args)``.
to_retry : dict
Mapping of {point: n_fails, ...}. When a point has failed
'runner.retries' times it is removed but will be present
in 'runner.tracebacks'.
Mapping of ``{point: n_fails, ...}``. When a point has failed
``runner.retries`` times it is removed but will be present
in ``runner.tracebacks``.
tracebacks : dict
A mapping of point to the traceback if that point failed.
pending_points : dict
A mapping of 'concurrent.Future's to points, {Future: point, ...}.
A mapping of `~concurrent.futures.Future`\s to points.
Methods
-------
......@@ -415,17 +407,13 @@ class AsyncRunner(BaseRunner):
overhead : callable
The overhead in percent of using Adaptive. This includes the
overhead of the executor. Essentially, this is
100 * (1 - total_elapsed_function_time / self.elapsed_time()).
``100 * (1 - total_elapsed_function_time / self.elapsed_time())``.
Properties
----------
failed : set
Set of points that failed 'retries' times.
Notes
-----
This runner can be used when an async function (defined with
'async def') has to be learned. In this case the function will be
``async def``) has to be learned. In this case the function will be
run directly on the event loop (and not in the executor).
"""
......@@ -461,6 +449,7 @@ class AsyncRunner(BaseRunner):
self.function)
self.task = self.ioloop.create_task(self._run())
self.saving_task = None
if in_ipynb() and not self.ioloop.is_running():
warnings.warn("The runner has been scheduled, but the asyncio "
"event loop is not running! If you are "
......@@ -486,7 +475,7 @@ class AsyncRunner(BaseRunner):
def cancel(self):
"""Cancel the runner.
This is equivalent to calling `runner.task.cancel()`.
This is equivalent to calling ``runner.task.cancel()``.
"""
self.task.cancel()
......@@ -495,21 +484,21 @@ class AsyncRunner(BaseRunner):
Parameters
----------
runner : Runner
runner : `Runner`
plotter : function
A function that takes the learner as an argument and returns a
holoviews object. By default learner.plot() will be called.
holoviews object. By default ``learner.plot()`` will be called.
update_interval : int
Number of seconds between the updates of the plot.
name : hashable
Name for the `live_plot` task in `adaptive.active_plotting_tasks`.
By default the name is `None` and if another task with the same name
already exists that other live_plot is canceled.
By default the name is None and if another task with the same name
already exists that other `live_plot` is canceled.
Returns
-------
dm : holoviews.DynamicMap
The plot that automatically updates every update_interval.
dm : `holoviews.core.DynamicMap`
The plot that automatically updates every `update_interval`.
"""
return live_plot(self, plotter=plotter,
update_interval=update_interval,
......@@ -526,7 +515,7 @@ class AsyncRunner(BaseRunner):
async def _run(self):
first_completed = asyncio.FIRST_COMPLETED
if self.max_tasks() < 1:
if self._get_max_tasks() < 1:
raise RuntimeError('Executor has no workers')
try:
......@@ -553,6 +542,31 @@ class AsyncRunner(BaseRunner):
end_time = time.time()
return end_time - self.start_time
def start_periodic_saving(self, save_kwargs, interval):
"""Periodically save the learner's data.
Parameters
----------
save_kwargs : dict
Keyword arguments for ``learner.save(**save_kwargs)``.
interval : int
Number of seconds between saving the learner.
Example
-------
>>> runner = Runner(learner)
>>> runner.start_periodic_saving(
... save_kwargs=dict(fname='data/test.pickle'),
... interval=600)
"""
async def _saver(save_kwargs=save_kwargs, interval=interval):
while self.status() == 'running':
self.learner.save(**save_kwargs)
await asyncio.sleep(interval)
self.learner.save(**save_kwargs) # one last time
self.saving_task = self.ioloop.create_task(_saver())
return self.saving_task
# Default runner
Runner = AsyncRunner
......@@ -572,7 +586,7 @@ def simple(learner, goal):
Parameters
----------
learner : adaptive.BaseLearner
learner : `~adaptive.BaseLearner` instance
goal : callable
The end condition for the calculation. This function must take the
learner as its sole argument, and return True if we should stop.
......@@ -591,9 +605,10 @@ def replay_log(learner, log):
Parameters
----------
learner : learner.BaseLearner
learner : `~adaptive.BaseLearner` instance
New learner where the log will be applied.
log : list
contains tuples: '(method_name, *args)'.
contains tuples: ``(method_name, *args)``.
"""
for method, *args in log:
getattr(learner, method)(*args)
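# A sketch of replaying a recorded run into a fresh learner
# (the learner construction is illustrative):
#
#     runner = adaptive.Runner(learner, goal=goal, log=True)
#     # ... after the runner is done ...
#     new_learner = adaptive.Learner1D(learner.function, bounds=(-1, 1))
#     adaptive.runner.replay_log(new_learner, runner.log)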
......
# -*- coding: utf-8 -*-
import collections
import functools as ft
import inspect
import itertools as it
import functools as ft
import random
import math
import numpy as np
import scipy.spatial
import operator
import os
import random
import shutil
import tempfile
import numpy as np
import pytest
import scipy.spatial
from ..learner import AverageLearner, BalancingLearner, Learner1D, Learner2D, LearnerND
from ..learner import (AverageLearner, BalancingLearner, DataSaver,
IntegratorLearner, Learner1D, Learner2D, LearnerND)
from ..runner import simple
try:
import skopt
from ..learner import SKOptLearner
except ModuleNotFoundError:
SKOptLearner = None
def generate_random_parametrization(f):
"""Return a realization of 'f' with parameters bound to random values.
......@@ -55,6 +67,10 @@ def xfail(learner):
return pytest.mark.xfail, learner
def maybe_skip(learner):
return (pytest.mark.skip, learner) if learner is None else learner
# All parameters except the first must be annotated with a callable that
# returns a random value for that parameter.
......@@ -95,15 +111,15 @@ def gaussian(n):
def run_with(*learner_types):
pars = []
for l in learner_types:
is_xfail = isinstance(l, tuple)
if is_xfail:
xfail, l = l
has_marker = isinstance(l, tuple)
if has_marker:
marker, l = l
for f, k in learner_function_combos[l]:
# Check if learner was marked with our `xfail` decorator
# XXX: doesn't work when feeding kwargs to xfail.
if is_xfail:
if has_marker:
pars.append(pytest.param(l, f, dict(k),
marks=[pytest.mark.xfail]))
marks=[marker]))
else:
pars.append((l, f, dict(k)))
return pytest.mark.parametrize('learner_type, f, learner_kwargs', pars)
......@@ -386,6 +402,77 @@ def test_balancing_learner(learner_type, f, learner_kwargs):
assert all(l.npoints > 10 for l in learner.learners), [l.npoints for l in learner.learners]
@run_with(Learner1D, Learner2D, LearnerND, AverageLearner,
maybe_skip(SKOptLearner), IntegratorLearner)
def test_saving(learner_type, f, learner_kwargs):
f = generate_random_parametrization(f)
learner = learner_type(f, **learner_kwargs)
control = learner_type(f, **learner_kwargs)
simple(learner, lambda l: l.npoints > 100)
fd, path = tempfile.mkstemp()
try:
learner.save(path)
control.load(path)
if learner_type is not Learner1D:
# Because different scales result in different losses
np.testing.assert_almost_equal(learner.loss(), control.loss())
# Check that the control is runnable
simple(control, lambda l: l.npoints > 200)
finally:
os.remove(path)
@run_with(Learner1D, Learner2D, LearnerND, AverageLearner,
maybe_skip(SKOptLearner), IntegratorLearner)
def test_saving_of_balancing_learner(learner_type, f, learner_kwargs):
f = generate_random_parametrization(f)
learner = BalancingLearner([learner_type(f, **learner_kwargs)])
control = BalancingLearner([learner_type(f, **learner_kwargs)])
# set fnames
learner.learners[0].fname = 'test'
control.learners[0].fname = 'test'
simple(learner, lambda l: l.learners[0].npoints > 100)
folder = tempfile.mkdtemp()
try:
learner.save(folder=folder)
control.load(folder=folder)
if learner_type is not Learner1D:
# Because different scales result in different losses
np.testing.assert_almost_equal(learner.loss(), control.loss())
# Check that the control is runnable
simple(control, lambda l: l.learners[0].npoints > 200)
finally:
shutil.rmtree(folder)
@run_with(Learner1D, Learner2D, LearnerND, AverageLearner,
maybe_skip(SKOptLearner), IntegratorLearner)
def test_saving_with_datasaver(learner_type, f, learner_kwargs):
f = generate_random_parametrization(f)
g = lambda x: {'y': f(x), 't': random.random()}
arg_picker = operator.itemgetter('y')
learner = DataSaver(learner_type(g, **learner_kwargs), arg_picker)
control = DataSaver(learner_type(g, **learner_kwargs), arg_picker)
simple(learner, lambda l: l.npoints > 100)
fd, path = tempfile.mkstemp()
try:
learner.save(path)
control.load(path)
if learner_type is not Learner1D:
# Because different scales result in different losses
np.testing.assert_almost_equal(learner.loss(), control.loss())
assert learner.extra_data == control.extra_data
# Check that the control is runnable
simple(control, lambda l: l.npoints > 200)
finally:
os.remove(path)
@pytest.mark.xfail
@run_with(Learner1D, Learner2D, LearnerND)
def test_convergence_for_arbitrary_ordering(learner_type, f, learner_kwargs):
......
# -*- coding: utf-8 -*-
from contextlib import contextmanager
from functools import wraps
import functools
import gzip
from itertools import product
import os
import pickle
import time
......@@ -30,7 +33,7 @@ def restore(*learners):
def cache_latest(f):
"""Cache the latest return value of the function and add it
as 'self._cache[f.__name__]'."""
@wraps(f)
@functools.wraps(f)
def wrapper(*args, **kwargs):
self = args[0]
if not hasattr(self, '_cache'):
......@@ -38,3 +41,26 @@ def cache_latest(f):
self._cache[f.__name__] = f(*args, **kwargs)
return self._cache[f.__name__]
return wrapper
def save(fname, data, compress=True):
fname = os.path.expanduser(fname)
dirname = os.path.dirname(fname)
if dirname:
os.makedirs(dirname, exist_ok=True)
_open = gzip.open if compress else open
with _open(fname, 'wb') as f:
pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)
def load(fname, compress=True):
fname = os.path.expanduser(fname)
_open = gzip.open if compress else open
with _open(fname, 'rb') as f:
return pickle.load(f)
def copy_docstring_from(other):
def decorator(method):
return functools.wraps(other)(method)
return decorator
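# A sketch of how these helpers fit together (the file name is illustrative):
#
#     data = {'x': [1, 2, 3]}
#     save('tmp/data.pickle', data)             # gzip-compressed pickle
#     assert load('tmp/data.pickle') == data
#
#     class MyDict(dict):
#         @copy_docstring_from(dict.update)
#         def update(self, *args, **kwargs):
#             super().update(*args, **kwargs)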
......@@ -6,4 +6,3 @@
tail -n1 $f | read -r _ || echo $f: no newline at end of file
tail -n1 $f | grep -q '^$' && echo $f: empty line at end of file
done | grep . >&2
build/*
source/_static/holoviews.*