# adaptive merge requests
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests

## !1 WIP: 0D averaging learner (Anton Akhmerov, 2017-08-21)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/1

So far it's a prototype implementation; check it out.

Closes #12
## !2 rename variables and begin implementing loss_improvement(points) (Bas Nijholt, 2017-09-01)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/2

## !3 setup CI and tests (Bas Nijholt, 2017-11-20)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/3

## !4 Implement BalancingLearner (Bas Nijholt, 2017-09-01)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/4

@jbweston or @anton-akhmerov
It would be nice if you could look at this; I am going nuts trying to find the bug.
Sometimes the snippet in the notebook works without issues, but other times there is an error that I am unable to reproduce.

Closes #10, #13

## !5 Feature/logging (Joseph Weston, 2017-09-01)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/5

Allow runners to log the method calls they make to a learner, and add a function to reconstruct this sequence of method calls.
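The idea can be sketched independently of adaptive's actual implementation (the class and function names below are hypothetical, not the merged API): a proxy records every method call made through it, and a replay function re-applies the recorded sequence to a fresh learner.

```python
# Hypothetical sketch of call logging and replay; not adaptive's actual API.

class CallLogger:
    """Proxy that records (method, args, kwargs) for every call it forwards."""

    def __init__(self, learner):
        self._learner = learner
        self.log = []

    def __getattr__(self, name):
        method = getattr(self._learner, name)

        def wrapper(*args, **kwargs):
            self.log.append((name, args, kwargs))
            return method(*args, **kwargs)

        return wrapper


def replay(log, learner):
    """Re-apply a recorded sequence of method calls to a fresh learner."""
    for name, args, kwargs in log:
        getattr(learner, name)(*args, **kwargs)


class ToyLearner:
    """Minimal stand-in learner that just stores (x, y) pairs."""

    def __init__(self):
        self.data = {}

    def add_point(self, x, y):
        self.data[x] = y
```

Replaying the log of one learner onto a fresh one reproduces its state, which is useful for debugging non-deterministic runner behavior.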
## !6 Better loss improvement (Bas Nijholt, 2017-09-06)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/6

I have changed the API of `loss_improvement` to accept only a single point instead of a list of points, because we won't ever use it in any other way in the current implementation.

## !7 implement 2D learner (Bas Nijholt, 2017-09-15)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/7

![](http://nijholt.biz/stuff/awesome-adaptive.gif)
- [x] credit Pauli Virtanen in the learner docstring
- ~~move 2D to separate file~~ it should be kept with the rest of the learners
- [ ] improve loss function

## !8 cquad (Bas Nijholt, 2017-10-31)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/8

### Definitions
* `discard`: the interval and its children no longer participate in determining the total integral, because its parent did a refinement while the data of this interval was not yet known, and it later turned out that this interval had to be split
* `complete`: all the function values of the interval are known
* `done`: the integral and the error for the interval have been calculated
* `branch_complete`: the interval can be used to determine the total integral (but if its children are also `branch_complete`, they should be used instead)
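These states can be illustrated with a toy interval tree. This is a deliberate simplification for illustration (the real `IntegratorLearner` tracks much more bookkeeping), and the exact `branch_complete` recursion here is my reading of the definition, not the merged code:

```python
# Toy illustration of the interval states defined above; a sketch, not adaptive's code.

class Interval:
    def __init__(self, points_needed):
        self.points_needed = points_needed  # how many function values this interval requires
        self.values = {}                    # x -> f(x), filled in as results arrive
        self.igral = None                   # integral estimate, set once computed
        self.err = None                     # error estimate, set once computed
        self.children = []
        self.discarded = False              # True if superseded before its data arrived

    @property
    def complete(self):
        # all the function values of the interval are known
        return len(self.values) == self.points_needed

    @property
    def done(self):
        # the integral and the error have been calculated
        return self.igral is not None and self.err is not None

    @property
    def branch_complete(self):
        # usable for the total integral; if there are children, defer to them
        if self.discarded:
            return False
        if self.children:
            return all(c.branch_complete for c in self.children)
        return self.done
```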
Closes #5.

## !9 2D: use the same loss for choosing points and loss_improvement (Bas Nijholt, 2017-10-27)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/9

This seems to work really badly:

![Screen_Shot_2017-10-04_at_12.46.36](/uploads/b93ee00b3025a20de7b3b22fe4570907/Screen_Shot_2017-10-04_at_12.46.36.png)

## !10 add `min_resolution` exposed attribute (Pablo Piskunow, 2018-02-19)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/10

Fix problem #27 of finding hidden features of a function by providing a minimum-resolution parameter.

## !11 add performance test with notebook (Pablo Piskunow, 2018-01-02)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/11

The test is based on passing a function of randomly distributed peaks (Gaussians or Lorentzians) and testing whether the learner is able to find them.
The test is designed to fail with the current state of adaptive, and to pass with a fix of the kind mentioned in #27.
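A function of the kind such a test can use might be sketched as follows. The peak count, width, and seed here are made up for illustration; this is not the function from the MR:

```python
import numpy as np


def random_peaks(x, n_peaks=10, width=1e-3, seed=0):
    """Sum of narrow Gaussian peaks at random positions in [0, 1].

    A learner with a minimum-resolution safeguard should find every peak;
    one without it can step right over them on a coarse sampling.
    """
    rng = np.random.RandomState(seed)
    centers = rng.uniform(0, 1, n_peaks)
    x = np.asarray(x, dtype=float)
    return sum(np.exp(-((x - c) / width) ** 2) for c in centers)
```

With `width=1e-3` the peaks are far narrower than any coarse uniform grid spacing, which is exactly the failure mode #27 describes.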
I added a Python test file, which for the moment does not run on its own and needs to be run in a Jupyter notebook.
The corresponding Jupyter notebook, which shows what the test is doing, is also included.

## !12 Meta data saver (Bas Nijholt, 2017-11-01)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/12

## !13 cache the points in the BalancingLearner (Bas Nijholt, 2017-11-08)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/13

This will not work with the `IntegratorLearner`, but that does not really matter at the moment, since the `BalancingLearner` currently does not work with the `IntegratorLearner` at all.

## !14 implement AverageLearner().done() (Bas Nijholt, 2017-11-17)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/14

We now require the user to use `goal=lambda l: l.loss() < 1`.
Analogous to the `IntegratorLearner`, we can implement `learner.done()`, which tells you whether the tolerance has been reached.
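A sketch of the idea follows. The tolerance logic is a guess at what such a `done()` could check (a standard-error criterion), not the merged code, and `ToyAverageLearner` is a hypothetical stand-in, not adaptive's `AverageLearner`:

```python
import math


class ToyAverageLearner:
    """Toy running-average learner, sketching a done() convenience method.

    Hypothetical: tracks a running mean and standard error, and declares
    itself done once the standard error drops below `atol`.
    """

    def __init__(self, atol):
        self.atol = atol
        self.n = 0
        self.sum = 0.0
        self.sumsq = 0.0

    def add_point(self, value):
        self.n += 1
        self.sum += value
        self.sumsq += value * value

    def std_error(self):
        if self.n < 2:
            return float("inf")
        mean = self.sum / self.n
        var = (self.sumsq - self.n * mean * mean) / (self.n - 1)
        return math.sqrt(max(var, 0.0) / self.n)

    def loss(self):
        # drops below 1 once the tolerance is reached,
        # mirroring goal=lambda l: l.loss() < 1
        return self.std_error() / self.atol

    def done(self):
        # the same condition as the goal lambda, expressed on the learner itself
        return self.loss() < 1
```

With this, `goal=lambda l: l.done()` and `goal=lambda l: l.loss() < 1` are equivalent, but the former no longer exposes the magic constant to the user.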
## !15 make DataSaver work with None, closes #30 (Bas Nijholt, 2017-11-03)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/15

## !16 Plot contours (Bas Nijholt, 2017-11-08)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/16

I noticed that I use the overlays of the triangles a lot when using the `Learner2D`, so I added an option `triangles_alpha` to the default plot function.
This also fixes the issue of running the cell in the notebook when there is no data available.

## !17 Level learner (Bas Nijholt, 2017-11-14)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/17

This learner learns a function `f: ℝ → ℝ^N`.
An example:

![level_learner](/uploads/a0f6e119a8912ab278fb6691da3e2b7c/level_learner.png)
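A vector-valued function of this shape is simply one returning an array per input. The example below is made up for illustration (it is not the function in the screenshot), and `interval_loss` is one plausible way to reduce the N outputs to a single scalar loss per interval, not necessarily what this MR implements:

```python
import numpy as np


def f(x):
    """Example f: R -> R^3, e.g. three 'levels' as a function of x."""
    return np.array([np.sin(x), np.sin(2 * x), np.sin(3 * x)])


def interval_loss(x_left, x_right, y_left, y_right):
    """Reduce a vector output to a scalar loss for an interval by taking the
    worst (largest) per-component variation, combined with the interval width."""
    dy = np.abs(np.asarray(y_right) - np.asarray(y_left))
    return float(np.hypot(x_right - x_left, dy.max()))
```

Taking the maximum over components means an interval is refined as soon as *any* of the N levels varies strongly there.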
## !18 WIP: add timeit option in runner (Bas Nijholt, 2018-06-15)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/18

This is probably useful for testing.
This will only work with the `SequentialExecutor`, or an executor that does a better job of pickling; I marked it as a WIP for this reason.
```python
import adaptive
import numpy as np
import holoviews as hv
adaptive.notebook_extension()

def f(x):
    return x**2

learner = adaptive.Learner1D(f, (-1, 1))
runner = adaptive.Runner(learner, adaptive.runner.SequentialExecutor(),
                         goal=lambda l: l.loss() < 0.00001, timeit=True)
```
We can then easily generate a plot:
```python
def running_mean(x, N):
    cumsum = np.cumsum(np.insert(x, 0, 0))
    return (cumsum[N:] - cumsum[:-N]) / float(N)

(hv.Curve(running_mean(runner.times['add_point'], 200), label='add_point')
 * hv.Curve(running_mean(runner.times['choose_points'], 200), label='choose_points')
 * hv.Curve(running_mean(runner.times['function'], 200), label='function'))
```
![times](/uploads/1bc3dcb69bfb5003f29cfeddc367119b/times.png)
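As a sanity check (not part of the MR), `running_mean` is just a uniform-kernel moving average, so it agrees with `np.convolve` on the same data:

```python
import numpy as np


def running_mean(x, N):
    # same helper as above: cumulative-sum trick for an N-point moving average
    cumsum = np.cumsum(np.insert(x, 0, 0))
    return (cumsum[N:] - cumsum[:-N]) / float(N)


x = np.arange(10, dtype=float)
assert np.allclose(running_mean(x, 3),
                   np.convolve(x, np.ones(3) / 3, mode="valid"))
```

The cumulative-sum formulation is O(n) regardless of the window size, which matters when smoothing thousands of per-call timings with a 200-point window.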
### For the future
The runner could also be made smarter: for example, if it notices that `choose_points` takes longer than evaluating `function`, it could request more points at once.

## !19 Make Learner1D and Learner2D work with vector outputs (Bas Nijholt, 2017-11-15)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/19

Based on the discussion in !17, I put the logic inside the learners themselves.

## !20 use ioloop.set_default_executor so that the executor properly closes itself (Bas Nijholt, 2017-11-14)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/20

I think that this will solve https://gitlab.kwant-project.org/qt/adaptive/issues/21 when using the default executor.