adaptive merge requests
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests

!1 WIP: 0D averaging learner (Anton Akhmerov, 2017-08-21)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/1

So far it's a prototype implementation; check it out.

Closes #12

!10 add `min_resolution` exposed attribute (Pablo Piskunow, 2018-02-19)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/10

Fix problem #27, finding hidden features of a function, by providing a minimum-resolution parameter.

!11 add performance test with notebook (Pablo Piskunow, 2018-01-02)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/11

The test is based on passing a function with randomly distributed peaks (Gaussians or Lorentzians) and checking whether the learner is able to find them.
The test is designed to fail with the current state of adaptive, and to pass with a fix of the kind mentioned in #27.
I added a Python test file, which for the moment does not run on its own and needs to be run in a Jupyter notebook.
The corresponding Jupyter notebook, which shows what the test is doing, is also included.

!14 implement AverageLearner().done() (Bas Nijholt, 2017-11-17)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/14

We currently require the user to use `goal=lambda l: l.loss() < 1`.
Analogous to the `IntegratorLearner`, we can implement `learner.done()`, which tells you whether the tolerance has been reached.
!17 Level learner (Bas Nijholt, 2017-11-14)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/17

This learner learns a function `f: ℝ → ℝ^N`.
An example is:

![level_learner](/uploads/a0f6e119a8912ab278fb6691da3e2b7c/level_learner.png)

!18 WIP: add timeit option in runner (Bas Nijholt, 2018-06-15)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/18

This is probably useful for testing.
This will only work with the `SequentialExecutor` or an executor that does a better job of pickling; I marked it as a WIP for this reason.
```python
import adaptive
import numpy as np
import holoviews as hv

adaptive.notebook_extension()

def f(x):
    return x**2

learner = adaptive.Learner1D(f, (-1, 1))
runner = adaptive.Runner(learner, adaptive.runner.SequentialExecutor(),
                         goal=lambda l: l.loss() < 0.00001, timeit=True)
```
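The timing itself could be as simple as wrapping each stage of the run loop. A sketch of the idea (how the MR actually populates `runner.times` is an assumption on my part):

```python
import time
from collections import defaultdict

# Maps a stage name to a list of durations, mirroring the
# runner.times dict used in the plotting example.
times = defaultdict(list)

def timed(name, func, *args, **kwargs):
    """Call func and record its wall-clock duration under `name`."""
    t0 = time.perf_counter()
    result = func(*args, **kwargs)
    times[name].append(time.perf_counter() - t0)
    return result

# Inside the run loop one could then write, e.g.:
#     points, _ = timed('choose_points', learner.choose_points, n)
y = timed('function', lambda x: x**2, 3)
```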
We can then easily generate a plot:
```python
def running_mean(x, N):
    cumsum = np.cumsum(np.insert(x, 0, 0))
    return (cumsum[N:] - cumsum[:-N]) / float(N)

(hv.Curve(running_mean(runner.times['add_point'], 200), label='add_point')
 * hv.Curve(running_mean(runner.times['choose_points'], 200), label='choose_points')
 * hv.Curve(running_mean(runner.times['function'], 200), label='function'))
```
![times](/uploads/1bc3dcb69bfb5003f29cfeddc367119b/times.png)
### For the future
The runner could also be made smarter. For example, if it notices that `choose_points` takes longer than evaluating `function`, it could request more points at once.

!20 use ioloop.set_default_executor so that the executor properly closes itself (Bas Nijholt, 2017-11-14)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/20

I think that this will solve https://gitlab.kwant-project.org/qt/adaptive/issues/21 when using the default executor.

!23 optimizations (Bas Nijholt, 2018-01-15)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/23

!31 WIP: add absolute tolerance to `integrator_learner` (Pablo Piskunow, 2018-10-25)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/31

By default, the `integrator_learner` uses a relative tolerance, but an absolute tolerance is also useful.
This commit adds both tolerances.
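For reference, a common way to combine the two (as quadrature routines such as `scipy.integrate.quad` do with `epsabs`/`epsrel`) is to stop once the error estimate drops below `max(atol, rtol * |integral|)`. A sketch with hypothetical names that need not match the learner's actual attributes:

```python
def tolerance_reached(err, igral, atol=0.0, rtol=1e-8):
    """True once the error estimate satisfies either tolerance:
    the absolute one, or the relative one scaled by the integral."""
    return err < max(atol, rtol * abs(igral))
```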
!34 WIP: Triangulation (Anton Akhmerov, 2018-07-10)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/34

This is an attempt to replace qhull as the triangulation generator. Optimistically, it should let us avoid rebuilding an expensive Delaunay triangulation every time we add a point, and even implement an anisotropic mesh.
Right now there isn't much in the MR:
## Triangulation
- [x] Initialize with a single triangle
- [x] Update a triangulation by adding a point to an existing triangle or an edge
- [x] Flip an edge
- [ ] Implement triangulation health checks to detect whether the triangulation is consistent
- [x] Add a point outside of the triangulation (requires updating the hull and computing which new triangles are to be added)
- [x] Rename triangles → simplices, edges → faces.
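For context, the edge-flip step is usually driven by the Delaunay incircle test: an edge shared by two triangles is flipped when the opposite vertex lies inside the other triangle's circumcircle. A minimal 2D sketch of that predicate (not the implementation in this MR):

```python
import numpy as np

def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of the
    counter-clockwise triangle (a, b, c) -- the condition under which
    the Delaunay property demands an edge flip."""
    m = np.array([
        [a[0] - d[0], a[1] - d[1], (a[0] - d[0])**2 + (a[1] - d[1])**2],
        [b[0] - d[0], b[1] - d[1], (b[0] - d[0])**2 + (b[1] - d[1])**2],
        [c[0] - d[0], c[1] - d[1], (c[0] - d[0])**2 + (c[1] - d[1])**2],
    ])
    return bool(np.linalg.det(m) > 0)
```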
## Replacing QHull (using the triangulation object inside the learner)
- [ ] Cache the triangle or edge where the learner requested the point so that we can cheaply add points.
- [ ] Compute gradient on each triangle
- [ ] Compute deviation from a linear estimate by comparing a triangle with its neighbors
- [ ] Keep track of the interpolated point values by maintaining both a triangulation with all points, and a triangulation with only points where we already know the values. The former should be used for interpolating the function values, the latter to compute losses per triangle.
- [ ] Compute triangle badness; when adding data, flip all edges neighboring the updated point. Because we use the triangulation to compute interpolated values, adding points to the latter should not flip any edges; only adding data should rearrange the triangles.
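The per-triangle gradient in the checklist above amounts to solving a small linear system for the linear interpolant on each simplex. A 2D sketch (the function name is mine, not the MR's):

```python
import numpy as np

def triangle_gradient(points, values):
    """Gradient of the linear interpolant on a 2D triangle.

    points: (3, 2) array of vertex coordinates,
    values: (3,) array of function values at those vertices.
    Solves (p_i - p_0) . g = f_i - f_0 for i = 1, 2.
    """
    p = np.asarray(points, dtype=float)
    f = np.asarray(values, dtype=float)
    return np.linalg.solve(p[1:] - p[0], f[1:] - f[0])
```

The same construction carries over to N dimensions with an (N+1)-vertex simplex and an N×N system.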
## Implementing anisotropic mesh
- [ ] Define gradient-aware triangle badness and use it to determine which edges need to be flipped.

!36 add a function that runs the learner for N points (Bas Nijholt, 2017-12-11)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/36

!39 add active_runner_tasks and cancel old runners if no name is provided (Bas Nijholt, 2017-12-19)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/39

Similar to what we do with the `active_plotting_tasks`, I think it is good to cancel the other unnamed runners by default, so that you don't run into problems if you execute `runner = adaptive.Runner(learner)` a couple of times.
You can still have multiple runners at the same time, but you just have to name them, like:
```python
runner1 = adaptive.Runner(learners, name='balancing_learners')
runner2 = adaptive.Runner(learner1D, name='learner1D')
```

!57 filter out all meta data from the notebook (Bas Nijholt, 2018-04-12)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/57

This filters out the remaining meta-data in the notebooks.

!78 add is_sequence_of_points, closes #59 (Bas Nijholt, 2018-07-02)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/78

An attempt to fix #59.

!84 only scale the z-axis when the maximal ptp distance is >1e-16 (Bas Nijholt, 2018-08-20)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/84

Found this old branch where I fixed an issue that I had some time ago.

!87 rename LearnerND to TriangulatingLearner (Joseph Weston, 2018-12-07)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/87

This is a more accurate description of what the learner does
and what makes it unique. One could imagine other learners
(e.g. a Monte Carlo one) that also sample "ND" functions, so
it's important to distinguish.
Closes #88

!92 (LearnerND) Make some functions faster using Cython (Jorn Hoofwijk, 2018-09-17)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/92

Closes #80
This speeds up the learner by ~30%, I believe.
I am aware that a great amount of time currently goes into the triangulation class, and I think it could become orders of magnitude faster if it were fully translated into Cython (or C++). However, I am not sure we want to go that route now, as it would be a great deal of work; it would only be worthwhile if we are certain that only minor parts of the triangulation will still change.

!110 WIP: move the function to the runner (Bas Nijholt, 2018-10-01)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/110

This is a quick implementation of putting the function inside the `Runner`; it currently only works with the `Learner1D` and the `BalancingLearner`.
I don't plan on merging this, but it's for #46.

!143 WIP: Resolve "use a ItemSortedDict for the loss in the LearnerND" (Bas Nijholt, 2018-12-17)
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/143

Closes #133