# adaptive merge requests
Feed: https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests (updated 2018-07-10T21:32:31Z)

## !81: use contextlib.suppress
2018-07-10 · Bas Nijholt · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/81

It's more beautiful and [Raymond Hettinger](https://youtu.be/OSGv2VnC0go?t=43m44s) recommends it.

## !82: reinitialize learner in runner test
2018-07-10 · Bas Nijholt · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/82

## !83: Fixup/tests into LearnerND (branch 52)
2018-07-12 · Jorn Hoofwijk · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/83

Make extra tests for LearnerND and Triangulation class
(I created this merge request so I could open discussions on the tests)

## !84: only scale the z-axis when the maximal ptp distance is >1e-16
2018-08-20 · Bas Nijholt · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/84

Found this old branch where I fixed an issue that I had some time ago.

## !85: rename 'learner.tell'
2018-07-18 · Joseph Weston · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/85

We now define 'tell' and 'tell_many'. Subclasses may implement
either (or both).
Closes #59.
Assignee: Bas Nijholt

## !86: (LearnerND) make default point choosing better
2018-07-27 · Jorn Hoofwijk · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/86

Closes #76, #77
This is a first attempt at making the point choosing better; this is done by a modified loss and a modified choose_point_in_simplex method.
The new loss function is the hypervolume of the simplex in the (N+M)-dimensional space. E.g., if we have a function $`f: R^2 \to R`$, then for every evaluated point we have three coordinates (two input, one output). Then we take the surface area of the triangles, taking the output coordinates into account; this can be done since a triangle in 3D space still has a well-defined area. In essence this is the extension of the loss of the 1D learner into ND space.
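To make the hypervolume loss concrete, here is an illustrative computation (a sketch of the idea, not adaptive's actual implementation): for $`f: R^2 \to R`$, embed each triangle's vertices in 3D as (x, y, f(x, y)) and take the triangle's area there:

```python
import numpy as np

def loss_in_embedding(points_2d, values):
    """Sketch of the 'hypervolume in N+M space' loss for f: R^2 -> R.

    Illustrative only, not adaptive's actual implementation: embed each
    vertex (x, y) as (x, y, f(x, y)) and return the triangle's area in 3D.
    """
    p = np.column_stack([points_2d, values])  # shape (3, 3)
    a, b, c = p
    # area of the triangle spanned by the edge vectors (b - a) and (c - a)
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

# a flat function gives just the 2D area of the triangle (here 0.5)
flat = loss_in_embedding([(0, 0), (1, 0), (0, 1)], [0, 0, 0])
# a steep function stretches the triangle along the output axis,
# so its loss is larger and the simplex gets refined first
steep = loss_in_embedding([(0, 0), (1, 0), (0, 1)], [0, 5, 5])
```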
choose_point_in_simplex works by taking the center of the simplex, unless the shape of the simplex is sufficiently bad, in which case it takes the center of the longest edge.
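The "sufficiently bad shape" test can be made concrete in 2D (an illustrative sketch, not adaptive's code): the circumcenter lies inside a triangle exactly when all of its angles are acute:

```python
import numpy as np

def circumcenter_inside_triangle(a, b, c):
    """2D sketch: the circumcenter lies inside the triangle iff every
    angle is acute (equivalently, the largest angle is < 90 degrees).
    Illustrative only; not adaptive's implementation."""
    pts = [np.asarray(p, dtype=float) for p in (a, b, c)]
    for i in range(3):
        u = pts[(i + 1) % 3] - pts[i]
        v = pts[(i + 2) % 3] - pts[i]
        if np.dot(u, v) < 0:  # obtuse angle at vertex i
            return False
    return True

assert circumcenter_inside_triangle((0, 0), (1, 0), (0.5, 1))       # acute
assert not circumcenter_inside_triangle((0, 0), (1, 0), (0.9, 0.1)) # obtuse
```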
It does this by checking whether the circumcenter lies within the simplex. In 2D this means that the largest angle has to be less than 90 degrees (in higher dimensions, something similar, I imagine).

## !87: rename LearnerND to TriangulatingLearner
2018-12-07 · Joseph Weston · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/87

This is a more accurate description of what the learner does
and what makes it unique. One could imagine other learners
(e.g. a Monte Carlo one) that also sample "ND" functions, so
it's important to distinguish.
Closes #88

## !88: refactor LearnerND._ask to be more readable (and faster)
2018-08-20 · Jorn Hoofwijk · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/88

Closes #72
This also helps a lot in making the learner faster (from O(N) per ask/tell to O(log N)).
This certainly can be refactored to be even more readable :), I will make sure of it.
Assignee: Jorn Hoofwijk

## !89: (triangulation) make method for finding initial simplex part of the triangulation class
2018-07-25 · Jorn Hoofwijk · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/89

Closes #71
Move some logic from the learner to the triangulation. The signature of the triangulation's init changed: it now accepts any number of vertices >= dim+1, instead of only num_vertices == dim+1.

## !90: WIP: (feature) add anisotropic meshing to LearnerND
2019-02-07 · Jorn Hoofwijk · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/90

Closes #74
Depends on !86 and #80 and therefore it has the corresponding branches included as well
Still has a few to-do's:
- [x] let `LearnerND.ip()` make use of our triangulation rather than building a new one
- [ ] make it work in arbitrary dimensions
- [ ] verify that it is beneficial
- ~~let the user configure the parameters (maximum stretch factor and number of points to take into account)~~ Use one simplex and its neighbours
- [x] make test pass
- ~~add `rtree` as install requirement~~ No more RTree anymore :)
- ~~raise exception if `anisotropic=True` and `rtree` not installed, pass if `anisotropic=False`~~
- [x] refactor code to be human-readable
- ~~let's make it fast :)~~ #80
It seems it doesn't work that well with the ring, since the ring has a relatively low average gradient and a very high second derivative. So maybe this second derivative might be a more useful property to determine the
Sneak peek:
![anisotropic](/uploads/3babecfe20b8ac4d6edff89566707e9e/anisotropic.png)

## !91: (LearnerND) Evaluate less circumspheres
2018-08-20 · Jorn Hoofwijk · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/91

It prunes some circumcircles faster;
it results in:
~20% faster in 2d
~40% faster in 3d
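For context, the circumsphere check being pruned is essentially the standard in-circumsphere predicate of Delaunay-style triangulations; a 2D determinant-based sketch (illustrative, not adaptive's implementation):

```python
import numpy as np

def in_circumcircle(a, b, c, p):
    """Standard 2D in-circumcircle predicate (illustrative sketch).

    For a counterclockwise triangle (a, b, c), the determinant below is
    positive iff p lies strictly inside the triangle's circumcircle.
    """
    mat = np.array([
        [a[0] - p[0], a[1] - p[1], (a[0] - p[0])**2 + (a[1] - p[1])**2],
        [b[0] - p[0], b[1] - p[1], (b[0] - p[0])**2 + (b[1] - p[1])**2],
        [c[0] - p[0], c[1] - p[1], (c[0] - p[0])**2 + (c[1] - p[1])**2],
    ])
    return np.linalg.det(mat) > 0

# unit triangle (CCW); its circumcircle is centered at (0.5, 0.5)
assert in_circumcircle((0, 0), (1, 0), (0, 1), (0.5, 0.5))
assert not in_circumcircle((0, 0), (1, 0), (0, 1), (2, 2))
```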
Helps to speed up LearnerND a bit, see #80.
Assignee: Jorn Hoofwijk

## !92: (LearnerND) Make some functions faster using Cython
2018-09-17 · Jorn Hoofwijk · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/92

Closes #80
This speeds up the learner by ~30%, I believe.
I am aware that a great amount of time currently goes into the triangulation class, and I think it could become a gazillion times faster if it were fully translated into Cython (or C++). However, I am not sure we would want to go this route now, as it would be a great deal of work; I suppose it would only be worth making this change once we are certain that only minor parts of the triangulation will change.

## !93: add a release guide
2018-09-24 · Bas Nijholt · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/93

Even though there are very few steps involved, it is good to have it documented.
Assignee: Joseph Weston

## !94: add runner.max_retries
2018-09-24 · Bas Nijholt · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/94

This PR (mainly) introduces `runner.max_retries` and `runner.raise_if_max_retries`.
With this the following function would be "learned":
```python
import adaptive
adaptive.notebook_extension()

def f(x, offset=0):
    from random import random
    a = 0.01
    if random() < 0.9:
        raise Exception('Oops, this failed.')
    return x + a**2 / (a**2 + (x - offset)**2)

learner = adaptive.Learner1D(f, bounds=(-1, 1))
runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.05,
                                 max_retries=20, log=True, raise_if_max_retries=False)
```
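The retry semantics can be sketched generically (a hypothetical helper, not adaptive's actual runner code): retry the function up to `max_retries` times and either raise or give up silently, mirroring `raise_if_max_retries`:

```python
def retry(func, x, max_retries=20, raise_if_max_retries=True):
    """Hypothetical sketch of the max_retries semantics; not adaptive's code."""
    for _ in range(max_retries):
        try:
            return func(x)
        except Exception:
            continue  # try the same point again
    if raise_if_max_retries:
        raise RuntimeError(f"point {x} failed {max_retries} times")
    return None  # give up on this point silently

calls = {'n': 0}

def fails_twice(x):
    calls['n'] += 1
    if calls['n'] <= 2:
        raise Exception('Oops, this failed.')
    return 2 * x

result = retry(fails_twice, 3)  # succeeds on the third attempt
gave_up = retry(lambda x: 1 / 0, 1, max_retries=3, raise_if_max_retries=False)
```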
I also introduce `BlockingRunner.overhead` and the corresponding timing functions, and put the shared code of `BlockingRunner` and `AsyncRunner` in `BaseLearner` methods.
Assignee: Joseph Weston

## !95: 1D: fix the rare case where the right boundary point exists before the left bound
2018-09-24 · Bas Nijholt · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/95

Fixes #94.

## !96: More efficient 'tell_many'
2018-09-28 · Bas Nijholt · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/96

```python
import adaptive
def f(x, offset=0):
    a = 0.01
    return x + a**2 / (a**2 + (x - offset)**2)
learner = adaptive.Learner1D(f, bounds=(-1, 1))
adaptive.runner.simple(learner, goal=lambda l: l.npoints > 200)
```
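For context, the "old implementation" timed below is essentially the naive per-point fallback; a minimal hypothetical learner (not adaptive's actual `BaseLearner`) shows the pattern:

```python
class MinimalLearner:
    """Hypothetical minimal learner; not adaptive's actual BaseLearner."""

    def __init__(self):
        self.data = {}

    def tell(self, x, y):
        self.data[x] = y

    def tell_many(self, xs, ys):
        # naive per-point fallback: one 'tell' per point;
        # !96 replaces this loop with a batched implementation
        for x, y in zip(xs, ys):
            self.tell(x, y)

ml = MinimalLearner()
ml.tell_many([0.0, 0.5, 1.0], [1.0, 2.0, 3.0])
```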
Timing new implementation
```python
%%timeit
learner2 = adaptive.Learner1D(f, bounds=(-1, 1))
learner2.tell_many(*zip(*learner.data.items()))
```
`1.17 ms ± 24.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)`
Timing old implementation
```python
%%timeit
learner2 = adaptive.Learner1D(f, bounds=(-1, 1))
for x, y in learner.data.items():
    learner2.tell(x, y)
```
`6.82 ms ± 447 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)`
This makes it ~6 times faster for functions that return scalars, and >10 times faster for vectors.
Milestone: v0.6. Assignee: Joseph Weston

## !97: Fix #97 and #98
2018-09-24 · Bas Nijholt · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/97

See the related issues #97 and #98.
Assignee: Bas Nijholt

## !98: Resolve "DeprecationWarning: sorted_dict.iloc is deprecated. Use SortedDict.keys() instead."
2018-09-24 · Jorn Hoofwijk · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/98

Closes #92
This might impact performance, since `sorted_dict.keys()` might be O(N); I am not 100 percent certain of this, but it sounds logical.

## !99: Resolve "Learner1D's bound check algo in self.ask doesn't take self.data or self.pending_points"
2018-09-24 · Jorn Hoofwijk · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/99

Closes #95
- [x] currently the tests fail, this should be fixed
- [x] add some more tests to check uniformity of the return value

Assignee: Joseph Weston

## !100: Resolve "Learner1D doesn't correctly set the interpolated loss when a point is added"
2018-09-24 · Jorn Hoofwijk · https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/100

Closes #99
Assignee: Jorn Hoofwijk
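As a footnote to !81 at the top of this list, the `contextlib.suppress` idiom it adopts replaces try/except-pass boilerplate; a small self-contained example:

```python
import contextlib

# the old idiom:
#     try:
#         value = cache['key']
#     except KeyError:
#         pass
cache = {}
value = None
with contextlib.suppress(KeyError):
    value = cache['key']  # KeyError is raised here and silently suppressed

cache['key'] = 42
value2 = None
with contextlib.suppress(KeyError):
    value2 = cache['key']  # the key exists now, so no exception is raised
```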