adaptive merge requests
https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/143
WIP: Resolve "use an ItemSortedDict for the loss in the LearnerND" (Bas Nijholt, 2018-12-17)
Closes #133

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/142
make methods private in the LearnerND, closes #85 (Bas Nijholt, 2018-12-14)
See the related note at https://gitlab.kwant-project.org/qt/adaptive/issues/85#note_21890.

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/141
change the simplex_queue to a SortedKeyList (Bas Nijholt, 2018-12-17)
I suddenly realized why `test_learner_performance_is_invariant_under_scaling` was sometimes failing for the `LearnerND`.
We were saving the rounded loss, and when a subsimplex was created from an existing simplex we divided the rounded loss instead of dividing the unrounded loss and rounding the result again.
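A toy illustration of the failure mode, with hypothetical numbers (not the actual `LearnerND` values):

```python
# Dividing an already-rounded loss is not the same as dividing the
# exact loss and rounding the result, so the queue ordering could
# depend on the (scale-dependent) rounding -- hypothetical numbers.
exact_loss = 0.123456789
n_children = 3  # a subsimplex inherits a fraction of the parent loss

buggy = round(exact_loss, 6) / n_children    # divide the rounded loss
correct = round(exact_loss / n_children, 6)  # divide, then round

print(buggy == correct)  # the two values can disagree
```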
Using a `SortedKeyList` makes sure this cannot happen.

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/140
Resolve "(LearnerND) fix plotting of scaled domains" (Jorn Hoofwijk, 2018-12-07)
Closes #128

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/139
Resolve "(Learner1D) improve time complexity" (Jorn Hoofwijk, 2018-12-10)
Closes #126, #104
This improves the time complexity from O(N^2) to O(N log N) (for learning a function from 0 to N points).
Speed is now ~1000 points per second on my computer (given the default loss) and remains more or less constant until 5000 points.
Obviously for small N (say 10 points) the old method was faster; for larger N (say >20), the new method is faster.
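The priority-queue idea behind the O(log N) operations can be sketched with a stdlib heap (illustrative only; the branch itself uses sorted containers, which additionally allow O(log N) removal of arbitrary intervals, and the interval/loss names here are assumptions):

```python
import heapq

# Keep (-loss, interval) pairs in a heap so the interval with the
# largest loss is found in O(log N) instead of an O(N) scan.
heap = []
heapq.heappush(heap, (-0.3, (0.0, 0.5)))
heapq.heappush(heap, (-0.7, (0.5, 1.0)))

neg_loss, worst_interval = heapq.heappop(heap)  # O(log N)
# after subdividing the worst interval, push the two halves
# back in O(log N) each, so N tell/ask rounds cost O(N log N)
heapq.heappush(heap, (-0.35, (0.5, 0.75)))
heapq.heappush(heap, (-0.35, (0.75, 1.0)))
```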
It does need to be said that, according to my profiler, most of the time is spent on this line:
https://gitlab.kwant-project.org/qt/adaptive/commit/be74ad62c3b55b04d8e70007ad33f9097042ee91#3156b0acd24e9337c609628ec36d95e30b67736d_568_572 (line 572)
Please do verify that the results are noticeable for you as well.
Also note that the complexity is now as follows:
| Operation  | Complexity @ master | Complexity @ branch |
|------------|---------------------|---------------------|
| tell(x, y) | O(N)                | O(log N)            |
| ask(1)     | O(N)                | O(log N)            |
| loss()     | O(N)                | O(1)                |
| ask(k)     | O(N * k)            | O(k log N)          |
| run 1..N   | O(N^2)              | O(N log N)          |

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/138
Fix typo in curvature-loss computation (Jorn Hoofwijk, 2018-11-28)
I had a typo in !131.
Actually compute the dx by subtracting two DIFFERENT numbers.
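A hypothetical illustration of this class of typo (not the actual code from !131):

```python
xs = [0.0, 0.5, 1.0]  # three neighbouring points on the x-axis

# buggy: subtracting a number from itself always gives zero,
# silently flattening the curvature estimate
dx_buggy = xs[1] - xs[1]

# fixed: subtract two DIFFERENT numbers
dx_fixed = xs[1] - xs[0]
```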
The commit is self-explanatory, I suppose.

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/137
adhere to PEP 8 by using absolute imports (Bas Nijholt, 2018-11-30)
See https://www.python.org/dev/peps/pep-0008/#imports.
This is also how it's done in all of Python's standard library.
@jbweston and @anton-akhmerov, is this good to merge?

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/136
build the Docker image used in CI (Bas Nijholt, 2018-11-30)

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/135
test all the different loss functions in each test (Bas Nijholt, 2018-11-30)
Improve the tests:
* test all the different loss functions in each test
* add `Learner1D._recompute_losses_factor` to remove `xfail` from two tests
* speed up the tests

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/134
change resolution_loss to a factory function (Bas Nijholt, 2018-11-30)

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/133
make 'fname' a parameter to 'save' and 'load' only (Joseph Weston, 2018-11-30)
This simplifies the API by making sure that the filenames are
only provided in one place (the calls to save and load).
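The design can be sketched with a toy class (hypothetical names, not the actual adaptive learner): the learner stores no filename, and `fname` appears only at the save/load call sites.

```python
import os
import pickle
import tempfile

class ToyLearner:
    """Toy sketch of the simplified API: no fname on the instance."""

    def __init__(self):
        self.data = {}

    def tell(self, x, y):
        self.data[x] = y

    def save(self, fname):
        # filename is a parameter here, not constructor state
        with open(fname, "wb") as f:
            pickle.dump(self.data, f)

    def load(self, fname):
        with open(fname, "rb") as f:
            self.data = pickle.load(f)

learner = ToyLearner()
learner.tell(0.5, 0.25)

fname = os.path.join(tempfile.mkdtemp(), "learner.pickle")
learner.save(fname)

restored = ToyLearner()
restored.load(fname)
```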
Closes #122

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/132
WIP: add support for neighbours in loss computation in LearnerND (Jorn Hoofwijk, 2019-02-07)
Closes #120
TODO: add support for output in $`R^N`$
TODO: rewrite the code to be more readable; I will do this next week
![image](/uploads/7da085a4974369b21915ac683aed6337/image.png)
As you can see in the plot, it is getting hard to distinguish the LearnerND from the Learner2D :D

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/131
Resolve "(Learner1D) add possibility to use the direct neighbors in the loss" (Jorn Hoofwijk, 2018-11-30)
Closes #119
Currently works for $`R^1 \to R^1`$.
Still have to make it work for $`R^1 \to R^N`$.
Also, performance is actually quite good: the learner slows down only about 1.5 times, going (on my laptop) from 3 seconds per 1000 points to 4.5 seconds per 1000 points.
This, I believe, will be more than compensated by the fact that the chosen points are generally better.
Assignee: Anton Akhmerov

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/130
save execution time on futures inside runners (Joseph Weston, 2018-11-30)
Now `async def` functions can be learned. This was previously documented as working, but has been broken since we started timing the execution of the learned function by wrapping it.

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/129
add tutorials (Bas Nijholt, 2018-11-30)
Uses https://github.com/jbweston/jupyter-sphinx/tree/feature/execute and depends on https://github.com/jupyter-widgets/jupyter-sphinx/pull/22.

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/128
LearnerND scale output values before computing loss (Jorn Hoofwijk, 2018-12-07)
Closes #78
In a way similar to the Learner1D, recompute all losses when the scale increases by a factor of 1.1.
My simple measurements indicate that, with `ring_of_fire` as the test function, the learner slows down by approximately 10%. But in return we scale everything to equivalent sizes before computing the loss.
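The idea can be sketched as follows (a toy illustration, not the LearnerND internals; the function name and scale values are made up): losses are computed on coordinates rescaled to comparable sizes, so one axis cannot dominate just because of its units.

```python
from math import hypot

def scaled_loss(p1, p2, x_scale, y_scale):
    """Toy edge-length loss computed in *scaled* coordinates,
    so x and y contribute comparably regardless of their units."""
    dx = (p2[0] - p1[0]) / x_scale
    dy = (p2[1] - p1[1]) / y_scale
    return hypot(dx, dy)

# hypothetical data ranges (peak-to-peak of the points seen so far)
x_scale, y_scale = 1.0, 100.0
loss = scaled_loss((0.0, 0.0), (0.5, 50.0), x_scale, y_scale)
# without the scaling, the y-difference of 50 would dominate the loss
```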
TODO:
- [x] Add some test(s).

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/127
Resolve "(LearnerND) allow any convex hull as domain" (Jorn Hoofwijk, 2018-11-30)
Closes #114
Works especially well when combined with !124
Usage:
```python
import scipy.spatial
import adaptive

adaptive.notebook_extension()

def f(xyz):
    x, y, z = xyz
    return x**4 + y**4 + z**4 - (x**2 + y**2 + z**2)**2

b = [(-1, -1, -1),
     (-1,  1, -1),
     (-1, -1,  1),
     (-1,  1,  1),
     ( 1,  1, -1),
     ( 1, -1, -1)]
bounds = scipy.spatial.ConvexHull(b)

learner = adaptive.LearnerND(f, bounds)
runner = adaptive.Runner(learner, goal=lambda l: l.npoints > 1000)
runner.live_info()
```
Now, if you also have !124 at your disposal, you can run something like this:
```python
learner.plot_isosurface(-0.5)
```
in order to get this:
![image](/uploads/744006209c9fce87ac2f5ec827b2eb26/image.png)
otherwise you could run:
```python
learner.plot_slice({1:0})
```
to get this:
![image](/uploads/b5cdadc6f7fdbbb735b802219fdce71a/image.png)
Assignee: Joseph Weston

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/126
add check_whitespace (Bas Nijholt, 2018-11-30)

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/125
update to the latest miniver (Bas Nijholt, 2018-11-30)
And add the URL to the [miniver repo](https://github.com/jbweston/miniver).
Assignee: Joseph Weston

https://gitlab.kwant-project.org/qt/adaptive/-/merge_requests/124
Resolve "(LearnerND) add iso-surface plot feature" (Jorn Hoofwijk, 2018-11-30)
Closes #112