
Resolve "(Learner1D) add possibility to use the direct neighbors in the loss"

Merged Jorn Hoofwijk requested to merge 119-add-second-order-loss-to-adaptive into master

Closes #119 (closed)

Currently works for ℝ¹ → ℝ¹.

Still have to make it work for ℝ¹ → ℝᴺ.

Performance is also quite good: the learner slows down by about 1.5×, going (on my laptop) from 3 seconds per 1000 points to 4.5 seconds per 1000 points.

This will, I believe, be more than compensated by the fact that the chosen points are generally better.

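The idea behind using the direct neighbors can be sketched as follows (a minimal illustration of the principle, not the actual implementation in this MR): include the neighboring points of each interval and use the area of the triangle they span as a measure of curvature.

```python
def triangle_area(xs, ys):
    """Area of the triangle spanned by three (x, y) points.

    A straight line gives zero area; the more the function bends,
    the larger the area, so intervals with high curvature get a
    higher loss and are sampled more densely.
    """
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    # Half the absolute cross product of the two edge vectors.
    return 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))

# Points on a straight line contribute zero loss ...
assert triangle_area((0, 1, 2), (0, 1, 2)) == 0
# ... while a kink gives a positive loss.
assert triangle_area((0, 1, 2), (0, 1, 0)) > 0
```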
Merge request reports

Pipeline #13516 passed

Pipeline passed for fde1774b on 119-add-second-order-loss-to-adaptive

Test coverage 80.00% (1.00%) from 1 job
Approval is optional

Merged by Bas Nijholt 6 years ago (Nov 22, 2018 10:52am UTC)

Merge details

Pipeline #13518 passed

Pipeline passed for 7408ed57 on master

Test coverage 80.00% (1.00%) from 1 job

Activity

  • Jorn Hoofwijk added 1 commit

    added 1 commit

    Compare with previous version

  • Bas Nijholt resolved all discussions

    resolved all discussions

  • I've just tried to use this IRL and I discovered a bug.

    tell_many isn't working yet.

  • Nooooo, working on it

  • Jorn Hoofwijk added 1 commit

    added 1 commit

    • b393efc2 - use _get_loss_in_interval in tell_many method

    Compare with previous version

  • should be good now

  • Perfect! I am using it ATM and it seems to work really well.

The only thing I don't like is that it adds an extra complication for the user.

There might be an automatic way of determining that loss_depends_on_neighbors = True, so that the user wouldn't have to specify it. We could of course also change the API such that the neighbors are always passed, which is a very cheap operation; updating the neighboring losses is a bit expensive to always do, though. Probably @anton-akhmerov and @jbweston have an opinion.

Well, the only way would be to either make it a separate learner (like CurvatureLearner1D) or make it true by default, as I cannot think of an easy way to automatically determine it.

    Unless one would use a decorator around a loss, or something else that indicates to the learner that we need the neighbours:

    e.g.

    @uses_neighbours
    def loss(interval, neighbours, scale, all_points):
        return "something"
    Edited by Jorn Hoofwijk
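Such a decorator could be as simple as attaching a flag that the learner inspects before calling the loss (a hypothetical sketch of the idea floated above; `uses_neighbours` is not part of the final API):

```python
def uses_neighbours(loss):
    """Mark a loss function as needing the neighbouring points.

    The learner would check getattr(loss, 'uses_neighbours', False)
    to decide whether to pass the neighbours along.
    """
    loss.uses_neighbours = True
    return loss

@uses_neighbours
def my_loss(interval, neighbours, scale, all_points):
    return 0.0  # placeholder body

# The learner can now detect the requirement:
assert getattr(my_loss, "uses_neighbours", False)
```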
  • Bas Nijholt added 1 commit

    added 1 commit

    • be4765b7 - remove 'loss_depends_on_neighbours' and replace by 'nn_neighbors'

    Compare with previous version

I removed loss_depends_on_neighbours and replaced it with nn_neighbors (N nearest neighbors). This makes the code work with any number of neighbors; the new triangle loss now even works with nn_neighbors=0. I also made it so that the neighbors are passed to all loss_per_interval functions.

    Edited by Bas Nijholt
  • Bas Nijholt added 1 commit

    added 1 commit

    • 9d49bc09 - rename 'function_values' to 'data' because it's more obvious what it is

    Compare with previous version

  • Bas Nijholt added 2 commits

    added 2 commits

    • 0b397ad2 - remove 'loss_depends_on_neighbours' and replace by 'nn_neighbors'
    • 7fdd00a4 - rename 'function_values' to 'data' because it's more obvious what it is

    Compare with previous version

  • Bas Nijholt added 2 commits

    added 2 commits

    • 69334d61 - remove 'loss_depends_on_neighbours' and replace by 'nn_neighbors'
    • 544b0dc6 - fix typo

    Compare with previous version

  • Bas Nijholt added 1 commit

    added 1 commit

    • 0e0dd5da - rename 'function_values' to 'data' because its more obvious

    Compare with previous version

  • Bas Nijholt added 1 commit

    added 1 commit

    Compare with previous version

  • @Jorn I am really confused:

loss = adaptive.learner.learner1D.get_curvature_loss()
    learner1 = adaptive.Learner1D(f, bounds=(-1, 1), nn_neighbors=0, loss_per_interval=loss)
    learner2 = adaptive.Learner1D(f, bounds=(-1, 1), nn_neighbors=1, loss_per_interval=loss)
    runner = adaptive.BlockingRunner(learner1, goal=lambda l: l.npoints > 400, log=True)
    adaptive.runner.replay_log(learner2, runner.log)
    assert learner1.data == learner2.data
    assert learner1.losses == learner2.losses

    This implies that the losses of the neighbors don't really need to be updated, or not?

edit: I understand this now. This happens on my computer with just 2 cores; however, on io (with 48 cores) there are more interpolated intervals, so an AssertionError is raised there.

    Edited by Bas Nijholt
  • I would expect that the second assertion should fail for any function that is not a straight line, regardless of the number of cores.

    Edited by Jorn Hoofwijk
  • Jorn Hoofwijk added 1 commit

    added 1 commit

    • f9032bc4 - take the correct neigbours into account

    Compare with previous version

  • @basnijholt I found why it goes wrong and solved it in: f9032bc4

  • Bas Nijholt added 1 commit

    added 1 commit

    • aff1f1e7 - fix deprecation warning and simplify

    Compare with previous version

  • Good catch!

    I think it only needs some more documentation and then this is ready :)

  • Bas Nijholt added 1 commit

    added 1 commit

    • 1d637cc8 - introduce 'fast_det' for 2x2 and 3x3 matrices

    Compare with previous version
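Expanding the determinant by hand for 2×2 and 3×3 matrices avoids the overhead of a general np.linalg.det call for these small, fixed sizes (a sketch of the idea behind this commit; the actual fast_det may differ in detail):

```python
import numpy as np

def fast_det(matrix):
    """Determinant of a 2x2 or 3x3 matrix via the explicit cofactor
    formulas, falling back to numpy for any other shape."""
    matrix = np.asarray(matrix, dtype=float)
    if matrix.shape == (2, 2):
        (a, b), (c, d) = matrix
        return a * d - b * c
    if matrix.shape == (3, 3):
        (a, b, c), (d, e, f), (g, h, i) = matrix
        return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return np.linalg.det(matrix)

m = [[1, 2], [3, 4]]
assert np.isclose(fast_det(m), np.linalg.det(m))  # both give -2
```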

  • Bas Nijholt mentioned in merge request !132

    mentioned in merge request !132

  • Bas Nijholt added 1 commit

    added 1 commit

    Compare with previous version

  • Bas Nijholt resolved all discussions

    resolved all discussions

  • Jorn Hoofwijk added 1 commit

    added 1 commit

    Compare with previous version

  • Bas Nijholt added 1 commit

    added 1 commit

    • 86363644 - introduce 'annotate_with_nn_neighbors' decorator

    Compare with previous version

  • @Jorn, I just figured that the user shouldn't need to worry about setting nn_neighbors him/herself, so I added the annotate_with_nn_neighbors decorator in 86363644.

    What do you think?
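The decorator can store the number of neighbors the loss needs on the function itself, so the learner picks it up without the user passing a separate argument (a minimal sketch under the API discussed here; the name follows the commit message, the details are assumed):

```python
import functools

def annotate_with_nn_neighbors(nn_neighbors):
    """Attach the required number of nearest neighbors to a
    loss_per_interval function as an attribute."""
    def decorator(loss_per_interval):
        @functools.wraps(loss_per_interval)
        def wrapper(*args, **kwargs):
            return loss_per_interval(*args, **kwargs)
        wrapper.nn_neighbors = nn_neighbors
        return wrapper
    return decorator

@annotate_with_nn_neighbors(nn_neighbors=1)
def curvature_loss(xs, ys):
    return 0.0  # placeholder body

# The learner can now read the requirement off the function:
assert curvature_loss.nn_neighbors == 1
```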

  • Jorn Hoofwijk
  • Joseph Weston
  • Jorn Hoofwijk added 1 commit

    added 1 commit

    • 51b8ea9c - remove nn_neighbors from Learner1D signature

    Compare with previous version

Other than that, it seems good.

  • Jorn Hoofwijk added 1 commit

    added 1 commit

    • f2b60936 - update tutorial to use new api

    Compare with previous version

  • Jorn Hoofwijk added 1 commit

    added 1 commit

    • f73878e0 - add two samples of loss functions with nn_neighbors=1

    Compare with previous version

Some of the changes seem unrelated; why is the simplex volume modified?

  • The simplex volume is used in the loss here.

It's changed slightly to speed things up.
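For reference, the volume of an n-simplex is the absolute determinant of its edge vectors divided by n!, which for a triangle in the plane reduces to half a 2×2 determinant (a sketch of the underlying formula, not the code changed in this MR):

```python
from math import factorial

import numpy as np

def simplex_volume(vertices):
    """Volume of a simplex given as (n+1) points in n dimensions:
    |det(v1 - v0, ..., vn - v0)| / n!."""
    vertices = np.asarray(vertices, dtype=float)
    edges = vertices[1:] - vertices[0]  # n edge vectors from v0
    n = edges.shape[0]
    return abs(np.linalg.det(edges)) / factorial(n)

# The unit right triangle has area 1/2.
assert np.isclose(simplex_volume([(0, 0), (1, 0), (0, 1)]), 0.5)
```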

  • Bas Nijholt added 10 commits

    added 10 commits

    • 008d47e1 - 1 commit from branch master
    • bcd213c3 - added a curvature_loss function to learner1D
    • ff20135d - remove 'loss_depends_on_neighbours' and replace by 'nn_neighbors'
    • d1c4e279 - rename 'function_values' to 'data' because its more obvious
    • a0e73d8b - simplifications
    • 4a41f4f5 - introduce 'fast_det' for 2x2 and 3x3 matrices
    • 20eba318 - fix tests
    • 590e9738 - introduce 'annotate_with_nn_neighbors' decorator
    • a28192b3 - added curvature docs
    • 260b0ddb - add two samples of loss functions with nn_neighbors=1

    Compare with previous version

  • Bas Nijholt added 5 commits

    added 5 commits

    • 2ccd615a - remove 'loss_depends_on_neighbours' and replace by 'nn_neighbors'
    • 541e01e5 - rename 'function_values' to 'data' because its more obvious
    • 9a8e2e3c - introduce 'fast_det' for 2x2 and 3x3 matrices
    • aad2b8c0 - introduce 'annotate_with_nn_neighbors' decorator
    • c35a0977 - added curvature docs

    Compare with previous version

  • Bas Nijholt added 1 commit

    added 1 commit

    Compare with previous version
