
LearnerND scale output values before computing loss

Merged Jorn Hoofwijk requested to merge 78-scale-output-values into master
All threads resolved!

Closes #78 (closed)

Scale the output values before computing the loss, in a way similar to the Learner1D.

Recompute all losses when the scale increases by a factor of 1.1.

My simple measurements indicate that, with ring_of_fire as the test function, the learner slows down by approximately 10%. In return, everything is scaled to comparable sizes before the loss is computed.
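
Roughly, the mechanism looks like this (a simplified sketch, not the actual LearnerND code; the class and method names are made up for illustration, only _recompute_losses_factor and the factor 1.1 come from this MR):

    class ScaledLossSketch:
        """Illustration only: compute losses on output values divided by the
        current output range, and recompute all stored losses once that range
        has grown by more than the recompute factor (1.1)."""

        def __init__(self, loss_per_simplex, recompute_factor=1.1):
            self.loss_per_simplex = loss_per_simplex
            self._recompute_losses_factor = recompute_factor
            self.data = {}     # point -> value
            self.losses = {}   # simplex (tuple of points) -> loss
            self._scale_at_last_recompute = None

        @property
        def _output_scale(self):
            values = list(self.data.values())
            return max(values) - min(values) if values else 0

        def _scaled_loss(self, simplex):
            scale = self._output_scale or 1   # avoid dividing by zero
            return self.loss_per_simplex(simplex,
                                         [self.data[p] / scale for p in simplex])

        def tell(self, point, value, simplex):
            self.data[point] = value
            self.losses[simplex] = self._scaled_loss(simplex)
            scale = self._output_scale
            if self._scale_at_last_recompute is None:
                self._scale_at_last_recompute = scale
            elif scale > self._recompute_losses_factor * self._scale_at_last_recompute:
                # the output range grew by more than 10%: older losses used an
                # outdated normalization, so recompute every stored loss
                for s in self.losses:
                    self.losses[s] = self._scaled_loss(s)
                self._scale_at_last_recompute = scale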

TODO:

  • Add some test(s).
Edited by Jorn Hoofwijk

Merge request reports

Pipeline #13939 passed

Pipeline passed for 1a002b1b on 78-scale-output-values

Test coverage 81.00% (34.00%) from 1 job
Approval is optional

Merged by Bas Nijholt 6 years ago (Dec 7, 2018 4:13pm UTC)

Merge details

  • Changes merged into master with a90d2579 (commits were squashed).
  • Deleted the source branch.
  • Auto-merge enabled

Pipeline #13940 passed

Pipeline passed for a90d2579 on master

Test coverage 81.00% (34.00%) from 1 job

Activity

  • Jorn Hoofwijk added 1 commit

  • Jorn Hoofwijk unmarked as a Work In Progress

  • Jorn Hoofwijk changed the description

  • Jorn Hoofwijk added 1 commit

    • a8666ddc - use relative scale rather than absolute scale to avoid float errors

  • Jorn Hoofwijk added 1 commit

  • Then I think it should be good like this.

  • Jorn Hoofwijk added 1 commit

    • e8378ed4 - make scale computation somewhat more readlable

  • Bas Nijholt added 66 commits

  • Bas Nijholt added 1 commit

    • 13c8c2d6 - scale the losses when the scale increases with a factor 1.1

  • Bas Nijholt added 12 commits

  • Bas Nijholt resolved all discussions

  • I have rebased, made some small changes, and think this is good to go!

    @anton-akhmerov and @jbweston do you want to take another look at it?

  • Bas Nijholt added 3 commits

    • 06b9200f - 1 commit from branch master
    • 5b74661c - scale the losses when the scale increases with a factor 1.1
    • b480a6f8 - round the loss to a certain precision

  • Bas Nijholt added 1 commit

    • f935f352 - round the loss to a certain precision

  • mentioned in issue #125 (closed)

  • @Jorn I think there was just this last issue before we can merge; is that right?

  • Jorn Hoofwijk added 2 commits

    • 6769c45d - make plotting function for low number of points
    • f8117ec7 - call update_range from the correct position!!

  • Jorn Hoofwijk resolved all discussions

  • Jorn Hoofwijk added 1 commit

    • 6af10b15 - Revert "make plotting function for low number of points"

  • Bas Nijholt added 8 commits

  • I rebased. The tests sometimes pass, but not always.

    For example, the following still fails:

    import math

    import adaptive
    from adaptive import LearnerND
    import numpy as np
    from adaptive.tests.test_learners import *
    
    def f(xyz, d=0.4850418018218789):
        a = 0.2
        x, y, z = xyz
        return x + math.exp(-(x**2 + y**2 + z**2 - d**2)**2 / a**4) + z**2
    
    learner_kwargs = dict(bounds=((-1, 1), (-1, 1), (-1, 1)),
                          loss_per_simplex=adaptive.learner.learnerND.uniform_loss)
    
    control_kwargs = dict(learner_kwargs)
    control = LearnerND(f, **control_kwargs)
    
    xscale, yscale = 212.71376207339554, 905.2031939573303
    
    l_kwargs = dict(learner_kwargs)
    l_kwargs['bounds'] = xscale * np.array(l_kwargs['bounds'])
    learner = LearnerND(lambda x: yscale * f(np.array(x) / xscale),
                           **l_kwargs)
    
    learner._recompute_losses_factor = 1
    control._recompute_losses_factor = 1
    
    npoints = 201
    
    for n in range(npoints):
        cxs, _ = control.ask(1)
        xs, _ = learner.ask(1)
        control.tell_many(cxs, [control.function(x) for x in cxs])
        learner.tell_many(xs, [learner.function(x) for x in xs])
    
        # Check whether the points returned are the same
        xs_unscaled = np.array(xs) / xscale
        assert np.allclose(xs_unscaled, cxs)
  • Actually, the problem occurs earlier, at point 57. Check with:

    for n in range(npoints):
        ...
        assert control._simplex_queue == learner._simplex_queue  # add this line
  • (image attached)

    I'm not entirely sure it's possible to fully mitigate all errors. If we take a simplex with some loss and divide it equally among 2 children with equal area, you obviously expect their losses to be equal, yet numerical precision may introduce a small difference.

    Suppose a loss, rounded to say 4 decimal places (4 because it's easy), is 0.1111. Dividing it in two should yield 0.05555 for each of the two children. Now apply some numerical error: loss1 = 0.055550000001 and loss2 = 0.0555499999999, or something like that. Rounding to four decimal places again gives loss1_rounded = 0.0556 and loss2_rounded = 0.0555.

    The same applies if we round to more digits, and it can even happen if we round the loss to 12 digits and the subdivided loss to 8 digits. No matter how we round, there is always the possibility that we end up slightly below or slightly above the rounding cutoff, resulting in different numbers.
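
    For illustration, with made-up numbers (not taken from the learner itself):

    # two children that should have identical loss end up on opposite
    # sides of the rounding cutoff
    loss1 = 0.055550000001    # parent loss / 2, plus a tiny float error
    loss2 = 0.0555499999999   # parent loss / 2, minus a tiny float error

    print(round(loss1, 4))    # 0.0556
    print(round(loss2, 4))    # 0.0555 -> the "equal" losses now differ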

  • OK, it seems we can't fix this in general then... Should we keep the test as xfailing and merge?

  • Yeah, I would suggest doing so. However, maybe we should also add a test that is not randomized and that does succeed now, so that we get notified whenever something that works today gets broken; something along the lines of the sketch below.

    Because if we just xfail, we will never notice.
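
    Something like this could work as a non-randomized check (a sketch only, loosely based on the failing example above; the fixed scales are arbitrary, so whether this particular choice keeps passing would of course have to be verified):

    import numpy as np

    import adaptive
    from adaptive import LearnerND

    def test_learnerND_scaling_invariance_fixed_scales():
        # fixed (non-random) scales, so any failure reproduces deterministically
        xscale, yscale = 2.0, 100.0

        def f(xyz):
            x, y, z = xyz
            return x + z**2

        kwargs = dict(bounds=((-1, 1), (-1, 1), (-1, 1)),
                      loss_per_simplex=adaptive.learner.learnerND.uniform_loss)

        control = LearnerND(f, **kwargs)
        scaled_kwargs = dict(kwargs)
        scaled_kwargs['bounds'] = xscale * np.array(kwargs['bounds'])
        learner = LearnerND(lambda x: yscale * f(np.array(x) / xscale),
                            **scaled_kwargs)

        control._recompute_losses_factor = 1
        learner._recompute_losses_factor = 1

        for _ in range(100):
            cxs, _ = control.ask(1)
            xs, _ = learner.ask(1)
            control.tell_many(cxs, [control.function(x) for x in cxs])
            learner.tell_many(xs, [learner.function(x) for x in xs])
            # the scaled learner should request the same (rescaled) points
            assert np.allclose(np.array(xs) / xscale, cxs)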

  • And possibly also open an issue; maybe it's something we want to solve, maybe it is not. We will have to see in the future, I suppose.

  • Bas Nijholt added 5 commits

    • e0841808...fa4696ed - 2 commits from branch master
    • e7ad124b - scale the losses when the scale increases with a factor 1.1
    • a995d832 - round the loss to a certain precision
    • 1a002b1b - xfail(LearnerND) for test_learner_performance_is_invariant_under_scaling

  • Bas Nijholt enabled an automatic merge when the pipeline for 1a002b1b succeeds

  • Bas Nijholt mentioned in commit a90d2579

  • merged
