LearnerND scale output values before computing loss
Closes #78 (closed)
In a way similar to the Learner1D, recompute all losses when the scale increases by a factor of 1.1.
My simple measurements indicate that with ring_of_fire as the test function the learner slows down by approximately 10%, but in return all values are scaled to equivalent sizes before the loss is computed.
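As a rough sketch of the idea (only `_recompute_losses_factor` is taken from this MR; the class name, the max-based scale update, and the counter below are illustrative placeholders, not the actual LearnerND code):

```python
class ScaleTrackingLearnerSketch:
    """Illustrative sketch only; not the actual LearnerND implementation."""

    def __init__(self):
        self._recompute_losses_factor = 1.1  # threshold used in this MR
        self._scale = 1.0      # current extent of the observed output values
        self._old_scale = 1.0  # scale at which the stored losses were computed
        self.n_full_recomputes = 0

    def tell(self, value):
        # Track the extent of the output values; losses are computed on
        # values divided by this scale, so simplices are compared at
        # equivalent sizes.
        self._scale = max(self._scale, abs(value))
        # Recomputing every loss on each new point would be expensive, so
        # only do it once the scale has grown by more than the threshold.
        if self._scale > self._old_scale * self._recompute_losses_factor:
            self.n_full_recomputes += 1  # stand-in for recomputing all losses
            self._old_scale = self._scale


learner = ScaleTrackingLearnerSketch()
for y in [0.5, 1.0, 1.05, 1.2, 10.0]:
    learner.tell(y)
print(learner.n_full_recomputes)  # 2: triggered at y=1.2 and y=10.0
```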
TODO:
- Add some test(s).
Merge request reports
Activity
added 1 commit
- 3f283697 - add second test (very similar to the first one) that only looks if the
- Resolved by Bas Nijholt
- Resolved by Jorn Hoofwijk
added 1 commit
- a8666ddc - use relative scale rather than absolute scale to avoid float errors
added 1 commit
- e8378ed4 - make scale computation somewhat more readable
added 66 commits
- e8378ed4...353bebb8 - 65 commits from branch master
- 439f2b53 - scale the losses when the scale increases with a factor 1.1
added 1 commit
- 13c8c2d6 - scale the losses when the scale increases with a factor 1.1
added 12 commits
- 13c8c2d6...2f2e80d0 - 11 commits from branch master
- f5ff42d3 - scale the losses when the scale increases with a factor 1.1
I have rebased, made some small changes, and think this is good to go!
@anton-akhmerov and @jbweston do you want to take another look at it?
- Resolved by Jorn Hoofwijk
- Resolved by Joseph Weston
mentioned in issue #125 (closed)
@Jorn I think there was just this last issue before we can merge; is that right?
added 1 commit
- 6af10b15 - Revert "make plotting function for low number of points"
added 8 commits
- 6af10b15...31bebf0d - 6 commits from branch master
- 7b7ed623 - scale the losses when the scale increases with a factor 1.1
- e0841808 - round the loss to a certain precision
I rebased. The tests sometimes pass, but not always.
For example, the following still fails:
```python
import math  # needed for math.exp in f below

import adaptive
from adaptive import LearnerND
import numpy as np
from adaptive.tests.test_learners import *


def f(xyz, d=0.4850418018218789):
    a = 0.2
    x, y, z = xyz
    return x + math.exp(-(x**2 + y**2 + z**2 - d**2)**2 / a**4) + z**2


learner_kwargs = dict(
    bounds=((-1, 1), (-1, 1), (-1, 1)),
    loss_per_simplex=adaptive.learner.learnerND.uniform_loss,
)

control_kwargs = dict(learner_kwargs)
control = LearnerND(f, **control_kwargs)

xscale, yscale = 212.71376207339554, 905.2031939573303

l_kwargs = dict(learner_kwargs)
l_kwargs['bounds'] = xscale * np.array(l_kwargs['bounds'])
learner = LearnerND(lambda x: yscale * f(np.array(x) / xscale), **l_kwargs)

learner._recompute_losses_factor = 1
control._recompute_losses_factor = 1

npoints = 201
for n in range(npoints):
    cxs, _ = control.ask(1)
    xs, _ = learner.ask(1)
    control.tell_many(cxs, [control.function(x) for x in cxs])
    learner.tell_many(xs, [learner.function(x) for x in xs])

    # Check whether the points returned are the same
    xs_unscaled = np.array(xs) / xscale
    assert np.allclose(xs_unscaled, cxs)
```
I'm not entirely sure whether it is possible to fully mitigate all errors. If we have a simplex with some loss and divide it equally among 2 children with equal area, you obviously expect the two child losses to be equal, yet due to limited numerical precision they may differ slightly.
Suppose the parent loss, rounded to say 4 decimal places (4 because it's easy), is 0.1111. Dividing it in two should yield 0.05555 for each of the two children. Now apply some numerical error: loss1 = 0.055550000001 and loss2 = 0.0555499999999, or something like that. Rounding to four decimal places again gives loss1_rounded = 0.0556 and loss2_rounded = 0.0555. The same applies if we round to more digits, and it can even happen if we round the loss to 12 digits and the subdivided loss to 8 digits. No matter how we round, there is always the possibility that we end up slightly below or slightly above the rounding cutoff, resulting in a different number.
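A tiny illustration of that rounding problem, using the numbers from the example above:

```python
# Two child losses that should be identical, differing only by numerical noise.
loss1 = 0.055550000001   # slightly above the 4-decimal rounding cutoff
loss2 = 0.0555499999999  # slightly below it

print(round(loss1, 4))  # 0.0556
print(round(loss2, 4))  # 0.0555 -> the rounded losses differ after all
```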
added 5 commits
- e0841808...fa4696ed - 2 commits from branch master
- e7ad124b - scale the losses when the scale increases with a factor 1.1
- a995d832 - round the loss to a certain precision
- 1a002b1b - xfail(LearnerND) for test_learner_performance_is_invariant_under_scaling
enabled an automatic merge when the pipeline for 1a002b1b succeeds
mentioned in commit a90d2579