Learner1D doesn't correctly set the interpolated loss when a point is added
import numpy as np
import adaptive

l = adaptive.Learner1D(lambda x: x, (0, 4))
l.tell(0, 0)
l.tell(1, 0)
l.tell(2, 0)
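# The bound x=4 has not been evaluated yet, so it is suggested first, with infinite loss.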
assert l.ask(1) == ([4], [np.inf])
assert l.losses == {(0, 1): 0.25, (1, 2): 0.25}
assert l.losses_combined == {(0, 1): 0.25, (1, 2): 0.25, (2, 4.0): np.inf}
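# Asking for one more point suggests x=3, the midpoint of the still-unknown interval (2, 4).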
# assert l.ask(1) == ([3], [np.inf])
l.ask(1)
assert l.losses == {(0, 1): 0.25, (1, 2): 0.25}
assert l.losses_combined == {(0, 1): 0.25, (1, 2): 0.25, (2, 3.0): np.inf, (3.0, 4.0): np.inf}
l.tell(4, 0)
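# Now that the real value at x=4 is known, every interpolated loss should become finite.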
assert l.losses_combined == {(0, 1): 0.25, (1, 2): 0.25, (2, 3): 0.25, (3, 4): 0.25}
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-93-ff4ff3682c03> in <module>()
18 l.tell(4, 0)
19
---> 20 assert l.losses_combined == {(0, 1): 0.25, (1, 2): 0.25, (2, 3): 0.25, (3, 4): 0.25}
AssertionError:
Instead, l.losses_combined == {(0, 1): 0.25, (1, 2): 0.25, (2, 3.0): inf, (3.0, 4.0): 0.25},
where the entry (2, 3.0): inf should be (2, 3.0): 0.25.
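For reference, here is a minimal sketch of the behaviour the last assert expects: once the real value at x=4 is told, every interval that lies between real data points should get a finite loss. The helper name refresh_losses_combined is hypothetical (it is not the library's implementation), and it assumes the default loss reduces to the interval length divided by the x-range, which holds here because the data are flat.

def refresh_losses_combined(losses_combined, real_xs, x_range):
    # Hypothetical helper: give a finite, length-based loss to every interval
    # that lies entirely between known (real) data points, instead of leaving
    # it at the infinite placeholder used for unexplored regions.
    lo, hi = min(real_xs), max(real_xs)
    for a, b in list(losses_combined):
        if lo <= a and b <= hi:
            losses_combined[(a, b)] = (b - a) / x_range
    return losses_combined

refresh_losses_combined(l.losses_combined, [0, 1, 2, 4], 4)
# -> {(0, 1): 0.25, (1, 2): 0.25, (2, 3.0): 0.25, (3.0, 4.0): 0.25}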