Discontinuities at zero should be detected and approximated with some margin
If your function has a discontinuity around x=0 whose step is larger than the desired loss, the runner keeps refining ever closer to the step (points can get as close as 1.04e-322 in the sample below) instead of stopping at a reasonable margin.
Sample case:
```python
import time

import adaptive

adaptive.notebook_extension()

def f(x):
    time.sleep(0.1)
    return 1 if x > 0 else -1

l = adaptive.Learner1D(f, (-1, 1))
r = adaptive.Runner(l, goal=lambda l: l.loss() < 0.05)
r.live_info()
r.live_plot(update_interval=0.1)
```
Discontinuities at other points seem to be detected more or less correctly, which probably has to do with floating-point accuracy. I think the source of the difference is that around zero a float can become extremely small because the exponent just keeps getting more negative (e.g. 5.0e-200 is easily stored in a float), whereas around a non-zero number the spacing between adjacent floats is fixed by the mantissa (e.g. 1.00000001 can be stored in a float, but 1 + 1e-50 rounds back to exactly 1).
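A minimal sketch of this asymmetry, using only the standard library (not part of the adaptive code above): bisecting an interval that straddles 0 can continue for over a thousand halvings before the midpoint underflows to 0, while near 1.0 the spacing between adjacent floats is about 2.2e-16, so refinement there bottoms out almost immediately.

```python
import math

# Near zero: halving 1.0 keeps producing distinct positive floats for
# 1074 steps, down to the smallest subnormal (~4.9e-324), because the
# exponent can keep decreasing.
x = 1.0
steps = 0
while x / 2 > 0:
    x /= 2
    steps += 1
print(steps)  # 1074 halvings before hitting 0

# Near one: the gap to the next representable float is fixed by the
# 52-bit mantissa, so tiny increments simply vanish.
print(math.ulp(1.0))          # ~2.22e-16, spacing at 1.0
print(1.0 + 1e-50 == 1.0)     # True: 1e-50 is far below that spacing
print(5e-200 != 0.0)          # True: yet 5e-200 is trivially representable
```

This is consistent with the observed behaviour: an interval endpoint near a non-zero discontinuity stops being refinable after ~50 bisections, while around x=0 the learner can keep producing new points down to ~1e-322.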
Edited by Jorn Hoofwijk