Quantum Tinkerer / adaptive · Issue #104 · Closed

Issue created Sep 20, 2018 by Jorn Hoofwijk (@Jorn)

Learner1D can in some situations return -inf as the loss improvement, which makes the BalancingLearner never choose to improve it

The following discussion from !99 (merged) should be addressed:

  • @basnijholt started a discussion

To show an example, try running:

import adaptive
import numpy as np
adaptive.notebook_extension()

# if f1 had features in the interval (-1, 0), we would never see them using the BalancingLearner

def f1(x):
    return -1 if x <= 0.1 else 1

def f2(x):
    return x**2


l1 = adaptive.Learner1D(f1, (-1, 1))
l2 = adaptive.Learner1D(f2, (-1, 1))

# now let's create the BalancingLearner and do some balancing :D
bl = adaptive.BalancingLearner([l1, l2])
for i in range(1000):
    xs, _ = bl.ask(1)
    x, = xs
    y = bl.function(x)
    bl.tell(x, y)
    
asked = l1.ask(1, add_data=False)
print(f"l1 requested {asked}, but since loss_improvement is -inf, \n\tbalancinglearner will never choose this")
print(f"npoints: l1: {l1.npoints}, l2: {l2.npoints}, almost all points are added to l2")
print(f"loss():  l1: {l1.loss()}, l2: {l2.loss()}, the actual loss of l1 is much higher than l2.loss")

l1.plot() + l2.plot()

This will output:

l1 requested ([0.10000000000000009], [-inf]), but since loss_improvement is -inf, 
	balancinglearner will never choose this
npoints: l1: 53, l2: 947, almost all points are added to l2
loss():  l1: 1.0, l2: 0.003584776382870768, the actual loss of l1 is much higher than l2.loss

I also have a notebook, bug_learner1d_infinite_loss_improvement.ipynb, that artificially constructs what is happening and gives some more indication of why it happens.

The reason why this happens:

The interval is bigger than _dx_eps, so it has a finite loss associated with it. When asked to improve, the learner finds that dividing this interval creates two intervals which are both smaller than _dx_eps, so it claims the loss_improvement is -inf. This results in the BalancingLearner always choosing another learner to add a point (this second bug is a consequence of the first one).
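To make the mechanism concrete, here is a minimal sketch of that logic. This is not the actual adaptive source; _dx_eps and the loss functions are simplified stand-ins that only mirror the description above:

import math

_dx_eps = 1e-3  # stand-in for the minimum resolvable interval width

def interval_loss(a, b):
    # Intervals narrower than _dx_eps are treated as "done": their loss
    # is -inf so they are never selected for further subdivision.
    if b - a < _dx_eps:
        return -math.inf
    return b - a  # placeholder loss; the real loss also uses function values

def loss_improvement(a, b):
    # Splitting (a, b) at the midpoint: the improvement is estimated from
    # the losses of the two children. If both children fall below _dx_eps,
    # their losses are -inf, and the "improvement" becomes -inf as well,
    # even though the parent interval itself had a finite loss.
    mid = (a + b) / 2
    return max(interval_loss(a, mid), interval_loss(mid, b))

print(loss_improvement(0.0, 1.5 * _dx_eps))  # -inf: both halves < _dx_eps
print(loss_improvement(0.0, 3.0 * _dx_eps))  # finite: children still resolvable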

The problem with the BalancingLearner can be solved by solving #103 (closed). However, the underlying issue (returning -inf as the loss_improvement) is then still not really solved, although one will almost never notice it anymore.
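For illustration, one way such a guard could look is to clamp -inf to a finite floor before it reaches the BalancingLearner. finite_loss_improvement is a hypothetical helper, not part of adaptive's API:

import math

def finite_loss_improvement(raw_improvement, floor=0.0):
    # Treat "cannot subdivide further" as zero improvement rather than -inf,
    # so a BalancingLearner comparing improvements across learners still
    # ranks this learner sanely instead of never choosing it.
    if math.isinf(raw_improvement) and raw_improvement < 0:
        return floor
    return raw_improvement

print(finite_loss_improvement(-math.inf))  # 0.0 instead of -inf
print(finite_loss_improvement(0.25))       # unchanged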
