
test all the different loss functions in each test

Merged Bas Nijholt requested to merge test_loss_functions into master

Improve the tests

  • test all the different loss functions in each test
  • add Learner1D._recompute_losses_factor to remove the xfail from two tests (sketched below)
  • speed up the tests
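
A minimal sketch of the first two items, with assumed details (this is an illustration, not the actual test code): a test parametrized over a couple of Learner1D loss functions, with the new Learner1D._recompute_losses_factor attribute, which presumably controls how eagerly losses are recomputed when the data scale changes, set explicitly so results are deterministic enough to compare without an xfail.

    import pytest
    import adaptive
    from adaptive.learner.learner1D import default_loss, uniform_loss

    @pytest.mark.parametrize("loss", [default_loss, uniform_loss])
    def test_learner1d_with_loss(loss):
        # Same learner, a different loss_per_interval per parametrization.
        learner = adaptive.Learner1D(lambda x: x**2, bounds=(-1, 1),
                                     loss_per_interval=loss)
        # Attribute added in this MR; its exact semantics are assumed here.
        learner._recompute_losses_factor = 1
        xs, _ = learner.ask(50)
        for x in xs:
            learner.tell(x, x**2)
        assert learner.npoints == len(xs)
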
Edited by Bas Nijholt

Merge request reports


Activity

  • Bas Nijholt changed the description

  • Joseph Weston added 1 commit

    • 72bebd5d - refactor adding all loss functions to tests involving a learner

  • Now some of the tests fail; we can either merge immediately or try to fix the breakages in this MR.

  • Bas Nijholt added 1 commit

    • b0e4253a - round the losses to 12 digets to make them equal

  • Bas Nijholt added 1 commit

    • 5495da77 - round the losses to 12 digets to make them equal

  • Perhaps we should go back to what we had previously, @basnijholt: explicitly setting the loss functions.

    Now the learner tests take 5 minutes to run, which is pretty unacceptable; you can't just quickly run them on your laptop to see if you broke something.

  • Bas Nijholt added 2 commits

    • 5b4407b3 - round the losses to 12 digits to make them equal
    • ff0c6b5f - add 'with_all_loss_functions' to 'run_with'

  • Author Maintainer

    OK, I added with_all_loss_functions to run_with so that not all tests run with all loss functions (see the sketch below).
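
    A rough sketch of what such a helper could look like (assumptions only; the registry names and the exact signature of the real run_with are not taken from this MR): a decorator factory that builds a pytest.mark.parametrize over (learner, loss) combinations and skips the extra loss functions when with_all_loss_functions=False.

    import pytest

    # Placeholder registry of loss functions per learner type (illustrative).
    LOSS_FUNCTIONS = {
        "Learner1D": ["default_loss", "uniform_loss"],
        "Learner2D": ["default_loss", "uniform_loss"],
    }

    def run_with(*learner_names, with_all_loss_functions=True):
        """Parametrize a test over learners and (optionally) all their losses."""
        combos = []
        for name in learner_names:
            losses = LOSS_FUNCTIONS.get(name, [None])
            if not with_all_loss_functions:
                losses = losses[:1]  # only the default loss
            combos += [(name, loss) for loss in losses]
        return pytest.mark.parametrize("learner_name, loss", combos)

    @run_with("Learner1D", with_all_loss_functions=False)
    def test_something_fast(learner_name, loss):
        assert learner_name == "Learner1D"
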

  • Author Maintainer

    These are all the slow tests at the moment.

    =============================== slowest test durations ===============================
    30.56s call     adaptive/tests/test_learners.py::test_balancing_learner[LearnerND-sphere_of_fire-learner_kwargs4]
    20.11s call     adaptive/tests/test_learners.py::test_learner_performance_is_invariant_under_scaling[Learner1D-quadratic-learner_kwargs0]
    14.49s call     adaptive/tests/test_learners.py::test_learner_performance_is_invariant_under_scaling[Learner1D-quadratic-learner_kwargs1]
    13.31s call     adaptive/tests/test_learners.py::test_learner_performance_is_invariant_under_scaling[Learner1D-linear_with_peak-learner_kwargs5]
    11.91s call     adaptive/tests/test_learners.py::test_balancing_learner[LearnerND-ring_of_fire-learner_kwargs3]
    11.35s call     adaptive/tests/test_learners.py::test_learner_performance_is_invariant_under_scaling[Learner1D-linear_with_peak-learner_kwargs4]
    10.54s call     adaptive/tests/test_learners.py::test_saving_of_balancing_learner[LearnerND-sphere_of_fire-learner_kwargs4]
    9.72s call     adaptive/tests/test_learners.py::test_learner_performance_is_invariant_under_scaling[Learner1D-quadratic-learner_kwargs2]
    7.65s call     adaptive/tests/test_learners.py::test_learner_performance_is_invariant_under_scaling[Learner1D-linear_with_peak-learner_kwargs3]
    4.89s call     adaptive/tests/test_learners.py::test_learner_performance_is_invariant_under_scaling[Learner2D-ring_of_fire-learner_kwargs9]
    3.95s call     adaptive/tests/test_learners.py::test_learner_performance_is_invariant_under_scaling[Learner2D-ring_of_fire-learner_kwargs6]
    3.44s call     adaptive/tests/test_learners.py::test_saving_of_balancing_learner[LearnerND-ring_of_fire-learner_kwargs3]
    2.40s call     adaptive/tests/test_learners.py::test_saving[LearnerND-sphere_of_fire-learner_kwargs4]
    2.11s call     adaptive/tests/test_learners.py::test_saving_with_datasaver[LearnerND-sphere_of_fire-learner_kwargs4]
    1.28s call     adaptive/tests/test_learners.py::test_balancing_learner[Learner1D-quadratic-learner_kwargs0]
    1.25s call     adaptive/tests/test_learners.py::test_balancing_learner[Learner1D-linear_with_peak-learner_kwargs1]
    1.21s call     adaptive/tests/test_learners.py::test_expected_loss_improvement_is_less_than_total_loss[LearnerND-sphere_of_fire-learner_kwargs13]
    1.13s call     adaptive/tests/test_learners.py::test_expected_loss_improvement_is_less_than_total_loss[LearnerND-sphere_of_fire-learner_kwargs14]

    I'll reduce the number of points in test_learner_performance_is_invariant_under_scaling.
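
    For context, a hedged sketch of the shape of such a test (illustrative names and numbers, not the actual test code): run an unscaled and a scaled learner up to the same fixed number of points and compare which x-values they choose; lowering that point count is what makes the test cheaper.

    import adaptive

    def f(x):
        return x + x**2

    def f_scaled(x):
        return 1000 * f(x)

    npoints = 100  # fewer points than before -> faster test

    learner = adaptive.Learner1D(f, bounds=(-1, 1))
    learner_scaled = adaptive.Learner1D(f_scaled, bounds=(-1, 1))

    for l, func in [(learner, f), (learner_scaled, f_scaled)]:
        while l.npoints < npoints:
            xs, _ = l.ask(1)
            for x in xs:
                l.tell(x, func(x))

    # The scaled and unscaled learners should pick (nearly) the same x-values.
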

    Edited by Bas Nijholt
  • Bas Nijholt changed the description

  • Bas Nijholt added 2 commits

    • 03e68f4a - add '_recompute_losses_factor' such that we can set it in the test
    • ffb242c8 - run fewer points in 'test_learner_performance_is_invariant_under_scaling'

  • Author Maintainer

    After those changes, these are the slowest:

    =============================== slowest test durations ===============================
    24.46s call     adaptive/tests/test_learners.py::test_balancing_learner[LearnerND-sphere_of_fire-learner_kwargs4]
    10.98s call     adaptive/tests/test_learners.py::test_balancing_learner[LearnerND-ring_of_fire-learner_kwargs3]
    9.71s call     adaptive/tests/test_learners.py::test_saving_of_balancing_learner[LearnerND-sphere_of_fire-learner_kwargs4]
    3.44s call     adaptive/tests/test_learners.py::test_saving_of_balancing_learner[LearnerND-ring_of_fire-learner_kwargs3]
    1.98s call     adaptive/tests/test_learners.py::test_saving_with_datasaver[LearnerND-sphere_of_fire-learner_kwargs4]
    1.92s call     adaptive/tests/test_learners.py::test_saving[LearnerND-sphere_of_fire-learner_kwargs4]
    1.77s call     adaptive/tests/test_learners.py::test_learner_performance_is_invariant_under_scaling[Learner1D-quadratic-learner_kwargs2]
    1.55s call     adaptive/tests/test_learners.py::test_learner_performance_is_invariant_under_scaling[Learner1D-linear_with_peak-learner_kwargs5]
    1.49s call     adaptive/tests/test_learners.py::test_learner_performance_is_invariant_under_scaling[Learner1D-linear_with_peak-learner_kwargs3]
    1.46s call     adaptive/tests/test_learners.py::test_balancing_learner[Learner1D-linear_with_peak-learner_kwargs1]

    As you can see, the LearnerND is the problem now, mostly because it is horrendously slow when adding non-chosen points (points it did not suggest itself).
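
    To make "non-chosen points" concrete, here is a sketch assuming the standard ask/tell interface (an illustration, not a benchmark): feeding back points the learner asked for is the fast path, while telling it arbitrary points forces it to insert points it never suggested into its triangulation.

    import itertools
    import adaptive

    def sphere(xyz):
        x, y, z = xyz
        return x**2 + y**2 + z**2

    learner = adaptive.LearnerND(sphere, bounds=[(-1, 1)] * 3)

    # Fast path: feed back the points the learner chose itself.
    points, _ = learner.ask(100)
    for p in points:
        learner.tell(p, sphere(p))

    # Slow path: feed points the learner did not choose (a regular grid here).
    for p in itertools.product([-0.5, 0.0, 0.5], repeat=3):
        learner.tell(p, sphere(p))
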

  • Bas Nijholt added 1 commit

  • Author Maintainer

    This PR is ready. @jbweston or @anton-akhmerov, do you want another look at it?

  • Bas Nijholt added 8 commits

    • 65ea8401 - 1 commit from branch master
    • 7a25a5f0 - test all the different loss functions in each test
    • 84bdc885 - refactor adding all loss functions to tests involving a learner
    • 00694f34 - round the losses to 12 digits to make them equal
    • f853f997 - add 'with_all_loss_functions' to 'run_with'
    • 3f074c1b - add '_recompute_losses_factor' such that we can set it in the test
    • b9254a1d - run fewer points in 'test_learner_performance_is_invariant_under_scaling'
    • 3be4c939 - use math.isclose

  • Bas Nijholt added 1 commit

    • 46a78bf6 - use '_recompute_losses_factor' in other tests that had special treatment of the Learner1D

  • Bas Nijholt added 3 commits

    • 952f318f - add '_recompute_losses_factor' such that we can set it in the test
    • 516b74b2 - run fewer points in 'test_learner_performance_is_invariant_under_scaling'
    • d0ac5d6d - use math.isclose

  • Bas Nijholt added 3 commits

    • 57dc3415 - add '_recompute_losses_factor' such that we can set it in the test
    • 08e85333 - run fewer points in 'test_learner_performance_is_invariant_under_scaling'
    • 0e458fad - use math.isclose

  • Bas Nijholt added 1 commit

    • d4723938 - add a __setattr__ method to the DataSaver
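
    The commit above only gives the intent, but the delegation pattern it suggests looks roughly like this (an illustration, not adaptive's actual DataSaver code): forward attribute assignments to the wrapped learner so that setting something like _recompute_losses_factor on the wrapper also reaches the learner inside.

    class WrappedLearner:
        """Toy stand-in for a DataSaver-like wrapper around a learner."""

        def __init__(self, learner):
            # Bypass our own __setattr__ to avoid forwarding this assignment.
            object.__setattr__(self, "learner", learner)

        def __getattr__(self, name):
            # Only called when the attribute is not found on the wrapper itself.
            return getattr(self.learner, name)

        def __setattr__(self, name, value):
            # Forward all attribute assignments to the wrapped learner.
            setattr(self.learner, name, value)
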

  • Bas Nijholt added 9 commits

    • 353bebb8 - 1 commit from branch master
    • 42d94191 - test all the different loss functions in each test
    • 195ba967 - refactor adding all loss functions to tests involving a learner
    • 0e387701 - round the losses to 12 digits to make them equal
    • 741deab2 - add 'with_all_loss_functions' to 'run_with'
    • e116b7cb - add '_recompute_losses_factor' such that we can set it in the test
    • 22e23da1 - run fewer points in 'test_learner_performance_is_invariant_under_scaling'
    • ca37eef5 - use math.isclose
    • 1a0c8ad1 - add a __setattr__ method to the DataSaver

  • Bas Nijholt added 3 commits

    • a994aa9f - add '_recompute_losses_factor' such that we can set it in the test
    • 84daf58f - run fewer points in 'test_learner_performance_is_invariant_under_scaling'
    • b86abb32 - use math.isclose

  • Bas Nijholt added 3 commits

    • cf064824 - add '_recompute_losses_factor' such that we can set it in the test
    • 0a9b03dd - run fewer points in 'test_learner_performance_is_invariant_under_scaling'
    • 7c3aa215 - use math.isclose

  • Bas Nijholt added 3 commits

    • e2f7bb9f - add '_recompute_losses_factor' such that we can set it in the test
    • 966d0aad - run fewer points in 'test_learner_performance_is_invariant_under_scaling'
    • 6ba7812d - use math.isclose

  • Bas Nijholt added 10 commits

    • 6ba7812d...d82fbae7 - 3 commits from branch master
    • ef227b84 - test all the different loss functions in each test
    • c589a2f6 - refactor adding all loss functions to tests involving a learner
    • be74ad62 - round the losses to 12 digits to make them equal
    • d6b2187e - add 'with_all_loss_functions' to 'run_with'
    • 38ff5e26 - add '_recompute_losses_factor' such that we can set it in the test
    • 4da759fb - run fewer points in 'test_learner_performance_is_invariant_under_scaling'
    • d2d955e0 - use math.isclose

  • Bas Nijholt enabled an automatic merge when the pipeline for d2d955e0 succeeds

  • merged

  • Bas Nijholt mentioned in commit 2f2e80d0

  • mentioned in issue #125 (closed)
