LearnerND: scale output values before computing loss
Closes #78
In a way similar to Learner1D: recompute all losses whenever the scale increases by a factor of 1.1.

My simple measurements indicate that with `ring_of_fire` as the test function the learner slows down by approximately 10%. In return, all output values are scaled to equivalent sizes before the loss is computed.
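A minimal sketch of the idea, with hypothetical names (`ScaledLossRecomputer`, `add_value`, `loss_function` are illustrative, not the actual LearnerND API): keep track of the span of the output values, divide by it before evaluating the loss, and only recompute every stored loss once the scale has grown by more than a factor 1.1 since the last full recomputation.

```python
import math


class ScaledLossRecomputer:
    """Illustrative sketch only, not the LearnerND implementation.

    Tracks the output-value scale and recomputes all stored losses
    once the scale has grown by more than a factor 1.1.
    """

    def __init__(self, loss_function):
        # loss_function maps (simplex, scaled_values) -> loss; hypothetical signature
        self.loss_function = loss_function
        self._max_value = -math.inf
        self._min_value = math.inf
        self._scale_at_last_recompute = 0
        self.data = {}    # point -> output value
        self.losses = {}  # simplex (tuple of points) -> loss

    def _update_scale(self, value):
        # The scale is the peak-to-peak range of the observed output values.
        self._max_value = max(self._max_value, value)
        self._min_value = min(self._min_value, value)
        return self._max_value - self._min_value

    def add_value(self, point, value, simplices):
        self.data[point] = value
        scale = self._update_scale(value)
        # Recompute every loss only when the scale grew by more than a
        # factor 1.1, so the extra cost stays small (~10% slowdown in the
        # measurements mentioned above).
        if scale > 1.1 * self._scale_at_last_recompute:
            self._scale_at_last_recompute = scale
            for simplex in self.losses:
                self.losses[simplex] = self._scaled_loss(simplex, scale)
        # Always (re)compute the losses of the simplices touching the new point.
        for simplex in simplices:
            self.losses[simplex] = self._scaled_loss(simplex, scale)

    def _scaled_loss(self, simplex, scale):
        # Divide the output values by the scale so losses of different
        # simplices are comparable regardless of the function's magnitude.
        values = [self.data[p] / scale if scale else 0 for p in simplex]
        return self.loss_function(simplex, values)
```

The factor 1.1 is a trade-off: recomputing on every scale change would keep the losses exactly consistent but cost far more, while a larger threshold would let losses computed at different scales drift apart.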
TODO:

- Add some test(s).