Suggestion: hide the actual function to be evaluated from learners
Learners do not need access to the function object they are trying to predict, and I believe they should not have it.
Learners request points and are fed back results; at no point do they need to know how those results are actually obtained. Not knowing something that one does not need to know can be a good thing. Learners can be useful for predicting all kinds of functions, not only things that can be represented well as a Python function.
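To make this concrete, here is a minimal sketch of driving a learner purely from the outside, through an ask/tell-style interface (I'm using the `ask`/`tell`/`npoints` names of the current learner API and assuming the learner tolerates `function=None`):

```python
import adaptive

def obtain_result(x):
    # Stand-in for however results are really produced:
    # an RPC, a measurement, a coroutine, ...
    return x**2

# The learner never touches the function itself; it is driven from outside.
learner = adaptive.Learner1D(None, bounds=(-1, 1))
while learner.npoints < 100:
    points, _ = learner.ask(1)
    for x in points:
        learner.tell(x, obtain_result(x))
```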
Of course, any way of obtaining results can be expressed as a function (if necessary, with internal state and blocking), but this can lead to considerable unnecessary complexity and inefficiency.
Examples:
- The function to be learned is an asynchronous function, defined with `async def`. One can write a callable object that internally uses async programming and blocks on calls, but if that object is driven by something like the current runner (which itself uses asyncio), complexity rapidly explodes. A runner that is natively asynchronous (first sketch below) is far simpler.
- The function to be learned is run via a remote procedure call (over the network, using some messaging library) to one of 20 available nodes. The remote requests have to be collected, load-balanced and perhaps submitted in some particular way that only the runner knows about. Again, this can all be abstracted as a callable object, but the much better approach is a runner that is specific to that messaging library and knows how to deal with it (second sketch below).
- The function to be learned is the measurement of some experiment that involves doing something mechanical (moving a robotic arm, say). Requests should therefore be sorted by x coordinate, so that the arm has to move as little as possible. Again, a custom runner is the most elegant solution (third sketch below).
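For the first point, a sketch of a runner that speaks asyncio natively, so the `async def` function never needs to be wrapped in a blocking callable (again assuming an ask/tell learner interface):

```python
import asyncio
import adaptive

async def f(x):
    await asyncio.sleep(0.1)   # stand-in for real asynchronous work
    return x**2

async def async_runner(learner, goal_npoints, parallelism=4):
    # Ask for a batch of points, await all results concurrently,
    # and feed them back -- no blocking wrapper anywhere.
    while learner.npoints < goal_npoints:
        points, _ = learner.ask(parallelism)
        results = await asyncio.gather(*(f(x) for x in points))
        for x, y in zip(points, results):
            learner.tell(x, y)

learner = adaptive.Learner1D(None, bounds=(-1, 1))
asyncio.run(async_runner(learner, 50))
```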
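For the second point, a sketch of a runner that knows about the messaging layer; `rpc_call` is a hypothetical placeholder for whatever the messaging library actually provides:

```python
import itertools
import adaptive

NODES = [f"node{i:02d}" for i in range(20)]

def rpc_call(node, x):
    # Hypothetical remote call; pretend the node computed the result.
    return x**2

def rpc_runner(learner, goal_npoints):
    # Round-robin load balancing; a real runner would batch and pipeline
    # requests in whatever way the messaging library requires.
    nodes = itertools.cycle(NODES)
    while learner.npoints < goal_npoints:
        points, _ = learner.ask(len(NODES))
        for x in points:
            learner.tell(x, rpc_call(next(nodes), x))

learner = adaptive.Learner1D(None, bounds=(-1, 1))
rpc_runner(learner, 100)
```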
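And for the third point, a runner that minimizes mechanical movement by visiting each requested batch in order of x (`measure` stands for the actual experiment):

```python
import adaptive

def experiment_runner(learner, measure, batch_size=10, nbatches=20):
    # Visit each batch of requested points in sorted order, so the arm
    # sweeps monotonically instead of jumping back and forth.
    for _ in range(nbatches):
        points, _ = learner.ask(batch_size)
        for x in sorted(points):
            learner.tell(x, measure(x))

learner = adaptive.Learner1D(None, bounds=(-1, 1))
experiment_runner(learner, measure=lambda x: x**2)
```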
@jbweston, I'm not convinced by the argument that a learner should encapsulate everything needed, including what is to be learned. IMHO the whole point of a learner is that it is useful without the function. E.g. it should be possible to pickle a learner without also pickling the possibly monstrous function that it approximates (one that, say, calls into some horrible DFT library).
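A sketch of what that would enable: with no function object inside, the learner is just its data plus bookkeeping, and (assuming its internal state is itself picklable) it should serialize cleanly:

```python
import pickle
import adaptive

learner = adaptive.Learner1D(None, bounds=(-1, 1))
learner.tell(-1.0, 1.0)
learner.tell(1.0, 1.0)

# No function inside, so nothing unpicklable (DFT library and all)
# gets dragged along.
restored = pickle.loads(pickle.dumps(learner))
```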