6.2. Migrating to Predictor Evaluations

6.2.1. Summary

In version 4.0, Predictor Evaluation Workflows and Predictor Evaluation Executions (collectively, PEWs) will be merged into a single entity called Predictor Evaluations. The new entity will retain the functionality of its predecessors while simplifying interactions with it, and it will support the continuing evolution of the platform.

6.2.2. Basic Usage

The most common pattern for interacting with PEWs is executing the default evaluators and waiting for the result:

pew = project.predictor_evaluation_workflows.create_default(predictor_id=predictor.uid)
execution = next(pew.executions.list(), None)
execution = wait_while_executing(collection=project.predictor_evaluation_executions, execution=execution)

With Predictor Evaluations, it’s more straightforward:

evaluation = project.predictor_evaluations.trigger_default(predictor_id=predictor.uid)
evaluation = wait_while_executing(collection=project.predictor_evaluations, execution=evaluation)

The evaluators used are available with evaluators().
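
For example, a minimal sketch of listing what ran, assuming evaluators() is a method on the returned evaluation and that each evaluator exposes a name attribute:

# Sketch: print the names of the evaluators that the default evaluation ran.
for evaluator in evaluation.evaluators():
    print(evaluator.name)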

6.2.3. Working With Evaluators

You can still construct evaluators (such as CrossValidationEvaluator) the same way as you always have, and run them against your predictor:

evaluation = project.predictor_evaluations.trigger(predictor_id=predictor.uid, evaluators=evaluators)
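
For example, a cross-validation evaluator might be constructed like this. This is a sketch: the import path follows earlier releases of the library, and the response key, fold count, and trial count are placeholders to adapt to your own predictor.

from citrine.informatics.predictor_evaluator import CrossValidationEvaluator

# Illustrative cross-validation evaluator; "Shear modulus" is a placeholder
# response key, and the fold/trial counts are examples only.
evaluators = [
    CrossValidationEvaluator(
        name="Cross-validate shear modulus",
        description="3-fold cross-validation, repeated 2 times",
        responses={"Shear modulus"},
        n_folds=3,
        n_trials=2,
    )
]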

If you don’t wish to construct evaluators by hand, you can retrieve the default one(s):

evaluators = project.predictor_evaluations.default(predictor_id=predictor.uid)
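
These defaults can then be passed straight back to trigger():

# Run the retrieved default evaluators explicitly.
evaluation = project.predictor_evaluations.trigger(predictor_id=predictor.uid, evaluators=evaluators)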

You can evaluate your predictor even if it hasn’t been registered to the platform yet:

evaluators = project.predictor_evaluations.default_from_config(predictor)
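
As a sketch of where this fits: you might compute the defaults for an in-memory config, register the predictor when you’re ready, and then run those evaluators against it. The project.predictors.register() call here is an assumption about the surrounding workflow, not part of the Predictor Evaluations API.

# Sketch only: compute defaults for a local config, then register and evaluate.
evaluators = project.predictor_evaluations.default_from_config(predictor)
registered = project.predictors.register(predictor)  # assumed registration step
evaluation = project.predictor_evaluations.trigger(predictor_id=registered.uid, evaluators=evaluators)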

Once the evaluation is complete, the results are available by calling results() with the name of the desired evaluator; all evaluator names are available through evaluator_names().
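
For example, a sketch of pulling every result by name, assuming results() accepts the evaluator name directly:

# Sketch: iterate over all evaluator names and fetch each result.
for name in evaluation.evaluator_names():
    result = evaluation.results(name)
    print(name, result)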