citrine.resources.predictor_evaluation module
- class citrine.resources.predictor_evaluation.PredictorEvaluationCollection(project_id: UUID, session: Session)
Bases: Collection[PredictorEvaluation]
Represents the collection of predictor evaluations.
- Parameters:
project_id (UUID) – the UUID of the project
- archive(uid: UUID | str)
Archive an evaluation.
- build(data: dict) PredictorEvaluation
Build an individual predictor evaluation.
- default(*, predictor_id: UUID | str, predictor_version: int | str = 'latest') List[PredictorEvaluator]
Retrieve the default evaluators for a stored predictor.
The current default evaluators perform 5-fold, 3-trial cross-validation on all valid predictor responses. Valid responses are those that are not produced by the following predictors:
GeneralizedMeanPropertyPredictor
IngredientsToSimpleMixturePredictor
- Parameters:
predictor_id (UUID) – Unique identifier of the predictor to evaluate
predictor_version (Optional[Union[int, str]]) – The version of the predictor to evaluate. Defaults to the latest trained version.
- Return type:
List[PredictorEvaluator]
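A short usage sketch of retrieving the default evaluators. The client setup and the `predictor_evaluations` accessor on the project follow the library's collection conventions; the credentials and UUIDs are placeholders, and the printed attribute names are assumptions.

```python
from citrine import Citrine

# Placeholder credentials and identifiers; substitute your own.
citrine = Citrine(api_key="YOUR_API_KEY", host="YOUR_HOST")
project = citrine.projects.get("your-project-uuid")

# Fetch the default evaluators (5-fold, 3-trial cross-validation
# over all valid predictor responses) for a stored predictor.
evaluators = project.predictor_evaluations.default(
    predictor_id="your-predictor-uuid",
    predictor_version="latest",
)
for evaluator in evaluators:
    # `name` and `responses` are assumed evaluator attributes.
    print(evaluator.name, evaluator.responses)
```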
- default_from_config(config: GraphPredictor) List[PredictorEvaluator]
Retrieve the default evaluators for an arbitrary (but valid) predictor config.
See default() for details on the resulting evaluators.
- delete(uid: UUID | str)
Cannot delete an evaluation.
- get(uid: UUID | str) ResourceType
Get a particular element of the collection.
- list(*, per_page: int = 100, predictor_id: UUID | None = None, predictor_version: int | str | None = None) Iterable[PredictorEvaluation]
List non-archived predictor evaluations.
- list_all(*, per_page: int = 100, predictor_id: UUID | None = None, predictor_version: int | str | None = None) Iterable[PredictorEvaluation]
List all predictor evaluations.
- list_archived(*, per_page: int = 100, predictor_id: UUID | None = None, predictor_version: int | str | None = None) Iterable[PredictorEvaluation]
List archived predictor evaluations.
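A sketch of the three listing variants, assuming a `project` handle obtained from an authenticated `Citrine` client; the UUID is a placeholder.

```python
# Active (non-archived) evaluations of one predictor.
for evaluation in project.predictor_evaluations.list(
    per_page=25,
    predictor_id="your-predictor-uuid",
):
    print(evaluation.uid)

# Archived evaluations only.
archived = list(project.predictor_evaluations.list_archived(per_page=25))

# Both active and archived.
everything = list(project.predictor_evaluations.list_all(per_page=25))
```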
- register(model: PredictorEvaluation) PredictorEvaluation
Cannot register an evaluation.
- restore(uid: UUID | str)
Restore an archived evaluation.
- trigger(*, predictor_id: UUID | str, predictor_version: int | str = 'latest', evaluators: List[PredictorEvaluator]) PredictorEvaluation
Evaluate a predictor using the provided evaluators.
- Parameters:
predictor_id (UUID) – Unique identifier of the predictor to evaluate
predictor_version (Optional[Union[int, str]]) – The version of the predictor to evaluate. Defaults to the latest trained version.
evaluators (List[PredictorEvaluator]) – The evaluators to use to measure predictor performance.
- Return type:
PredictorEvaluation
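A sketch of triggering an evaluation with a custom evaluator, assuming a `project` handle from an authenticated client. The `CrossValidationEvaluator` import path and constructor arguments reflect my understanding of the library and should be checked against its evaluator reference; all IDs are placeholders.

```python
from citrine.informatics.predictor_evaluator import CrossValidationEvaluator

# A hypothetical custom evaluator: 4 folds, 2 trials, on a single response.
evaluator = CrossValidationEvaluator(
    name="my cross-validation",
    description="",
    responses={"Density"},
    n_folds=4,
    n_trials=2,
)

evaluation = project.predictor_evaluations.trigger(
    predictor_id="your-predictor-uuid",
    predictor_version="latest",
    evaluators=[evaluator],
)
```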
- trigger_default(*, predictor_id: UUID | str, predictor_version: int | str = 'latest') PredictorEvaluation
Evaluate a predictor using the default evaluators.
See default() for details on the evaluators.
- Parameters:
predictor_id (UUID) – Unique identifier of the predictor to evaluate
predictor_version (Optional[Union[int, str]]) – The version of the predictor to evaluate. Defaults to the latest trained version.
- Return type:
PredictorEvaluation
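For the common case, `trigger_default` avoids constructing evaluators by hand. A sketch, again assuming a `project` handle and a placeholder UUID; the evaluation runs asynchronously on the platform, and the `status` attribute checked here is an assumption.

```python
# Kick off an evaluation using the default (5-fold, 3-trial) evaluators.
evaluation = project.predictor_evaluations.trigger_default(
    predictor_id="your-predictor-uuid",
)

# The evaluation executes server-side; check its state before reading results.
print(evaluation.status)
```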
- update(model: PredictorEvaluation) PredictorEvaluation
Cannot update an evaluation.