syndi.task_evaluator.Task_Evaluator.evaluate_task
Task_Evaluator.evaluate_task(metrics=None)

Run benchmark testing on a task. Save intermediate data, trained models, and optimized hyperparameters. Return the testing results. A usage sketch follows the parameter list below.
- Parameters:
task (Task) – a task instance storing meta information about the task.
metrics (list) – a list of strings identifying the metric functions.
output_path (str) – a directory path in which to store the intermediate data, models, and hyperparameters.
agnostic_metrics (bool) – whether to record dataset-agnostic metrics in the results.
- Returns:
the benchmarking results of each run.
- Return type:
list
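
Example: a minimal usage sketch. It assumes the Task instance is passed to the Task_Evaluator constructor (the documented signature above shows only metrics on evaluate_task), and the metric names are placeholders, not part of the documented API.

    from syndi.task_evaluator import Task_Evaluator

    # `task` is assumed to be a previously constructed Task instance
    # describing the benchmark to run; how it is built depends on the
    # rest of the syndi API.
    evaluator = Task_Evaluator(task)

    # Run the benchmark; `metrics` selects metric functions by name
    # (the names here are placeholders).
    results = evaluator.evaluate_task(metrics=["accuracy", "f1"])

    # Per the return type above, `results` is a list with the
    # benchmarking results of each run.
    for run_result in results:
        print(run_result)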