ferret.Benchmark.evaluate_explanations#
- Benchmark.evaluate_explanations(explanations: List[Explanation | ExplanationWithRationale], human_rationale=None, class_explanations=None, show_progress=True, **evaluation_args) List[ExplanationEvaluation] [source]#
Evaluate explanations using all the evaluators stored in the class.
- Parameters:
explanations (List[Union[Explanation, ExplanationWithRationale]]) – list of explanations to evaluate.
target (int) – class label for which the explanations are evaluated.
human_rationale (list) – one-hot encoding over the tokens: 1 if the token belongs to the human rationale, 0 otherwise. If provided, each explanation is also evaluated against the human rationale.
class_explanations (list) – list of lists of explanations. The k-th element is the list of explanations for the k-th instance computed while varying the target class: the explanation at position (k, i) is computed using class label i as the target. The shape is (#explanations, #target classes). If provided, class-based scores are computed.
show_progress (bool) – enable the progress bar.
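The expected shape of the human rationale can be illustrated with a minimal sketch (plain Python lists; the tokens and labels are invented for illustration, not part of ferret's API):

```python
# Hypothetical example of a human rationale: one 0/1 flag per token,
# where 1 marks a token a human annotator considered decisive.
tokens          = ["the", "movie", "was", "great", "."]
human_rationale = [0, 0, 0, 1, 0]  # only "great" is in the rationale

# The encoding must align one-to-one with the tokenized input.
assert len(human_rationale) == len(tokens)
print([t for t, r in zip(tokens, human_rationale) if r == 1])
```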
- Returns:
the evaluation for each explanation
- Return type:
List[ExplanationEvaluation]
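The nested layout of class_explanations can be sketched as follows (strings stand in for ferret's Explanation objects; the counts are made-up example values):

```python
# Hypothetical sketch of the class_explanations layout:
# class_explanations[k][i] is the explanation for instance k computed
# with target class i, giving shape (#explanations, #target classes).
n_explanations = 2  # one entry per explained instance
n_classes = 3       # number of target classes of the model

class_explanations = [
    [f"expl(instance={k}, target={i})" for i in range(n_classes)]
    for k in range(n_explanations)
]

# e.g. the explanation of the first instance w.r.t. class 2:
print(class_explanations[0][2])
```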