ferret.Benchmark.evaluate_explanation
- Benchmark.evaluate_explanation(explanation: Explanation | ExplanationWithRationale, human_rationale=None, class_explanation: List[Explanation | ExplanationWithRationale] | None = None, show_progress: bool = True, **evaluation_args) → ExplanationEvaluation [source]
Evaluate an explanation using all the evaluators stored in the class.
- Parameters:
explanation (Union[Explanation, ExplanationWithRationale]) – The explanation to evaluate.
target (int) – The class label for which the explanation is evaluated.
human_rationale (list) – List of 0/1 values, one per token, indexed by position. A value of 1 marks the corresponding token as part of the human (ground-truth) rationale; 0 otherwise. The length of the list equals the number of tokens.
class_explanation (list) – List of explanations, where the explanation at position i is computed using class label i as the target. Its length equals the number of target classes. If provided, class-based scores are also computed.
show_progress (bool) – Enable the progress bar.
- Returns:
The evaluation of the explanation.
- Return type:
ExplanationEvaluation
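As an illustration of the `human_rationale` format described above, the sketch below builds a 0/1 mask aligned to a tokenized input. The token list and rationale words are hypothetical placeholders; the commented call at the end mirrors the signature above and assumes a configured `Benchmark` instance `bench` and an `Explanation` object `expl` for the same input.

```python
# Build a human_rationale mask: one entry per token, indexed by
# position; 1 if the token belongs to the ground-truth rationale,
# 0 otherwise.
tokens = ["[CLS]", "the", "movie", "was", "great", "[SEP]"]  # hypothetical tokenization
rationale_words = {"great"}  # hypothetical ground-truth rationale

human_rationale = [1 if tok in rationale_words else 0 for tok in tokens]
# len(human_rationale) == len(tokens)

# With a ferret Benchmark instance `bench` and an Explanation `expl`
# for the same input, the mask would be passed as:
#   evaluation = bench.evaluate_explanation(
#       expl, human_rationale=human_rationale, show_progress=False
#   )
```

Plausibility-score metrics in the evaluation (e.g. agreement with the human rationale) are only computed when `human_rationale` is supplied.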