Modules
Benchmark
- class ferret.benchmark.Benchmark(model, tokenizer, explainers: Optional[List] = None, evaluators: Optional[List] = None, class_based_evaluators: Optional[List] = None)[source]
Generic interface to compute multiple explanations.
- evaluate_explanation(explanation: Union[Explanation, ExplanationWithRationale], target, human_rationale=None, class_explanation: Optional[List[Union[Explanation, ExplanationWithRationale]]] = None, progress_bar=True, **evaluation_args) ExplanationEvaluation [source]
explanation: Explanation to evaluate. target: target class for which the explanation is evaluated. human_rationale: one-hot list indicating whether each token belongs to the rationale (1) or not (0). class_explanation: list of explanations whose length equals the number of target classes; the explanation at position ‘i’ is computed using class ‘i’ as the target. If available, class-based scores are computed.
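As a concrete illustration of the `human_rationale` format described above (the tokens and labels here are made up for illustration, not part of ferret's API):

```python
# A human rationale is a one-hot list aligned with the tokenized text:
# 1 marks a token inside the human-annotated rationale, 0 a token outside it.
tokens = ["the", "movie", "was", "absolutely", "wonderful"]
human_rationale = [0, 0, 0, 1, 1]  # "absolutely wonderful" is the rationale

# The two lists must be the same length, one entry per token.
assert len(human_rationale) == len(tokens)
```

A `class_explanation` list, by contrast, holds one explanation per target class (so length 2 for binary sentiment), with the entry at index `i` computed using class `i` as the target.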
- evaluate_explanations(explanations: List[Union[Explanation, ExplanationWithRationale]], target, human_rationale=None, class_explanations=None, progress_bar=True, **evaluation_args) List[ExplanationEvaluation] [source]
- evaluate_samples(dataset: BaseDataset, sample: Union[int, List[int]], target=None, show_progress_bar: bool = True, n_workers: int = 1, **evaluation_args) Dict [source]
Explain a dataset sample, evaluate explanations, and compute average scores.
- explain(text, target=1, progress_bar: bool = True) List[Explanation] [source]
Compute explanations.
- show_evaluation_table(explanation_evaluations: List[ExplanationEvaluation], apply_style: bool = True) DataFrame [source]
Format explanations and evaluation scores into a colored table.
- show_samples_evaluation_table(evaluation_scores_by_explainer, apply_style: bool = True) DataFrame [source]
Format the dataset's average evaluation scores into a colored table.
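A typical end-to-end use of `Benchmark` chains `explain`, `evaluate_explanations`, and `show_evaluation_table`. The sketch below assumes a HuggingFace sequence-classification checkpoint (the model name is illustrative) and is wrapped in a function because running it downloads a model:

```python
def run_ferret_benchmark(text: str, target: int = 1):
    """Sketch of a Benchmark workflow; requires network access when called."""
    from ferret import Benchmark
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative
    model = AutoModelForSequenceClassification.from_pretrained(name)
    tokenizer = AutoTokenizer.from_pretrained(name)

    bench = Benchmark(model, tokenizer)
    # One Explanation per configured explainer
    explanations = bench.explain(text, target=target)
    # One ExplanationEvaluation per explanation
    evaluations = bench.evaluate_explanations(explanations, target=target)
    # Styled pandas DataFrame of the evaluation scores
    return bench.show_evaluation_table(evaluations)
```

Passing no `explainers` or `evaluators` to the constructor uses the library's defaults, so this is the shortest path from a raw model to a score table.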
Explainers
- class ferret.explainers.gradient.GradientExplainer(model, tokenizer, multiply_by_inputs: bool = True)[source]
- NAME = 'Gradient'
- class ferret.explainers.gradient.IntegratedGradientExplainer(model, tokenizer, multiply_by_inputs: bool = True)[source]
- NAME = 'Integrated Gradient'
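The `multiply_by_inputs` flag shared by both explainers controls whether raw gradients are scaled elementwise by the inputs (Gradient × Input). A minimal NumPy sketch on a toy linear model (everything here is illustrative, not ferret internals) shows the scaling and why, for a linear model, Integrated Gradients collapses to the same quantity:

```python
import numpy as np

# Toy "model": a linear score f(x) = w . x, so the gradient w.r.t. x is w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 2.0, 0.5])

grad = w                 # plain Gradient attribution
grad_x_input = grad * x  # multiply_by_inputs=True: Gradient x Input

# Integrated Gradients: average the gradient along the straight path from a
# baseline (here zeros) to x, then scale by (x - baseline). For a linear
# model the gradient is constant, so IG reduces exactly to Gradient x Input.
baseline = np.zeros_like(x)
steps = 50
path = baseline + np.linspace(0, 1, steps)[:, None] * (x - baseline)
avg_grad = np.mean(np.broadcast_to(w, path.shape), axis=0)
ig = (x - baseline) * avg_grad

assert np.allclose(ig, grad_x_input)
```

For a real nonlinear classifier the gradient varies along the path, and the two attributions differ; that difference is what `IntegratedGradientExplainer` captures.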