Modules

Benchmark

class ferret.benchmark.Benchmark(model, tokenizer, explainers: Optional[List] = None, evaluators: Optional[List] = None, class_based_evaluators: Optional[List] = None)

Generic interface to compute multiple explanations.
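A minimal construction sketch, assuming a HuggingFace sequence-classification model and tokenizer; the checkpoint name below is purely illustrative:

    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    from ferret.benchmark import Benchmark

    # Any sequence-classification checkpoint works; this one is illustrative.
    name = "distilbert-base-uncased-finetuned-sst-2-english"
    model = AutoModelForSequenceClassification.from_pretrained(name)
    tokenizer = AutoTokenizer.from_pretrained(name)

    # Leaving explainers and evaluators as None selects ferret's built-in defaults.
    bench = Benchmark(model, tokenizer)

The bench instance is reused in the examples below.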

evaluate_explanation(explanation: Union[Explanation, ExplanationWithRationale], target, human_rationale=None, class_explanation: Optional[List[Union[Explanation, ExplanationWithRationale]]] = None, progress_bar=True, **evaluation_args) → ExplanationEvaluation

Parameters:

explanation: the explanation to evaluate.
target: the target class for which the explanation is evaluated.
human_rationale: list in one-hot encoding, with one entry per token, indicating whether the token is in the rationale (1) or not (0).
class_explanation: list of explanations, of length equal to the number of target classes, where the explanation at position i is computed using class i as the target. If provided, class-based scores are also computed.
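A sketch of evaluating a single explanation against a human rationale, reusing the bench instance from above; the rationale values (and their length) are illustrative only and must provide one entry per token of the tokenized input:

    explanation = bench.explain("You look stunning!", target=1)[0]

    # One entry per token: 1 = token is in the human rationale, 0 = it is not.
    # These values are illustrative, not taken from any annotated dataset.
    rationale = [0, 0, 0, 1, 0, 0]

    evaluation = bench.evaluate_explanation(
        explanation, target=1, human_rationale=rationale
    )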

evaluate_explanations(explanations: List[Union[Explanation, ExplanationWithRationale]], target, human_rationale=None, class_explanations=None, progress_bar=True, **evaluation_args) → List[ExplanationEvaluation]
evaluate_samples(dataset: BaseDataset, sample: Union[int, List[int]], target=None, show_progress_bar: bool = True, n_workers: int = 1, **evaluation_args) → Dict

Explain one or more dataset samples, evaluate the explanations, and compute the average scores.
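A sketch of the dataset workflow, assuming "hatexplain" is among the loadable dataset names:

    # Load a dataset that ships human rationales (the name is an example).
    dataset = bench.load_dataset("hatexplain")

    # Explain and evaluate three samples; scores are averaged per explainer.
    scores_by_explainer = bench.evaluate_samples(dataset, sample=[0, 1, 2])
    bench.show_samples_evaluation_table(scores_by_explainer)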

explain(text, target=1, progress_bar: bool = True) → List[Explanation]

Compute an explanation of the given text for the target class with each configured explainer.
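For example, reusing the bench instance from above (the Explanation attribute names in the loop are assumptions based on recent ferret versions):

    explanations = bench.explain("You look stunning!", target=1)

    # One Explanation per configured explainer.
    for explanation in explanations:
        print(explanation.explainer, explanation.scores)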

get_dataframe(explanations) → DataFrame
load_dataset(dataset_name: str, **kwargs)
score(text, return_dict: bool = True)
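A quick sketch of score, which returns the model's prediction scores for a text; the exact return structure is an assumption here and may vary by model and library version:

    # With return_dict=True the scores are returned as a per-class mapping
    # rather than a raw array (an assumption).
    print(bench.score("You look stunning!"))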
show_evaluation_table(explanation_evaluations: List[ExplanationEvaluation], apply_style: bool = True) → DataFrame

Format explanation evaluation scores into a colored table.
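For example, evaluating every computed explanation for one text and rendering the table:

    explanations = bench.explain("You look stunning!", target=1)
    evaluations = bench.evaluate_explanations(explanations, target=1)

    # One row per explainer, with evaluation scores color-coded.
    bench.show_evaluation_table(evaluations)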

show_samples_evaluation_table(evaluation_scores_by_explainer, apply_style: bool = True) → DataFrame

Format average evaluation scores over dataset samples into a colored table.

show_table(explanations, apply_style: bool = True, remove_first_last: bool = True) → DataFrame

Format token-level explanation scores into a colored table.
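For example, rendering token-level attributions; get_dataframe (above) yields the same scores without styling:

    explanations = bench.explain("You look stunning!", target=1)

    # Token-level attribution scores, one row per explainer.
    bench.show_table(explanations)

    # The same scores as a plain, unstyled DataFrame.
    df = bench.get_dataframe(explanations)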

style_evaluation(table)

Explainers

class ferret.explainers.gradient.GradientExplainer(model, tokenizer, multiply_by_inputs: bool = True)
NAME = 'Gradient'
compute_feature_importance(text: str, target=1, **explainer_args)
class ferret.explainers.gradient.IntegratedGradientExplainer(model, tokenizer, multiply_by_inputs: bool = True)
NAME = 'Integrated Gradient'
compute_feature_importance(text, target, **explainer_args)
class ferret.explainers.shap.SHAPExplainer(model, tokenizer)
NAME = 'Partition SHAP'
compute_feature_importance(text, target=1, **explainer_args)
class ferret.explainers.lime.LIMEExplainer(model, tokenizer)
NAME = 'LIME'
compute_feature_importance(text, target=1, **explainer_args)
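Each explainer can also be used on its own, outside a Benchmark. A minimal sketch, reusing the model and tokenizer from the Benchmark section:

    from ferret.explainers.shap import SHAPExplainer

    explainer = SHAPExplainer(model, tokenizer)
    explanation = explainer.compute_feature_importance("You look stunning!", target=1)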