ferret.Benchmark.explain

Benchmark.explain(text, target=1, show_progress: bool = True, normalize_scores: bool = True, order: int = 1, target_token: str | None = None, target_option: str | None = None) → List[Explanation]

Compute explanations using all the explainers stored in the class.

Parameters:
  • text (str) – Text string to explain.

  • target (int, default 1) – Class label to produce the explanations for.

  • show_progress (bool, default True) – Enable the progress bar.

  • normalize_scores (bool, default True) – Apply Lp normalization across tokens to make attribution weights comparable across different explainers.

  • order (int, default 1) – If normalize_scores=True, the normalization order passed to numpy.linalg.norm (e.g., 1 for L1 normalization, 2 for L2).

Returns:

List of explanations, one per explainer stored in the class.

Return type:

List[Explanation]

Notes

Please refer to the User Guide for more information.

Examples

>>> bench = Benchmark(model, tokenizer)
>>> explanations = bench.explain("I love your style!", target=2)
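
Each returned Explanation pairs the input tokens with one attribution score per token, produced by a single explainer. A minimal sketch of inspecting the results, assuming each Explanation exposes explainer, tokens, and scores attributes (check the Explanation class for the exact names):

>>> # Assumption: Explanation has explainer, tokens, and scores attributes
>>> for explanation in explanations:
...     print(explanation.explainer)
...     print(list(zip(explanation.tokens, explanation.scores)))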

Please note that, by default, L1 normalization is applied across tokens to make feature attribution weights comparable among explainers. To turn it off, pass normalize_scores=False:

>>> bench = Benchmark(model, tokenizer)
>>> explanations = bench.explain("I love your style!", target=2, normalize_scores=False)
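
Because order is forwarded to numpy.linalg.norm, you can switch to L2 normalization by passing order=2; a minimal sketch:

>>> bench = Benchmark(model, tokenizer)
>>> explanations = bench.explain("I love your style!", target=2, order=2)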