Welcome to ferret’s documentation!

ferret


A Python package for benchmarking interpretability techniques.

from transformers import AutoModelForSequenceClassification, AutoTokenizer
from ferret import Benchmark

# Load any sequence classification model and tokenizer from the Hugging Face Hub
name = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# Explain a prediction for the target class, then evaluate the explanations
bench = Benchmark(model, tokenizer)
explanations = bench.explain("You look stunning!", target=1)
evaluations = bench.evaluate_explanations(explanations, target=1)

bench.show_evaluation_table(evaluations)

Features

ferret offers painless integration with Hugging Face models and naming conventions. If you are already using the transformers library, you get immediate access to our Explanation and Evaluation API.

Supported Post-hoc Explainers

  • Gradient (plain gradients or gradients multiplied by input token embeddings)

  • Integrated Gradient (plain or multiplied by input token embeddings)

  • SHAP (via the Partition SHAP approximation)

  • LIME

Supported Evaluation Metrics

Faithfulness measures:

  • AOPC Comprehensiveness

  • AOPC Sufficiency

  • Kendall's Tau correlation with Leave-One-Out token removal

Plausibility measures:

  • AUPRC (soft score)

  • Token F1 (hard score)

  • Token IOU (hard score)

See our paper for details.
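The Benchmark constructor (see the Modules section below) accepts an optional list of explainers, so the comparison can be restricted to a subset. A minimal sketch, assuming explainer instances are passed directly and reusing the model and tokenizer from the quickstart:

from ferret import Benchmark
from ferret.explainers.shap import SHAPExplainer
from ferret.explainers.lime import LIMEExplainer

# Compare only SHAP and LIME; with explainers=None (the default),
# all supported explainers are used
bench = Benchmark(
    model,
    tokenizer,
    explainers=[SHAPExplainer(model, tokenizer), LIMEExplainer(model, tokenizer)],
)
explanations = bench.explain("You look stunning!", target=1)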

Visualization

The Benchmark class exposes easy-to-use table visualization methods (e.g., within Jupyter Notebooks).

bench = Benchmark(model, tokenizer)

# Pretty-print feature attribution scores by all supported explainers
explanations = bench.explain("You look stunning!")
bench.show_table(explanations)

# Pretty-print all the supported evaluation metrics
evaluations = bench.evaluate_explanations(explanations)
bench.show_evaluation_table(evaluations)

Dataset Evaluations

The Benchmark class has a handy method to compute and average our evaluation metrics across multiple samples from a dataset.

import numpy as np

bench = Benchmark(model, tokenizer)

# Compute and average evaluation scores on one of the supported datasets
samples = np.arange(20)
hatexdata = bench.load_dataset("hatexplain")
sample_evaluations = bench.evaluate_samples(hatexdata, samples)

# Pretty-print the results
bench.show_samples_evaluation_table(sample_evaluations)

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

Logo and graphical assets made by Luca Attanasio.

Installation

Stable release

To install ferret, run this command in your terminal:

$ pip install -U ferret-xai

This is the preferred method to install ferret, as it will always install the most recent stable release.

If you don’t have pip installed, this Python installation guide can walk you through the process.

From sources

The sources for ferret can be downloaded from the GitHub repo.

You can either clone the public repository:

$ git clone https://github.com/g8a9/ferret.git

Or download the tarball:

$ curl -OJL https://github.com/g8a9/ferret/tarball/master

Once you have a copy of the source, you can install it with:

$ python setup.py install

Usage

To use ferret in a project:

import ferret
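Most workflows then go through the Benchmark class, as in the examples above. A short sketch; note that the dict output of score below is an assumption based on its return_dict flag:

from ferret import Benchmark

# model and tokenizer as in the quickstart
bench = Benchmark(model, tokenizer)

# Raw model predictions for a text; return_dict=True is the default
scores = bench.score("You look stunning!")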

Modules

Benchmark

class ferret.benchmark.Benchmark(model, tokenizer, explainers: Optional[List] = None, evaluators: Optional[List] = None, class_based_evaluators: Optional[List] = None)

Generic interface to compute multiple explanations.

evaluate_explanation(explanation: Union[Explanation, ExplanationWithRationale], target, human_rationale=None, class_explanation: Optional[List[Union[Explanation, ExplanationWithRationale]]] = None, progress_bar=True, **evaluation_args) → ExplanationEvaluation

explanation: the explanation to evaluate.
target: the target class for which the explanation is evaluated.
human_rationale: one-hot-encoded list indicating whether each token is part of the human rationale (1) or not (0).
class_explanation: optional list of explanations, one per target class, where the explanation at position i is computed using class i as the target (length = number of target classes). If provided, class-based scores are also computed.

evaluate_explanations(explanations: List[Union[Explanation, ExplanationWithRationale]], target, human_rationale=None, class_explanations=None, progress_bar=True, **evaluation_args) → List[ExplanationEvaluation]
evaluate_samples(dataset: BaseDataset, sample: Union[int, List[int]], target=None, show_progress_bar: bool = True, n_workers: int = 1, **evaluation_args) → Dict

Explain a dataset sample, evaluate explanations, and compute average scores.

explain(text, target=1, progress_bar: bool = True) → List[Explanation]

Compute explanations.

get_dataframe(explanations) → DataFrame
load_dataset(dataset_name: str, **kwargs)
score(text, return_dict: bool = True)
show_evaluation_table(explanation_evaluations: List[ExplanationEvaluation], apply_style: bool = True) → DataFrame

Format explanation and evaluation scores into a colored table.

show_samples_evaluation_table(evaluation_scores_by_explainer, apply_style: bool = True) → DataFrame

Format averaged dataset evaluation scores into a colored table.

show_table(explanations, apply_style: bool = True, remove_first_last: bool = True) → DataFrame

Format explanation scores into a colored table.

style_evaluation(table)
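A usage sketch for evaluate_explanation follows; the one-hot rationale is made up for illustration and its length must match the tokenization of the input, so plausibility scores are only meaningful with a real human rationale:

bench = Benchmark(model, tokenizer)
explanations = bench.explain("You look stunning!", target=1)

# Hypothetical human rationale: 1 marks tokens a human annotator highlighted;
# the length must match the number of tokens in the explanation
human_rationale = [0, 0, 1, 0]

evaluation = bench.evaluate_explanation(
    explanations[0], target=1, human_rationale=human_rationale
)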

Explainers

class ferret.explainers.gradient.GradientExplainer(model, tokenizer, multiply_by_inputs: bool = True)
NAME = 'Gradient'
compute_feature_importance(text: str, target, **explainer_args)
class ferret.explainers.gradient.IntegratedGradientExplainer(model, tokenizer, multiply_by_inputs: bool = True)
NAME = 'Integrated Gradient'
compute_feature_importance(text, target, **explainer_args)
class ferret.explainers.shap.SHAPExplainer(model, tokenizer)
NAME = 'Partition SHAP'
compute_feature_importance(text, target=1, **explainer_args)
class ferret.explainers.lime.LIMEExplainer(model, tokenizer)
NAME = 'LIME'
compute_feature_importance(text, target=1, **explainer_args)
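Explainers can also be used on their own, outside the Benchmark harness. A minimal sketch, assuming the model and tokenizer from the quickstart:

from ferret.explainers.shap import SHAPExplainer

# Per-token feature importance for target class 1
explainer = SHAPExplainer(model, tokenizer)
explanation = explainer.compute_feature_importance("You look stunning!", target=1)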

Contributing

Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.

You can contribute in many ways:

Types of Contributions

Report Bugs

Report bugs at https://github.com/g8a9/ferret/issues.

If you are reporting a bug, please include:

  • Your operating system name and version.

  • Any details about your local setup that might be helpful in troubleshooting.

  • Detailed steps to reproduce the bug.

Fix Bugs

Look through the GitHub issues for bugs. Anything tagged with “bug” and “help wanted” is open to whoever wants to implement it.

Implement Features

Look through the GitHub issues for features. Anything tagged with “enhancement” and “help wanted” is open to whoever wants to implement it.

Write Documentation

ferret could always use more documentation, whether as part of the official ferret docs, in docstrings, or even on the web in blog posts, articles, and such.

Submit Feedback

The best way to send feedback is to file an issue at https://github.com/g8a9/ferret/issues.

If you are proposing a feature:

  • Explain in detail how it would work.

  • Keep the scope as narrow as possible, to make it easier to implement.

  • Remember that this is a volunteer-driven project, and that contributions are welcome :)

Get Started!

Ready to contribute? Here’s how to set up ferret for local development.

  1. Fork the ferret repo on GitHub.

  2. Clone your fork locally:

    $ git clone git@github.com:your_name_here/ferret.git
    
  3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development:

    $ mkvirtualenv ferret
    $ cd ferret/
    $ python setup.py develop
    
  4. Create a branch for local development:

    $ git checkout -b name-of-your-bugfix-or-feature
    

    Now you can make your changes locally.

  5. When you’re done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox:

    $ flake8 ferret tests
    $ python setup.py test  # or: pytest
    $ tox
    

    To get flake8 and tox, just pip install them into your virtualenv.

  6. Commit your changes and push your branch to GitHub:

    $ git add .
    $ git commit -m "Your detailed description of your changes."
    $ git push origin name-of-your-bugfix-or-feature
    
  7. Submit a pull request through the GitHub website.

Pull Request Guidelines

Before you submit a pull request, check that it meets these guidelines:

  1. The pull request should include tests.

  2. If the pull request adds functionality, the docs should be updated. Put your new functionality into a function with a docstring, and add the feature to the list in README.rst.

  3. The pull request should work for Python 3.5, 3.6, 3.7 and 3.8, and for PyPy. Check https://travis-ci.com/g8a9/ferret/pull_requests and make sure that the tests pass for all supported Python versions.

Tips

To run a subset of tests:

$ python -m unittest tests.test_ferret

Deploying

A reminder for the maintainers on how to deploy. Make sure all your changes are committed (including an entry in HISTORY.rst). Then run:

$ bump2version patch # possible: major / minor / patch
$ git push
$ git push --tags

Travis will then deploy to PyPI if tests pass.

Credits

Development Lead

  • Giuseppe Attanasio (g8a9)

Contributors

None yet. Why not be the first?

History

0.1.0 (2022-05-30)

  • First release on PyPI.
