
🔥 Learning Interpretability Tool (LIT)

The Learning Interpretability Tool (🔥LIT, formerly known as the Language Interpretability Tool) is a visual, interactive ML model-understanding tool that supports text, image, and tabular data. It can be run as a standalone server, or inside of notebook environments such as Colab, Jupyter, and Google Cloud Vertex AI notebooks.

LIT is built to answer questions such as:

  • What kind of examples does my model perform poorly on?
  • Why did my model make this prediction? Can this prediction be attributed to adversarial behavior, or to undesirable priors in the training set?
  • Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender?

[Screenshot: example of the LIT UI]

LIT supports a variety of debugging workflows through a browser-based UI. Features include:

  • Local explanations via salience maps, attention, and rich visualization of model predictions.
  • Aggregate analysis including custom metrics, slicing and binning, and visualization of embedding spaces.
  • Counterfactual generation via manual edits or generator plug-ins to dynamically create and evaluate new examples.
  • Side-by-side mode to compare two or more models, or one model on a pair of examples.
  • Highly extensible to new model types, including classification, regression, span labeling, seq2seq, and language modeling. Supports multi-head models and multiple input features out of the box.
  • Framework-agnostic and compatible with TensorFlow, PyTorch, and more.

LIT has a website with live demos, tutorials, a setup guide, and more.

Stay up to date on LIT by joining the lit-announcements mailing list.

For a broader overview, check out our paper and the user guide.

Documentation

Download and Installation

LIT can be run via container image, installed via pip, or built from source. Building from source is necessary if you update any of the front-end or core back-end code.

Build container image

Build the image using docker or podman:

git clone https://github.com/PAIR-code/lit.git && cd lit
docker build --file Dockerfile --tag lit-nlp .

See the advanced guide for detailed instructions on using the default LIT Docker image, running LIT as a containerized web app in different scenarios, and creating your own LIT images.

pip installation

pip install lit-nlp

The pip installation installs all prerequisite packages for the core LIT package.

It does not install the prerequisites for the provided demos, so you will need to install those yourself. See requirements_examples.txt for the list of packages required to run the demos.
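For example, from a source checkout of the repo (the requirements files live at the repo root), you can install the demo prerequisites with:

python -m pip install -r requirements_examples.txt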

Install from source

Clone the repo:

git clone https://github.com/PAIR-code/lit.git && cd lit

Note: be sure you are running Python 3.10. If you have a different version on
your system, use the conda instructions below to set up a Python 3.10 environment.

Set up a Python environment with venv:

python -m venv .venv
source .venv/bin/activate

Or set up a Python environment using conda:

conda create --name lit-nlp
conda activate lit-nlp
conda install python=3.10
conda install pip

Once you have the environment, install LIT's dependencies:

python -m pip install -r requirements.txt
python -m pip install cudnn cupti  # optional, for GPU support
python -m pip install torch  # optional, for PyTorch

# Build the frontend
(cd lit_nlp; yarn && yarn build)

Note: The -r requirements.txt option installs every dependency required for the LIT library, its test suite, and the built-in examples. You can also install subsets of these using -r requirements_core.txt (core library), -r requirements_test.txt (test suite), -r requirements_examples.txt (examples), or any combination thereof.

Note: if you see an error running yarn on Ubuntu/Debian, be sure you have the correct version installed.

Running LIT

Explore a collection of hosted demos on the demos page.

Quick-start: classification and regression

To explore classification and regression models on tasks from the popular GLUE benchmark:

python -m lit_nlp.examples.glue_demo --port=5432 --quickstart

Or, using docker:

docker run --rm -e DEMO_NAME=glue_demo -p 5432:5432 -t lit-nlp --quickstart

Navigate to http://localhost:5432 to access the LIT UI.

Your default view will be a small BERT-based model fine-tuned on the Stanford Sentiment Treebank, but you can switch to STS-B or MultiNLI using the toolbar or the gear icon in the upper right.

Quick-start: language modeling

To explore predictions from a pre-trained language model (BERT or GPT-2), run:

python -m lit_nlp.examples.lm_demo --models=bert-base-uncased --port=5432

Or, using docker:

docker run --rm -e DEMO_NAME=lm_demo -p 5432:5432 -t lit-nlp --models=bert-base-uncased

And navigate to http://localhost:5432 for the UI.

Notebook usage

Colab notebooks showing how to use LIT inside notebook environments can be found at lit_nlp/examples/notebooks.

We provide a simple Colab demo; run all the cells to see LIT on an example classification model in the notebook.
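As a rough sketch of the in-notebook API (assuming models and datasets dicts of LIT wrappers, built as in the "Adding your own models or data" section below), LIT renders inline via lit_nlp.notebook.LitWidget:

from lit_nlp import notebook

# `models` and `datasets` are dicts mapping names to LIT Model and
# Dataset wrappers; see "Adding your own models or data" below.
widget = notebook.LitWidget(models, datasets, height=600)
widget.render()  # displays the LIT UI inline in the output cell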

More Examples

See lit_nlp/examples. Most are run similarly to the quickstart example above:

python -m lit_nlp.examples.<example_name> --port=5432 [optional --args]

User Guide

To learn about LIT's features, check out the user guide, or watch this video.

Adding your own models or data

You can easily run LIT with your own model by creating a custom demo.py launcher, similar to those in lit_nlp/examples. The basic steps are:

  • Write a data loader which follows the Dataset API
  • Write a model wrapper which follows the Model API
  • Pass models, datasets, and any additional components to the LIT server class

For a full walkthrough, see adding models and data.
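As a minimal sketch of such a launcher, following the pattern used in lit_nlp/examples (the dataset contents, label vocabulary, and constant predictions below are hypothetical placeholders, not LIT-provided components):

"""Hypothetical demo.py launcher for a toy binary sentiment model."""
from absl import app

from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

LABELS = ['0', '1']  # hypothetical label vocabulary


class MyDataset(lit_dataset.Dataset):
  """Data loader: stores examples as dicts matching spec()."""

  def __init__(self):
    # Placeholder data; a real loader would read examples from disk.
    self._examples = [
        {'sentence': 'a quiet masterpiece', 'label': '1'},
        {'sentence': 'dull and overlong', 'label': '0'},
    ]

  def spec(self) -> lit_types.Spec:
    return {
        'sentence': lit_types.TextSegment(),
        'label': lit_types.CategoryLabel(vocab=LABELS),
    }


class MyModel(lit_model.Model):
  """Model wrapper: declares inputs/outputs and runs inference."""

  def input_spec(self) -> lit_types.Spec:
    return {'sentence': lit_types.TextSegment()}

  def output_spec(self) -> lit_types.Spec:
    return {'probas': lit_types.MulticlassPreds(vocab=LABELS, parent='label')}

  def predict(self, inputs):
    # Placeholder: a real wrapper would batch inputs and call the model.
    return [{'probas': [0.5, 0.5]} for _ in inputs]


def main(argv):
  del argv  # unused
  models = {'my_model': MyModel()}
  datasets = {'my_data': MyDataset()}
  lit_demo = dev_server.Server(models, datasets, **server_flags.get_flags())
  return lit_demo.serve()


if __name__ == '__main__':
  app.run(main)

A launcher like this runs the same way as the examples above, e.g. python demo.py --port=5432.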

Extending LIT with new components

LIT is easy to extend with new interpretability components, generators, and more, on both the frontend and the backend. See our documentation to get started.
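For example, a backend counterfactual generator is a class implementing the Generator API's generate() method, which maps one input example to a list of new examples. A rough sketch with toy substitution logic (the word pair and the 'sentence' field name are hypothetical):

from lit_nlp.api import components as lit_components


class WordSwapGenerator(lit_components.Generator):
  """Toy counterfactual generator: swaps 'good' for 'bad'."""

  def generate(self, example, model, dataset, config=None):
    # Copy the example and edit its 'sentence' field (hypothetical name).
    new_example = dict(example)
    new_example['sentence'] = example.get('sentence', '').replace('good', 'bad')
    # Return a list of new examples; empty if nothing changed.
    return [new_example] if new_example != example else []

Generators can then be passed to the server alongside models and datasets, e.g. dev_server.Server(models, datasets, generators={'word_swap': WordSwapGenerator()}, ...).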

Pull Request Process

To make code changes to LIT, please work off of the dev branch and create pull requests (PRs) against that branch. The main branch is for stable releases, and it is expected that the dev branch will always be ahead of main.

Draft PRs are encouraged, especially for first-time contributors or contributors working on complex tasks (e.g., Google Summer of Code contributors). Please use these to communicate ideas and implementations with the LIT team, in addition to issues.

Prior to sending your PR or marking a Draft PR as "Ready for Review", please run the Python and TypeScript linters on your code to ensure compliance with Google's Python and TypeScript Style Guides.

# Run Pylint on your code using the following command from the root of this repo
pushd lit_nlp && pylint && popd

# Run ESLint on your code using the following command from the root of this repo
pushd lit_nlp && yarn lint && popd

Citing LIT

If you use LIT as part of your work, please cite our EMNLP paper:

@inproceedings{tenney2020language,
    title = "The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for {NLP} Models",
    author = "Ian Tenney and James Wexler and Jasmijn Bastings and Tolga Bolukbasi and Andy Coenen and Sebastian Gehrmann and Ellen Jiang and Mahima Pushkarna and Carey Radebaugh and Emily Reif and Ann Yuan",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    year = "2020",
    publisher = "Association for Computational Linguistics",
    pages = "107--118",
    url = "https://www.aclweb.org/anthology/2020.emnlp-demos.15",
}

Disclaimer

This is not an official Google product.

LIT is a research project and under active development by a small team. There will be some bugs and rough edges, but we're releasing at an early stage because we think it's pretty useful already. We want LIT to be an open platform, not a walled garden, and we would love your suggestions and feedback - drop us a line in the issues.
