
Fairness Indicators TensorBoard Plugin

Project description

Evaluating Models with the Fairness Indicators Dashboard [Beta]

Fairness Indicators

Fairness Indicators for TensorBoard enables easy computation of commonly-identified fairness metrics for binary and multiclass classifiers. With the plugin, you can visualize fairness evaluations for your runs and easily compare performance across groups.

In particular, Fairness Indicators for TensorBoard allows you to evaluate and visualize model performance, sliced across defined groups of users. Feel confident about your results with confidence intervals and evaluations at multiple thresholds.
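To make "sliced" metrics concrete, here is a plain-Python illustration of one commonly-identified fairness metric, false positive rate, computed per group at a chosen threshold. This is only a sketch of the idea; it is not the plugin's implementation, and the group labels and data are hypothetical.

```python
from collections import defaultdict

def false_positive_rate_by_group(labels, scores, groups, threshold=0.5):
    """Per-slice FPR: among examples whose true label is negative (0),
    the fraction scored at or above the threshold, computed per group."""
    fp = defaultdict(int)   # negatives predicted positive, per group
    neg = defaultdict(int)  # total negative examples, per group
    for y, s, g in zip(labels, scores, groups):
        if y == 0:
            neg[g] += 1
            if s >= threshold:
                fp[g] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

# Hypothetical data: two groups, evaluated at threshold 0.5.
labels = [0, 0, 0, 0, 1, 1]
scores = [0.7, 0.2, 0.4, 0.1, 0.9, 0.8]
groups = ["a", "a", "b", "b", "a", "b"]
print(false_positive_rate_by_group(labels, scores, groups))
# → {'a': 0.5, 'b': 0.0}
```

Evaluating at multiple thresholds, as the dashboard does, amounts to recomputing such slice metrics for each threshold value.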

Many existing tools for evaluating fairness concerns don’t work well on large-scale datasets and models. At Google, it is important for us to have tools that can work on billion-user systems. Fairness Indicators lets you evaluate use cases of any size, in the TensorBoard environment or in Colab.

Requirements

To install Fairness Indicators for TensorBoard, run:

python3 -m virtualenv ~/tensorboard_demo
source ~/tensorboard_demo/bin/activate
pip install --upgrade pip
pip install tensorboard_plugin_fairness_indicators
pip install "tensorflow_model_analysis>=0.15.1"
pip uninstall -y tensorboard tb-nightly
pip install --upgrade tb-nightly

Demo

If you want to test out Fairness Indicators in TensorBoard, you can download sample TensorFlow Model Analysis evaluation results (an eval_config.json file plus metrics and plots files) and a demo.py utility from Google Cloud Platform, here. (Check out this documentation to learn how to download files from Google Cloud Platform.) This evaluation data is based on the Civil Comments dataset and was computed using TensorFlow Model Analysis's model_eval_lib library. The download also contains a sample TensorBoard summary data file for reference. See the TensorBoard tutorial for more information on summary data files.

The demo.py utility writes a TensorBoard summary data file, which will be read by TensorBoard to render the Fairness Indicators dashboard. Flags to be used with the demo.py utility:

  • --logdir: Directory where TensorBoard will write the summary
  • --eval_result_output_dir: Directory containing the evaluation results written by TFMA (downloaded in the previous step)
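As a minimal sketch of that flag interface, the two flags above could be parsed with argparse as shown below. This is an illustration only; the actual demo.py may define its flags differently (e.g. via absl), and the function name here is hypothetical.

```python
import argparse

def parse_demo_flags(argv=None):
    """Parse the two flags demo.py documents: --logdir and --eval_result_output_dir."""
    parser = argparse.ArgumentParser(
        description="Write a Fairness Indicators summary for TensorBoard.")
    parser.add_argument("--logdir", required=True,
                        help="Directory where TensorBoard will write the summary.")
    parser.add_argument("--eval_result_output_dir", required=True,
                        help="Directory containing evaluation results evaluated by TFMA.")
    return parser.parse_args(argv)

args = parse_demo_flags(
    ["--logdir", "/tmp/demo_logs", "--eval_result_output_dir", "/tmp/eval_out"])
print(args.logdir, args.eval_result_output_dir)
```

Both flags are required, mirroring the invocation shown in the next step.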

Run the demo.py utility to write the summary results in the log directory:

python demo.py --logdir=<logdir>/demo --eval_result_output_dir=<eval_result_dir>

Run TensorBoard:

Note: For this demo, please run TensorBoard from the same directory where you have downloaded the evaluation results.

tensorboard --logdir=<logdir>

This starts a local TensorBoard instance. Once it is running, a link is printed to the terminal. Open the link in your browser to view the Fairness Indicators dashboard.

Usage

To use Fairness Indicators with your own data and evaluations:

  1. Train a new model and evaluate it using the tensorflow_model_analysis.run_model_analysis or tensorflow_model_analysis.ExtractEvaluateAndWriteResults API in model_eval_lib. For code snippets on how to do this, see the Fairness Indicators colab here.

  2. Write a Fairness Indicators summary using the tensorboard_plugin_fairness_indicators.summary_v2 API.

    import tensorflow as tf
    from tensorboard_plugin_fairness_indicators import summary_v2

    writer = tf.summary.create_file_writer(<logdir>)
    with writer.as_default():
        summary_v2.FairnessIndicators(<eval_result_dir>, step=1)
    writer.close()
    
  3. Run TensorBoard

    • tensorboard --logdir=<logdir>
    • Select the new evaluation run using the drop-down on the left side of the dashboard to visualize results.
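Before launching TensorBoard in step 3, you can sanity-check that step 2 actually wrote summary data by looking for TensorBoard event files under the log directory. This is a stdlib sketch; the events.out.tfevents.* naming is TensorBoard's file convention, and the function name is hypothetical.

```python
import glob
import os

def find_event_files(logdir):
    """Return TensorBoard event files written under logdir, searching recursively."""
    pattern = os.path.join(logdir, "**", "events.out.tfevents.*")
    return sorted(glob.glob(pattern, recursive=True))
```

If this returns an empty list, TensorBoard will have no Fairness Indicators data to render for that run.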

File details

Details for the file tensorboard_plugin_fairness_indicators-0.0.2.tar.gz.

File metadata

  • Download URL: tensorboard_plugin_fairness_indicators-0.0.2.tar.gz
  • Upload date:
  • Size: 295.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/2.0.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.6.0 requests-toolbelt/0.9.1 tqdm/4.36.1 CPython/3.7.3

File hashes

Hashes for tensorboard_plugin_fairness_indicators-0.0.2.tar.gz
Algorithm Hash digest
SHA256 8760914733660f41c1ad0ea7976ea9267462ac0c460961763c50b8754a70c88f
MD5 3d7e2723804aa24805160b0b3194a453
BLAKE2b-256 f5e1e9a93d9e8ed0f5db0f088c6cc5358e4789003f43b80d123b86304c33df79

See more details on using hashes here.

Provenance

File details

Details for the file tensorboard_plugin_fairness_indicators-0.0.2-py3-none-any.whl.

File metadata

File hashes

Hashes for tensorboard_plugin_fairness_indicators-0.0.2-py3-none-any.whl
Algorithm Hash digest
SHA256 64581429844215ceab6a399a8c7d360afdf7b6334371cf46b601841cba7570af
MD5 7b80853bb0d1507a42d1600703019d96
BLAKE2b-256 7ad74a3f8550635173abed226fe4c214399b56b5676d653e041acedec6a5700f

See more details on using hashes here.

Provenance
