
Fairness Indicators TensorBoard Plugin

Project description

Evaluating Models with the Fairness Indicators Dashboard [Beta]

Fairness Indicators

Fairness Indicators for TensorBoard enables easy computation of commonly identified fairness metrics for binary and multiclass classifiers. With the plugin, you can visualize fairness evaluations for your runs and easily compare performance across groups.

In particular, Fairness Indicators for TensorBoard allows you to evaluate and visualize model performance, sliced across defined groups of users. Feel confident about your results with confidence intervals and evaluations at multiple thresholds.
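To make "sliced across defined groups of users" and "multiple thresholds" concrete, here is a small plain-Python illustration (independent of the plugin and TFMA, with made-up data and function names) that computes one common fairness metric, false positive rate, per group at several decision thresholds:

```python
# Illustrative sketch only: the plugin computes these metrics with TFMA.
# Data, group names, and helper functions here are made up for the example.
from collections import defaultdict


def false_positive_rate(labels, scores, threshold):
    """FPR = FP / (FP + TN), computed over the negative examples."""
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    return fp / (fp + tn) if (fp + tn) else 0.0


def sliced_fpr(examples, thresholds):
    """Group examples by slice key, then report FPR per slice and threshold."""
    slices = defaultdict(list)
    for group, label, score in examples:
        slices[group].append((label, score))
    return {
        group: {
            t: false_positive_rate(
                [l for l, _ in rows], [s for _, s in rows], t)
            for t in thresholds
        }
        for group, rows in slices.items()
    }


examples = [  # (slice value, true label, model score)
    ("group_a", 0, 0.9), ("group_a", 0, 0.2), ("group_a", 1, 0.8),
    ("group_b", 0, 0.6), ("group_b", 0, 0.4), ("group_b", 1, 0.7),
]
print(sliced_fpr(examples, thresholds=[0.3, 0.5, 0.7]))
# group_b's FPR drops from 1.0 to 0.0 across thresholds, while
# group_a's stays at 0.5 -- the kind of gap the dashboard surfaces.
```

Comparing these per-group values side by side, at each threshold, is exactly what the dashboard renders, with confidence intervals added on top.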

Many existing tools for evaluating fairness concerns don't work well on large-scale datasets and models. At Google, it is important for us to have tools that can work on billion-user systems. Fairness Indicators lets you evaluate use cases of any size, in the TensorBoard environment or in Colab.

Installation

To install Fairness Indicators for TensorBoard, run:

python3 -m virtualenv ~/tensorboard_demo
source ~/tensorboard_demo/bin/activate
pip install --upgrade pip
pip install fairness_indicators
pip install tensorboard-plugin-fairness-indicators

Demo Colab

Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb contains an end-to-end demo that trains and evaluates a model and visualizes the fairness evaluation results in TensorBoard.

Usage

To use the Fairness Indicators with your own data and evaluations:

  1. Train a new model and evaluate it using the tensorflow_model_analysis.run_model_analysis or tensorflow_model_analysis.ExtractEvaluateAndWriteResults API in model_eval_lib. For code snippets showing how to do this, see the Fairness Indicators colab.

  2. Write a summary data file using demo.py; TensorBoard reads this file to render the Fairness Indicators dashboard (see the TensorBoard tutorial for more information on summary data files).

    Flags to be used with the demo.py utility:

    • --logdir: Directory where TensorBoard will write the summary
    • --eval_result_output_dir: Directory containing evaluation results evaluated by TFMA

    python demo.py --logdir=<logdir> --eval_result_output_dir=<eval_result_dir>


    Alternatively, you can use the tensorboard_plugin_fairness_indicators.summary_v2 API to write the summary file:

    import tensorflow as tf
    from tensorboard_plugin_fairness_indicators import summary_v2

    writer = tf.summary.create_file_writer(<logdir>)
    with writer.as_default():
        summary_v2.FairnessIndicators(<eval_result_dir>, step=1)
    writer.close()
    
  3. Run TensorBoard

    Note: This starts a local instance. Once it is running, a link is printed in the terminal; open it in your browser to view the Fairness Indicators dashboard.

    • tensorboard --logdir=<logdir>
    • Select the new evaluation run using the drop-down on the left side of the dashboard to visualize results.


File details

Details for the file tensorboard_plugin_fairness_indicators-0.0.5.tar.gz.

File metadata

  • Download URL: tensorboard_plugin_fairness_indicators-0.0.5.tar.gz
  • Upload date:
  • Size: 304.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/42.0.2 requests-toolbelt/0.9.1 tqdm/4.40.2 CPython/3.7.7

File hashes

Hashes for tensorboard_plugin_fairness_indicators-0.0.5.tar.gz
  • SHA256: cb909569e389cfdc81a10aa9c94accfc1fad1d978d5fe4d096063312dbe2bfea
  • MD5: ceb48322b4d1d6c9d429617ff460006b
  • BLAKE2b-256: 41a626e07d704e71912f72b7342205a26a08ed2ab33944bc0c2325665239e150


File details

Details for the file tensorboard_plugin_fairness_indicators-0.0.5-py3-none-any.whl.


File hashes

Hashes for tensorboard_plugin_fairness_indicators-0.0.5-py3-none-any.whl
  • SHA256: e39d134f574cf08749b95668ebf67f55816ebdb680cac6632199c6dfe0916d08
  • MD5: 9ebcd74e9dbe8db168f9a01b1de0ae28
  • BLAKE2b-256: 33ec37fd9995e2605321e5e64c6007af3c3c3a9b6d3426a012f75a0b998f5795

