ModelBench
Run safety benchmarks against AI models and view detailed reports showing how well they performed.
Background
This is an MLCommons project, part of the AI Safety Working Group. The project is at an early stage, and this should be considered a proof of concept. Results are not intended to indicate actual levels of AI system safety. You can see sample benchmarks here and read the white paper here.
ModelBench is closely related to ModelGauge. ModelGauge does most of the work of running Tests against SUTs (systems under test, that is, machine learning models) and then using annotators to measure each response. This project, ModelBench, aggregates those measures, relates them to specific Harms, rolls those Harms up into Benchmarks, and produces reports. If you are looking to run a benchmark for your model, start by adding a SUT that works with ModelGauge.
Requirements
The current benchmark uses LlamaGuard to evaluate the safety of responses. For now you will need a TogetherAI account to use it. For 0.5, all included models are also run via TogetherAI. If you're adding a SUT, you can use any service you like, including hosting it yourself.
Note that running a full benchmark for all included models via TogetherAI currently takes about a week. Depending on response time, running your own SUT may be faster. We aim to speed things up substantially for 1.0. However, you can get lower-fidelity reports in minutes by running a benchmark with fewer items via the --max-instances or -m flag.
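For example, the following invocation limits the run to a smaller number of items per test (the count of 100 is just an illustration; pick whatever trade-off between speed and fidelity you need):

poetry run modelbench benchmark --max-instances 100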
Installation
Since this is under heavy development, the best way to run it is to check it out from GitHub. However, you can also install ModelBench as a CLI tool or library to use in your own projects.
Install ModelBench with Poetry for local development.
- Install Poetry using one of the methods recommended in the Poetry documentation. For example:
pipx install poetry
- Clone this repository.
git clone https://github.com/mlcommons/modelbench.git
- Install ModelBench and dependencies.
cd modelbench
poetry install
At this point you may optionally run poetry shell, which will put you in a virtual environment that uses the installed packages for everything. If you do that, you don't have to explicitly say poetry run in the commands below.
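For example, the benchmark command shown later in this README could then be run directly:

poetry shell
modelbench benchmark -m 10   # no "poetry run" prefix needed inside the shell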
Install ModelBench from PyPI.
- Install ModelBench into your local environment or project the way you normally would. For example:
pip install modelbench
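For example, one common approach (standard Python practice, not specific to ModelBench) is to install it into a fresh virtual environment:

python -m venv .venv          # create an isolated environment
source .venv/bin/activate     # activate it
pip install modelbench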
Running Tests
To verify that things are working properly on your machine, you can run all the tests:
poetry run pytest tests
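If you only want a quick subset while iterating, pytest's standard -k filter works as usual; the expression below is just an illustration:

poetry run pytest tests -k benchmark   # run only tests whose names match "benchmark"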
Trying It Out
We encourage interested parties to try it out and give us feedback. For now, ModelBench is just a proof of concept, but over time we would like others to be able both to test their own models and to create their own tests and benchmarks.
Running Your First Benchmark
Before running any benchmarks, you'll need to create a secrets file that contains any necessary API keys and other sensitive information.
Create a file at config/secrets.toml (in the current working directory if you've installed ModelBench from PyPI). You can use the following as a template:
[together]
api_key = "<your key here>"
To obtain an API key for Together, you can create an account on the TogetherAI website.
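For example, assuming you will run modelbench from the repository root (or from your project directory), you could create the file like this; the heredoc is just one convenient way to write it:

mkdir -p config
cat > config/secrets.toml <<'EOF'
[together]
api_key = "<your key here>"
EOF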
With your keys in place, you are now ready to run your first benchmark!
Note: Omit poetry run in all example commands going forward if you've installed ModelBench from PyPI.
poetry run modelbench benchmark -m 10
You should immediately see progress indicators, and depending on how loaded TogetherAI is, the whole run should take about 15 minutes.
[!IMPORTANT] Sometimes a benchmark run will fail because of temporary problems such as network issues or API outages. While we are working toward handling these errors gracefully, the current best solution is simply to rerun the benchmark if it fails.
Viewing the Scores
After a successful benchmark run, static HTML pages are generated that display scores on benchmarks and tests.
These can be viewed by opening web/index.html in a web browser, e.g., firefox web/index.html.
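If your browser restricts opening local files, one alternative (plain Python, not a ModelBench feature) is to serve the directory with the built-in HTTP server and browse to http://localhost:8000/index.html:

python -m http.server --directory web 8000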
If you would like to dump the raw scores, you can do:
poetry run modelbench grid -m 10 > scoring-grid.csv
To see all raw requests, responses, and annotations, do:
poetry run modelbench responses -m 10 response-output-dir
That will produce a series of CSV files, one per Harm, in the given output directory. Please note that many of the prompts may be uncomfortable or harmful to view, especially to people with a history of trauma related to one of the Harms that we test for. Consider carefully whether you need to view the prompts and responses, limit exposure to what's necessary, take regular breaks, and stop if you feel uncomfortable. For more information on the risks, see this literature review on vicarious trauma.
Managing the Cache
To speed up runs, ModelBench caches calls to both SUTs and annotators. That's normally what a benchmark runner wants. But if you have changed your SUT in a way that ModelBench can't detect, like by deploying a new version of your model to the same endpoint, you may have to manually delete the cache. Look in run/suts for a SQLite file that matches the name of your SUT and either delete it or move it elsewhere. The cache will be created anew on the next run.
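For example, for a hypothetical SUT named my-sut (the file name and extension are illustrative; check what actually exists in run/suts on your machine):

ls run/suts                 # find the cache file that matches your SUT
rm run/suts/my-sut.sqlite   # or move it elsewhere if you want to keep it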
Contributing
ModelBench uses standard Python tooling for development, code quality, and packaging, with Poetry managing dependencies and packaging and pytest running the tests.
To contribute:
- Fork the repository
- Create your feature branch
- Ensure there are tests for your changes and that they pass
- Create a pull request
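A typical sequence, with illustrative names and assuming you have already forked the repository to your own GitHub account, looks like this:

git clone https://github.com/<your-username>/modelbench.git   # your fork
cd modelbench
git checkout -b my-feature                                    # illustrative branch name
poetry install
poetry run pytest tests                                       # make sure the tests pass
git commit -am "Describe your change"
git push origin my-feature

Then open a pull request against mlcommons/modelbench.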