Translator Benchmarks Runner
This repository provides a set of benchmarks, along with the code to send benchmark queries to targets and evaluate the returned results.
- `benchmarks-runner` contains the code to query targets and evaluate results.
- `benchmarks-runner.config` contains the data sets, query templates, targets, and benchmark definitions necessary to run a benchmark. See `config/README.md` for details about targets and benchmarks.
Usage
Running a benchmark is a two-step process:
- Execute the queries of a benchmark and store the scored results.
- Evaluate the scored results against the set of ground-truth relevant results.
Installing the `benchmarks-runner` package provides the functions and command-line interface needed to run benchmarks.
CLI
The command-line interface is the easiest way to run a benchmark. Three commands are provided (an end-to-end invocation is sketched after this list):
- `benchmarks_fetch`
  - Fetches (un)scored results given the name of a benchmark (specified in `config/benchmarks.json`), the name of a target (specified in `config/targets.json`), and a directory in which to store results.
  - By default, `benchmarks_fetch` fetches scored results using 5 concurrent requests. Run `benchmarks_fetch --help` for more details.
- `benchmarks_score`
  - Scores results given the name of a benchmark (specified in `config/benchmarks.json`), the name of a target (specified in `config/targets.json`), a directory containing unscored results, and a directory in which to store scored results.
  - By default, `benchmarks_score` uses 5 concurrent requests. Run `benchmarks_score --help` for more details.
- `benchmarks_eval`
  - Evaluates a set of scored results given the name of a benchmark (specified in `config/benchmarks.json`) and a directory containing scored results.
  - By default, the evaluation considers the top 20 results of each query, and plots are not generated. Run `benchmarks_eval --help` for more details.
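As a concrete illustration, here is a hypothetical end-to-end run. The benchmark name (GTRx) and target name (bte) are taken from the sample output later in this README, the results directory is a placeholder, and the positional-argument order is an assumption based on the descriptions above; consult each command's `--help` output for the actual interface.

```sh
# Fetch scored results for the GTRx benchmark from the bte target
# (argument order is assumed; verify with `benchmarks_fetch --help`)
benchmarks_fetch GTRx bte scored_results/

# Evaluate the scored results against the ground-truth relevant results
# (argument order is assumed; verify with `benchmarks_eval --help`)
benchmarks_eval GTRx scored_results/
```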
Functions
The CLI functionality is also available by importing functions from the `benchmarks` package.
```python
from benchmarks.request import fetch_results, score_results
from benchmarks.eval import evaluate_results

# Fetch unscored results
fetch_results('benchmark_name', 'target_name', 'unscored_results_dir', scored=False)

# Score unscored results
score_results('unscored_results_dir', 'target_name', 'results_dir')

# Evaluate scored results (accepts a results directory or a results dict)
evaluate_results('benchmark_name', 'results_dir OR results_dict')
```
See the documentation of each function for more information.
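Putting these together, a minimal end-to-end run in Python might look like the sketch below. It assumes `fetch_results` fetches scored results by default (mirroring `benchmarks_fetch`) and that the names are defined in the config files; the paths and names are placeholders.

```python
from benchmarks.request import fetch_results
from benchmarks.eval import evaluate_results

# Fetch scored results directly (assumed default, mirroring benchmarks_fetch),
# then evaluate them. 'benchmark_name' and 'target_name' are placeholders that
# must be defined in config/benchmarks.json and config/targets.json.
fetch_results('benchmark_name', 'target_name', 'scored_results_dir')
evaluate_results('benchmark_name', 'scored_results_dir')
```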
Installation
Install the repository as an editable package using pip:

```sh
pip install -e .
```
UI
These benchmarks come with a frontend for viewing the scored results.
Installation
Requires Python 3.9.

- Create a Python virtual environment: `python3.9 -m venv benchmark_venv`
- Activate the environment: `. ./benchmark_venv/bin/activate`
- Install dependencies: `pip install -r requirements.txt`
- Start the frontend server: `python server.py`
- Open the frontend in your browser.
Benchmark Runner
The benchmarks can be installed from PyPI and used as part of the Translator-wide automated testing:

```sh
pip install benchmarks-runner
```
To run benchmarks:

```python
import asyncio

from benchmarks_runner import run_benchmarks

output = asyncio.run(run_benchmarks(<benchmark>, <target>))
```

where `<benchmark>` is the name of a benchmark specified in `config/benchmarks.json` and `<target>` is the name of a target specified in `config/targets.json`.
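For example, here is a sketch using the benchmark and target names that appear in the sample output below, assuming both are defined in the config files:

```python
import asyncio

from benchmarks_runner import run_benchmarks

# Run the GTRx benchmark against the bte target; both names are taken from
# the sample output below and must exist in config/benchmarks.json and
# config/targets.json respectively.
output = asyncio.run(run_benchmarks("GTRx", "bte"))
print(output)
```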
Sample Output
```
Benchmark: GTRx
Results Directory: /tmp/tmpaf10m9_q/GTRx/bte/2023-11-10_13-03-11

                         k=1     k=5     k=10    k=20
Precision @ k          0.0000  0.0500  0.0250  0.0125
Recall @ k             0.0000  0.2500  0.2500  0.2500
mAP @ k                0.0000  0.0833  0.0833  0.0833
Top-k Accuracy         0.0000  0.2500  0.2500  0.2500

Mean Reciprocal Rank: 0.08333333333333333
```
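For reference, the metrics above follow their standard definitions. The sketch below is illustrative only (it is not the package's implementation) and assumes each query yields a ranked list of result identifiers plus a set of ground-truth relevant identifiers.

```python
def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k results that are relevant."""
    return sum(1 for r in ranked[:k] if r in relevant) / k

def recall_at_k(ranked, relevant, k):
    """Fraction of all relevant results that appear in the top k."""
    return sum(1 for r in ranked[:k] if r in relevant) / len(relevant)

def mean_reciprocal_rank(ranked_lists, relevant_sets):
    """Mean over queries of 1/rank of the first relevant result (0 if none)."""
    total = 0.0
    for ranked, relevant in zip(ranked_lists, relevant_sets):
        for rank, r in enumerate(ranked, start=1):
            if r in relevant:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)
```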