Translator Benchmarks Runner

This repository provides a set of benchmarks, along with the code to send benchmark queries and evaluate the returned results.

  • benchmarks contains the code to query targets and evaluate results.
  • config contains the data sets, query templates, targets, and benchmark definitions necessary to run a benchmark. See config/README.md for details about targets and benchmarks.

Usage

Running a benchmark is a two-step process:

  1. Execute the queries of a benchmark and store the scored results.
  2. Evaluate the scored results against the set of ground-truth relevant results.

Installation of the benchmarks package provides access to the functions and command-line interface necessary to run benchmarks.

CLI

The command-line interface is the easiest way to run a benchmark.

  • benchmarks_fetch

    • Fetches (un)scored results given the name of a benchmark (specified in config/benchmarks.json), target (specified in config/targets.json), and a directory to store results.
    • By default, benchmarks_fetch fetches scored results using 5 concurrent requests. Run benchmarks_fetch --help for more details.
  • benchmarks_score

    • Scores results given the name of a benchmark (specified in config/benchmarks.json), target (specified in config/targets.json), a directory containing unscored results, and a directory to store scored results.
    • By default, benchmarks_score uses 5 concurrent requests. Run benchmarks_score --help for more details.
  • benchmarks_eval

    • Evaluates a set of scored results given the name of a benchmark (specified in config/benchmarks.json) and a directory containing scored results.
    • By default, the evaluation considers the top 20 results of each query, and plots are not generated. Run benchmarks_eval --help for more details.
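
For illustration, an end-to-end session might look like the following sketch. The benchmark and target names (GTRx, bte) are taken from the sample output below, the results directory is a placeholder, and the exact argument forms should be confirmed with each command's --help.

benchmarks_fetch GTRx bte results   # fetches scored results by default
benchmarks_eval GTRx results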

Functions

The CLI functionality is also available by importing functions from the benchmarks package.

from benchmarks.request import fetch_results, score_results
from benchmarks.eval import evaluate_results

# Fetch unscored results
fetch_results('benchmark_name', 'target_name', 'unscored_results_dir', scored=False)

# Score unscored results
score_results('unscored_results_dir', 'target_name', 'results_dir')

# Evaluate scored results (accepts either a directory of scored results or a results dict)
evaluate_results('benchmark_name', 'results_dir')

See the documentation of each function for more information.
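
Putting the calls together, an end-to-end run might look like this sketch (the benchmark and target names are taken from the sample output below; the directory names are placeholders):

from benchmarks.request import fetch_results, score_results
from benchmarks.eval import evaluate_results

# Fetch raw results, score them, then evaluate against the ground truth
fetch_results('GTRx', 'bte', 'unscored_results_dir', scored=False)
score_results('unscored_results_dir', 'bte', 'results_dir')
evaluate_results('GTRx', 'results_dir')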

Installation

Install the repository as an editable package using pip.

pip install -e .

UI

These benchmarks come with a frontend for viewing the scored results.

Installation

Requires Python 3.9.

  • Create a python virtual environment: python3.9 -m venv benchmark_venv
  • Activate your environment: . ./benchmark_venv/bin/activate
  • Install dependencies: pip install -r requirements.txt
  • Start the frontend server: python server.py
  • Open the frontend in your browser

Benchmark Runner

The benchmarks can be installed from PyPI and used as part of Translator-wide automated testing.

  • pip install benchmarks-runner

To run benchmarks:
import asyncio
from benchmarks_runner import run_benchmarks

output = asyncio.run(run_benchmarks(<benchmark>, <target>))

where benchmark is the name of a benchmark specified in config/benchmarks.json and target is the name of a target specified in config/targets.json.
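
Filled in with the benchmark and target shown in the sample output below, a complete call might look like this sketch (both names must be present in the config files):

import asyncio
from benchmarks_runner import run_benchmarks

# 'GTRx' is the benchmark and 'bte' the target from the sample run below
output = asyncio.run(run_benchmarks('GTRx', 'bte'))
print(output)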

Sample Output

Benchmark: GTRx
Results Directory: /tmp/tmpaf10m9_q/GTRx/bte/2023-11-10_13-03-11
                        k=1     k=5     k=10    k=20
Precision @ k           0.0000  0.0500  0.0250  0.0125
Recall @ k              0.0000  0.2500  0.2500  0.2500
mAP @ k                 0.0000  0.0833  0.0833  0.0833
Top-k Accuracy          0.0000  0.2500  0.2500  0.2500
Mean Reciprocal Rank    0.08333333333333333
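
For reference, here is a minimal sketch of how the reported ranking metrics are conventionally defined; the package's exact implementation may differ, and the table above reports means over all queries in the benchmark.

def precision_at_k(relevant, ranked, k):
    """Fraction of the top-k results that are relevant."""
    return sum(1 for r in ranked[:k] if r in relevant) / k

def recall_at_k(relevant, ranked, k):
    """Fraction of all relevant results that appear in the top k."""
    return sum(1 for r in ranked[:k] if r in relevant) / len(relevant)

def reciprocal_rank(relevant, ranked):
    """1 / rank of the first relevant result, or 0 if none is ranked."""
    for i, r in enumerate(ranked, start=1):
        if r in relevant:
            return 1.0 / i
    return 0.0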

