
Massive Text Embedding Benchmark


Installation | Usage | Leaderboard | Documentation | Citing

Installation

pip install mteb

Usage

from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Define the sentence-transformers model name
model_name = "average_word_embeddings_komninos"
# or directly from huggingface:
# model_name = "sentence-transformers/all-MiniLM-L6-v2"

model = SentenceTransformer(model_name)
evaluation = MTEB(tasks=["Banking77Classification"])
results = evaluation.run(model, output_folder=f"results/{model_name}")
  • Using CLI
mteb --available_tasks

mteb -m sentence-transformers/all-MiniLM-L6-v2 \
    -t Banking77Classification  \
    --verbosity 3

# If nothing is specified, results are saved to the results/{model_name} folder by default.
  • Multiple GPUs can be used in parallel by providing a custom encode function that distributes the inputs across the GPUs, as in the sketch below.
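
A minimal sketch of that idea, using sentence-transformers' built-in multi-process pool (the MultiGPUModel class here is illustrative, not part of MTEB):

from sentence_transformers import SentenceTransformer

class MultiGPUModel:
    def __init__(self, model_name: str = "sentence-transformers/all-MiniLM-L6-v2"):
        self.model = SentenceTransformer(model_name)
        # Spawns one worker process per visible GPU (CPU fallback otherwise).
        self.pool = self.model.start_multi_process_pool()

    def encode(self, sentences: list[str], **kwargs):
        # Chunk the inputs and gather embeddings from all worker processes.
        return self.model.encode_multi_process(sentences, self.pool)

    def close(self):
        # Shut down the worker processes when evaluation is done.
        SentenceTransformer.stop_multi_process_pool(self.pool)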

Advanced Usage

Dataset selection

Datasets can be selected by providing a list of dataset names, but also:

  • by their task (e.g. "Clustering" or "Classification")
evaluation = MTEB(task_types=['Clustering', 'Retrieval']) # Only select clustering and retrieval tasks
  • by their categories e.g. "S2S" (sentence to sentence) or "P2P" (paragraph to paragraph)
evaluation = MTEB(task_categories=['S2S']) # Only select sentence2sentence datasets
  • by their languages
evaluation = MTEB(task_langs=["en", "de"]) # Only select datasets which are "en", "de" or "en-de"

You can also specify which languages to load for multilingual/cross-lingual tasks, as shown below:

from mteb.tasks import AmazonReviewsClassification, BUCCBitextMining

evaluation = MTEB(tasks=[
        AmazonReviewsClassification(hf_subsets=["en", "fr"]), # Only load "en" and "fr" subsets of Amazon Reviews
        BUCCBitextMining(hf_subsets=["de-en"]), # Only load "de-en" subset of BUCC
])

There are also presets available for certain task collections, e.g. to select the 56 English datasets that form the "Overall MTEB English leaderboard":

from mteb import MTEB_MAIN_EN
evaluation = MTEB(tasks=MTEB_MAIN_EN, task_langs=["en"])

Evaluation split

You can restrict the evaluation to the test splits of all tasks as follows:

evaluation.run(model, eval_splits=["test"])

Note that the public leaderboard uses the test splits for all datasets except MSMARCO, where the "dev" split is used.
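
To mirror that setup for MSMARCO, one could run it separately on its "dev" split (a sketch reusing the API shown above):

# Run MSMARCO on "dev", matching the public leaderboard convention.
evaluation = MTEB(tasks=["MSMARCO"])
evaluation.run(model, eval_splits=["dev"])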

Using a custom model

Models should implement the following interface: an encode function that takes a list of sentences as input and returns a list of embeddings (embeddings can be np.array, torch.tensor, etc.). For inspiration, you can look at the mteb/mtebscripts repo, which was used to run diverse models via SLURM scripts for the paper.

from typing import Any

import numpy as np
import torch


class MyModel:
    def encode(
        self, sentences: list[str], **kwargs: Any
    ) -> torch.Tensor | np.ndarray:
        """Encodes the given sentences using the encoder.

        Args:
            sentences: The sentences to encode.
            **kwargs: Additional arguments to pass to the encoder.

        Returns:
            The encoded sentences.
        """
        pass

model = MyModel()
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model)
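
For concreteness, a minimal implementation might simply wrap a sentence-transformers model (a sketch; the SBERTWrapper name is illustrative, not part of MTEB):

import numpy as np
from sentence_transformers import SentenceTransformer

class SBERTWrapper:
    def __init__(self, model_name: str = "sentence-transformers/all-MiniLM-L6-v2"):
        self._model = SentenceTransformer(model_name)

    def encode(self, sentences: list[str], **kwargs) -> np.ndarray:
        # Forward MTEB's arguments (e.g. batch_size) to the backbone encoder.
        return self._model.encode(sentences, **kwargs)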

If you'd like to use different encoding functions for queries and corpus when evaluating on Retrieval or Reranking tasks, you can add separate encode_queries and encode_corpus methods. If these methods exist, they will automatically be used for those tasks. You can refer to the DRESModel at mteb/evaluation/evaluators/RetrievalEvaluator.py for an example of these functions.

import numpy as np
import torch


class MyModel:
    def encode_queries(self, queries: list[str], **kwargs) -> list[np.ndarray] | list[torch.Tensor]:
        """
        Returns a list of embeddings for the given sentences.
        Args:
            queries: List of sentences to encode

        Returns:
            List of embeddings for the given sentences
        """
        pass

    def encode_corpus(self, corpus: list[str] | list[dict[str, str]], **kwargs) -> list[np.ndarray] | list[torch.Tensor]:
        """
        Returns a list of embeddings for the given sentences.
        Args:
            corpus: List of sentences to encode
                or list of dictionaries with keys "title" and "text"

        Returns:
            List of embeddings for the given sentences
        """
        pass
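
For concreteness, a sketch of such a model (the names are illustrative; it assumes a sentence-transformers backbone and flattens corpus dictionaries into single strings):

import numpy as np
from sentence_transformers import SentenceTransformer

class MyRetrievalModel:
    def __init__(self, model_name: str = "sentence-transformers/all-MiniLM-L6-v2"):
        self._model = SentenceTransformer(model_name)

    def encode_queries(self, queries: list[str], **kwargs) -> np.ndarray:
        return self._model.encode(queries, **kwargs)

    def encode_corpus(self, corpus: list[str] | list[dict[str, str]], **kwargs) -> np.ndarray:
        if corpus and isinstance(corpus[0], dict):
            # Join "title" and "text" into one passage string per document.
            corpus = [(doc.get("title", "") + " " + doc["text"]).strip() for doc in corpus]
        return self._model.encode(corpus, **kwargs)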

Evaluating on a custom dataset

To evaluate on a custom task, subclass the relevant abstract task class and pass an instance to MTEB, as in the following code. See how to add a new task for details on creating new tasks in MTEB.

from mteb import MTEB
from mteb.abstasks.AbsTaskReranking import AbsTaskReranking
from sentence_transformers import SentenceTransformer


class MyCustomTask(AbsTaskReranking):
    ...

model = SentenceTransformer("average_word_embeddings_komninos")
evaluation = MTEB(tasks=[MyCustomTask()])
evaluation.run(model)
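
Custom task instances can also be combined with built-in task names in a single run; a sketch, assuming the tasks list accepts both strings and instances:

# Mix a custom task with a built-in one in the same evaluation.
evaluation = MTEB(tasks=[MyCustomTask(), "Banking77Classification"])
evaluation.run(model)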

Documentation

  • 📋 Tasks: Overview of available tasks
  • 📈 Leaderboard: The interactive leaderboard of the benchmark
  • 🤖 Adding a model: How to submit a model to the leaderboard
  • 👩‍💻 Adding a dataset: How to add a new task/dataset to MTEB
  • 👩‍💻 Adding a leaderboard tab: How to add a new leaderboard tab to MTEB
  • 🤝 Contributing: How to contribute to MTEB and set it up for development

Citing

MTEB was introduced in "MTEB: Massive Text Embedding Benchmark"; feel free to cite:

@article{muennighoff2022mteb,
  doi = {10.48550/ARXIV.2210.07316},
  url = {https://arxiv.org/abs/2210.07316},
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2210.07316},  
  year = {2022}
}

You may also want to read and cite the work that has extended MTEB and integrated new datasets. Works that have used MTEB for benchmarking can be found on the leaderboard.
