Massive Text Embedding Benchmark
Installation | Usage | Leaderboard | Documentation | Citing
Installation
pip install mteb
Usage
- Using a Python script (see scripts/run_mteb_english.py and the mteb/mtebscripts repository for more examples):
from mteb import MTEB
from sentence_transformers import SentenceTransformer
# Define the sentence-transformers model name
model_name = "average_word_embeddings_komninos"
# or directly from huggingface:
# model_name = "sentence-transformers/all-MiniLM-L6-v2"
model = SentenceTransformer(model_name)
evaluation = MTEB(tasks=["Banking77Classification"])
results = evaluation.run(model, output_folder=f"results/{model_name}")
- Using CLI
mteb --available_tasks
mteb -m sentence-transformers/all-MiniLM-L6-v2 \
-t Banking77Classification \
--verbosity 3
# If nothing is specified, results are saved to the results/{model_name} folder by default
- Using multiple GPUs in parallel can be done by providing a custom encode function that distributes the inputs across the GPUs; a minimal sketch is shown below.
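A minimal sketch of such a wrapper, assuming a sentence-transformers model and its built-in multi-process pool (the class name and device list are illustrative, not part of MTEB):

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

class MultiGPUModel:
    """Illustrative wrapper that spreads encoding across several GPUs
    using sentence-transformers' multi-process pool."""

    def __init__(self, model_name: str, devices: list[str]):
        self.model = SentenceTransformer(model_name)
        # One worker process per device, e.g. ["cuda:0", "cuda:1"]
        self.pool = self.model.start_multi_process_pool(target_devices=devices)

    def encode(self, sentences: list[str], **kwargs):
        # Sentences are chunked, sent to the worker processes, and the
        # embeddings come back as a single numpy array.
        return self.model.encode_multi_process(sentences, self.pool)

    def close(self):
        self.model.stop_multi_process_pool(self.pool)

model = MultiGPUModel("sentence-transformers/all-MiniLM-L6-v2", devices=["cuda:0", "cuda:1"])
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model, output_folder="results/multi_gpu")
model.close()
```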
Advanced Usage (click to unfold)
Advanced Usage
Dataset selection
Datasets can be selected by providing a list of dataset names, but also:
- by their task (e.g. "Clustering" or "Classification")
evaluation = MTEB(task_types=['Clustering', 'Retrieval']) # Only select clustering and retrieval tasks
- by their categories e.g. "S2S" (sentence to sentence) or "P2P" (paragraph to paragraph)
evaluation = MTEB(task_categories=['S2S']) # Only select sentence2sentence datasets
- by their languages
evaluation = MTEB(task_langs=["en", "de"]) # Only select datasets which are "en", "de" or "en-de"
You can also specify which languages to load for multilingual/crosslingual tasks like below:
from mteb.tasks import AmazonReviewsClassification, BUCCBitextMining
evaluation = MTEB(tasks=[
    AmazonReviewsClassification(langs=["en", "fr"]),  # Only load "en" and "fr" subsets of Amazon Reviews
    BUCCBitextMining(langs=["de-en"]),  # Only load "de-en" subset of BUCC
])
There are also presets available for certain task collections, e.g. to select the 56 English datasets that form the "Overall MTEB English leaderboard":
from mteb import MTEB_MAIN_EN
evaluation = MTEB(tasks=MTEB_MAIN_EN, task_langs=["en"])
Evaluation split
You can evaluate only on the test splits of all tasks by doing the following:
evaluation.run(model, eval_splits=["test"])
Note that the public leaderboard uses the test splits for all datasets except MSMARCO, where the "dev" split is used.
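For example, to mirror the leaderboard setting you would run MSMARCO on its "dev" split and the remaining tasks on "test". A small sketch reusing the API shown above (the model choice is just an example):

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# MSMARCO is scored on its "dev" split on the leaderboard
MTEB(tasks=["MSMARCO"]).run(model, eval_splits=["dev"])

# All other tasks use their "test" split
MTEB(tasks=["Banking77Classification"]).run(model, eval_splits=["test"])
```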
Using a custom model
Models should implement the following interface: an encode function that takes a list of sentences as input and returns a list of embeddings (embeddings can be np.array, torch.tensor, etc.). For inspiration, you can look at the mteb/mtebscripts repo used for running diverse models via SLURM scripts for the paper.
class MyModel():
def encode(self, sentences: list[str], **kwargs) -> list[np.ndarray] | list[torch.Tensor]:
"""
Returns a list of embeddings for the given sentences.
Args:
sentences: List of sentences to encode
Returns:
List of embeddings for the given sentences
"""
pass
model = MyModel()
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model)
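For a concrete starting point, the sketch below wraps a sentence-transformers model behind this interface (the wrapper class name and the forwarded batch_size are illustrative, not part of MTEB):

```python
import numpy as np
from mteb import MTEB
from sentence_transformers import SentenceTransformer

class SentenceTransformerWrapper:
    """Illustrative custom model that fulfils the encode interface."""

    def __init__(self, model_name: str):
        self.model = SentenceTransformer(model_name)

    def encode(self, sentences: list[str], **kwargs) -> np.ndarray:
        # Forward the batch size if the evaluator provides one.
        return self.model.encode(sentences, batch_size=kwargs.get("batch_size", 32))

model = SentenceTransformerWrapper("sentence-transformers/all-MiniLM-L6-v2")
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model)
```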
If you'd like to use different encoding functions for queries and corpus when evaluating on Retrieval or Reranking tasks, you can add separate encode_queries and encode_corpus methods. If these methods exist, they will automatically be used for those tasks. You can refer to the DRESModel at mteb/evaluation/evaluators/RetrievalEvaluator.py for an example of these functions.
class MyModel():
def encode_queries(self, queries: list[str], **kwargs) -> list[np.ndarray] | list[torch.Tensor]:
"""
Returns a list of embeddings for the given queries.
Args:
    queries: List of queries to encode
Returns:
    List of embeddings for the given queries
"""
pass
def encode_corpus(self, corpus: list[str] | list[dict[str, str]], **kwargs) -> list[np.ndarray] | list[torch.Tensor]:
"""
Returns a list of embeddings for the given corpus.
Args:
    corpus: List of documents to encode,
        or list of dictionaries with keys "title" and "text"
Returns:
    List of embeddings for the given corpus
"""
pass
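A minimal sketch of a concrete implementation, assuming a single underlying encoder and a simple "title" + "text" join for corpus entries given as dictionaries (both the class name and the join convention are illustrative):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

class MyRetrievalModel:
    """Illustrative model with separate query and corpus encoding."""

    def __init__(self, model_name: str):
        self.model = SentenceTransformer(model_name)

    def encode_queries(self, queries: list[str], **kwargs) -> np.ndarray:
        return self.model.encode(queries, batch_size=kwargs.get("batch_size", 32))

    def encode_corpus(self, corpus: list[str] | list[dict[str, str]], **kwargs) -> np.ndarray:
        if corpus and isinstance(corpus[0], dict):
            # Join title and text when documents are passed as dictionaries.
            corpus = [(doc.get("title", "") + " " + doc["text"]).strip() for doc in corpus]
        return self.model.encode(corpus, batch_size=kwargs.get("batch_size", 32))
```

An instance of this class can be passed to evaluation.run exactly as in the examples above.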
Evaluating on a custom dataset
To evaluate on a custom task, you can run the following code with your custom task. See how to add a new task for instructions on creating a new task in MTEB.
from mteb import MTEB
from mteb.abstasks.AbsTaskReranking import AbsTaskReranking
from sentence_transformers import SentenceTransformer
class MyCustomTask(AbsTaskReranking):
...
model = SentenceTransformer("average_word_embeddings_komninos")
evaluation = MTEB(tasks=[MyCustomTask()])
evaluation.run(model)
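As an indication of what the body of such a task can look like, the sketch below fills in the task metadata following the pattern of existing reranking tasks; the dataset name is hypothetical and the exact required fields can differ between MTEB versions, so check the "adding a dataset" guide for the authoritative list:

```python
from mteb import MTEB
from mteb.abstasks.AbsTaskReranking import AbsTaskReranking
from sentence_transformers import SentenceTransformer

class MyCustomTask(AbsTaskReranking):
    @property
    def description(self):
        # Illustrative metadata; field names follow existing reranking tasks.
        return {
            "name": "MyCustomTask",
            "hf_hub_name": "my-username/my-reranking-dataset",  # hypothetical dataset
            "description": "A custom reranking task.",
            "reference": "https://example.com",
            "type": "Reranking",
            "category": "s2s",
            "eval_splits": ["test"],
            "eval_langs": ["en"],
            "main_score": "map",
        }

model = SentenceTransformer("average_word_embeddings_komninos")
evaluation = MTEB(tasks=[MyCustomTask()])
evaluation.run(model)
```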
Documentation
| Documentation |  |
|---|---|
| 📋 Tasks | Overview of available tasks |
| 📈 Leaderboard | The interactive leaderboard of the benchmark |
| 🤖 Adding a model | Information related to how to submit a model to the leaderboard |
| 👩‍💻 Adding a dataset | How to add a new task/dataset to MTEB |
| 🤝 Contributing | How to contribute to MTEB and set it up for development |
Citing
MTEB was introduced in "MTEB: Massive Text Embedding Benchmark"; feel free to cite:
@article{muennighoff2022mteb,
doi = {10.48550/ARXIV.2210.07316},
url = {https://arxiv.org/abs/2210.07316},
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022}
}
You may also want to read and cite the amazing work that has extended MTEB & integrated new datasets:
- Shitao Xiao, Zheng Liu, Peitian Zhang, Niklas Muennighoff. "C-Pack: Packaged Resources To Advance General Chinese Embedding" arXiv 2023
- Michael Günther, Jackmin Ong, Isabelle Mohr, Alaeddine Abdessalem, Tanguy Abel, Mohammad Kalim Akram, Susana Guzman, Georgios Mastrapas, Saba Sturua, Bo Wang, Maximilian Werk, Nan Wang, Han Xiao. "Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents" arXiv 2023
- Silvan Wehrli, Bert Arnrich, Christopher Irrgang. "German Text Embedding Clustering Benchmark" arXiv 2024
For works that have used MTEB for benchmarking, you can find them on the leaderboard.