Reranking Models
Boost your search with our crispy reranking models! The mixedbread reranking model family offers state-of-the-art performance across a large variety of domains and can be easily integrated into your existing search stack.
What's new in the mixedbread reranking model family?
We recently finished baking a fresh set of rerank models, the mxbai-rerank-v1 series. After receiving a wave of interest from the community, we're now happy to provide access to the model with the highest demand via our API:
Model | Status | Context Length (tokens) | Description |
---|---|---|---|
mxbai-rerank-large-v1 | API available | 512 | Delivers the highest accuracy and performance |
mxbai-rerank-base-v1 | API unavailable | 512 | Strikes a balance between size and performance |
mxbai-rerank-xsmall-v1 | API unavailable | 512 | Focuses on efficiency while retaining competitive performance |
We are currently investigating fine-tuning and domain adaptation with a limited number of beta testers. Please contact us if you are interested in using a reranking model tailored to your data.
Why mixedbread rerank?
Not only are the mixedbread reranking models powerful and fully open-source, but they're also extremely easy to integrate into your current search stack. All you need to do is pass the original search query along with your search system's output to our reranking models, and they will tremendously boost your search accuracy - your users will love it!
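To make the integration concrete, here is a minimal, self-contained sketch of the reranking step. The `score` function below is a toy token-overlap stand-in for illustration only; in a real stack it would be replaced by a reranking model such as mxbai-rerank-large-v1, which scores (query, document) pairs directly.

```python
def score(query: str, document: str) -> float:
    # Toy relevance score: fraction of query tokens found in the document.
    # A real cross-encoder reranker replaces this with a learned model.
    q_tokens = set(query.lower().split())
    d_tokens = set(document.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def rerank(query: str, candidates: list[str], top_k: int = 3) -> list[str]:
    # Re-order the first-stage (e.g. lexical) results by relevance score.
    ranked = sorted(candidates, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:top_k]

query = "how do rerankers improve search"
first_stage_results = [
    "a recipe for sourdough bread",
    "rerankers improve search by re-scoring candidate documents",
    "search engines index documents with inverted lists",
]
print(rerank(query, first_stage_results, top_k=2))
```

The key design point: the reranker never needs access to your index - it only sees the query and the candidate documents your existing system already retrieved, which is why it slots into any search stack.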
We evaluated our models by letting them rerank the top 100 lexical search results on a subset of the BEIR benchmark, a commonly used collection of evaluation datasets. Specifically, we used two metrics: NDCG@10, which measures how well the model ranks the most relevant results near the top of the list, and accuracy@3, which measures the likelihood of a highly relevant search result appearing in the top 3 results - in our opinion, the most important metric for anticipating user satisfaction.
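For intuition, both metrics can be computed in a few lines. The snippet below uses a hypothetical list of graded relevance judgments (index 0 = rank 1) to show how NDCG@10 rewards placing relevant results early, and how accuracy@3 checks whether a highly relevant result lands in the top 3.

```python
import math

# Hypothetical graded relevance judgments for one ranked result list,
# in the order the model returned them (0 = irrelevant, 3 = highly relevant).
relevance = [3, 0, 2, 1, 0, 2, 0, 0, 1, 0]

def dcg_at_k(rels, k):
    # Discounted cumulative gain: a result's gain is discounted
    # logarithmically by its rank, so early positions count more.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    # Normalize by the DCG of the ideal (perfectly sorted) ranking,
    # so 1.0 means the model produced the best possible order.
    ideal = sorted(rels, reverse=True)
    return dcg_at_k(rels, k) / dcg_at_k(ideal, k)

def accuracy_at_k(rels, k, threshold=2):
    # 1 if at least one highly relevant result (rel >= threshold) is in the top k.
    return int(any(rel >= threshold for rel in rels[:k]))

print(round(ndcg_at_k(relevance, 10), 3))
print(accuracy_at_k(relevance, 3))
```

In the evaluation, these per-query scores are averaged over all queries in each BEIR dataset.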
For illustrative purposes, we also included classic keyword search and a current semantic search model in the evaluation. The results make us confident that our models deliver best-in-class performance in their respective size categories:
Model | BEIR Accuracy (11 datasets) |
---|---|
Lexical Search (Pyserini) | 66.4 |
bge-reranker-base | 66.9 |
bge-reranker-large | 70.6 |
cohere-embed-v3 | 70.9 |
mxbai-rerank-xsmall-v1 | 70.0 |
mxbai-rerank-base-v1 | 72.3 |
mxbai-rerank-large-v1 | 74.9 |
Why should you use our API?
To get started, you can easily use the open-source version of the models. However, the models provided through the API are trained on new data every month. This ensures that they keep up with ongoing developments in the world and can identify the most relevant information for any question without a hard knowledge cutoff. Naturally, our quality control ensures that each new version performs at least on par with its predecessor.