sentence-transformers
AI & LLMs
State-of-the-Art Text Embeddings
Release History
v5.2.3 (1 fix): This patch release introduces compatibility with the newly released Transformers v5.2, resolving a training failure related to the Trainer class.
v5.2.2 (1 fix, 1 feature): This patch release removes the mandatory `requests` dependency, making `httpx` the preferred (but optional) dependency, primarily to support Transformers v5.
v5.2.1 (1 feature): This patch release ensures full compatibility with the official Transformers v5.0.0 release and explicitly specifies numpy in the dependencies.
v5.2.0 (Breaking, 2 fixes, 6 features): Version 5.2.0 adds multiprocessing to CrossEncoder, multilingual NanoBEIR support, similarity scores in hard-negative mining, and updates for Transformers 5, while deprecating Python 3.9 and the old `n-tuple-scores` format.
v5.1.2 (9 fixes, 8 features): Sentence-Transformers 5.1.2 adds improved saving for StaticEmbedding and Dense modules, introduces Intel XPU as the default device, enhances loss compatibility, and adds Python 3.13 support, while fixing several training and loading bugs.
v5.1.1 (Breaking, 8 fixes, 3 features): Version 5.1.1 adds explicit validation of unused kwargs in `encode`, introduces FLOPS metrics for SparseEncoder evaluators, supports Knowledgeable Passage Retriever models, and includes several bug fixes around batch size handling, multi-GPU processing, and evaluator output paths.
v5.1.0 (1 fix, 6 features): Version 5.1.0 adds ONNX and OpenVINO backends for SparseEncoder, a new n-tuple-scores format for hard-negative mining, multi-GPU gathering, TrackIO support, and updated documentation.
v5.0.0 (1 fix, 8 features): Sentence-Transformers 5.0.0 adds SparseEncoder support, new encode_query/document methods, multiprocessing encoding, a Router module, custom learning rates, and composite loss logging, while remaining backwards compatible.
v4.1.0 (1 fix, 5 features): Version 4.1.0 adds ONNX and OpenVINO backends for CrossEncoder, a new `backend` argument, and utilities for model optimization, while remaining backward compatible.
v4.0.2 (Breaking, 7 fixes, 4 features): Version 4.0.2 introduces safer max-length handling for CrossEncoder models and improves distributed training device placement, while fixing typing, FSDP, and documentation issues.
v4.0.1 (Breaking, 17 features): Version 4.0.1 introduces a complete overhaul of the CrossEncoder training pipeline with a new `CrossEncoderTrainer`, dataset-based inputs, multi-GPU and bf16 support, and many training-related enhancements, while keeping inference unchanged.
v3.4.1 (6 fixes, 1 feature): Version 3.4.1 adds native Model2Vec support to SentenceTransformer and fixes several documentation and network-request bugs.
v3.4.0 (Breaking, 10 fixes, 5 features): Version 3.4.0 fixes a major memory-leak issue, adds compatibility between cached losses and MatryoshkaLoss, introduces several new features, and resolves numerous bugs.
Common Errors
OutOfMemoryError (4 reports): OutOfMemoryError in sentence-transformers usually arises from loading excessively large models or batches onto the GPU. Reduce the batch size during training or inference, and consider using a smaller model like `all-MiniLM-L6-v2`, which has a lower memory footprint. Alternatively, enable gradient accumulation or offload model weights to CPU during training if possible.
ModuleNotFoundError (3 reports): The "ModuleNotFoundError" in sentence-transformers usually arises from incorrect installation or import paths, particularly when dealing with custom models, private hubs, or testing utilities. Ensure sentence-transformers is correctly installed using `pip install -U sentence-transformers`, and verify that import statements accurately reflect the module's location within the package structure. Double-check for typos in module names and consider adding the package's root directory to your Python path if necessary.
NotImplementedError (2 reports): The "NotImplementedError" in sentence-transformers often arises when using a feature or model component that hasn't been fully implemented for a specific version of PyTorch, ONNX, or the transformer model itself (e.g., quantization support for Qwen-3). Ensure that your sentence-transformers library, PyTorch version, and ONNX version (if applicable) are compatible and up-to-date. If problems persist, examine the specific error message and model component, and check the sentence-transformers documentation or issue tracker for workarounds or updates addressing the missing implementation or incompatibility.
RepositoryNotFoundError (2 reports): RepositoryNotFoundError usually arises when the specified model name in `SentenceTransformer()` is incorrect, or when the model requires authentication (e.g., a private model). Double-check the model name for typos and ensure it exists on the Hugging Face Hub. If the model is private, you must pass your Hugging Face API token to `SentenceTransformer(model_name_or_path, use_auth_token="YOUR_HUGGINGFACE_API_TOKEN")` or set it as an environment variable.
KeyError (1 report): KeyError in sentence-transformers often arises when the input data's indexing (e.g., a Pandas Series index) doesn't align with the expected format during processing, especially within the `encode()` function. To fix this, ensure your input data has a standard integer index starting from 0, or explicitly convert your data (e.g., a Pandas Series) to a list before passing it to `encode()`, bypassing any custom indexing issues. This forces sentence-transformers to iterate through the data sequentially without relying on potentially problematic keys.
ImportError (1 report): The "ImportError: cannot import name '...' from 'sentence_transformers'" often arises from outdated or corrupted sentence-transformers installations or dependency conflicts. Try upgrading sentence-transformers to the latest version using `pip install --upgrade sentence-transformers`. If upgrading doesn't work, try reinstalling the library with `pip install --force-reinstall sentence-transformers` to ensure a clean installation and resolve potential dependency clashes.
Related AI & LLMs Packages
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Ollama: Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
LangChain: 🦜🔗 The platform for reliable agents.
ComfyUI: The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
llama.cpp: LLM inference in C/C++.
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.