sentence-transformers

State-of-the-Art Text Embeddings

Latest: v5.2.3 · 13 releases · 5 breaking changes · 7 common errors

Release History

v5.2.3 · 1 fix
Feb 17, 2026

This patch release (v5.2.3) introduces compatibility with the newly released Transformers v5.2, resolving a training failure related to the Trainer class.

v5.2.2 · 1 fix · 1 feature
Jan 27, 2026

This patch release removes the mandatory `requests` dependency, making `httpx` the preferred (but optional) dependency, primarily to support Transformers v5.

v5.2.1 · 1 feature
Jan 26, 2026

This patch release ensures full compatibility with the official Transformers v5.0.0 release and explicitly declares numpy in its dependencies.

v5.2.0 · Breaking · 2 fixes · 6 features
Dec 11, 2025

Version 5.2.0 adds multiprocessing to CrossEncoder, multilingual NanoBEIR support, similarity scores in hard‑negative mining, and updates for Transformers 5 while deprecating Python 3.9 and the old `n-tuple-scores` format.

v5.1.2 · 9 fixes · 8 features
Oct 22, 2025

Sentence‑Transformers 5.1.2 adds improved saving for StaticEmbedding and Dense modules, introduces Intel XPU as the default device, enhances loss compatibility, and adds Python 3.13 support while fixing several training and loading bugs.

v5.1.1 · Breaking · 8 fixes · 3 features
Sep 22, 2025

Version 5.1.1 adds explicit validation of unused kwargs in `encode`, introduces FLOPS metrics for SparseEncoder evaluators, supports Knowledgeable Passage Retriever models, and includes several bug fixes around batch size handling, multi‑GPU processing, and evaluator output paths.

v5.1.0 · 1 fix · 6 features
Aug 6, 2025

Version 5.1.0 adds ONNX and OpenVINO backends for SparseEncoder, a new n‑tuple‑scores format for hard‑negative mining, multi‑GPU gathering, TrackIO support, and updated documentation.

v5.0.0 · 1 fix · 8 features
Jul 1, 2025

Sentence-Transformers 5.0.0 adds SparseEncoder support, new encode_query/document methods, multiprocessing encoding, a Router module, custom learning rates, and composite loss logging, while remaining backwards compatible.

v4.1.0 · 1 fix · 5 features
Apr 15, 2025

Version 4.1.0 adds ONNX and OpenVINO backends for CrossEncoder, a new `backend` argument, and utilities for model optimization, while remaining backward compatible.

v4.0.2 · Breaking · 7 fixes · 4 features
Apr 3, 2025

Version 4.0.2 introduces safer max-length handling for CrossEncoder models and improves distributed training device placement, while fixing typing, FSDP, and documentation issues.

v4.0.1 · Breaking · 17 features
Mar 26, 2025

Version 4.0.1 introduces a complete overhaul of the CrossEncoder training pipeline with a new `CrossEncoderTrainer`, dataset‑based inputs, multi‑GPU and bf16 support, and many training‑related enhancements, while keeping inference unchanged.

v3.4.1 · 6 fixes · 1 feature
Jan 29, 2025

Version 3.4.1 adds native Model2Vec support to SentenceTransformer and fixes several documentation and network‑request bugs.

v3.4.0 · Breaking · 10 fixes · 5 features
Jan 23, 2025

Version 3.4.0 fixes a major memory‑leak issue, adds compatibility between cached losses and MatryoshkaLoss, introduces several new features, and resolves numerous bugs.

Common Errors

OutOfMemoryError · 4 reports

OutOfMemoryError in sentence-transformers usually arises from loading excessively large models or batches onto the GPU. Reduce the batch size during training or inference, and consider using a smaller model like `all-MiniLM-L6-v2` which has a lower memory footprint. Alternatively, enable gradient accumulation or offload model weights to CPU during training if possible.
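Since peak memory scales with the number of texts processed at once, chunking the input is the usual first fix. The sketch below shows the chunking pattern with a stand-in encoder function (in practice, sentence-transformers' `encode` accepts a `batch_size` argument directly, e.g. `model.encode(texts, batch_size=16)`); the stub is only there to keep the example self-contained.

```python
def encode_in_batches(texts, encode_fn, batch_size=16):
    """Encode texts in small chunks so only one batch is resident at a time."""
    embeddings = []
    for start in range(0, len(texts), batch_size):
        chunk = texts[start:start + batch_size]
        # Each call sees at most batch_size items, bounding peak memory.
        embeddings.extend(encode_fn(chunk))
    return embeddings

# Stand-in encoder for illustration only: maps each text to its length.
fake_encode = lambda chunk: [len(t) for t in chunk]
vectors = encode_in_batches(["a", "bb", "ccc"], fake_encode, batch_size=2)
```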

ModuleNotFoundError · 3 reports

The "ModuleNotFoundError" in sentence-transformers usually arises from incorrect installation or import paths, particularly when dealing with custom models, private hubs, or testing utilities. Ensure sentence-transformers is correctly installed using `pip install -U sentence-transformers`, and verify that import statements accurately reflect the module's location within the package structure. Double-check for typos in module names and consider adding the package's root directory to your Python path if necessary.
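To confirm the package is actually visible to the interpreter you are running (a mismatched virtual environment is a frequent culprit), a quick stdlib check is:

```python
import importlib.util
import sys

def is_importable(package_name):
    """Return True if `package_name` can be found on the current sys.path."""
    return importlib.util.find_spec(package_name) is not None

# Note: the module name uses an underscore even though the pip package
# name uses a hyphen:
#   pip install -U sentence-transformers  ->  import sentence_transformers
print(sys.executable)  # make sure this is the interpreter you installed into
print(is_importable("sentence_transformers"))
```

If this prints `False` for an interpreter you believe you installed into, the install went to a different environment.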

NotImplementedError · 2 reports

The "NotImplementedError" in sentence-transformers often arises when a feature or model component hasn't been implemented for your specific combination of PyTorch, ONNX, or the transformer model itself (e.g., quantization support for Qwen-3). Ensure that your sentence-transformers library, PyTorch version, and ONNX version (if applicable) are mutually compatible and up to date. If the problem persists, read the specific error message to identify the component involved, and check the sentence-transformers documentation and issue tracker for workarounds or updates addressing the missing implementation.

RepositoryNotFoundError · 2 reports

RepositoryNotFoundError usually arises when the specified model name in `SentenceTransformer()` is incorrect, or when the model requires authentication (e.g., a private model). Double-check the model name for typos and ensure it exists on the Hugging Face Hub. If the model is private, pass your Hugging Face token via `SentenceTransformer(model_name_or_path, token="YOUR_HUGGINGFACE_TOKEN")` (older versions used the `use_auth_token` argument) or set it as an environment variable.

KeyError · 1 report

KeyError in sentence-transformers often arises when the input data's indexing (e.g., a Pandas Series index) doesn't align with the expected format during processing, especially within the `encode()` function. To fix this, ensure your input data has a standard integer index starting from 0, or explicitly convert your data (e.g., a Pandas Series) to a list before passing it to `encode()`, bypassing any custom indexing issues. This forces sentence-transformers to iterate through the data sequentially without relying on potentially problematic keys.
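The failure mode can be reproduced with a plain dict standing in for a Series whose integer index does not start at 0 (an assumed illustration; a real pandas Series with a shifted index behaves similarly under key-based lookups, and `series.tolist()` is the equivalent fix):

```python
# A Series-like mapping whose integer "index" starts at 5, not 0.
series_like = {5: "first sentence", 6: "second sentence", 7: "third sentence"}

# Positional access assumes keys 0..n-1, so key 0 is missing:
try:
    series_like[0]
except KeyError:
    print("KeyError: 0")

# Converting to a plain list restores sequential, position-based access
# (for a real pandas Series, use series.tolist() instead):
sentences = list(series_like.values())
print(sentences[0])
```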

ImportError · 1 report

The "ImportError: cannot import name '...' from 'sentence_transformers'" often arises from outdated or corrupted sentence-transformers installations or dependency conflicts. Try upgrading sentence-transformers to the latest version using `pip install --upgrade sentence-transformers`. If upgrading doesn't work, try reinstalling the library with `pip install --force-reinstall sentence-transformers` to ensure a clean installation and resolve potential dependency clashes.
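Before reinstalling, it can help to check which version is actually importable, since a mismatch between the installed version and the one your code expects often explains the missing name. A stdlib check:

```python
from importlib import metadata

def installed_version(dist_name):
    """Return the installed version string, or None if the package is absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

version = installed_version("sentence-transformers")
print(version)  # None means the package is not installed in this environment
```

Comparing this against the release in which the name you are importing was introduced (see the release history above) tells you whether an upgrade is the right fix.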
