1.5.3
📦 chroma
⚠ 1 breaking · ✨ 18 features · 🐛 4 fixes
Summary
Version 1.5.3 removes the pydantic v1 compatibility layer, enabling Python 3.14 support, and introduces numerous performance enhancements, especially around log fetching and compaction endpoints.
⚠️ Breaking Changes
- Dropped pydantic v1 compatibility layer. Users relying on pydantic v1 models or configurations must migrate to pydantic v2 syntax.
Migration Steps
- Migrate any pydantic v1 models or configurations to pydantic v2 syntax; the v1 compatibility layer has been removed to support Python 3.14.
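The migration step above can be sketched as follows. This is a minimal, illustrative example (the model and field names are assumptions, not code from this release) showing the main v1-to-v2 renames: `class Config` → `model_config`, `@validator` → `@field_validator`, and `.dict()` → `.model_dump()`.

```python
# Minimal pydantic v2 model, with comments marking the v1 equivalents.
from pydantic import BaseModel, ConfigDict, field_validator

class Settings(BaseModel):
    # v1: `class Config: extra = "forbid"`  ->  v2: model_config
    model_config = ConfigDict(extra="forbid")

    host: str
    port: int = 8000

    # v1: @validator("port")  ->  v2: @field_validator("port")
    @field_validator("port")
    @classmethod
    def port_in_range(cls, v: int) -> int:
        if not (0 < v < 65536):
            raise ValueError("port out of range")
        return v

s = Settings(host="localhost")
# v1: s.dict()  ->  v2: s.model_dump()
print(s.model_dump())  # {'host': 'localhost', 'port': 8000}
```

Any v1-style usage left in your codebase will now fail at import or validation time, so it is worth grepping for `@validator`, `class Config`, and `.dict()` before upgrading.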
✨ New Features
- Thread topology name through purge-dirty pipeline.
- Purge dirty operation now supports specifying topology via Spanner.
- Parallelized segment reader initialization in filter and idf operators for performance.
- Preallocation during pull log parsing.
- Implemented ordered sparse vector writer.
- Skip record load when only ID is requested.
- Added pointer-based log fetch via ScoutLogFragments.
- Added ReadLevel to count in backend.
- Added a gauge metric in sysdb to track compaction_failure_count.
- Implemented ListInProgressJobs endpoint for compactor.
- Compaction endpoint now returns where a collection would be assigned.
- Added tracing spans to log fetch path.
- Added OpenTelemetry metrics to the system crate.
- Added dedicated fragment_storage config for fragment fetcher.
- Re-added the fetch_log_concurrency semaphore in the worker.
- Implemented Delete with limit functionality in the server.
- Implemented Delete with limit functionality in clients.
- Updated the Gemini embedding functions (EFs).
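The delete-with-limit feature above caps how many matching records a single delete removes. The sketch below illustrates the semantics only; the function and parameter names are assumptions for illustration, not Chroma's actual client or server API.

```python
# Sketch of delete-with-limit semantics: remove at most `limit` records
# matching a predicate, and report how many were actually deleted.
from typing import Any, Callable, Dict

def delete_with_limit(
    records: Dict[str, Dict[str, Any]],
    where: Callable[[Dict[str, Any]], bool],
    limit: int,
) -> int:
    """Delete up to `limit` matching records; return the number deleted."""
    doomed = [rid for rid, rec in records.items() if where(rec)][:limit]
    for rid in doomed:
        del records[rid]
    return len(doomed)

# Six records, three of which match the predicate; limit caps deletion at 2.
store = {f"id{i}": {"category": "a" if i % 2 == 0 else "b"} for i in range(6)}
deleted = delete_with_limit(store, lambda r: r["category"] == "a", limit=2)
print(deleted, len(store))  # 2 4
```

Bounding deletes this way lets clients chip away at large matching sets in fixed-size batches instead of issuing one unbounded delete.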
🐛 Bug Fixes
- Retried batch fetch on channel closure in storage operations.
- Fixed CAS (Compare-And-Swap) on version during reassignment.
- Avoided redundant manifest load in pull_logs_inner.
- Used ResourceExhausted error code for log backpressure handling.
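The first and last fixes above follow a common pattern: transient transport failures (a closed channel) are retried, while a resource-exhausted error is deliberately surfaced so callers can back off. A minimal sketch of that pattern, with illustrative exception types that stand in for the real gRPC/storage errors:

```python
# Retry transient channel-closure errors, but let backpressure errors
# (ResourceExhausted) propagate to the caller. Types are illustrative.
class ChannelClosed(Exception):
    pass

class ResourceExhausted(Exception):
    pass

def fetch_with_retry(fetch, max_attempts: int = 3):
    """Call `fetch`, retrying only on ChannelClosed."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except ChannelClosed:
            if attempt == max_attempts:
                raise
        # ResourceExhausted is intentionally not caught here:
        # it signals backpressure and must reach the caller.

# A fetch that fails twice with a closed channel, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ChannelClosed()
    return ["log-1", "log-2"]

print(fetch_with_retry(flaky_fetch))  # ['log-1', 'log-2']
```

Keeping the two error classes distinct is the point of the ResourceExhausted fix: retrying on backpressure would only make an overloaded log service worse.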