Changelog

v0.14.6

📦 llamaindex
✨ 7 features · 🐛 6 fixes · 🔧 10 symbols

Summary

This release introduces new integrations for Isaacus embeddings and the Helicone LLM, adds async support for the Bedrock knowledge base retriever, and fixes a SQL injection vulnerability in PostgresKVStore by switching to parameterized queries.

Migration Steps

  1. Update llama-index-core to 0.14.6 for parallel tool call support.
  2. If using PostgresKVStore, upgrade to 0.4.2 to ensure SQL parameterization security fixes are applied.

✨ New Features

  • Added allow_parallel_tool_calls support for non-streaming requests in llama-index-core (see the first sketch after this list).
  • New integration for Isaacus embeddings (see the second sketch after this list).
  • New integration for Helicone LLM.
  • Added GLM support to Baseten LLM integration.
  • Added async support for AmazonKnowledgeBasesRetriever in Bedrock retrievers (see the third sketch after this list).
  • Added GIN index support for text array metadata in PostgreSQL vector store.
  • Updated OCI GenAI to support latest Cohere models.
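
The parallel tool call flag is exposed through the FunctionCallingLLM interface in llama-index-core. Below is a minimal, non-streaming sketch that assumes the OpenAI integration (llama-index-llms-openai) is installed and OPENAI_API_KEY is set; the add/multiply tools and the model name are illustrative only.

```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI  # any function-calling LLM works here


def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


tools = [FunctionTool.from_defaults(fn=add), FunctionTool.from_defaults(fn=multiply)]
llm = OpenAI(model="gpt-4o-mini")  # illustrative model choice

# Non-streaming request; the model may now return several tool calls in one response.
response = llm.chat_with_tools(
    tools,
    user_msg="What is 3 + 4, and what is 6 * 7?",
    allow_parallel_tool_calls=True,
)

for tool_call in llm.get_tool_calls_from_response(response, error_on_no_tool_call=False):
    print(tool_call.tool_name, tool_call.tool_kwargs)
```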
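
For the Isaacus embeddings integration, only the IsaacusEmbedding class name is listed in this release. The import path and constructor arguments in the sketch below are assumptions that follow the usual llama-index namespace convention; get_text_embedding is the standard BaseEmbedding method from llama-index-core.

```python
from llama_index.embeddings.isaacus import IsaacusEmbedding  # assumed import path

# api_key is an assumed constructor argument; check the package docs for exact names.
embed_model = IsaacusEmbedding(api_key="<your-isaacus-api-key>")

# get_text_embedding() is the standard llama-index embeddings interface.
embedding = embed_model.get_text_embedding("Governing law clauses in SaaS agreements")
print(len(embedding))
```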
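
The Bedrock knowledge base retriever can now be awaited directly. A minimal async sketch, assuming llama-index-retrievers-bedrock is installed, AWS credentials are configured, and a knowledge base ID is available; the retrieval_config shape mirrors the Bedrock Agent Runtime retrieve API, and the query string is illustrative.

```python
import asyncio

from llama_index.retrievers.bedrock import AmazonKnowledgeBasesRetriever

retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="<your-knowledge-base-id>",
    retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 4}},
)


async def main() -> None:
    # aretrieve() is the standard async retriever entry point; this release adds
    # a native async implementation for the Bedrock retriever behind it.
    nodes = await retriever.aretrieve("What does the onboarding guide say about VPN access?")
    for node in nodes:
        print(node.score, node.get_content()[:80])


asyncio.run(main())
```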

🐛 Bug Fixes

  • Fixed invalid use of field-specific metadata in core.
  • Fixed edge case where sentence splits exceeded chunk size in core.
  • Fixed BedrockEmbedding to support Cohere v4 response format.
  • Fixed double token stream and content delta filtering in Anthropic LLM.
  • Fixed SQL injection vulnerability in PostgresKVStore by replacing raw string interpolation with SQLAlchemy parameterized APIs.
  • Fixed BasicMCPClient resource signatures in MCP tools.

🔧 Affected Symbols

allow_parallel_tool_calls, SemanticSplitterNodeParser, BedrockEmbedding, IsaacusEmbedding, Anthropic, AmazonKnowledgeBasesRetriever, PostgresKVStore, PGVectorStore, BasicMCPClient, HeliconeLLM