v0.14.7
📦 llamaindex
✨ 9 features · 🐛 5 fixes · 🔧 10 symbols
Summary
This release introduces SerpEx tool integration and expands tool call block support across Anthropic, MistralAI, and Ollama. It also adds significant updates to Bedrock, Couchbase vector stores, and GitHub authentication.
Migration Steps
- Update llama-index-core to 0.14.7.
- If you use the Confluence reader without pycairo installed, SVG processing is now optional; disable it rather than installing pycairo if it previously failed.
- Update specific provider packages (Anthropic, MistralAI, Ollama) to access new tool call block integrations.
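The upgrade steps above can be sketched with pip. The provider package names below assume the standard `llama-index-*` naming scheme for the Anthropic, MistralAI, and Ollama integrations:

```shell
# Upgrade the core package to this release
pip install -U "llama-index-core>=0.14.7"

# Upgrade the provider packages that gained tool call block support
pip install -U llama-index-llms-anthropic llama-index-llms-mistralai llama-index-llms-ollama
```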
✨ New Features
- Integrated SerpEx tool for search capabilities.
- Integrated Anthropic, MistralAI, and Ollama with tool call blocks.
- Added support for Bedrock Guardrails streamProcessingMode.
- Added optional force parameter for Bedrock structured output.
- Updated FireworksAI model list.
- Added GitHub App authentication support to the GitHub reader.
- Added Hyperscale and Composite vector index support to the Couchbase vector store.
- Made SVG processing optional in Confluence reader to avoid mandatory pycairo installation.
- Allowed setting temperature for gpt-5-chat in OpenAI LLM.
🐛 Bug Fixes
- Updated outdated error message regarding LLM configuration.
- Fixed FunctionTool to ensure the full docstring is utilized.
- Fixed API documentation build process.
- Resolved issues with lock files and CI workflows.
- Fixed recently failing tests in core and bedrock retrievers.
🔧 Affected Symbols
- FunctionTool
- SerpExToolSpec
- Anthropic
- MistralAI
- Ollama
- BedrockConverse
- ConfluenceReader
- GithubRepositoryReader
- CouchbaseVectorStore
- OpenAI