v0.12.42
📦 llamaindex
✨ 8 features · 🐛 8 fixes · ⚡ 1 deprecation · 🔧 8 symbols
Summary
This release introduces support for OpenAI's o3-pro model and MistralAI's Magistral reasoning models, adds a new multi-modal OpenAI-like LLM, and fixes several issues in async memory operations and LlamaCloud figure retrieval.
Migration Steps
- Update llama-index-postprocessor-bedrock-rerank to use the BedrockRerank class name instead of AWSBedrockRerank (see the sketch after these steps).
- Ensure aioboto3 is updated if using Bedrock embeddings.
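A minimal sketch of the rename, assuming the package keeps the standard llama-index module layout (`llama_index.postprocessor.bedrock_rerank`); verify the import path against your installed version:

```python
# Before (deprecated alias):
# from llama_index.postprocessor.bedrock_rerank import AWSBedrockRerank
# reranker = AWSBedrockRerank(top_n=3)

# After (preferred class name). The module path follows the usual
# llama-index package naming convention and is assumed here.
from llama_index.postprocessor.bedrock_rerank import BedrockRerank

reranker = BedrockRerank(top_n=3)  # top_n is illustrative; keep your existing kwargs
```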
✨ New Features
- Added reasoning support to the MistralAI LLM, including the Magistral models.
- Added Day 0 support for OpenAI's o3-pro model (see the reasoning-model sketch after this list).
- Introduced OpenAI-like multi-modal LLM support for OpenAI-compatible endpoints (a rough usage sketch follows this list).
- Added ArtifactEditorToolSpec for editing Pydantic objects.
- Integrated figure retrieval SDK in LlamaCloud managed indices.
- Added ability to exclude source fields from OpenSearch query responses.
- Added label truncation to workflow visualization.
- Updated aioboto3 dependency in Bedrock embeddings.
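A minimal sketch of selecting the newly supported reasoning models. The model strings ("o3-pro", "magistral-medium-latest") are assumptions; substitute whatever identifiers your provider account exposes:

```python
from llama_index.llms.openai import OpenAI
from llama_index.llms.mistralai import MistralAI

# OpenAI o3-pro (Day 0 support); the model string is an assumption.
openai_llm = OpenAI(model="o3-pro")

# MistralAI reasoning via a Magistral model; the model string is an assumption.
mistral_llm = MistralAI(model="magistral-medium-latest")

print(openai_llm.complete("Briefly: why is the sky blue?"))
print(mistral_llm.complete("Briefly: why is the sky blue?"))
```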
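For the new OpenAI-like multi-modal LLM, a rough sketch assuming the integration mirrors the existing OpenAILike and OpenAIMultiModal interfaces. The class name OpenAILikeMultiModal, the module path, and the api_base/image_documents parameters are all assumptions; check the package's README before relying on them:

```python
from llama_index.core import SimpleDirectoryReader
from llama_index.multi_modal_llms.openai_like import OpenAILikeMultiModal  # module path is an assumption

# Point at any OpenAI-compatible server that accepts image inputs.
llm = OpenAILikeMultiModal(
    model="my-vision-model",               # placeholder model name
    api_base="http://localhost:8000/v1",   # placeholder endpoint
    api_key="fake-key",
)

# Load an image as ImageDocument(s) and pass it alongside the prompt.
image_docs = SimpleDirectoryReader(input_files=["chart.png"]).load_data()
resp = llm.complete(prompt="Describe this chart.", image_documents=image_docs)
print(resp)
```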
🐛 Bug Fixes
- Fixed memory operations in core to use async variants inside async functions (see the sketch after this list).
- Fixed input message passing to memory get in core.
- Switched from hashing to UUID in SQLTableNodeMapping for broader compatibility.
- Fixed None type handling for raw_figure_nodes in LlamaCloud figure retrieval.
- Skipped tool description length check in OpenAI response API.
- Fixed filename hashing robustness in papers reader.
- Fixed integration bugs in Perplexity LLM.
- Corrected documentation and tool integration for ElevenLabs voice agents.
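The memory fix above is internal to core, but it reflects the pattern user code should follow: inside async functions, prefer the async memory variants over their sync counterparts. A minimal sketch using ChatMemoryBuffer; the aget/aput names follow llama-index's usual a-prefixed async convention, so confirm them against your memory class:

```python
import asyncio

from llama_index.core.llms import ChatMessage
from llama_index.core.memory import ChatMemoryBuffer

memory = ChatMemoryBuffer.from_defaults(token_limit=1000)

async def main() -> None:
    # Use the async variants (aput/aget) rather than put/get inside async code,
    # so the event loop is not blocked.
    await memory.aput(ChatMessage(role="user", content="Hello!"))
    history = await memory.aget()
    print(history)

asyncio.run(main())
```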
🔧 Affected Symbols
SQLTableNodeMapping, BedrockRerank, AWSBedrockRerank, ArtifactEditorToolSpec, MistralAI, OpenAI, page_figure_nodes_to_node_with_score, OpenSearchVectorStore
⚡ Deprecations
- Prefer 'BedrockRerank' over 'AWSBedrockRerank' in llama-index-postprocessor-bedrock-rerank.