v0.13.0
Breaking Changes · 📦 llamaindex
⚠ 3 breaking · ✨ 10 features · 🐛 5 fixes · 🔧 16 symbols
Summary
This major update to llama-index-core (v0.13.0) removes the legacy agent classes and QueryPipeline in favor of the new Workflow-based agent architecture, and updates provider integrations for LLMs, readers, and vector stores.
⚠️ Breaking Changes
- Removed deprecated agent classes (FunctionCallingAgent, old ReActAgent, AgentRunner, step workers, StructuredAgentPlanner, OpenAIAgent). Users must migrate to workflow-based agents.
- Removed QueryPipeline class and all associated code.
- Default index.as_chat_engine() now returns CondensePlusContextChatEngine; agent-based chat engines are no longer available through this method (see the sketch below).
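A minimal sketch of the new default, assuming default OpenAI credentials are configured; the sample document and question are placeholders:

```python
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.chat_engine import CondensePlusContextChatEngine

# Build a small index (uses the default embedding model, e.g. OpenAI).
index = VectorStoreIndex.from_documents(
    [Document(text="LlamaIndex 0.13.0 replaces legacy agents with workflow-based agents.")]
)

# As of 0.13.0 the default chat engine is retrieval-based, not agent-based.
chat_engine = index.as_chat_engine()
assert isinstance(chat_engine, CondensePlusContextChatEngine)

print(chat_engine.chat("What replaced the legacy agents?"))
```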
Migration Steps
- Replace usage of FunctionCallingAgent, OpenAIAgent, and AgentRunner with FunctionAgent, CodeActAgent, or AgentWorkflow (see the sketch after these steps).
- Replace usage of the old ReActAgent with the new workflow-based ReActAgent.
- Remove dependencies on QueryPipeline as the class has been deleted.
- If an agent-based chat engine is required, manually instantiate one of the new agent classes instead of calling index.as_chat_engine().
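A minimal migration sketch, assuming llama-index-llms-openai is installed and an OpenAI key is configured; the multiply tool, model name, and system prompt are illustrative:

```python
import asyncio

from llama_index.core.agent.workflow import AgentWorkflow, FunctionAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b


llm = OpenAI(model="gpt-4o-mini")
tools = [FunctionTool.from_defaults(fn=multiply)]

# Before 0.13.0: agent = OpenAIAgent.from_tools(tools, llm=llm)
# After 0.13.0: use a workflow-based agent (FunctionAgent, ReActAgent, or CodeActAgent).
agent = FunctionAgent(
    tools=tools,
    llm=llm,
    system_prompt="You are a helpful calculator.",
)

# AgentWorkflow replaces AgentRunner-style orchestration; with a single
# agent it simply wraps that agent.
workflow = AgentWorkflow(agents=[agent], root_agent=agent.name)


async def main() -> None:
    response = await workflow.run(user_msg="What is 6 * 7?")
    print(str(response))


asyncio.run(main())
```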
✨ New Features
- Introduced new workflow-based agents: FunctionAgent, CodeActAgent, ReActAgent, and AgentWorkflow.
- Updated mixedbread embeddings and rerank integrations for the latest SDK.
- Added thought summaries and thought signatures for Gemini LLMs.
- Added support for kimi-k2-instruct (NVIDIA) and solar-pro2 (Upstage) models.
- Enhanced the GitHub reader with file filtering and custom processing.
- Added region_name support via client_kwargs in S3Reader.
- Added Gemini Live beta implementation.
- Added get_nodes and delete_nodes support for the AstraDB vector store.
- Added partition_names support in Milvus search configuration.
- Added support for ANY/ALL Postgres operators in the Postgres vector store (see the sketch after this list).
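A minimal filtering sketch using the core metadata-filter types; the "tags" key, its values, and the commented retriever usage are illustrative and assume a PGVectorStore-backed index:

```python
from llama_index.core.vector_stores.types import (
    FilterOperator,
    MetadataFilter,
    MetadataFilters,
)

# ANY: the stored metadata array must share at least one element with the
# given list; ALL: it must contain every element.
filters = MetadataFilters(
    filters=[
        MetadataFilter(
            key="tags",
            value=["release", "agents"],
            operator=FilterOperator.ANY,
        ),
    ]
)

# With a Postgres-backed index (illustrative):
# retriever = index.as_retriever(filters=filters)
# nodes = retriever.retrieve("workflow-based agents")
```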
🐛 Bug Fixes
- BaseDocumentStore no longer returns None entries in results.
- Fixed FunctionTool parameter doc parsing and signature mutation.
- Handled empty prompt in MockLLM.stream_complete.
- Ensured wrapped exceptions bubble up in llama-index-instrumentation.
- Reduced metadata keys in S3VectorStore to save space.
🔧 Affected Symbols
FunctionCallingAgent, ReActAgent, AgentRunner, StructuredAgentPlanner, OpenAIAgent, QueryPipeline, index.as_chat_engine, CondensePlusContextChatEngine, FunctionAgent, CodeActAgent, AgentWorkflow, BaseDocumentStore, FunctionTool, MockLLM.stream_complete, S3Reader, S3VectorStore