v0.12.46

📦 llamaindex
⚠️ 1 breaking change · ✨ 1 feature · 🐛 6 bug fixes · 🔧 7 affected symbols

Summary

This release introduces async operations for VectorStoreIndex and provides several bug fixes across core, MCP tools, and NVIDIA embeddings. It also includes a significant migration for ElevenLabs voice agents and updated dependency constraints for Google GenAI.

⚠️ Breaking Changes

  • ElevenLabs voice agent integration has been migrated to align with framework standards, which may require updates to existing implementations.

Migration Steps

  1. Update the google-genai dependency to the latest version for Google GenAI embeddings and LLMs (see the sketch after these steps).
  2. Review ElevenLabs voice agent implementations to ensure compatibility with the new framework standard (v0.3.0-beta).
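
For step 1, the following is a minimal sketch of what Google GenAI usage can look like once the google-genai dependency is updated. The GoogleGenAIEmbedding and GoogleGenAI class names, the model names, and the package names (llama-index-embeddings-google-genai, llama-index-llms-google-genai) are assumptions for illustration, not details taken from this release note.

```python
# Sketch only: assumes llama-index-embeddings-google-genai and
# llama-index-llms-google-genai are installed against the latest
# google-genai SDK, and that GOOGLE_API_KEY is set in the environment.
from llama_index.embeddings.google_genai import GoogleGenAIEmbedding
from llama_index.llms.google_genai import GoogleGenAI

embed_model = GoogleGenAIEmbedding(model_name="text-embedding-004")
llm = GoogleGenAI(model="gemini-2.0-flash")

print(llm.complete("Say hello in one word."))
print(len(embed_model.get_text_embedding("hello world")))
```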

✨ New Features

  • Added asynchronous insert and delete operations (ainsert and adelete) to VectorStoreIndex in llama-index-core; see the sketch below.
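
A minimal sketch of the new async operations, assuming ainsert mirrors the synchronous insert(document) signature and adelete mirrors delete(doc_id); the exact signatures are not spelled out in this release note, so treat the calls below as illustrative.

```python
import asyncio

from llama_index.core import Document, VectorStoreIndex
from llama_index.core.embeddings import MockEmbedding


async def main() -> None:
    # MockEmbedding keeps the sketch runnable without an embedding API key.
    index = VectorStoreIndex.from_documents(
        [Document(text="hello world", id_="doc-1")],
        embed_model=MockEmbedding(embed_dim=8),
    )

    # Assumed to mirror the synchronous insert(document).
    await index.ainsert(Document(text="goodbye world", id_="doc-2"))

    # Assumed to mirror the synchronous delete(doc_id).
    await index.adelete("doc-1")


asyncio.run(main())
```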

🐛 Bug Fixes

  • Fixed ChatMessage-to-string conversion when handling empty inputs.
  • Fixed function tool context detection issues when using typed context.
  • Resolved inconsistent reference node handling in core.
  • Fixed 404 errors when using NVIDIA embedding models with custom endpoints (see the sketch after this list).
  • Corrected resource configuration from MCP servers in llama-index-tools-mcp.
  • Simplified citation block schema for better consistency.
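
For the NVIDIA fix, here is a sketch of pointing the embedding model at a custom endpoint (for example a self-hosted NIM). The NVIDIAEmbedding class name, the base_url parameter, and the model name follow the llama-index-embeddings-nvidia package as generally documented and are assumptions here rather than details from this release.

```python
from llama_index.embeddings.nvidia import NVIDIAEmbedding

# Assumed usage: base_url points at a self-hosted NIM endpoint instead of
# NVIDIA's hosted API; custom endpoints like this are where the 404 fix applies.
embed_model = NVIDIAEmbedding(
    model="nvidia/nv-embedqa-e5-v5",
    base_url="http://localhost:8080/v1",
)

print(len(embed_model.get_text_embedding("hello world")))
```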

🔧 Affected Symbols

  • VectorStoreIndex.adelete
  • VectorStoreIndex.ainsert
  • ChatMessage
  • FunctionTool
  • NvidiaEmbedding
  • MCPServerToolSpec
  • ElevenLabsVoiceAgent