
v0.14.14

📦 llamaindex
✨ 10 features · 🐛 27 fixes · 🔧 17 symbols

Summary

This release focuses on stability, security hardening, and compatibility updates across numerous integrations, including fixes for Pydantic validation errors and deprecation warnings. New features include cost governance via TokenBudgetHandler and expanded model support for Anthropic and OpenAI.

✨ New Features

  • Added LangChain 1.x support in llama-index-core and llama-index-llms-langchain.
  • Added TokenBudgetHandler for cost governance in llama-index-core.
  • Added support for Claude Opus 4.6 in llama-index-llms-anthropic.
  • Added support for adaptive thinking in Bedrock via llama-index-llms-bedrock-converse.
  • Added custom base_url support to Cohere LLM in llama-index-llms-cohere.
  • Added support for gpt-5.2-chat model in llama-index-llms-openai.
  • Made transformers an optional dependency for openai-like packages (llama-index-llms-openai-like, llama-index-llms-openrouter).
  • Added openai-like server mode for VllmServer in llama-index-llms-vllm.
  • Added event and memory record deletion methods to the Bedrock AgentCore memory integration (llama-index-memory-bedrock-agentcore).
  • Added chonkie integration in llama-index-node-parser-chonkie.
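The release notes only name the new TokenBudgetHandler without showing its API. As a rough illustration of the cost-governance pattern it implies, here is a minimal sketch in plain Python; every name below (`TokenBudget`, `record`, `BudgetExceededError`) is hypothetical and not the actual llama-index-core API.

```python
# Hypothetical sketch of a token-budget guard: it accumulates token
# usage and raises once a configured ceiling is exceeded. The real
# TokenBudgetHandler in llama-index-core may work quite differently.

class BudgetExceededError(RuntimeError):
    """Raised when cumulative token usage exceeds the budget (illustrative)."""


class TokenBudget:
    def __init__(self, max_tokens: int) -> None:
        self.max_tokens = max_tokens
        self.used = 0

    def record(self, tokens: int) -> None:
        """Add a call's token count; fail fast once the budget is spent."""
        self.used += tokens
        if self.used > self.max_tokens:
            raise BudgetExceededError(
                f"budget of {self.max_tokens} tokens exceeded ({self.used} used)"
            )


budget = TokenBudget(max_tokens=100)
budget.record(60)  # cumulative 60, within budget
budget.record(30)  # cumulative 90, within budget
try:
    budget.record(20)  # cumulative 110, over budget -> raises
except BudgetExceededError as exc:
    print(exc)
```

In llama-index terms, such a guard would typically be registered as a callback handler so that every LLM call's token usage flows through it; consult the library's documentation for the real registration API.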

🐛 Bug Fixes

  • Fixed potential crashes and improved security defaults in core components (llama-index-callbacks-wandb, llama-index-core).
  • Caught pydantic ValidationError in VectorStoreQueryOutputParser (llama-index-core, llama-index-node-parser-docling).
  • Distinguished empty string from None in MediaResource.hash (llama-index-core).
  • Fixed DeprecationWarning: 'asyncio.iscoroutinefunction' is deprecated (llama-index-core, llama-index-embeddings-siliconflow, llama-index-llms-siliconflow).
  • Fall back to the bundled NLTK cache when the environment variable is missing (llama-index-core).
  • Handled an edge case in truncate_text function in llama-index-core.
  • Fixed a typing issue where Thread received copy_context().run even when target was None; None is now passed through instead (llama-index-core).
  • Fixed cache dir path test for Windows compatibility (llama-index-core).
  • Enforced UTF-8 encoding in JSON reader tests for Windows compatibility (llama-index-core).
  • Fixed BM25Retriever mapping in upgrade tool (llama-index-core).
  • Handled empty LLM responses with retry logic and added test cases in llama-index-agent (llama-index-core).
  • Added show_progress parameter to run_transformations to prevent unexpected keyword argument error (llama-index-core).
  • Added retry logic with tenacity to llama-index-embeddings-cohere.
  • Added client headers to Gemini API requests (llama-index-embeddings-google-genai, llama-index-llms-google-genai).
  • Fixed MENTIONS relationship creation with triplet_source_id in falkordb graph store (llama-index-graph-stores-falkordb).
  • Fixed bedrock converse empty tool config issue (llama-index-llms-bedrock-converse).
  • Improved bedrock converse retry handling (llama-index-llms-bedrock-converse).
  • Handled additional error types in retry logic for Cohere LLM (llama-index-llms-cohere).
  • Removed empty tool_calls from assistant messages in dashscope LLM (llama-index-llms-dashscope).
  • Added logic to llm_retry_decorator for async methods in google genai LLM (llama-index-llms-google-genai).
  • Fixed google genai cleanup (llama-index-llms-google-genai).
  • Skipped model meta fetch when not needed in google genai LLM (llama-index-llms-google-genai).
  • Set a sensible default provider for the Hugging Face Inference API (llama-index-llms-huggingface-api).
  • Fixed OpenAI response issues (llama-index-llms-openai).
  • Made image_url detail optional in message dict for OpenAI LLM (llama-index-llms-openai).
  • Excluded unsupported params for all reasoning models in OpenAI LLM (llama-index-llms-openai).
  • Cleaned up and refactored the mem0 integration (llama-index-memory-mem0).
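The `asyncio.iscoroutinefunction` fix above reflects a stdlib migration: the asyncio variant is deprecated in recent Python versions, and `inspect.iscoroutinefunction` is the drop-in replacement. A minimal before/after:

```python
import inspect

# asyncio.iscoroutinefunction emits a DeprecationWarning on recent
# Python versions; inspect.iscoroutinefunction is the supported
# replacement and classifies functions the same way.

async def fetch() -> str:
    return "ok"

def plain() -> str:
    return "ok"

print(inspect.iscoroutinefunction(fetch))  # True: defined with `async def`
print(inspect.iscoroutinefunction(plain))  # False: ordinary function
```

Codebases that dispatch on "is this handler async?" (as llama-index does for callbacks and retry decorators) only need to swap the import; the check's behavior is unchanged.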

Affected Symbols