v0.14.19
📦 llamaindex
✨ 5 features · 🐛 10 fixes · 🔧 5 symbols
Summary
This release focuses on bug fixes across core indexing and SQL functionality, and introduces a new MiniMax LLM provider along with updated model support for GPT 5.4 variants and Gemini 3.
Migration Steps
- If you rely on the return value of `structured_predict`, update your code to catch the `ValueError` that is now raised instead of handling a returned string.
- Users of llama-index-indices-managed-llama-cloud should note the removal of the llamaparse reader.
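The `structured_predict` migration above can be sketched with a hypothetical stub. The real method lives on llama-index LLM classes; `structured_predict_stub` and `safe_predict` below are illustrative names, not library APIs, and only demonstrate the control-flow change from string checking to exception handling:

```python
def structured_predict_stub(raw: str) -> str:
    """Stand-in for the new behavior: raise ValueError on failure
    instead of returning the raw, unparsed string."""
    if not raw.lstrip().startswith("{"):
        raise ValueError(f"Could not parse structured output: {raw!r}")
    return raw


def safe_predict(raw: str):
    """Migration pattern: old code checked `isinstance(result, str)`
    for failure; new code catches ValueError instead."""
    try:
        return structured_predict_stub(raw)
    except ValueError:
        return None  # fall back however your application requires
```

The same try/except pattern applies wherever your code previously inspected the return type of `structured_predict` to detect a failed parse.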
✨ New Features
- Added support for Mini and Nano variants of GPT 5.4 in llama-index-llms-openai.
- Introduced MiniMax LLM provider integration with M2.7 as default in llama-index-llms-minimax.
- Enabled support for custom LLM providers via model kwargs in llama-index-llms-litellm.
- Set Gemini 3 as the default model and added temperature control in llama-index-llms-google-genai.
- Added Azure OpenAI responses support in llama-index-llms-azure-openai.
🐛 Bug Fixes
- Fixed passing of `delete_from_docstore` parameter in `BaseIndex.delete_ref_doc`.
- Preserved CTE names during schema prefixing in `SQLDatabase.run_sql`.
- Aligned sync retrieval dedup key with async (hash + ref_doc_id).
- Raised ValueError instead of returning a string from `structured_predict`.
- Removed incorrect per-node delete calls in index helpers.
- Fixed llama-cloud managed index and removed llamaparse reader in llama-index-indices-managed-llama-cloud.
- Fixed Azure OpenAI responses in llama-index-llms-azure-openai.
- Used proper tool choice format in bedrock converse for llama-index-llms-bedrock-converse.
- Avoided mutating messages list in `prepare_chat_params` for llama-index-llms-google-genai.
- Passed custom headers to auto-created clients in llama-index-llms-ollama.