v1.80.12-nightly
📦 litellm
✨ 31 features · 🐛 17 fixes · ⚡ 1 deprecation · 🔧 8 symbols
Summary
This release introduces significant feature enhancements, including support for image tokens, AWS Polly TTS, and Minimax integration, along with major UI improvements such as error code filtering on Spend Logs and key management updates. It also includes substantial internal refactoring to lazy-load configuration classes for better performance.
Migration Steps
- If you rely on the 'useTeams' configuration option, update your setup: this option is deprecated in favor of resolving the Organization Alias in the Team Table view.
✨ New Features
- Added image tokens support in chat completion.
- Added usage object in image generation for Gemini.
- Added AWS Polly API support for Text-to-Speech (TTS).
- Implemented LiteLLM content filter logs page.
- Enabled async_post_call_failure_hook to transform error responses.
- Added SSO Role Mapping functionality.
- Enabled error code filtering on Spend Logs.
- Added enhanced authentication, security features, and custom user-agent support for Databricks provider.
- Added UI support for Error Code Filtering on Spend Logs.
- Added support for 5 new AI providers using the 'openai_like' structure.
- Added pricing details for Azure gpt-image-1.5 to the cost map.
- Added pricing details for azure_ai/gpt-oss-120b model.
- Added support for 'supports_response_schema' on all supported Together AI models.
- Added Minimax chat completion support.
- Added Anthropic native endpoint support for Minimax.
- Added Gemini thought signature support via tool call id.
- Added support for Minimax TTS.
- Added SAP credentials support for list operations in the proxy UI.
- Enabled the Interactions API to use all LiteLLM providers (interactions → responses API bridge).
- Added RAG Search / Query endpoint.
- Added support for MCP stdio header environment variable overrides.
- Added support for generating content in the LLM route.
- Added centralized get_vertex_base_url() helper for global location support in Vertex AI.
- Added support for image generation via Azure AD token.
- Added support for azure/gpt-5.2-chat model.
- Added license endpoint.
- Added MCP test support to completions on the Playground.
- Added support for Google image generation models.
- Added guardrail log for actual event type.
- Added optional query parameter "expand" to /key/list endpoint.
- Added support for Platform Fee / Margins in the AI Gateway.
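Several of the features above extend litellm's OpenAI-compatible chat completion interface; image-token support means the usage object now accounts for image inputs in multimodal requests. A minimal sketch of the request shape such a call accepts (the model name and image URL below are placeholders, not values from this release):

```python
# Build an OpenAI-style multimodal chat message, the request shape
# accepted by chat completion endpoints for image inputs. With
# image-token support, the returned usage object also counts the
# tokens consumed by the image.

def build_image_message(prompt: str, image_url: str) -> dict:
    """Return a single user message mixing text and an image."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

message = build_image_message(
    "Describe this image.",
    "https://example.com/cat.png",
)

# The message would then be passed along the lines of:
#   litellm.completion(model="<provider>/<model>", messages=[message])
# after which response.usage would reflect image tokens as well.
```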
🐛 Bug Fixes
- Fixed request body handling for image embedding requests.
- Fixed minor styling issues in the UI.
- Allowed deleting Key Expiry settings.
- Properly caught context window exceeded errors for Gemini.
- Fixed lost tool_calls when streaming responses contained both text and tool_calls.
- Removed deprecated Groq models and updated the model registry.
- Fixed UI issue where Key Creation MCP Settings Form submitted unintentionally.
- Fixed double imports in main.py.
- Fixed Datadog span kind fallback when parent_id is missing.
- Fixed health status reporting.
- Required authentication for the MCP connection test endpoint.
- Fixed UI issue where the application disappeared in development environments (reissued fix).
- Disabled the Admin UI Flag.
- Allowed Organization Admins to see the Organization Tab.
- Allowed Organization Admins to view their Organization Info.
- Fixed Model Page Sorting to sort the entire set correctly.
- Fixed formatting in proxy configs documentation.
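The streaming tool_calls fix above concerns merging deltas when a stream interleaves text content with tool calls. A rough sketch of the generic accumulation pattern (chunk shapes follow the OpenAI streaming delta format; this is an illustration, not litellm's actual implementation):

```python
# Accumulate OpenAI-style streaming deltas that mix text content and
# tool_calls. Each delta may carry either a "content" string or a list
# of partial tool_calls keyed by "index"; arguments arrive as string
# fragments that must be concatenated. Illustrative sketch only.

def merge_stream(deltas: list[dict]) -> dict:
    text_parts: list[str] = []
    tool_calls: dict[int, dict] = {}
    for delta in deltas:
        if delta.get("content"):
            text_parts.append(delta["content"])
        for tc in delta.get("tool_calls", []):
            slot = tool_calls.setdefault(
                tc["index"],
                {"id": None, "name": None, "arguments": ""},
            )
            if tc.get("id"):
                slot["id"] = tc["id"]
            fn = tc.get("function", {})
            if fn.get("name"):
                slot["name"] = fn["name"]
            slot["arguments"] += fn.get("arguments", "")
    return {
        "content": "".join(text_parts),
        "tool_calls": [tool_calls[i] for i in sorted(tool_calls)],
    }

chunks = [
    {"content": "Sure, "},
    {"content": "calling a tool."},
    {"tool_calls": [{"index": 0, "id": "call_1",
                     "function": {"name": "get_weather",
                                  "arguments": '{"ci'}}]},
    {"tool_calls": [{"index": 0,
                     "function": {"arguments": 'ty": "Paris"}'}}]},
]
merged = merge_stream(chunks)
# merged["content"] == "Sure, calling a tool."
# merged["tool_calls"][0]["arguments"] == '{"city": "Paris"}'
```

The bug fixed in this release was of this class: when text and tool_calls arrived in the same stream, the tool_calls portion was dropped instead of being merged alongside the text.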
Affected Symbols
⚡ Deprecations
- The 'useTeams' configuration option is deprecated in favor of resolving the Organization Alias in the Team Table view.