v1.80.5.dev2
📦 litellm
✨ 14 features · 🐛 25 fixes · ⚡ 1 deprecation · 🔧 9 symbols
Summary
This release focuses heavily on bug fixes across various providers (Vertex AI, OCI, Azure, Bedrock) and significant UI/UX improvements, alongside adding support for new models like Claude Opus 4.5 and new features like OAuth2 registration and tool permission guardrails.
✨ New Features
- Added support for Claude Opus 4.5.
- Added support for Anthropic's Claude Skills API.
- Added UI support for configuring tool permission guardrails.
- Added backend support for OAuth2 auth_type registration via UI.
- Added UI support for registering MCP OAuth2 auth_type.
- Added cost tracking for Cohere embed passthrough endpoint.
- Integrated ElevenLabs text-to-speech.
- Added support for OpenAI-compatible Bedrock imported models (e.g., Qwen).
- Added support for Azure Anthropic models via chat completion.
- Added Vertex AI image generation support for both Gemini and Imagen models.
- Added day-0 support for new Anthropic features.
- Added search API logging and cost tracking in LiteLLM Proxy.
- Added functionality to enforce the user param on requests.
- Added the ability to delete a user from a team, which also deletes the keys that user created for the team.
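One feature above, enforcing the user param, can be pictured as a small request-validation step that rejects requests missing a user field. The sketch below is illustrative only: the function name enforce_user_param and its behavior are assumptions, not litellm's actual implementation.

```python
# Illustrative sketch of an "enforce user param" check (hypothetical,
# NOT litellm's real code): reject request bodies that omit "user".
def enforce_user_param(request_body: dict) -> dict:
    if not request_body.get("user"):
        raise ValueError("'user' param is required when enforcement is enabled")
    return request_body

# A request carrying a user passes through unchanged.
ok = enforce_user_param(
    {"model": "gpt-4o", "user": "alice", "messages": []}
)
```

In practice a proxy would run a check like this before forwarding the request, so spend and logs can always be attributed to an end user.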
🐛 Bug Fixes
- Reverted UI outputs to console outputs to reduce noise.
- Fixed MCP tool call response logging; removed mid-stream unmapped-param errors in responses_bridge, allowing gpt-5 web search via the Responses API through litellm.completion().
- Fixed default sample count for Vertex AI image generation handler.
- Reverted UI change related to Organization Usage.
- Fixed context caching for Vertex AI when handling global location.
- Prevented duplicate spend logs in Responses API for non-OpenAI providers.
- Fixed pydantic validation errors during OCI Provider tool call with streaming.
- Fixed handling of None or empty contents in Gemini token counter.
- Fixed Azure auth format for videos.
- Fixed Bedrock passthrough auth issue.
- Fixed gpt-5.1 temperature support when reasoning_effort is "none" or not specified.
- Propagated x-litellm-model-id in responses.
- Distinguished permission errors from idempotent errors in Prisma migrations.
- Fixed non-root Docker build.
- Added aws_bedrock_runtime_endpoint into Credential Types.
- Fixed image edit endpoint.
- Fixed UI issue where Default Team Settings were hidden from Proxy Admin Viewers.
- Fixed UI issue regarding No Default Models for Team and User Settings.
- Fixed Gemini image models by skipping the thinking config.
- Fixed metadata 401 error when handling audio/transcriptions.
- Made Bedrock image generation more consistent.
- Fixed Vertex AI CreateCachedContentRequest enum error.
- Fixed reasoning_effort="none" not working on Azure for GPT-5.1.
- Fixed transcription exception handling for /audio/transcriptions.
- Fixed Bedrock Claude Opus 4.5 inference profile (currently only global).
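The two GPT-5.1 fixes above concern how temperature interacts with reasoning_effort. The sketch below is a hypothetical parameter-mapping step, not litellm's actual code: it keeps temperature only when reasoning_effort is "none" or unset, matching the behavior the fixes describe.

```python
# Hypothetical sketch (NOT litellm's real implementation) of the gpt-5.1
# parameter mapping: temperature is forwarded only when reasoning_effort
# is "none" or not specified; otherwise it is dropped.
def map_gpt51_params(params: dict) -> dict:
    out = dict(params)
    effort = out.get("reasoning_effort")
    if effort not in (None, "none") and "temperature" in out:
        # Reasoning-enabled requests reject a custom temperature.
        out.pop("temperature")
    return out

mapped = map_gpt51_params({"temperature": 0.2, "reasoning_effort": "none"})
```

Under this sketch, a request with reasoning_effort="none" keeps its temperature, while one with reasoning_effort="high" has it stripped before the provider call.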
🔧 Affected Symbols
- vertex_ai/image_generation_handler.py
- litellm_logging.py
- responses_bridge
- proxy_server.py
- litellm.completion()
- aws_bedrock_runtime_endpoint
- azure
- bedrock
- /audio/transcriptions
⚡ Deprecations
- Removed unused MCP_PROTOCOL_VERSION_HEADER_NAME constant.