v1.80.5.dev1
📦 litellm
✨ 17 features · 🐛 23 fixes · 🔧 6 symbols
Summary
This release focuses heavily on stability, fixing numerous bugs across various providers (Vertex AI, OCI, Azure, Gemini) and enhancing permission management and UI consistency. New features include support for Claude Opus 4.5, Claude Skills API, and expanded OAuth2/tool permission configurations.
Migration Steps
- If you were relying on specific UI outputs that have been reverted to console outputs, you may need to adjust your monitoring setup.
- If you previously hit pydantic validation errors on streaming tool calls with OCI providers, those errors are now fixed; review any workarounds in your setup.
- If you use Vertex AI and encountered context caching issues related to global location, this has been addressed.
✨ New Features
- Permission Management: disable global guardrails by key/team.
- Model Armor: log guardrail responses on LLM responses.
- Added a Presidio PII masking tutorial for LiteLLM.
- Tool permission argument check implemented.
- Backend support added for OAuth2 auth_type registration via UI.
- UI support added for registering MCP OAuth2 auth_type.
- Support for Claude Opus 4.5 added.
- New API: Claude Skills API (Anthropic).
- UI support added for configuring tool permission guardrails.
- Cost tracking added for the Cohere embed passthrough endpoint.
- Header forwarding added in embeddings.
- ElevenLabs text-to-speech integration.
- Added the ability to enforce the user param.
- Search API logging and cost tracking added in LiteLLM Proxy.
- Support added for Azure Anthropic models via chat completion.
- Vertex AI image generation support added for both Gemini and Imagen models.
- Deleting a user from a team now deletes the key the user created for that team.
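As a minimal sketch of the new Claude Opus 4.5 support, the request below shows the usual litellm calling convention; the exact model identifier is an assumption, so check the provider docs before relying on it.

```python
# Hypothetical sketch: routing a chat request to Claude Opus 4.5 through
# litellm's unified completion interface. The model string and the need for
# ANTHROPIC_API_KEY are assumptions based on litellm's usual conventions.
request = {
    "model": "anthropic/claude-opus-4-5",  # assumed model identifier
    "messages": [{"role": "user", "content": "Say hello in one word."}],
}
# import litellm
# response = litellm.completion(**request)  # requires ANTHROPIC_API_KEY
# print(response.choices[0].message.content)
```

The same request dict works unchanged against the LiteLLM Proxy once the model is registered there, which is the point of the unified interface.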
🐛 Bug Fixes
- Revert to console outputs in UI to reduce noise.
- Fix MCP tool call response logging and remove the mid-stream unmapped-param error in responses_bridge, allowing gpt-5 web search to work via the Responses API in `.completion()`.
- Fix default sample count in vertex_ai/image_generation_handler.py.
- Revert UI change for Organization Usage.
- Fix(Vertex AI): handle global location in context caching.
- Prevent duplicate spend logs in Responses API for non-OpenAI providers.
- OCI Provider: Fix pydantic validation errors during tool call with streaming.
- Preserve content field even if null in proxy_server.py.
- Fix unspecified issue.
- Fix broken documentation links in README.
- Fix Azure auth format for videos.
- Fix bedrock passthrough auth issue.
- Fix gpt-5.1 temperature support when reasoning_effort is "none" or not specified.
- Propagate x-litellm-model-id in responses.
- Fix permission errors distinction from idempotent errors in Prisma migrations.
- Fix non-root Docker build.
- Fix image edit endpoint.
- Fix UI: Add No Default Models for Team and User Settings.
- Fix metadata 401 on audio/transcriptions requests.
- Make Bedrock image generation more consistent.
- Fix(vertex): fix CreateCachedContentRequest enum error.
- Fix `reasoning_effort="none"` not working on Azure for GPT-5.1.
- Fix(gemini): skip thinking config for image models.
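A hedged sketch of the request shape touched by the gpt-5.1 `reasoning_effort` fixes; the Azure model alias and parameter combination here are assumptions, not the verified repro from the fix.

```python
# Hypothetical sketch of a call affected by the gpt-5.1 fixes: with
# reasoning_effort set to "none" (or omitted), temperature should now be
# passed through rather than rejected. The model alias is an assumption.
request = {
    "model": "azure/gpt-5.1",    # assumed Azure deployment alias
    "messages": [{"role": "user", "content": "Summarize this release."}],
    "reasoning_effort": "none",  # previously failed on Azure
    "temperature": 0.2,          # now honored when effort is "none"
}
# import litellm
# response = litellm.completion(**request)  # requires Azure credentials
```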
🔧 Affected Symbols
- `vertex_ai/image_generation_handler.py`
- `litellm_logging.py`
- `responses_bridge`
- `proxy_server.py`
- `aws_bedrock_runtime_endpoint`
- `CreateCachedContentRequest`