Change8

v1.83.3-stable

📦 litellm
✨ 5 features · 🐛 29 fixes · 🔧 21 symbols

Summary

This release focuses heavily on bug fixes across providers (Anthropic, Gemini, Vertex AI) and on proxy/UI stability, and introduces Docker image signature verification via cosign.

Migration Steps

  1. If you rely on Anthropic's default reasoning summary injection, you may need to update your calls or use the new opt-out flag to maintain previous behavior.
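For proxy deployments, opting back into the previous behavior would be done per model in the proxy config. The sketch below uses the standard `model_list`/`litellm_params` layout; the flag name `include_reasoning_summary` is hypothetical — check the release PR or docs for the actual opt-out setting.

```yaml
model_list:
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514
      # Hypothetical flag name; consult the release PR for the real setting.
      include_reasoning_summary: true
```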

✨ New Features

  • Added opt-out flag for default reasoning summary in Anthropic provider.
  • Added create character endpoints and other new video endpoints.
  • Added Akto Guardrails integration to LiteLLM.
  • Added MCP zero trust auth guide documentation.
  • Added support for disabling custom virtual key values via a UI setting.

🐛 Bug Fixes

  • Preserved thinking.summary when routing Anthropic requests to OpenAI Responses API.
  • Resolved image token undercounting in Gemini usage metadata.
  • Fixed Vertex AI batch output file download failing with a 500 error.
  • Fixed filtering of beta header after transformation.
  • Preserved custom attributes on the final stream chunk during streaming.
  • Aligned DefaultInternalUserParams Pydantic default with runtime fallback.
  • Fixed privilege escalation on /key/block, /key/unblock, and /key/update max_budget endpoints.
  • Fixed setting oauth2_flow when building MCPServer in _execute_with_mcp_client.
  • Fixed UI logs showing stale data for empty filter results.
  • Prevented internal users from creating invalid keys.
  • Fixed key alias re-validation on update so legacy aliases are no longer blocked.
  • Registered DynamoAI guardrail initializer and enum entry.
  • Skipped #transform=inline for base64 data URLs in Fireworks provider.
  • Avoided no running event loop during synchronous Langsmith initialization.
  • Fixed empty CSV export on the Global Usage page in the UI.
  • Corrected supported_regions for Vertex AI DeepSeek models in model prices.
  • Restored gpt-4-0314 pricing/support.
  • Fixed Redis cluster caching issues.
  • Converted max_budget to float when set via environment variable in proxy.
  • Mapped Anthropic 'refusal' finish reason to 'content_filter'.
  • Fixed Vertex AI streaming finish_reason for gemini-3.1-flash-lite-preview to be 'stop' instead of 'tool_calls'.
  • Fixed cache_control directive dropping Anthropic document/file blocks.
  • Mapped Chat Completion file type to Responses API input_file correctly.
  • Fixed Vertex AI respecting vertex_count_tokens_location for Claude count_tokens.
  • Preserved cache directive on Anthropic file-type content blocks.
  • Preserved diarization segments in Mistral transcription response.
  • Passed model to context caching URL builder for custom api_base in Gemini.
  • Auto-routed gpt-5.4+ tools+reasoning to Responses API in Azure.
  • Auto-recovered the shared aiohttp session when it has been closed.
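One of the fixes above, converting `max_budget` to float when set via environment variable, comes down to a type coercion: environment variables are always strings, so numeric budget comparisons against a raw value would be wrong. A minimal sketch of the idea; the helper and variable names are illustrative, not litellm's actual internals:

```python
import os

def coerce_max_budget(raw):
    """Convert a max_budget read from the environment (always a string)
    to a float before any numeric budget comparison. Illustrative helper,
    not litellm's real internal function."""
    if raw is None or raw == "":
        return None
    return float(raw)

# Illustrative variable name only.
os.environ["LITELLM_MAX_BUDGET"] = "100"
budget = coerce_max_budget(os.environ.get("LITELLM_MAX_BUDGET"))
print(budget)  # 100.0
```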

Affected Symbols
