v1.80.10-nightly
📦 litellm
✨ 26 features · 🐛 28 fixes · 🔧 31 symbols
Summary
LiteLLM v1.80.10 introduces Azure GPT‑5.2 model support, opt‑in Pillar Security guardrail evidence, JWT team_id selection from request headers, OTEL latency metrics, UI enhancements, and a large set of bug fixes and documentation updates.
✨ New Features
- Add Azure GPT‑5.2 models support
- Add opt‑in evidence results for Pillar Security guardrail during monitoring
- Add support for the expires_after parameter in the Files endpoint
- Add support for forwarding client headers in /rerank endpoint
- Add v0 target storage support and accompanying documentation
- New badge for Agent Usage in UI
- JWT authentication now allows selecting team_id from request header
- Add latency metrics (TTFT, TPOT, Total Generation Time) to OTEL payload
- Add benchmark_proxy_vs_provider.py script
- UI now shows version on top left near logo
- Support reasoning_effort='xhigh' for GPT‑5.2 models
- UI playground allows custom model name as first option
- Support using MCPs on /chat/completions
- Re‑organize UI left navigation and expose agents on root
- Add Usage Entity labels
- Add EU Claude Opus 4.5 model for Bedrock
- Add all proxy models to default user settings
- Add agent cost tracking on UI
- A2A Gateway now allows adding Azure Foundry agents via UI
- Show progress and pause on hover for notifications
- Add extra_headers and allowed_tools to UpdateMCPServerRequest
- Add storage_backend and storage_url columns to Prisma schema
- Add import image to A2A documentation
- Add agent_id field to GCS PubSub spend_logs_payload.json test expectation
- Update Regex guardrails
- Add Cursor integration documentation
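The new OTEL latency metrics (TTFT, TPOT, Total Generation Time) can be sketched as simple derivations from request timestamps. This is a minimal illustration of what the three metrics measure; the function and key names below are assumptions, not litellm's actual OTEL attribute keys.

```python
def latency_metrics(start: float, first_token: float, end: float,
                    n_output_tokens: int) -> dict:
    """Derive the three latency metrics from request timestamps (seconds)."""
    ttft = first_token - start          # time to first token
    total = end - start                 # total generation time
    # TPOT: average time per output token after the first one arrives.
    tpot = (end - first_token) / max(n_output_tokens - 1, 1)
    return {"ttft_s": ttft, "tpot_s": tpot, "total_generation_s": total}

# e.g. a stream that starts at t=0, emits its first token at t=0.5,
# and finishes 21 tokens at t=2.5
print(latency_metrics(0.0, 0.5, 2.5, 21))
```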
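For the Files endpoint change above, a request body using the new expires_after parameter might look like the sketch below. The field names follow the OpenAI Files API shape ("anchor" plus "seconds"); exactly how litellm forwards them is an assumption here.

```python
# Hypothetical Files-endpoint request body with an expiry window.
file_request = {
    "purpose": "batch",
    "expires_after": {
        "anchor": "created_at",     # expiry measured from file creation
        "seconds": 7 * 24 * 3600,   # expire one week after upload
    },
}
print(file_request["expires_after"]["seconds"])
```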
🐛 Bug Fixes
- Add semver prerelease suffix to Helm chart versions
- Add speechConfig to GenerationConfig for Gemini TTS
- Add 'exception_status' field to Prometheus logger
- Handle string content in is_cached_message cache check
- Fix token array input decoding for embeddings
- Add minimum request threshold for error‑rate cooldown in router
- Fix Milvus client documentation
- Make create_litellm_branch tool more robust
- Add health endpoint tests with database and Redis support
- Use Docker executor for CI
- Remove dependency on database and Redis from health test
- Replace time.perf_counter() with time.time() in router
- Fix x‑litellm‑key‑spend header update
- Fix Bedrock header forwarding with custom API
- Fix UI playground custom model name option issue
- Fix UI Agent Usage page minor issues
- Fix MCP tool name prefix
- Fix CI/CD mypy, check_code_and_doc_quality, and MCP testing
- Fix failing proxy unit test and Langfuse trace_id test
- Fix failing proxy and core integration tests
- Fix JWT authentication tests failing with KeyError: 'headers'
- Fix UI request and response logs display
- Fix Bedrock tool calling test failures with non‑serializable objects and internal parameters
- Fix internal parameter filtering in fallback code
- Fix A2A Gateway Azure Foundry agents functionality
- Fix schema.prisma files to include storage_backend and storage_url columns
- Fix GCS PubSub spend_logs_payload.json test expectation for agent_id
- Fix links in documentation
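The router fix that adds a minimum request threshold for error‑rate cooldown can be illustrated as follows: without a threshold, a single early failure yields a 100% error rate and cools the deployment down immediately. The threshold and rate values below are illustrative assumptions, not litellm's actual defaults.

```python
def should_cooldown(errors: int, total: int,
                    min_requests: int = 3,
                    max_error_rate: float = 0.5) -> bool:
    """Cool a deployment down only once it has served enough requests."""
    if total < min_requests:
        return False  # too few requests to judge the error rate
    return errors / total > max_error_rate

# 1 error out of 1 request no longer triggers cooldown:
print(should_cooldown(1, 1))   # below the request threshold
print(should_cooldown(3, 4))   # 75% errors over enough requests
```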
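The is_cached_message fix above stems from OpenAI-style messages allowing "content" to be either a plain string or a list of typed parts. A cache check that assumes a list breaks on string content. This is a hedged sketch of the shape of the problem, not litellm's actual implementation:

```python
def has_cache_control(message: dict) -> bool:
    """Return True if any content part carries a cache_control marker."""
    content = message.get("content")
    if isinstance(content, str):
        return False  # plain strings carry no per-part cache_control
    return any(
        isinstance(part, dict) and "cache_control" in part
        for part in content or []
    )

print(has_cache_control({"role": "user", "content": "hi"}))
print(has_cache_control({"role": "user", "content": [
    {"type": "text", "text": "hi", "cache_control": {"type": "ephemeral"}},
]}))
```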
🔧 Affected Symbols
GenerationConfig, PrometheusLogger, is_cached_message, EmbeddingHandler, Router, create_litellm_branch, HealthEndpoint, DockerExecutor, FilesEndpoint, HeaderHandler, BedrockClient, RerankEndpoint, TargetStorage, AgentUsageBadge, JWTAuth, OTELPayload, BenchmarkProxyVsProvider, UIVersionDisplay, OpenAIModelConfig, UIPlayground, MCPHandler, UILeftNav, UsageEntity, BedrockModel, UserSettings, CostTrackingUI, AzureFoundryAgentUI, DocusaurusThemeMermaid, CIConfig, RegexGuardrail, CursorIntegration