v1.79.3.dev5
📦 litellm
✨ 10 features · 🐛 10 fixes · 🔧 11 symbols
Summary
This release adds several UI enhancements, new provider support, and numerous bug fixes; highlights include a Zscaler AI Guard hook and a RunwayML video generation provider.
✨ New Features
- Add Zscaler AI Guard hook.
- UI: Add Tags to Edit Key flow.
- Add RunwayML provider for video generation.
- UI: Improve usage indicator.
- UI: Add LiteLLM params to Edit Model screen.
- Add atexit handlers to flush callbacks for async completions (see the first sketch after this list).
- Update the model logging format for custom LLM providers.
- Vertex AI Rerank: safely load Vertex AI credentials.
- Add support for filtering Bedrock Knowledge Base queries.
- Bedrock Embeddings: ensure the correct aws_region is used when provided dynamically (see the second sketch after this list).
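The atexit flush matters for short-lived scripts: async completions dispatch their logging callbacks in the background, and a process that exits immediately could previously drop those events. A minimal sketch of the scenario, assuming the standard `litellm.success_callback` setup (the Langfuse backend and model name are illustrative, not part of this release):

```python
import asyncio
import litellm

# Send success logs to a callback backend (Langfuse is just one example).
litellm.success_callback = ["langfuse"]

async def main() -> None:
    # acompletion schedules its logging callbacks asynchronously.
    resp = await litellm.acompletion(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "hello"}],
    )
    print(resp.choices[0].message.content)

asyncio.run(main())
# With this release, an atexit handler flushes any callback events still
# pending here, so a manual sleep/flush before exit is no longer needed.
```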
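For the dynamic aws_region change, a minimal sketch of per-request region selection with Bedrock embeddings (the model ID and region are illustrative):

```python
import litellm

# An aws_region_name passed per request should now take precedence over
# any region configured via the environment or client defaults.
response = litellm.embedding(
    model="bedrock/amazon.titan-embed-text-v2:0",  # illustrative model ID
    input=["hello world"],
    aws_region_name="eu-central-1",  # illustrative region
)
print(response.usage)
```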
🐛 Bug Fixes
- Remove the enterprise restriction from the guardrails list endpoint.
- LiteLLM tags usage now includes the request_id.
- Use the user budget instead of the key budget when creating a new team.
- Sanitize null token usage in OpenAI-compatible responses (see the first sketch after this list).
- Add new models, delete duplicate models, and update pricing.
- Fix: Convert the SSE stream iterator to async for proper streaming support (see the second sketch after this list).
- Fix: Use the vLLM passthrough config for the hosted vLLM provider instead of raising an error.
- Fix: app_roles missing from JWT payload.
- Fix: Bedrock Knowledge Bases filtering support.
- Fix: Bedrock Embeddings correct aws_region handling.
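On the null token usage fix: some OpenAI-compatible servers return `"usage": null`, which breaks clients that expect integer token counts. A minimal sketch of the sanitization idea (not litellm's actual code; the helper name is hypothetical):

```python
def sanitize_usage(response_json: dict) -> dict:
    """Replace a null/missing usage object with zeroed token counts."""
    usage = response_json.get("usage") or {}
    response_json["usage"] = {
        "prompt_tokens": usage.get("prompt_tokens") or 0,
        "completion_tokens": usage.get("completion_tokens") or 0,
        "total_tokens": usage.get("total_tokens") or 0,
    }
    return response_json

# Example: a response with "usage": null now yields zeroed counts
# instead of a type error downstream.
print(sanitize_usage({"choices": [], "usage": None}))
```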
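On the SSE streaming fix: the underlying technique is to wrap a blocking (sync) SSE iterator so it can be consumed with `async for` without stalling the event loop. A minimal sketch of that general pattern (not litellm's actual implementation):

```python
import asyncio
from typing import AsyncIterator, Iterator

async def to_async_iterator(sync_iter: Iterator[bytes]) -> AsyncIterator[bytes]:
    """Yield items from a blocking iterator without blocking the event loop."""
    loop = asyncio.get_running_loop()
    sentinel = object()
    while True:
        # next() runs in a worker thread; the sentinel signals exhaustion.
        chunk = await loop.run_in_executor(None, next, sync_iter, sentinel)
        if chunk is sentinel:
            break
        yield chunk

async def demo() -> None:
    async for chunk in to_async_iterator(iter([b"data: 1\n\n", b"data: 2\n\n"])):
        print(chunk)

asyncio.run(demo())
```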
🔧 Affected Symbols
- litellm.guardrails.list_endpoint
- litellm.tags.add_request_id
- litellm.team.create_new_team
- litellm.callbacks.async_flush
- litellm.logging.model_format
- litellm.providers.vertex_ai.rerank
- litellm.agentcore.sse_stream_iterator
- litellm.providers.vllm.passthrough_config
- litellm.auth.jwt_payload
- litellm.bedrock.knowledge_base
- litellm.bedrock.embeddings