v4.8.0rc2
📦 datadog-sdk
✨ 7 features · 🐛 12 fixes · ⚡ 1 deprecation · 🔧 8 symbols
Summary
This release introduces new tracing support for Azure Cosmos DB and llama-index, adds a Datadog AI Guard guardrail integration for LiteLLM, and fixes several bugs related to profiling, internal thread leaks, and LLM Observability span reporting.
Migration Steps
- If using RAGAS integration for LLM Observability, switch to manually submitting evaluation results and refer to the external evaluation documentation.
- To avoid losing buffered CI Visibility test events when pytest-xdist workers crash, enable eager flushing by setting `DD_TRACE_PARTIAL_FLUSH_MIN_SPANS=1`.
- To enable automatic log correlation for CI Visibility, set `DD_AGENTLESS_LOG_SUBMISSION_ENABLED=true` for agentless setups, or `DD_LOGS_INJECTION=true` when using the Datadog Agent.
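The environment variables in the migration steps above would normally be set in CI or shell configuration; as an illustrative sketch only, they can also be set from Python before the tracer initializes:

```python
import os

# Agentless CI Visibility: submit test logs directly to Datadog.
# (When running through a Datadog Agent, set DD_LOGS_INJECTION=true instead.)
os.environ["DD_AGENTLESS_LOG_SUBMISSION_ENABLED"] = "true"

# Eager flushing so buffered test events survive pytest-xdist worker crashes.
os.environ["DD_TRACE_PARTIAL_FLUSH_MIN_SPANS"] = "1"

print(os.environ["DD_TRACE_PARTIAL_FLUSH_MIN_SPANS"])
```

These must be set before ddtrace is imported or attached for the settings to take effect.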
✨ New Features
- ASM adds a LiteLLM proxy guardrail integration for Datadog AI Guard using `ddtrace.appsec.ai_guard.ai_guard.integrations.litellm.DatadogAIGuardGuardrail`. Requires `litellm>=1.46.1`.
- Added tracing support for Azure Cosmos DB, covering CRUD operations on databases, containers, and items.
- CI Visibility adds automatic log correlation and submission so that test logs appear alongside their corresponding test run in Datadog. Enable via `DD_AGENTLESS_LOG_SUBMISSION_ENABLED=true` or `DD_LOGS_INJECTION=true`.
- Adds APM tracing and LLM Observability support for `llama-index-core>=0.11.0`, tracing LLM calls, query engines, retrievers, embeddings, and agents.
- Tracing now supports exporting traces in OTLP HTTP/JSON format via libdatadog by setting `OTEL_TRACES_EXPORTER=otlp`.
- LLM Observability introduces a `decorator` tag to spans traced by a function decorator.
- LLM Observability experiments now accept a `pydantic_evals` `ReportEvaluator` as a summary evaluator if its `evaluate` return annotation is exactly `ScalarResult`.
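The return-annotation condition in the last feature can be illustrated with plain `inspect`. The `ScalarResult` and `ReportEvaluator` classes below are stand-ins for the `pydantic_evals` types, not the real ones:

```python
import inspect

class ScalarResult:            # stand-in for pydantic_evals' ScalarResult
    pass

class ReportEvaluator:         # stand-in summary evaluator
    def evaluate(self, report) -> ScalarResult:
        return ScalarResult()

# Accepted as a summary evaluator only when the `evaluate` return
# annotation is exactly ScalarResult:
annotation = inspect.signature(ReportEvaluator.evaluate).return_annotation
print(annotation is ScalarResult)  # True
```

An evaluator annotated with, say, `ScalarResult | None` would not satisfy this exact-match check.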
🐛 Bug Fixes
- Profiling: Fixed lock profiling samples not appearing in the Thread Timeline view for events collected on macOS.
- Profiling: Fixed a rare crash that could occur post-fork in fork-based applications.
- Profiling: Fixed a bug in Lock Profiling that could cause crashes when accessing attributes of custom Lock subclasses (e.g., in Ray).
- Internal: Fixed a potential internal thread leak in fork-heavy applications.
- Internal: Resolved an issue where a `ModuleNotFoundError` could be raised at startup in Python environments without the `_ctypes` extension module.
- Internal: Fixed a crash that could occur post-fork in fork-heavy applications.
- LLM Observability: Fixed incorrect span hierarchy when using ddtrace SDK alongside OTel-based instrumentation (e.g., Strands Agents), ensuring OTel gen_ai spans are nested correctly.
- LLM Observability: Fixed multimodal OpenAI chat completion inputs being rendered as raw iterable objects; content parts are now materialized and formatted as readable text.
- LLM Observability: Fixed `model_name` and `model_provider` reporting for AWS Bedrock LLM spans to use values matching backend pricing data.
- LLM Observability: Fixed an issue where deferred tools (`defer_loading=True`) in Anthropic and OpenAI integrations caused span payloads to include full tool descriptions/schemas; definitions are now stripped, preserving only the tool name.
- CI Visibility: Fixed loss of buffered test events caused by pytest-xdist worker crashes (`os._exit`, SIGKILL, segfault) by enabling eager flushing via `DD_TRACE_PARTIAL_FLUSH_MIN_SPANS=1`.
- CI Visibility: Fixed git metadata upload falling back to sending full 30-day commit history upon failure of the `/search_commits` endpoint; the upload now aborts on failure.
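The deferred-tools fix above can be sketched in isolation. The dict shapes here are illustrative and do not mirror the SDK's internal span payload format:

```python
def strip_tool_definitions(tool_calls):
    """Keep only the tool name for deferred tools (defer_loading=True);
    other tools pass through unchanged."""
    stripped = []
    for tool in tool_calls:
        if tool.get("defer_loading"):
            stripped.append({"name": tool["name"]})
        else:
            stripped.append(tool)
    return stripped

tools = [
    {"name": "search", "defer_loading": True,
     "description": "web search", "input_schema": {"type": "object"}},
    {"name": "calc", "description": "adds numbers"},
]
print(strip_tool_definitions(tools))
# [{'name': 'search'}, {'name': 'calc', 'description': 'adds numbers'}]
```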
Affected Symbols
⚡ Deprecations
- Support for the RAGAS integration in LLM Observability has been removed. Users should manually submit RAGAS evaluation results as an alternative.
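As a replacement for the removed integration, RAGAS scores computed in your own code can be submitted as external evaluations, per the migration steps above. The payload below is a representative shape only; the exact submission API and field names are defined in the external evaluation documentation, not here:

```python
# Illustrative evaluation result for manual submission -- field names
# are representative, not the SDK's authoritative schema.
evaluation = {
    "label": "ragas_faithfulness",  # metric computed with RAGAS directly
    "metric_type": "score",
    "value": 0.87,
}
print(evaluation["label"], evaluation["value"])
```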