Change8

v4.7.0

📦 datadog-sdk
✨ 18 features · 🔧 19 symbols

Summary

This release compiles the profiling hot path to C via Cython for significantly better performance, adds new features across MLflow, AI Guard, and Azure Durable Functions, and deepens LLM Observability with Pydantic AI evaluation support and incremental experiment reporting. Process tags are now propagated across many components by default.

Migration Steps

  1. If using OpenFeature, ensure `openfeature-sdk` version is 0.8.0 or higher.
  2. If you relied on flag evaluations for non-existent flags returning `Reason.DEFAULT` when configuration was available, be aware that they now return `Reason.ERROR` with `ErrorCode.FLAG_NOT_FOUND`. The previous behavior is preserved only when no configuration is loaded.
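The new resolution semantics can be illustrated with a minimal sketch. The `Reason` and `ErrorCode` enums and the `resolve_flag` helper below are simplified stand-ins for illustration, not the actual openfeature-sdk API:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any, Optional


class Reason(Enum):
    # Simplified subset of OpenFeature evaluation reasons.
    STATIC = "STATIC"
    DEFAULT = "DEFAULT"
    ERROR = "ERROR"


class ErrorCode(Enum):
    FLAG_NOT_FOUND = "FLAG_NOT_FOUND"


@dataclass
class EvaluationDetails:
    value: Any
    reason: Reason
    error_code: Optional[ErrorCode] = None


def resolve_flag(config: Optional[dict], key: str, default: Any) -> EvaluationDetails:
    """Sketch of the resolution semantics described in step 2 above."""
    if config is None:
        # No configuration loaded: previous behavior is preserved.
        return EvaluationDetails(default, Reason.DEFAULT)
    if key not in config:
        # Configuration is available but the flag does not exist.
        return EvaluationDetails(default, Reason.ERROR, ErrorCode.FLAG_NOT_FOUND)
    return EvaluationDetails(config[key], Reason.STATIC)
```

Code that branched only on the default value should now inspect `reason` and `error_code` to distinguish "no configuration" from "flag not found".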

✨ New Features

  • Profiling hot path compiled to C via Cython, improving performance.
  • MLflow integration adds a request header provider using the `DD_API_KEY` and `DD_APP_KEY` environment variables.
  • AI Guard evaluation calls now block by default if configured in the UI; this can be disabled with `block=False`.
  • AI Guard SDK response now includes Sensitive Data Scanner (SDS) results.
  • AI Guard support introduced for Strands Agents.
  • Tracing support added for Azure Durable Functions (activity and entity functions).
  • Process tags are now added to profiler payloads, runtime metrics tags, remote configuration payloads, debugger payloads, crash tracking payloads, Data Streams Monitoring payloads, Database Monitoring SQL service hash propagation, and stats computation payloads. Disable via `DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false`.
  • LLM Observability: Support for capturing stop_reason and structured_output from the Claude Agent SDK integration.
  • LLM Observability: Support for user-defined dataset record IDs via an optional `id` field in dataset creation/appending methods, or via the new `id_column` parameter of `create_dataset_from_csv()`.
  • LLM Observability: Experiment tasks can optionally receive dataset record metadata as a third `metadata` parameter.
  • LLM Observability: Introduction of RemoteEvaluator to reference LLM-as-Judge evaluations configured in the Datadog UI by name.
  • LLM Observability: Cache creation breakdown metrics added for the Anthropic integration (`ephemeral_5m_input_tokens` and `ephemeral_1h_input_tokens`).
  • LLM Observability: Support added for reasoning and extended thinking content in Anthropic, LiteLLM, and OpenAI-compatible integrations.
  • LLM Observability: LLMJudge forwards extra client_options to the underlying provider client constructor.
  • LLM Observability: New Dataset methods added for tag manipulation: `dataset.add_tags`, `dataset.remove_tags`, and `dataset.replace_tags`.
  • LLM Observability: Experiment execution changed to run evaluators immediately after each task completion instead of batching, posting spans and metrics incrementally.
  • LLM Observability: Support added for Pydantic AI evaluations in Experiments.
  • Tracer: API endpoint discovery support introduced for Tornado applications.
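The dataset-record ID and tag-manipulation features can be sketched with a toy implementation. The `Dataset` class and `create_dataset_from_csv` function below are illustrative only; the method names come from the release notes above, but the real SDK signatures may differ:

```python
import csv
import io
import uuid


class Dataset:
    """Toy dataset illustrating user-supplied record IDs and tag methods."""

    def __init__(self):
        self.records = {}  # record id -> record dict
        self.tags = set()

    def append(self, record, id=None):
        # A user-supplied ID is honored; otherwise one is generated.
        record_id = id or str(uuid.uuid4())
        self.records[record_id] = record
        return record_id

    # Tag-manipulation methods named in the release notes above.
    def add_tags(self, tags):
        self.tags |= set(tags)

    def remove_tags(self, tags):
        self.tags -= set(tags)

    def replace_tags(self, tags):
        self.tags = set(tags)


def create_dataset_from_csv(csv_text, id_column=None):
    """Sketch of `id_column`: the named CSV column supplies record IDs."""
    ds = Dataset()
    for row in csv.DictReader(io.StringIO(csv_text)):
        ds.append(row, id=row.pop(id_column) if id_column else None)
    return ds
```

With an `id_column`, re-importing the same CSV maps onto the same record IDs instead of generating fresh ones.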
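The optional third `metadata` parameter for experiment tasks can be illustrated as follows. The `summarize` task and `run_task` runner are hypothetical names for illustration, not the SDK's actual API:

```python
import inspect


def summarize(input_data, config, metadata):
    # Tasks may now accept the dataset record's metadata as a third argument.
    tag = metadata.get("source", "unknown")
    return f"[{tag}] {input_data['question']}"


def run_task(task, record, config=None):
    """Toy runner: pass metadata only when the task declares a third parameter."""
    params = inspect.signature(task).parameters
    if len(params) >= 3:
        return task(record["input"], config, record.get("metadata", {}))
    return task(record["input"], config)
```

Inspecting the task's arity keeps two-parameter tasks working unchanged, which is presumably why the third parameter is optional.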
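Process-tag propagation is on by default; to opt out, set the environment variable from the release notes before starting the application:

```shell
# Disable process-tag propagation across all listed payload types.
export DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false
```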

Affected Symbols
