v3.2.0
📦 mlflow
✨ 26 features · 🐛 21 fixes · 🔧 11 symbols
Summary
MLflow 3.2.0 introduces major GenAI tracing enhancements, feedback tracking, UI redesigns, Polars dataset support, and a suite of new features and bug fixes, along with anonymous usage tracking that is enabled by default and can be opted out of.
Migration Steps
- If you previously passed tracking_uri as a positional argument to artifact repository functions, update your code to use the keyword argument form.
- To opt out of anonymous usage tracking, set the environment variable MLFLOW_USAGE_TRACKING_ENABLED=false or follow the instructions in the usage‑tracking documentation.
- If you rely on detailed schema error messages, ensure the new environment variable MLFLOW_DISABLE_SCHEMA_DETAILS is set to false (or left unset); set it to true to suppress the details.
✨ New Features
- Tracing TypeScript SDK added for GenAI applications in TypeScript environments (https://github.com/mlflow/mlflow/tree/master/libs/typescript).
- Added automatic tracing support for Semantic Kernel (https://mlflow.org/docs/latest/genai/tracing/integrations/listing/semantic_kernel/).
- Feedback Tracking now supports human feedback, ground truths, and LLM judges on traces.
- Redesigned the MLflow UI experiment home view and added pagination to the models page.
- Trace UI now renders images in chat messages for OpenAI, LangChain, and Anthropic, and adds a summary view.
- PII masking support via custom span post‑processor in tracing (https://mlflow.org/docs/latest/genai/tracing/observe-with-traces/masking).
- Polars dataset support added.
- Usage tracking enabled by default with opt‑out option (https://mlflow.org/docs/latest/community/usage-tracking/).
- mlflow-tracing is now a dependency of the core mlflow package.
- DatabricksRM output conversion to MLflow document format.
- Unified token usage tracking for Bedrock LLMs and for agent frameworks (Anthropic, Autogen, LlamaIndex, etc.).
- Multi‑modal trace rendering for LangChain.
- Async tracing support for Gemini.
- Global sampling for tracing.
- ResponsesAgent tracing aggregation.
- Full agent and LLM names are now recorded on traces.
- Thread‑local tracing destination can be set via mlflow.tracing.set_destination.
- MLFLOW_DISABLE_SCHEMA_DETAILS environment variable introduced to toggle detailed schema errors.
- Support for chat‑style prompts with structured output using prompt objects.
- Support for responses.parse calls in OpenAI autologger.
- Support for 'uv' environment manager in mlflow run.
- Guideline adherence API renamed to guidelines.
- Scheduled Scorers API replaced by a Scorer Registration System.
- Tag filter added to experiments page and ability to edit experiment tags in UI.
- Runs table can be created using selected columns in experiment view.
- spark_udf now supports 'uv' environment manager.
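The PII masking feature above lets a custom span post-processor rewrite span content before it is exported. The masking logic such a processor applies might look like the following regex-based sketch (illustrative only; it is not MLflow's API, and the placeholder string is arbitrary):

```python
import re

# A simple email matcher; real deployments would cover more PII categories.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_pii(text: str) -> str:
    """Replace email addresses with a placeholder before the span is exported."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)
```

A function like this would be called on span inputs and outputs by the registered post-processor; see the masking documentation linked above for the actual registration mechanism.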
🐛 Bug Fixes
- Added missing default headers and replaced absolute URLs in new browser client requests (GraphQL & logged models).
- Fixed tracking_uri positional argument bug in artifact repositories.
- Fixed UnionType support for Python 3.10 style union syntax.
- Fixed OpenAI autolog Pydantic validation for enum values.
- Fixed tracing for the Anthropic and LangChain combination.
- Fixed OpenAI multimodal message logging support.
- Avoided nested threading for Azure Databricks trace export.
- Databricks GenAI evaluation dataset source now returns a DatasetSource instance instead of a string.
- Fixed get_model_info to provide logged model info.
- Fixed serialization and deserialization for Python scorers.
- Fixed GraphQL handler error on NaN metric values.
- Restored video artifact preview in UI.
- Fixed chat message reconstruction from OpenAI streaming responses.
- Converted trace column in search_traces() response to JSON string.
- Fixed mlflow.evaluate crashes in _get_binary_classifier_metrics.
- Fixed trace detection logic for mlflow.genai.evaluate.
- Enabled use of make_genai_metric_from_prompt for mlflow.evaluate.
- Added explicit encoding for decoding streaming Responses.
- Prevented tracing of DSPy model API keys.
- Fixed PyTorch datetime issue.
- Fixed predict with pre‑releases.