
python-v0.7.5

1 breaking change · 5 new features · 10 bug fixes · 9 affected symbols

Summary

AutoGen 0.7.5 introduces thinking mode for Anthropic, reasoning effort for OpenAI, and linear memory for Redis. It also defaults to Docker-based code execution for improved security and includes numerous fixes for streaming and graph state management.

⚠️ Breaking Changes

  • The default code executor has been changed to DockerCommandLineCodeExecutor for security reasons. Users must ensure Docker is installed and running, or explicitly configure a different executor if local execution is required.

Migration Steps

  1. If you use code execution, verify that Docker is installed and running, since DockerCommandLineCodeExecutor is now the default; otherwise, explicitly configure a local executor.
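
If you want to keep the previous local-execution behaviour, a minimal sketch looks like the following. The module paths and constructor arguments are assumptions based on the autogen_ext extension layout; verify them against the 0.7.5 API docs before relying on them.

```python
# Sketch: opting back into local code execution after the Docker default change.
# ASSUMPTION: module paths follow the autogen_ext extension layout.
from autogen_agentchat.agents import CodeExecutorAgent
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor

# The new default, DockerCommandLineCodeExecutor, requires a running Docker
# daemon. Passing an explicit LocalCommandLineCodeExecutor restores the old
# behaviour of running generated code directly on the host.
local_executor = LocalCommandLineCodeExecutor()
agent = CodeExecutorAgent("executor", code_executor=local_executor)
```

Note that local execution runs model-generated code with your user's privileges, which is exactly the risk the Docker default is meant to mitigate.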

✨ New Features

  • Added support for linear memory in RedisMemory.
  • Added thinking mode support for the Anthropic client.
  • Added support for the reasoning_effort parameter for OpenAI models.
  • Added comprehensive GitHub Copilot instructions for AutoGen development.
  • Added OllamaChatCompletionClient to WELL_KNOWN_PROVIDERS for improved component loading.
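
The two new reasoning options above can be sketched as client configuration. The exact parameter plumbing is an assumption (the thinking payload mirrors the Anthropic Messages API, and reasoning_effort mirrors the OpenAI API); check the 0.7.5 client documentation for the authoritative names.

```python
# Sketch: enabling the new reasoning options (parameter names are assumptions).
from autogen_ext.models.anthropic import AnthropicChatCompletionClient
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Anthropic: enable extended thinking with a token budget. The `thinking`
# payload shape follows the Anthropic Messages API.
anthropic_client = AnthropicChatCompletionClient(
    model="claude-sonnet-4-20250514",
    thinking={"type": "enabled", "budget_tokens": 4096},
)

# OpenAI: pass `reasoning_effort` through to reasoning-capable models.
openai_client = OpenAIChatCompletionClient(
    model="o3-mini",
    reasoning_effort="low",
)
```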

🐛 Bug Fixes

  • Fixed Bedrock streaming responses failing when tool usage had empty arguments.
  • Fixed message ID correlation between streaming chunks and final messages.
  • Fixed issue where extra arguments failed to disable thinking mode.
  • Fixed malformed tags caused by empty reasoning_content in streaming responses.
  • Fixed GraphFlow cycle detection to properly clean up recursion state.
  • Fixed Redis caching always returning False due to unhandled string values.
  • Fixed finish_reason logic in Azure AI client streaming responses.
  • Fixed JSON schema conversion to handle nested objects in array items.
  • Fixed unsupported field warnings in count_tokens_openai.
  • Fixed McpSessionActor to drain pending command futures on failure.

🔧 Affected Symbols

RedisMemory · Anthropic client · OllamaChatCompletionClient · DockerCommandLineCodeExecutor · GraphFlow · Azure AI client · count_tokens_openai · McpSessionActor · WELL_KNOWN_PROVIDERS