
python-v0.5.4

📦 autogen
✨ 9 features · 🐛 6 fixes · 🔧 11 symbols

Summary

This release introduces Agent and Team tools for nested agent workflows, an Azure AI Agent adapter, and experimental Canvas Memory. It also enhances CodeExecutorAgent with self-debugging capabilities and improves SelectorGroupChat compatibility with streaming models.

Migration Steps

  1. To use SelectorGroupChat with streaming-only models such as QwQ, set model_client_streaming=True; add emit_team_events=True to also emit the selector's inner events (see the sketch after these steps).
  2. To enable self-debugging in CodeExecutorAgent, set the new max_retries_on_error parameter (also shown in the sketch below).
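
A minimal sketch combining both migration steps, assuming the OpenAI-compatible model client from autogen-ext; the model name, agent names, work directory, and task string are illustrative rather than taken from the release notes.

```python
# Hedged sketch of both migration steps (autogen-agentchat / autogen-ext 0.5.4).
# Model name, agent names, work_dir, and the task are illustrative assumptions.
import asyncio

from autogen_agentchat.agents import AssistantAgent, CodeExecutorAgent
from autogen_agentchat.teams import SelectorGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Step 2: self-debugging executor. With a model client attached, the agent can
    # reflect on a failed run and retry up to max_retries_on_error times.
    executor = CodeExecutorAgent(
        "executor",
        code_executor=LocalCommandLineCodeExecutor(work_dir="coding"),
        model_client=model_client,
        max_retries_on_error=3,
    )

    coder = AssistantAgent("coder", model_client=model_client)

    # Step 1: streaming-only selector models (e.g. QwQ) need model_client_streaming=True;
    # emit_team_events=True additionally surfaces the team's inner events in the stream.
    team = SelectorGroupChat(
        [coder, executor],
        model_client=model_client,
        model_client_streaming=True,
        emit_team_events=True,
        max_turns=6,  # keep the example bounded
    )

    await Console(team.run_stream(task="Write and run a script that prints the first 10 primes."))


asyncio.run(main())
```

With emit_team_events=True the selector's inner events appear in the run_stream output, which is why the sketch drains the stream through Console rather than calling run() directly.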

✨ New Features

  • Introduced AgentTool and TeamTool to wrap agents and teams as tools for use by other agents (see the sketch after this list).
  • Added Azure AI Agent adapter with support for file search and code interpreter.
  • Added Docker Jupyter Code Executor for sandboxed local execution.
  • Introduced experimental Canvas Memory for shared 'whiteboard' collaboration between agents.
  • Added support for autogen-oaiapi and autogen-contextplus community extensions.
  • Updated SelectorGroupChat to support streaming-only models and optional inner reasoning emission.
  • Added max_retries_on_error to CodeExecutorAgent for self-debugging loops.
  • Added multiple_system_message support to ModelInfo.
  • Added support for exposing GPUs to the Docker code executor.
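
The AgentTool/TeamTool feature follows the pattern sketched below, assuming autogen-agentchat 0.5.4 import paths; the agent names, model, and task are illustrative, and TeamTool wraps a whole team the same way AgentTool wraps a single agent.

```python
# Hedged sketch: wrapping one agent as a tool that another agent can call.
# Agent names, the model, and the task string are illustrative assumptions.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.tools import AgentTool
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # A specialist agent that will be exposed as a tool.
    writer = AssistantAgent(
        "writer",
        model_client=model_client,
        system_message="You write short, polished summaries.",
    )

    # AgentTool exposes the writer agent as a callable tool; TeamTool does the
    # same for an entire team.
    writer_tool = AgentTool(agent=writer)

    # The orchestrating agent can now invoke the writer like any other tool.
    orchestrator = AssistantAgent(
        "orchestrator",
        model_client=model_client,
        tools=[writer_tool],
    )

    result = await orchestrator.run(task="Summarize the 0.5.4 release in two sentences.")
    print(result.messages[-1].content)


asyncio.run(main())
```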

🐛 Bug Fixes

  • Fixed query type in Azure AI Search Tool.
  • Ensured serialized messages are passed to LLMStreamStartEvent.
  • Fixed Ollama failure when tools use optional arguments.
  • Prevented re-registering already registered message types.
  • Fixed model_context deserialization in AssistantAgent, SocietyOfMindAgent, and CodeExecutorAgent.
  • Generalized SystemMessage merging using model_info instead of hardcoded string checks.

🔧 Affected Symbols

AgentTool, TeamTool, AzureAIAgent, DockerJupyterCodeExecutor, CanvasMemory, SelectorGroupChat, CodeExecutorAgent, AssistantAgent, SocietyOfMindAgent, ModelInfo, LLMStreamStartEvent