AutoGen
A programming framework for agentic AI
Release History
python-v0.7.5 (Breaking, 10 fixes, 5 features): AutoGen 0.7.5 introduces thinking mode for Anthropic, reasoning effort for OpenAI, and linear memory for Redis. It also defaults to Docker-based code execution for improved security and includes numerous fixes for streaming and graph state management.
python-v0.7.4 (2 fixes, 1 feature): Version 0.7.4 focuses on bug fixes for Redis integration, specifically deserialization and streaming, alongside documentation updates for the agent-as-tool feature.
python-v0.7.3 (Breaking, 4 fixes, 3 features): Release 0.7.3 introduces support for GPT-5 model info, improves Pydantic model typing for anyOf/oneOf, and fixes serialization issues in RedisStore and OpenAIAgent tool schemas.
python-v0.7.2 (Breaking, 1 fix, 4 features): Release 0.7.2 introduces Redis memory enhancements for JSON/Markdown, adds execution approval functions, and migrates MagenticOne to Docker-based code execution by default while cleaning up OpenAIAgent methods.
python-v0.7.1 (5 fixes, 7 features): This release introduces RedisMemory, supports nested Teams in GroupChats, and enables all built-in tools for OpenAIAgent. It also includes significant upgrades to the MCP and GraphRAG integrations alongside various stability fixes.
python-v0.6.4 (Breaking, 6 fixes, 5 features): This release improves GraphFlow state persistence and termination logic, adds tool override capabilities to Workbench implementations, and expands model support for Claude and Qwen2.5-VL.
python-v0.6.2 (7 fixes, 11 features): This release introduces streaming tool support, inner tool-calling loops for AssistantAgent, and OpenTelemetry GenAI tracing. It also adds a Mem0 memory extension and several improvements to workflow handling and model compatibility.
python-v0.6.1 (2 fixes, 1 feature): This release focuses on bug fixes for GraphFlow validation and cycle detection, while enhancing ToolCallSummaryMessage with detailed function call results.
python-v0.6.0 (Breaking, 7 fixes, 13 features): This release introduces concurrent agent execution in GraphFlow and a new OpenAIAgent. It also features experimental callable edge conditions and various improvements to model clients and code executors.
python-v0.5.7 (Breaking, 7 fixes, 7 features): This release unifies the Azure AI Search Tool API, introduces model context support for SelectorGroupChat speaker selection, and improves OTEL tracing and Agent Runtime registration.
python-v0.5.6 (7 fixes, 6 features): This release introduces GraphFlow for graph-based agent orchestration and concurrent execution, adds Bedrock support for Anthropic models, and includes several fixes for Docker executors and serialization.
python-v0.5.5 (Breaking, 2 fixes, 6 features): This release introduces the Workbench API and McpWorkbench for stateful tool management, adds a FunctionalTermination condition, and provides new integration examples for FastAPI and MCP servers.
python-v0.5.4 (6 fixes, 9 features): This release introduces Agent and Team tools for nested agent workflows, an Azure AI Agent adapter, and experimental Canvas Memory. It also enhances CodeExecutorAgent with self-debugging capabilities and improves SelectorGroupChat compatibility with streaming models.
python-v0.5.3 (2 fixes, 6 features): This release adds code generation capabilities to CodeExecutorAgent, improves AssistantAgent serialization, and adds shared session support for MCP tools. It also includes bug fixes for Azure AI Search and handoff context management.
python-v0.5.2 (6 fixes, 5 features): This release introduces support for Gemini 2.5 Pro Preview, improves Task-Centric Memory accessibility, and provides several bug fixes for ChromaDB, Azure AI Search, and Docker code execution.
python-v0.5.1 (Breaking, 8 features): This release introduces structured output support for agents and model clients, refactors AgentChat message types into a class hierarchy for better extensibility, and adds a new Azure AI Search tool.
python-v0.4.9.3 (1 fix, 2 features): This patch release fixes a bug in the MCP Server Tool regarding null argument handling and introduces default timeouts and improved error messaging.
autogenstudio-v0.4.2 (Breaking, 4 fixes, 9 features): This release brings major UI and observability enhancements to AutoGen Studio, including component validation, Anthropic support, token streaming, and experimental multi-user authentication.
python-v0.4.9.2 (Breaking, 4 fixes): This patch release focuses on security hardening by preventing API key exposure during serialization and fixes bugs in FileSurfer, SKChatCompletionAdapter, and reflection logic.
python-v0.4.9 (Breaking, 3 fixes, 8 features): Release v0.4.9 introduces native Anthropic and LlamaCpp support, experimental task-centric memory, and a breaking change to the serialized state schema to improve state portability.
python-v0.4.8.2 (2 fixes): This patch release fixes a token limit issue in Azure AI streaming and introduces a close() method on model clients to resolve resource leaks and transport warnings.
python-v0.4.8.1 (1 fix): This patch release fixes a critical bug in the SKChatCompletionAdapter that prevented the use of tools.
python-v0.4.8 (Breaking, 6 fixes, 8 features): This release introduces the Ollama Chat Completion Client and adds support for model 'thought' processes and custom metadata in messages. It also includes breaking changes to FunctionExecutionResult requirements and makes agent exceptions fatal to improve debugging.
python-v0.4.7 (Breaking, 4 fixes, 4 features): This release enforces strict validation of ModelInfo fields and introduces 'strict' mode for tools to support structured outputs. It also enhances the Docker code executor and provides various bug fixes for agent instantiation and tool execution.
python-v0.4.6 (9 fixes, 8 features): This release introduces built-in support for MCP and HTTP tools, enhances MagenticOne and SelectorGroupChat for smaller/text-only models, and simplifies Gemini model integration.
autogenstudio-v0.4.1 (Breaking, 10 features): AutoGen Studio v0.4.1 introduces a unified declarative configuration system that allows Python-defined agents to be exported to JSON, alongside new real-time testing tools and specialized research agent teams.
python-v0.4.5 (Breaking, 1 fix, 6 features): This release introduces token streaming for AgentChat, support for R1-style reasoning 'thought' blocks, and the ability to create FunctionTools from partial functions.
v0.4.4 (6 features): This release introduces full serialization support for AgentChat configurations and states, enabling persistent and portable agent sessions. It also adds a new Azure AI inference client for broader model support and a rich UI for the Magentic One CLI.
v0.4.3 (4 fixes, 10 features): This release introduces significant features including model client caching, GraphRAG integration, Semantic Kernel adapters, and a new agent memory interface. It also expands declarative configuration support and brings back the Jupyter code executor.
v0.4.2 (1 fix): Version 0.4.2 updates the async input strategy, specifically to remove an accidentally included GPL-licensed dependency.
v0.4.1 (Breaking, 5 fixes, 3 features): This release fixes critical console input and stop-reason bugs, introduces a subclassable BaseComponent for better serialization, and disables inaccurate console statistics by default.
v0.4.0 (Breaking, 4 fixes, 7 features): This release marks the first stable version of AutoGen v0.4, introducing a new API architecture, support for OpenAI's o1 model, and breaking changes to Azure authentication and intervention handlers.
Common Errors
BadRequestError (10 reports): BadRequestError in AutoGen often arises from exceeding rate limits or sending malformed input to tools or models. To fix it, implement retries with exponential backoff and make sure your input strictly follows the schema or format expected by the invoked tool or model, paying particular attention to character limits and data types.
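The retry advice above can be sketched as a generic backoff wrapper. `call_with_backoff` and the demo `flaky_call` are hypothetical names; in real code you would catch your client library's specific exceptions (e.g. its BadRequestError or RateLimitError) rather than bare `Exception`:

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=0.5):
    """Retry fn with exponential backoff plus jitter.

    Replace `Exception` with the concrete error types your
    model client raises for rate limits or transient failures.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

# demo: a call that is "rate limited" twice, then succeeds
attempts = {"count": 0}

def flaky_call():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("429: rate limit exceeded")
    return "ok"

result = call_with_backoff(flaky_call, base_delay=0.01)
```

The jitter term spreads retries out so many clients hitting the same rate limit do not retry in lockstep.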
GroupChatError (5 reports): GroupChatError often stems from incorrect agent configuration within the GroupChat object, such as mismatched or missing LLM configurations or incompatible function definitions. Ensure each agent has a properly defined `llm_config` compatible with its intended model, that function definitions align with the agent's role, and that every agent required for the conversation has been added to the GroupChat object. Also confirm that the models named in `llm_config` are accessible and correctly configured in your environment.
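A minimal pre-flight check along these lines can catch the missing-`llm_config` case before the chat starts. `validate_group_agents` is a hypothetical helper, and the agents are plain dicts for illustration rather than real AutoGen agent objects:

```python
def validate_group_agents(agents):
    """Fail fast if any agent headed for a GroupChat lacks an llm_config."""
    missing = [a["name"] for a in agents if not a.get("llm_config")]
    if missing:
        raise ValueError(f"agents missing llm_config: {', '.join(missing)}")
    return True

# a shared config keeps the agents' model settings consistent
shared_llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "sk-..."}]}
agents = [
    {"name": "planner", "llm_config": shared_llm_config},
    {"name": "coder", "llm_config": shared_llm_config},
]
ok = validate_group_agents(agents)

# an agent without llm_config is rejected up front
try:
    validate_group_agents([{"name": "critic"}])
    rejected = False
except ValueError:
    rejected = True
```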
ModuleNotFoundError (4 reports): ModuleNotFoundError typically arises when a required Python package isn't installed or the interpreter can't find it. Ensure the package is installed with `pip install <package_name>` (e.g., `pip install autogen`) and that you're running the same Python environment where it was installed. If you use a virtual environment, activate it before running your script.
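Because the usual culprit is installing into one environment and running another, a quick diagnostic is to print which interpreter is active and whether it can see the module. `module_available` is a hypothetical helper built on the standard library:

```python
import importlib.util
import sys

def module_available(name: str) -> bool:
    """Return True if the *current* interpreter can import `name`."""
    return importlib.util.find_spec(name) is not None

# The interpreter actually running your script; if this is not the one
# you installed into (e.g. a different venv), you get ModuleNotFoundError.
active_interpreter = sys.executable

stdlib_ok = module_available("json")          # stdlib: always present
phantom = module_available("no_such_pkg_42")  # not installed: False, no crash
```

Comparing `sys.executable` against the environment `pip` reported installing into usually resolves the mystery immediately.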
NotImplementedError (3 reports): NotImplementedError in AutoGen typically arises when a subclass fails to provide concrete implementations for methods declared abstract or required by its parent class or interface. Identify the missing methods in the subclass (e.g., `dump_component` in `AzureOpenAIChatCompletionClient`, or serialization/deserialization handling in `GraphFlow`) and implement each one, making sure the method signature (name, arguments, return type) matches the parent's definition.
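The pattern looks like the following with Python's `abc` module; the class names here (`SerializableComponent`, `CompleteClient`) are illustrative stand-ins, not AutoGen's actual classes:

```python
from abc import ABC, abstractmethod

class SerializableComponent(ABC):
    """Illustrative base class: subclasses must supply serialization."""

    @abstractmethod
    def dump_component(self) -> dict:
        """Return a serializable description of this component."""

class IncompleteClient(SerializableComponent):
    pass  # forgot dump_component: cannot even be instantiated

class CompleteClient(SerializableComponent):
    def dump_component(self) -> dict:
        # signature matches the parent's: no args, returns dict
        return {"provider": "example.CompleteClient", "config": {}}

try:
    IncompleteClient()  # raises TypeError (abstract method missing)
    incomplete_ok = True
except TypeError:
    incomplete_ok = False

dumped = CompleteClient().dump_component()
```

With `@abstractmethod` the error surfaces at instantiation time rather than as a runtime NotImplementedError deep inside a call, which is much easier to debug.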
PydanticSchemaGenerationError (2 reports): PydanticSchemaGenerationError usually arises when AutoGen attempts to build a Pydantic schema from a class or function with unsupported or overly complex type annotations. Ensure all type hints are simple, concrete, Pydantic-compatible types (such as `str`, `int`, `List[str]`), and avoid generics or custom classes that lack Pydantic schema definitions. When a tool triggers the error, examine its function signature and data types, simplify complex outputs where possible, and provide explicit type converters if needed.
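As a sketch of the "simple, concrete annotations" advice, here is a hypothetical tool function whose signature maps cleanly to a JSON schema; the bad variant is left as a comment because it would be the one that fails schema generation:

```python
import inspect
from typing import List

# BAD (schema generation may fail): a parameter typed with an arbitrary
# class Pydantic has no schema for, e.g. `def search(q: MyOpaqueIndex)`.

# GOOD: concrete, Pydantic-friendly annotations on every parameter.
def search(query: str, top_k: int = 3) -> List[str]:
    """Hypothetical tool function with schema-friendly type hints."""
    return [f"result {i} for {query}" for i in range(top_k)]

# the annotations Pydantic would see when wrapping this as a tool
params = inspect.signature(search).parameters
annotations = {name: p.annotation for name, p in params.items()}
results = search("redis", top_k=2)
```

If a tool must accept a rich object, pass a plain identifier (str/int) instead and look the object up inside the function body.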
FileNotFoundError (2 reports): FileNotFoundError in AutoGen usually arises when a file path is wrong or the file doesn't exist at the expected location. To fix it, verify the path in your code, making sure it's absolute or correctly relative to the working directory and that the file actually exists there. Double-check for typos in the filename, mind case sensitivity on case-sensitive filesystems, and confirm that the user running the script has permission to access the file.
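A small path-resolution wrapper makes these failures self-diagnosing by reporting the absolute path that was actually tried and the current working directory. `read_config` is a hypothetical helper:

```python
import tempfile
from pathlib import Path

def read_config(path: str) -> str:
    """Resolve the path first, then fail with a message showing
    exactly where the file was looked for."""
    p = Path(path).expanduser().resolve()
    if not p.is_file():
        raise FileNotFoundError(f"no such file: {p} (cwd: {Path.cwd()})")
    return p.read_text()

# demo: write a throwaway config file, then read it back
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write('{"model": "gpt-4o"}')
    tmp_path = f.name

content = read_config(tmp_path)
```

Resolving with `expanduser().resolve()` before opening turns a vague "file not found" into a concrete absolute path you can check by hand.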
Related AI & LLMs Packages
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Ollama: Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
LangChain: 🦜🔗 The platform for reliable agents.
ComfyUI: The most powerful and modular diffusion model GUI, API and backend with a graph/nodes interface.
llama.cpp: LLM inference in C/C++
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.