v0.4.3
📦 autogen
✨ 10 features · 🐛 4 fixes · 🔧 14 symbols
Summary
This release introduces model client caching, GraphRAG search tools, Semantic Kernel adapters, and a new agent memory interface. It also expands declarative configuration support and reintroduces the Jupyter code executor.
Migration Steps
- To enable caching, wrap your ChatCompletionClient in a ChatCompletionCache backed by a CacheStore implementation (see the sketch after this list).
- If using JupyterCodeExecutor, run it only in a trusted local environment, as it currently supports local execution only.
- Update AssistantAgent instances to pass the new memory parameter where context enrichment is required (see the memory sketch following the New Features list).
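
A minimal sketch of the caching step, assuming the OpenAI client from autogen-ext and the diskcache-backed store; the model name, cache path, and an OPENAI_API_KEY in the environment are illustrative assumptions:

```python
from diskcache import Cache

from autogen_ext.cache_store.diskcache import DiskCacheStore
from autogen_ext.models.cache import CHAT_CACHE_VALUE_TYPE, ChatCompletionCache
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Underlying model client to be wrapped (reads OPENAI_API_KEY from the env).
openai_client = OpenAIChatCompletionClient(model="gpt-4o-2024-08-06")

# Disk-backed cache store; "/tmp/autogen_cache" is an arbitrary example path.
cache_store = DiskCacheStore[CHAT_CACHE_VALUE_TYPE](Cache("/tmp/autogen_cache"))

# The cached client is a drop-in ChatCompletionClient: repeated identical
# requests are answered from the store instead of calling the model again.
cached_client = ChatCompletionCache(openai_client, cache_store)
```

RedisStore can be substituted for DiskCacheStore when a shared cache across processes is needed.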
✨ New Features
- Added ChatCompletionCache to wrap model clients and cache completions.
- Introduced CacheStore interface with DiskCacheStore and RedisStore implementations.
- Added GraphRAG support via LocalSearchTool and GlobalSearchTool (example after this list).
- Added SKChatCompletionAdapter to support Semantic Kernel AI connectors as AutoGen model clients (example below).
- Added KernelFunctionFromTool to adapt AutoGen tools for use within Semantic Kernel.
- Introduced JupyterCodeExecutor for local code execution (example below).
- Added an agent memory interface and a memory parameter on AssistantAgent (example below).
- Expanded declarative configuration support for termination conditions and base chat agents (round-trip example below).
- Added a sources field to TextMentionTermination to restrict which agents' messages are checked.
- Updated default gpt-4o model version to 2024-08-06.
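
For the GraphRAG tools, a sketch assuming an existing GraphRAG index and its settings file; the from_settings constructor and the ./settings.yaml path are taken as given here and should be checked against your setup:

```python
from autogen_ext.tools.graphrag import GlobalSearchTool, LocalSearchTool

# Both tools are built from a GraphRAG settings file that points at a
# previously built index ("./settings.yaml" is a placeholder path).
global_search = GlobalSearchTool.from_settings(settings_path="./settings.yaml")
local_search = LocalSearchTool.from_settings(settings_path="./settings.yaml")

# Either tool can then be passed to an AssistantAgent via its tools parameter.
```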
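For the Semantic Kernel adapter, a sketch assuming the Azure OpenAI connector from semantic-kernel; the deployment name, endpoint, and key are placeholders, and optional constructor arguments (kernel, prompt settings) are omitted:

```python
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

from autogen_ext.models.semantic_kernel import SKChatCompletionAdapter

# A Semantic Kernel AI connector (Azure OpenAI here; credentials elided).
sk_client = AzureChatCompletion(
    deployment_name="gpt-4o",
    endpoint="https://<your-endpoint>.openai.azure.com/",
    api_key="<your-api-key>",
)

# The adapted connector behaves as an AutoGen ChatCompletionClient.
model_client = SKChatCompletionAdapter(sk_client)
```

KernelFunctionFromTool covers the opposite direction, exposing an AutoGen tool inside Semantic Kernel.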
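For JupyterCodeExecutor, a minimal sketch; per migration step 2, the kernel runs locally with the privileges of the current process, so use it only in a trusted environment:

```python
import asyncio

from autogen_core import CancellationToken
from autogen_core.code_executor import CodeBlock
from autogen_ext.code_executors.jupyter import JupyterCodeExecutor


async def main() -> None:
    # The executor manages a local Jupyter kernel for the lifetime of
    # the async context.
    async with JupyterCodeExecutor() as executor:
        result = await executor.execute_code_blocks(
            [CodeBlock(language="python", code="print('hello from Jupyter')")],
            CancellationToken(),
        )
        print(result.output)


asyncio.run(main())
```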
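For migration step 3, a sketch of the memory parameter using ListMemory, one built-in implementation of the new Memory interface; the stored fact and agent name are illustrative:

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_core.memory import ListMemory, MemoryContent, MemoryMimeType
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def build_agent() -> AssistantAgent:
    # A simple list-backed memory; its contents are injected into the
    # model context before each inference.
    memory = ListMemory()
    await memory.add(
        MemoryContent(
            content="The user prefers metric units.",  # illustrative fact
            mime_type=MemoryMimeType.TEXT,
        )
    )
    return AssistantAgent(
        "assistant",
        model_client=OpenAIChatCompletionClient(model="gpt-4o-2024-08-06"),
        memory=[memory],  # the new memory parameter
    )
```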
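And for declarative configuration, a round-trip sketch using TextMentionTermination with the new sources field; it assumes the component dump/load methods available on configurable components in this release:

```python
from autogen_agentchat.conditions import TextMentionTermination

# Only messages from the listed sources count toward termination.
termination = TextMentionTermination("TERMINATE", sources=["assistant"])

# Declarative round trip: dump to a component config and load it back.
config = termination.dump_component()
restored = TextMentionTermination.load_component(config)
```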
🐛 Bug Fixes
- Improved reliability: the Magentic-One (M1) orchestrator now retries multiple times when it selects an invalid agent.
- Normalized finish reason in CreateResult responses.
- Fixed context passing between AssistantAgent instances during handoffs.
- Ensured proper handling of structured output in the OpenAI client.
🔧 Affected Symbols
ChatCompletionCache, ChatCompletionClient, CacheStore, DiskCacheStore, RedisStore, LocalSearchTool, GlobalSearchTool, SKChatCompletionAdapter, KernelFunctionFromTool, JupyterCodeExecutor, Memory, AssistantAgent, TextMentionTermination, CreateResult