0.10.0
📦 memgpt
⚠ 2 breaking · ✨ 8 features · 🐛 5 fixes · 🔧 5 symbols
Summary
This release introduces the `LettaPing` message to keep long-lived streaming connections alive and adds support for OAuth MCP providers. It also makes `memgpt_v2_agent` the default agent architecture and includes various performance improvements and bug fixes.
⚠️ Breaking Changes
- The default agent architecture has changed to `memgpt_v2_agent`. If you relied on the previous default architecture, you must now configure agents to use it explicitly.
- Archival memory tools are no longer added to agents by default. If your workflow requires archival memory, you must now add these tools explicitly.
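As a minimal sketch of pinning the architecture, here is what an agent-creation payload might look like. The field names (`name`, `agent_type`) and the helper function are illustrative assumptions, not taken from the documented Letta API:

```python
# Hypothetical agent-creation payload builder; field names are
# illustrative assumptions, not the documented Letta API schema.
def build_agent_payload(name, agent_type="memgpt_v2_agent"):
    """Build a creation payload that pins the agent architecture explicitly."""
    return {
        "name": name,
        # As of 0.10.0 the default architecture is memgpt_v2_agent,
        # so agents that need the previous loop must request it by name.
        "agent_type": agent_type,
    }

# Pin the pre-0.10.0 architecture explicitly instead of relying on the default.
legacy = build_agent_payload("my-agent", agent_type="memgpt_agent")
```

Passing the architecture explicitly on every create call keeps agent behavior stable across upgrades that change the default.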
Migration Steps
- If you have long-running tools using streaming endpoints, you may need to add handling logic for the new `LettaPing` message type.
- If you were relying on the previous default agent architecture, update your configuration to explicitly use the desired architecture, as the default is now `memgpt_v2_agent`.
- If your agents previously relied on default archival memory tools, ensure you explicitly add them back if needed.
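A minimal sketch of ping handling in a stream consumer, assuming ping events can be recognized by a message-type field; the `"message_type"` key and `"ping"` value are assumptions about the wire format, not the documented `LettaPing` schema:

```python
def iter_without_pings(events, ping_type="ping"):
    """Yield streamed message events, dropping keep-alive pings.

    `events` is any iterable of dicts carrying a "message_type" key;
    the "ping" type value is an assumption about the LettaPing format.
    """
    for event in events:
        if event.get("message_type") == ping_type:
            # Keep-alive only: sent roughly every 90 seconds to hold
            # the connection open during long tool executions.
            continue
        yield event

# Example: two real chunks with a keep-alive ping in between.
stream = [
    {"message_type": "assistant_message", "content": "Hel"},
    {"message_type": "ping"},
    {"message_type": "assistant_message", "content": "lo"},
]
text = "".join(e["content"] for e in iter_without_pings(stream))
```

Filtering pings at the consumer boundary means the rest of the application never has to distinguish keep-alives from real messages.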
✨ New Features
- Introduced the `LettaPing` message type, sent every 90 seconds, to keep long streaming connections alive and prevent terminations during long tool executions.
- Added support for OAuth MCP providers, including Linear and GitHub.
- Improved LLM support: LM Studio now supports Qwen and Llama models, with manual token counting for streaming.
- Moved the Ollama integration to the new agent loop architecture.
- Added a `not_indexable` property to agents.
- Added Modal sandbox functionality with conditional imports.
- Added a memory blocks viewer for better memory management.
- Added support for multi-function files, with documentation.
🐛 Bug Fixes
- Fixed null check in the voice endpoint.
- Fixed builtin web search tests.
- Fixed flaky test issues.
- Removed OpenTelemetry file exports.
- Added API key validation before using the token counter for Anthropic.