Changelog

0.16.6

📦 memgpt — View on GitHub →
✨ 7 features · 🐛 6 fixes · ⚡ 1 deprecation · 🔧 6 symbols

Summary

This release significantly expands Conversations API functionality, introducing default conversation mode and immediate system message compilation upon creation. It also includes important fixes for model configuration overrides and improved compatibility with various LLM providers.

Migration Steps

  1. If you rely on the `model_settings.max_output_tokens` default behavior, note that the default no longer silently overrides an existing `max_tokens` value; set `max_tokens` explicitly if you need the override.

✨ New Features

  • Expanded Conversations API support for default conversation / agent-direct mode.
  • New conversations now initialize with a compiled system message at creation time.
  • Added support for `conversation_id="default"` + `agent_id` across conversation endpoints (send/list/cancel/compact/stream retrieve).
  • Added lock-key handling in agent-direct flows to avoid concurrent execution conflicts.
  • Added model support for: `gpt-5.3-codex` and `gpt-5.3-chat-latest`.
  • Anthropic model settings now allow `effort="max"` where supported.
  • Memory filesystem rendering now includes descriptions for non-`system/` files and condenses skill display.

🐛 Bug Fixes

  • Fixed `model_settings.max_output_tokens` default behavior so it does not silently override existing `max_tokens` unless explicitly set.
  • Git-backed memory frontmatter no longer emits `limit` (legacy `limit` keys are removed on merge).
  • Skills sync now maps only `skills/{name}/SKILL.md` to `skills/{name}` block labels.
  • Added explicit `LLMEmptyResponseError` handling for empty Anthropic streaming responses.
  • Improved Fireworks compatibility by stripping unsupported reasoning fields.
  • Improved Z.ai compatibility by mapping `max_completion_tokens` to `max_tokens`.
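The two provider-compatibility fixes amount to small payload rewrites before dispatch. A sketch (the provider strings, field names, and shim function are assumptions for illustration, not memgpt's actual adapter code):

```python
def adapt_payload(provider: str, payload: dict) -> dict:
    """Hypothetical compatibility shim mirroring the fixes above."""
    adapted = dict(payload)
    if provider == "fireworks":
        # Fireworks rejects OpenAI-style reasoning fields; strip them.
        adapted.pop("reasoning", None)
        adapted.pop("reasoning_effort", None)
    elif provider == "zai":
        # Z.ai expects max_tokens rather than max_completion_tokens.
        if "max_completion_tokens" in adapted:
            adapted["max_tokens"] = adapted.pop("max_completion_tokens")
    return adapted
```

Keeping these rewrites in one provider-keyed shim means the rest of the request path can stay provider-agnostic.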

Affected Symbols

⚡ Deprecations

  • Kept backwards compatibility for `conversation_id=agent-*` (deprecated path).