Changelog

v0.15.5-rc5

📦 ollama
3 features

Summary

This release introduces two new models, GLM-OCR and Qwen3-Coder-Next, and enhances core functionality with sub-agent support and VRAM-aware default context lengths.

Migration Steps

  1. The `ollama signin` command now opens a browser window to complete the sign-in process.

✨ New Features

  • Added support for sub-agents with `ollama launch` for planning and research tasks.
  • Introduced automatic default context lengths based on available VRAM: 4,096 for < 24 GiB, 32,768 for 24-48 GiB, and 262,144 for >= 48 GiB.
  • Added support for GLM-4.7-Flash model when using the experimental MLX engine.
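The VRAM-based defaulting above can be sketched as a simple threshold lookup. This is an illustrative sketch only, not the actual implementation; the function name and the exact handling of the 24 GiB and 48 GiB boundaries are assumptions based on the ranges listed in the release notes.

```python
def default_context_length(vram_gib: float) -> int:
    """Pick a default context length from available VRAM (in GiB).

    Thresholds follow the release notes; boundary behavior at exactly
    24 and 48 GiB is an assumption.
    """
    if vram_gib < 24:
        return 4_096      # < 24 GiB
    if vram_gib < 48:
        return 32_768     # 24-48 GiB
    return 262_144        # >= 48 GiB
```

For example, a 16 GiB GPU would default to 4,096 tokens, while a 48 GiB GPU would default to 262,144.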